CN113766085A - Image processing method and related device - Google Patents


Info

Publication number
CN113766085A
CN113766085A
Authority
CN
China
Prior art keywords
image
image block
processed
target
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110535077.9A
Other languages
Chinese (zh)
Other versions
CN113766085B (en)
Inventor
杨伟明
王少鸣
郭润增
唐惠忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110535077.9A
Publication of CN113766085A
Application granted
Publication of CN113766085B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/44: Secrecy systems
    • H04N1/448: Rendering the image unintelligible, e.g. scrambling
    • H04N1/4486: Rendering the image unintelligible, e.g. scrambling, using digital data encryption

Abstract

The embodiment of the application provides an image processing method and relates to the technical field of artificial intelligence. In the method, after an image to be processed is obtained, the image is divided to obtain at least one image block. Target information is then acquired for each image block, where the target information of each image block comprises a target feature. The target information and the information to be processed of each image block are then encrypted to obtain encrypted data of the image to be processed. Because this scheme uses features of the image to be processed itself as the security factor, the security factor is tightly coupled with the image, which makes the encrypted data difficult to decipher and improves the security of the image to be processed.

Description

Image processing method and related device
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to an image processing method and a related device.
Background
As applications support more functions, they are adapted to more usage scenarios, such as payment. Data involved in a payment scenario, such as a face image, has security requirements. To keep a face image secure, an application encrypts the face image before storing it after capture.
In conventional image encryption methods, the data to be encrypted is encrypted with parameters that follow a fixed rule as the security factor. Once an attacker learns the rule for deriving the security factor, the image or the encryption mechanism is easy to attack. Conventional image encryption methods therefore offer relatively poor security.
Disclosure of Invention
The embodiment of the application provides an image processing method and a related device, which can solve the problem of poor security of a conventional encryption method.
In a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
acquiring an image to be processed;
dividing an image to be processed to obtain at least one image block;
acquiring target information of each image block in at least one image block, wherein the target information of each image block comprises target characteristics;
and encrypting the target information and the information to be processed of each image block in at least one image block to obtain encrypted data of the image to be processed, wherein the information to be processed comprises the image to be processed or at least one image block.
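The four steps of the first aspect can be sketched end to end as follows. This is a minimal illustration, not the patented implementation: the 2x2 block size, the digest-based per-block feature, and the XOR keystream cipher are all stand-in assumptions (a real system would use a learned feature extractor and a proper cipher such as AES).

```python
import hashlib
import json

BLOCK = 2  # illustrative block size; the description suggests e.g. 60 x 60 pixels

def divide(image):
    # Split a 2-D pixel list into contiguous, non-overlapping BLOCK x BLOCK tiles.
    return [[row[c:c + BLOCK] for row in image[r:r + BLOCK]]
            for r in range(0, len(image), BLOCK)
            for c in range(0, len(image[0]), BLOCK)]

def target_feature(tile):
    # Stand-in per-block target feature: a short digest of the tile's pixels.
    return hashlib.sha256(json.dumps(tile).encode()).hexdigest()[:8]

def encrypt(image, key):
    # Bundle the image (the information to be processed) with the per-block
    # target features (the security factor), then apply a toy XOR keystream.
    payload = json.dumps({"image": image,
                          "features": [target_feature(t) for t in divide(image)]})
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload.encode()))

def decrypt(data, key):
    # XOR with the same keystream restores the payload.
    return json.loads(bytes(b ^ key[i % len(key)] for i, b in enumerate(data)).decode())
```

Because the features travel inside the ciphertext, the receiver can later recompute them from the decrypted image blocks and detect tampering, which is what the second aspect below describes.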
In a second aspect, an embodiment of the present application provides an image processing method, including:
acquiring encrypted data corresponding to an image to be processed, wherein the encrypted data is obtained by processing according to the image processing method of the first aspect;
decrypting the encrypted data to obtain information to be verified and at least one first target information, wherein the information to be verified is an image to be verified or at least one image block of the image to be verified;
acquiring second target information of each image block in at least one image block;
if the first target information and the second target information corresponding to each image block are the same, the verification result is that the image to be processed is not tampered;
and if the first target information and the second target information corresponding to any image block in at least one image block are different, the verification result is that the image to be processed is tampered.
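The decryption-side check of the second aspect reduces to a block-by-block comparison: the receiver recomputes each block's feature (the second target information) and compares it with the decrypted first target information. The digest-based feature here is an illustrative stand-in for whatever extractor produced the first target information.

```python
import hashlib
import json

def block_feature(tile):
    # Stand-in per-block feature; must match the extractor used at encryption time.
    return hashlib.sha256(json.dumps(tile).encode()).hexdigest()[:8]

def verify(blocks, first_target_info):
    # Recompute second target information and compare block by block:
    # all equal -> not tampered; any mismatch -> tampered.
    second_target_info = [block_feature(b) for b in blocks]
    return second_target_info == first_target_info
```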
In a third aspect, an embodiment of the present application further provides an image processing apparatus, including:
the acquisition module is used for acquiring an image to be processed;
the dividing module is used for dividing the image to be processed to obtain at least one image block;
the acquisition module is further used for acquiring target information of each image block in at least one image block, wherein the target information of each image block comprises target characteristics;
the encryption module is used for encrypting the target information and the information to be processed of each image block in at least one image block to obtain encrypted data of the image to be processed, wherein the information to be processed comprises the image to be processed or at least one image block.
In one possible implementation manner, the device further comprises a feature extraction module,
the characteristic extraction module is used for extracting the initial characteristics of each image block in at least one image block;
the acquisition module is further used for determining, for each image block, the matching degree between the initial feature of the image block and each of a plurality of reference features, and taking the reference feature with the highest matching degree as the target feature of the image block; the plurality of reference features are derived from image features of a plurality of sample image blocks, the plurality of sample image blocks are obtained by dividing a plurality of sample images, and the sample images include sample objects.
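The matching step can be sketched with cosine similarity as the matching degree. The two-vector reference library and the similarity measure are illustrative assumptions; the patent does not fix a particular metric.

```python
def cosine(a, b):
    # Cosine similarity as an illustrative matching degree between two features.
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

def match(initial_feature, reference_features):
    # Pick the reference feature with the highest matching degree; the degree
    # itself is reused later as the block's target confidence value.
    best = max(reference_features, key=lambda r: cosine(initial_feature, r))
    return best, cosine(initial_feature, best)
```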
In one possible implementation, the target information of each image block further includes a target confidence value; and aiming at any image block, the target confidence value represents the matching degree of the initial characteristic of the image block and the target characteristic of the image block.
In a possible implementation manner, the encryption module is further configured to encrypt the target information of each image block and each image block in at least one image block if the image to be processed includes the target object;
the acquisition module is further used for acquiring a target confidence value of each image block, and the target confidence values represent the matching degrees of the initial features of the image blocks and the target features of the image blocks for any image block; and if the number of the target confidence values which are larger than the preset threshold value in the target confidence values meets the condition, determining that the image to be processed comprises the target object.
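The presence check described above reduces to counting confidence values above a threshold. The threshold and the minimum count are illustrative parameters, since the claim leaves the concrete condition open.

```python
def contains_target_object(confidence_values, threshold=0.8, min_count=3):
    # The image is judged to contain the target object when enough blocks
    # match their target feature with confidence above the threshold.
    return sum(c > threshold for c in confidence_values) >= min_count
```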
In one possible implementation manner, the device further comprises a clustering module,
the acquisition module is further used for acquiring a plurality of sample images, wherein the plurality of sample images comprise at least one image containing a sample object and at least one image not containing the sample object;
the clustering module is used for clustering the plurality of sample images to obtain a target image set, and the target image set is an image set comprising sample objects;
the dividing module is further used for respectively dividing each target image in the target image set to obtain a plurality of sample image blocks;
the characteristic extraction module is further configured to extract a characteristic of each sample image block of the plurality of sample image blocks, and use the characteristic of each sample image block as a plurality of reference characteristics.
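The library-construction steps above can be sketched as follows. The clustering step is abstracted here as a caller-supplied predicate that selects the target image set, and the 2x2 block size and digest feature are illustrative assumptions.

```python
import hashlib
import json

def build_reference_features(sample_images, in_target_set, block=2):
    # 1) Keep the target image set (clustering is abstracted as a predicate).
    # 2) Divide each target image into block x block sample image blocks.
    # 3) Record each sample block's feature as a reference feature.
    references = []
    for image in filter(in_target_set, sample_images):
        for r in range(0, len(image), block):
            for c in range(0, len(image[0]), block):
                tile = [row[c:c + block] for row in image[r:r + block]]
                references.append(
                    hashlib.sha256(json.dumps(tile).encode()).hexdigest()[:8])
    return references
```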
In one possible implementation manner, the device further comprises a sending module,
and the sending module is used for sending the encrypted data to a receiving end of the image to be processed so that the receiving end can verify whether the image to be processed is tampered or not according to the target information contained in the encrypted data.
In one possible implementation, the apparatus further includes:
the characteristic extraction module is also used for extracting first characteristics of the image to be processed, wherein the first characteristics comprise a first number of first characteristic values;
The dividing module is further used for equally dividing the first feature into a second number of sub-features, wherein each sub-feature comprises at least two first feature values;
the calculation module is used for calculating the mean value of at least two first characteristic values contained in each sub-characteristic corresponding to each sub-characteristic, taking the mean value as a second characteristic value, and taking the characteristic formed by a second number of second characteristic values as the second characteristic of the image to be processed;
and the dividing module is further used for dividing the image corresponding to the second characteristic to obtain at least one image block.
In a fourth aspect, an embodiment of the present application further provides an image processing apparatus, including:
the acquisition module is used for acquiring encrypted data corresponding to the image to be processed, and the encrypted data is obtained by processing according to the image processing method of the first aspect;
the decryption module is used for decrypting the encrypted data to obtain information to be verified and at least one first target information, wherein the information to be verified is an image to be verified or at least one image block of the image to be verified;
the acquisition module is further used for acquiring second target information of each image block in at least one image block;
the verification module is used for judging whether the image to be processed is tampered or not according to the verification result if the first target information and the second target information corresponding to each image block are the same; and the verification result is that the image to be processed is tampered if the first target information corresponding to any image block in the at least one image block is different from the second target information.
In one possible implementation, the apparatus further includes:
the acquisition module is also used for acquiring a service processing request;
the calling module is used for calling the image acquisition equipment to acquire a face image in response to the service processing request, wherein the face image is an image to be processed;
and the processing module is used for processing the service corresponding to the service processing request if the verification result indicates that the face image is not tampered, and feeding back information of failure of the request if the verification result indicates that the face image is tampered.
In a fifth aspect, an embodiment of the present application provides a sending device, which includes a processor and a memory, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the image processing method according to the first aspect.
In a sixth aspect, the present application provides a receiving device, which includes a processor and a memory, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the image processing method according to the second aspect.
In a seventh aspect, this application embodiment provides a computer-readable storage medium, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the image processing method according to the first aspect or the second aspect.
In an eighth aspect, the present application provides a computer program product, where the computer program product includes computer program code, and when the computer program code runs on a computer, the computer is caused to implement the image processing method according to the first aspect or the second aspect.
As can be seen from the above description, the technical solution of the embodiment of the present application has the following advantages:
after the image to be processed is obtained, it is divided to obtain at least one image block. Target information is then acquired for each image block in the at least one image block, and the target information and the information to be processed of each image block are encrypted to obtain encrypted data of the image to be processed. The target information of each image block comprises a target feature, and the information to be processed comprises the image to be processed or the at least one image block. In the technical solution of the embodiment of the application, at least the target feature of each image block is used as a security factor and is encrypted together with the image to be processed or the at least one image block. Because the target feature of an image block is derived from the image block itself, the security factor is tightly coupled with the image block and follows no fixed value-taking rule, so it is hard to decipher, and the security of image encryption can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below. It should be understood that other figures may be derived from these figures by those of ordinary skill in the art without inventive exercise.
Fig. 1 is a schematic diagram of an exemplary architecture of an image processing system 10 provided in an embodiment of the present application;
fig. 2A is a schematic diagram illustrating an exemplary method flow of an image processing method 100 according to an embodiment of the present disclosure;
fig. 2B is an exemplary signaling interaction diagram of an image processing method 200 according to an embodiment of the present disclosure;
fig. 2C is a schematic diagram illustrating an exemplary method flow of the image processing method 300 according to an embodiment of the present disclosure;
fig. 3 is a schematic view of an exemplary scene for dividing an image to be processed to obtain an image block according to an embodiment of the present application;
fig. 4A is a schematic view of an exemplary scene of a corresponding relationship between an image block and target information according to an embodiment of the present application;
fig. 4B is another exemplary scene schematic diagram of a corresponding relationship between an image block and target information according to an embodiment of the present application;
FIG. 5 is a flow diagram of an exemplary method for configuring a reference signature library provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an exemplary scene for acquiring a face image according to an embodiment of the present application;
fig. 7 is a signaling interaction diagram of a face image processing method according to an embodiment of the present application;
fig. 8A is a schematic diagram illustrating an exemplary composition of an image processing apparatus 80 according to an embodiment of the present application;
fig. 8B is an exemplary structural diagram of a sending device 81 provided in an embodiment of the present application;
fig. 9A is a schematic diagram illustrating an exemplary composition of an image processing apparatus 90 according to an embodiment of the present application;
fig. 9B is an exemplary structural diagram of a receiving device 91 provided in the embodiment of the present application.
Detailed Description
The following describes technical solutions of the embodiments of the present application with reference to the drawings in the embodiments of the present application.
The terminology used in the following embodiments is for the purpose of describing particular embodiments and is not intended to limit the technical solutions of this application. As used in the specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that although the terms first, second, etc. may be used in the following embodiments to describe a class of objects, the objects should not be limited by these terms; the terms serve only to distinguish particular objects within the class. For example, the following embodiments use first and second to describe target information, but the target information is not limited by these terms: they only distinguish the target information in different scenarios of the image to be processed. Where the following embodiments use first, second, etc. for other classes of objects, the same applies, and this is not repeated here.
The embodiments of the application relate to the technical field of image processing and disclose a method of obtaining features related to an image to be processed based on Artificial Intelligence (AI), and then encrypting and verifying the image with those features as the security factor. Because the security factor is derived from the image to be processed itself, the security of the encrypted image can be improved.
The following describes related art related to embodiments of the present application.
AI is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
AI technology is a comprehensive discipline covering a wide range of fields, at both the hardware and software levels. Basic AI technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big-data processing, operating/interaction systems, and mechatronics. AI software technologies mainly include computer vision (CV), speech technology, natural language processing (NLP), and machine learning (ML)/deep learning.
The technical solution mainly relates to CV technology. CV is a science that studies how to make machines "see": it uses cameras and computers in place of human eyes to identify, track, and measure targets, and further processes the captured images so that they are more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of obtaining information from images or multidimensional data. CV technology generally includes image processing (including image encryption), image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric identification technologies such as face recognition.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, transfer learning, inductive learning, and teaching-based learning.
The embodiments of the application can be applied to services such as unlocking a terminal device with image features, logging in to an application account, and making in-application payments. Correspondingly, the image processing technology in the embodiments includes encrypting the image to be used. After a service request is received, whether to trigger the related service is determined by verifying whether the image contained in the service request meets the conditions. Images in the embodiments of the application may include a user's face image, among others.
Referring to fig. 1, fig. 1 illustrates an exemplary image processing system 10. The image processing system 10 includes: a terminal device 11 and a server 12.
The terminal device 11 may be implemented as an electronic device such as a mobile phone, a tablet computer, a game console, a wearable device, or a PC (Personal Computer). Optionally, the terminal device 11 supports capturing an image of a user, such as a face image, and supports unlocking the terminal device 11 using the captured image as a security-verification image. Optionally, the terminal device 11 supports encrypting the captured image. Optionally, the terminal device 11 may further send the encrypted data to the server 12, so that the server 12 can verify whether the image was tampered with during the encryption process. Optionally, an application is installed and runs in the terminal device 11. The application may be an independent application that runs directly on the operating system without depending on other applications, such as mobile phone APPs (Applications) including instant messaging APPs, bank APPs, and payment APPs. Optionally, some applications installed and running in the terminal device 11 support secure login by the user through a verification image. For example, a payment APP installed and running in the terminal device 11 supports secure payment by the user through a verification image.
The server 12 may be a server or a cloud platform that provides computing resources for the terminal device 11, or a server or a cloud platform that provides computing resources for the application program. Taking an application as an example, the server 12 may be configured to maintain operation data of the application, process logic related to configuration and parameters of the application, and provide cloud services such as database, computation, storage, network service, security service, big data and artificial intelligence platform for the operation of the application. For example, the server 12 may maintain an image encryption algorithm model, data related to image encryption and authentication, execution logic for image encryption and authentication, and so on. The server 12 may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers.
Illustratively, the terminal device 11 and the server 12 can communicate with each other through the network 13. The network 13 may be a wired network or a wireless network, which is not limited in this embodiment of the present application. Alternatively, after the terminal device 11 acquires the face image of the user, the features related to the face image may be transmitted to the server 12 via the network 13, and the data serving as the security factor may be received from the server 12 via the network 13. Optionally, the terminal device 11 may also send encrypted data containing the image to be authenticated to the server 12 via the network 13. The server 12 verifies whether the image to be verified is tampered with by the security factor, and further, sends the verification result to the terminal device 11 through the network 13.
It is understood that the terminal device 11 and the server 12 in the image processing system 10 are schematic diagrams of logical functional layers, and in practical implementation, at least one terminal device entity and at least one server device entity may be included in the image processing system. And are not limited herein.
The "security factor" referred to in the embodiments of the present application refers to a parameter for being encrypted together with an image to be encrypted, and may include image-related features.
The "image-related feature" referred to in the embodiments of the present application includes features of a plurality of image blocks obtained by dividing the image. Optionally, the features of each image block in the plurality of image blocks may be features of sample objects included in sample image blocks corresponding to the image block, where the sample image blocks are obtained by dividing a plurality of sample images, and the sample images include the sample objects. Optionally, the feature of each image block in the plurality of image blocks may be a feature of a target object included in the image block, where the target object is, for example, a human face.
A common image encryption method in this field encrypts an image with a timestamp and a counter as the security factors. Both are parameters that follow fixed rules; once their value-taking rules are cracked, the encrypted image is at risk of being decoded, and the image is at risk of being tampered with during the encryption process. Using such data as the security factor for image encryption therefore carries security risks.
Based on this, an embodiment of the present application provides an image processing method and a related device, in the image processing method of the embodiment of the present application, after an image to be processed is obtained, a target feature of each image block in at least one image block corresponding to the image to be processed is used as a security factor, and the target feature of each image block and the image to be processed or at least one image block are encrypted. Therefore, the security factor of the image to be processed is tightly coupled with the characteristics of the image to be processed, so that the security factor is not easy to decipher, and the security of the image to be processed can be improved.
The technical solutions of the embodiments of the present application and the technical effects produced by the technical solutions of the present application will be described below through descriptions of several exemplary embodiments.
Referring to fig. 2A, an embodiment of the present application provides an image processing method 100 (hereinafter referred to as the method 100). Alternatively, the method 100 is implemented, for example, as an image encryption method. The present embodiment takes as an example that the image processing method is applied to a terminal device, which may be the terminal device 11 shown in fig. 1. The method 100 includes the steps of:
in step S101, the terminal device acquires an image to be processed.
Alternatively, the image to be processed may be a face image of the user.
In actual implementation, the terminal device may call an image acquisition device to capture a face video recorded by the user, where the face video comprises a set of face images of the user at various angles. The image to be processed may be any face image in this set. The image acquisition device is, for example, a camera.
In some embodiments, the image capture device may be a device integrated into the terminal device. In other embodiments, the image capturing device may be a device independent from the terminal device, and is connected to the terminal device through an interface of the terminal device to transmit the captured image to the terminal device.
In step S102, the terminal device divides the image to be processed to obtain at least one image block.
In some embodiments, the terminal device divides the image to be processed into at least two consecutive and non-overlapping image blocks (blocks) according to a preset resolution, so that the size of each image block is the preset resolution. The preset resolution can be flexibly set according to requirements, for example, the preset resolution is 60 pixels (P) × 60P. And are not limited herein.
For example, referring to fig. 3, fig. 3 illustrates an exemplary scene diagram of dividing the to-be-processed image into at least one image block. The size of the to-be-processed image 30 in fig. 3 is, for example, 5400P, and the terminal device divides the to-be-processed image 30 according to the preset resolution of 60P × 60P to obtain 9 consecutive and non-overlapping image blocks. Each solid-line box illustrated in fig. 3 represents one image block.
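As an illustrative sketch (not part of the patent), the division in step S102 can be expressed over a plain 2-D pixel array; the 60 × 60 block size is the preset resolution mentioned above, and the function name is hypothetical.

```python
def divide_into_blocks(image, block_h=60, block_w=60):
    """Divide an image (a 2-D list of pixel values) into consecutive,
    non-overlapping blocks of size block_h x block_w, as in step S102."""
    h, w = len(image), len(image[0])
    blocks = []
    # Step through the image in block-sized strides; any remainder that
    # does not fill a whole block is ignored in this sketch.
    for top in range(0, h - h % block_h, block_h):
        for left in range(0, w - w % block_w, block_w):
            blocks.append([row[left:left + block_w]
                           for row in image[top:top + block_h]])
    return blocks

# A 180 x 180 image divided at 60 x 60 yields 9 blocks, as in fig. 3.
img = [[0] * 180 for _ in range(180)]
print(len(divide_into_blocks(img)))  # 9
```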
In a possible implementation manner, the features of the image blocks obtained by directly dividing the image to be processed may have a higher dimension than the features actually required in the technical scenario. Based on this, in order to reduce the amount of calculation and make the features required by the technical scenario more prominent, the terminal device may perform dimension reduction on the features of the image to be processed in the process of dividing the image to be processed to obtain the at least one image block.
Optionally, the terminal device may extract a first feature of the image to be processed, where the first feature includes a first number of first feature values. The terminal device then equally divides the first feature into a second number of sub-features, each sub-feature including at least two first feature values. For each sub-feature, the terminal device calculates the average value of the at least two first feature values contained in the sub-feature and takes the average value as a second feature value; the feature formed by the second number of second feature values is taken as the second feature of the image to be processed. In this way, the terminal device reduces the feature from the first number of dimensions to the second number of dimensions. Then, the terminal device divides the image corresponding to the second feature to obtain the at least one image block.
Optionally, the terminal device may perform the foregoing dimension reduction operation by using a dimension reduction algorithm. The dimension reduction algorithm may be implemented, for example, as a piece-wise aggregation approximation (PAA) to perform dimension reduction processing on the features of the image to be processed.
Taking PAA as an example, the first feature is denoted by c = {c1, c2, …, cn}, where cβ refers to the β-th feature value in c, β = 1, 2, 3, …, n. The second feature is expressed as c̄ = {c̄1, c̄2, …, c̄m}, where c̄α refers to the α-th feature value in c̄, α = 1, 2, 3, …, m. n represents the first number, m represents the second number, and m ≤ n. The second feature values may be calculated, for example, according to the formula

c̄α = (m/n) · Σ cβ, where the sum runs over β = (n/m)(α − 1) + 1, …, (n/m)α.
It can be seen that the PAA reduces the n-dimensional feature to an m-dimensional feature by calculating the average value of each segment of the feature, which minimizes the distortion introduced by the dimension reduction process. In addition, in this example, the value of m may be flexibly set according to requirements, so that to-be-processed images of different sizes can be adapted to the dimension reduction processing while keeping the distortion of the to-be-processed image to a minimum, providing good flexibility.
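For illustration only, the segment-averaging described above can be sketched as follows; this is a minimal sketch assuming m evenly divides n, and the function name is not from the patent.

```python
def paa(c, m):
    """Piecewise aggregate approximation: reduce an n-dimensional
    feature c to m dimensions by averaging equal-length segments."""
    n = len(c)
    assert n % m == 0, "this sketch assumes m evenly divides n"
    seg = n // m  # length n/m of each segment
    # Each reduced value is the mean of one segment of the original feature.
    return [sum(c[a * seg:(a + 1) * seg]) / seg for a in range(m)]

feature = [1.0, 3.0, 2.0, 4.0, 5.0, 7.0]   # n = 6
print(paa(feature, 3))  # [2.0, 3.0, 6.0]
```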
Alternatively, the first feature and the second feature may be implemented as Histogram of Oriented Gradients (hog) features.
In other embodiments, the terminal device may not divide the image to be processed, but may perform the subsequent encryption operation by using the image to be processed as an image block.
In step S103, the terminal device obtains object information of each image block of the at least one image block, where the object information of each image block includes an object feature.
In some embodiments, the target feature of each image block is the reference feature, among a preset plurality of reference features, that has the highest matching degree with the initial feature of the corresponding image block. The initial feature of an image block is a feature extracted from that image block. The plurality of reference features are derived based on image features of a plurality of sample image blocks, and the plurality of sample image blocks are obtained by dividing a plurality of sample images, the sample images including sample objects. Alternatively, the sample object may be a face image. In other embodiments, the target feature of each image block is a feature of a target object included in the corresponding image block, and the target object is, for example, a human face.
Optionally, the following describes a manner of obtaining the multiple reference features by taking an example that the terminal device obtains the multiple reference features through deep learning. The terminal device may acquire a plurality of sample images including at least one image containing a sample object and at least one image not containing a sample object. Then, the terminal device clusters the plurality of sample images to obtain a target image set, wherein the target image set is an image set including the sample object. Furthermore, the terminal device divides each target image in the target image set to obtain a plurality of sample image blocks, extracts the features of each sample image block in the plurality of sample image blocks, and takes the features of each sample image block as a plurality of reference features.
Illustratively, the terminal device may obtain the target image set by performing unsupervised clustering on a plurality of sample images. Unsupervised clustering is for example implemented as k-means (kmeans) clustering. The embodiments of the present application do not limit this.
Optionally, taking an example that a plurality of reference features are stored in the terminal device, the terminal device obtains the target feature of each image block in at least one image block by: the terminal equipment extracts the initial features of each image block in the at least one image block. And then, for each image block, determining the matching degree of the initial features of the image block and a plurality of reference features, and taking the reference feature with the highest matching degree as the target feature of the image block.
For example, referring to fig. 3 again, the initial feature of the i-th image block among the 9 image blocks in fig. 3 is denoted ki, where ki belongs to {k1, k2, …, k9}. The j-th reference feature is denoted xj, where xj belongs to {x1, x2, …, xn}. After obtaining ki, the terminal device determines the matching degree pj between ki and each reference feature xj, where pj belongs to {p1, p2, …, pn}. Further, for ki, the terminal device selects the reference feature x corresponding to the largest pj as the target feature of the i-th image block, denoted x'i, where x'i belongs to {x'1, x'2, …, x'9} and also to {x1, x2, …, xn}. In this example, x'i represents the target information of the i-th image block among the 9 image blocks.
Alternatively, the initial features and the target features of the embodiments of the present application may be implemented as hog features.
It should be noted that, for each image block, the matching degree between the initial feature of the image block and any reference feature may be characterized by a distance, such as a Euclidean distance or a cosine distance. The smaller the Euclidean distance, the more matched the initial feature is to the corresponding reference feature; the larger the Euclidean distance, the less matched they are. Likewise, the smaller the cosine distance, the more matched the initial feature is to the corresponding reference feature; the larger the cosine distance, the less matched they are. In another possible implementation manner, the matching degree between the initial feature of the image block and any reference feature may be characterized by a confidence value. The greater the confidence value, the more matched the initial feature is to the corresponding reference feature; the smaller the confidence value, the less matched they are.
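A minimal sketch of the matching step, assuming the Euclidean distance is used; the distance-to-confidence mapping here is purely illustrative and is not specified in the patent.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(initial, references):
    """Return (index, confidence) of the reference feature that best
    matches the initial feature. A smaller distance means a better
    match, so confidence decreases with distance (illustrative mapping)."""
    distances = [euclidean(initial, r) for r in references]
    confidences = [1.0 / (1.0 + d) for d in distances]
    j = max(range(len(references)), key=lambda i: confidences[i])
    return j, confidences[j]

refs = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
idx, conf = best_match([0.9, 1.1], refs)
print(idx)  # 1
```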
Optionally, the target information of each image block may further include a target confidence value, and the target confidence value represents a matching degree between the initial feature of the image block and the target feature of the image block.
For example, referring to fig. 3 again, the target information of the i-th image block among the 9 image blocks illustrated in fig. 3 is (x'i, p'i), where p'i refers to the confidence value between ki and x'i, that is, the maximum among the confidence values between ki and each reference feature xj. (x'i, p'i) belongs to {(x'1, p'1), (x'2, p'2), …, (x'9, p'9)}.
In other embodiments, after obtaining the plurality of reference features, the terminal device may send the plurality of reference features to the server, so that the server performs operations such as image verification according to the plurality of reference features.
In step S104, the terminal device encrypts the target information and the to-be-processed information of each image block in the at least one image block to obtain encrypted data of the to-be-processed image.
Optionally, the terminal device encrypts the target information and the to-be-processed information of each image block by using a pre-deployed encryption algorithm. The encryption algorithm includes a symmetric encryption algorithm or an asymmetric encryption algorithm, and the like, which is not limited in this embodiment of the application.
In some embodiments, the information to be processed is implemented as an image to be processed. In this example, the information to be encrypted includes the image to be processed, the target information of each image block, and other security factors, and the encryption format is, for example, [ (the image to be processed) (the target information of each image block) (the other security factors) ].
In other embodiments, the information to be processed is implemented as at least one image block. In the present example, the information to be encrypted includes each image block, object information of each image block, and other security factors, and the encryption format is, for example, [ (each image block) (object information of each image block) (other security factors) ], or [ (image block 1) (object information of image block 1) (image block 2) (object information of image block 2) … … (image block m) (object information of image block m) (other security factors) ], m being an integer greater than 1.
Optionally, the other security factors may include parameters such as time stamps and counters.
It is to be understood that the foregoing encryption format is only a schematic description, and is intended to illustrate the content of the information to be encrypted, and does not limit the embodiments of the present application. In actual implementation, the combination mode and the position relationship of each item of information to be encrypted may be in other forms, which is not limited in the embodiment of the present application.
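Purely for illustration, the second encryption format above might be assembled as follows. The JSON layout, the field names, and the XOR stand-in cipher are all assumptions; the patent leaves the concrete symmetric or asymmetric algorithm open.

```python
import itertools
import json
import time

def build_payload(blocks, target_infos, counter):
    """Assemble the information to be encrypted in the second format:
    [(image block 1)(target info 1) ... (image block m)(target info m)
    (other security factors)]."""
    items = [{"block": block, "target_info": info}
             for block, info in zip(blocks, target_infos)]
    # Other security factors: timestamp and counter, per the patent text.
    items.append({"timestamp": int(time.time()), "counter": counter})
    return json.dumps(items).encode()

def toy_stream_cipher(data, key):
    # Stand-in for the pre-deployed encryption algorithm; a real system
    # would use e.g. AES, not this XOR toy.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

payload = build_payload([[1, 2]], [("x1", 0.97)], counter=1)
ciphertext = toy_stream_cipher(payload, b"secret")
assert toy_stream_cipher(ciphertext, b"secret") == payload  # round-trip
```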
It should be noted that in some possible embodiments, the image acquired by the terminal device may not be an image containing the target object, but an image containing other objects, for example, the image acquired by the terminal device is not a human face image, but an animal image. In view of this, in order to ensure that the encrypted image is an image including the target object, the terminal device may encrypt the target information of each image block of the at least one image block and each image block in a scene where it is determined that the target object is included in the image to be processed. If the terminal device determines that the target object is not included in the image to be processed, the terminal device may not perform the encryption operation in step S104, and display a reminding message to the user to indicate to the user that the acquired image is illegal (i.e., does not include the target object).
For example, the terminal device may determine whether the target object is included in the image to be processed according to the target confidence values of the respective image blocks. Optionally, if the number of target confidence values greater than the predetermined threshold in each target confidence value satisfies a condition, it is determined that the image to be processed includes the target object. Otherwise, if the number of the target confidence values larger than the preset threshold value in the target confidence values does not meet the condition, determining that the target object is not included in the image to be processed. The predetermined threshold is, for example, 0.9, which is not limited in this embodiment of the application.
In some embodiments, the number of target confidence values that are greater than the predetermined threshold in each target confidence value satisfies the condition, which may be implemented as: the number of target confidence values greater than the predetermined threshold is greater than the first preset value. The first preset value is related to the total number of the at least one image block, for example, the total number of the at least one image block is 9, the first preset value may be 6, for example, the total number of the at least one image block is 100, and the first preset value may be 80, for example. In other embodiments, the number of target confidence values that are greater than the predetermined threshold in each target confidence value satisfies the condition, which may be implemented as: the number of target confidence values greater than the predetermined threshold value, in proportion to the total number of the at least one image block, reaches a second preset value. The second preset value may be, for example, 0.8, which is not limited in this embodiment of the application.
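The condition on the target confidence values can be sketched as follows, using the example values above (predetermined threshold 0.9, second preset value 0.8); the function name is hypothetical.

```python
def contains_target_object(target_confidences, threshold=0.9, ratio=0.8):
    """Decide whether the image to be processed contains the target
    object: the proportion of image blocks whose target confidence value
    exceeds the threshold must reach the second preset value (ratio)."""
    above = sum(1 for p in target_confidences if p > threshold)
    return above / len(target_confidences) >= ratio

# 8 of 9 blocks above 0.9 -> proportion ~0.89 >= 0.8
print(contains_target_object([0.95] * 8 + [0.5]))      # True
# 5 of 9 blocks above 0.9 -> proportion ~0.56 < 0.8
print(contains_target_object([0.95] * 5 + [0.5] * 4))  # False
```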
According to the description of the foregoing embodiments, the target confidence value characterizes the matching degree between the initial feature of an image block and the target feature of that image block. The target feature is a reference feature derived from sample image blocks containing sample objects, and the sample object indicates the target object. That is, the target confidence value may characterize the possibility that the image block contains the target object, and the higher the target confidence value, the higher that possibility. As can be seen, if the number of target confidence values greater than the predetermined threshold satisfies the condition, this indicates that the number of image blocks containing the target object among the at least one image block satisfies the condition, and further indicates that the corresponding to-be-processed image includes the target object.
By adopting the implementation mode, the terminal equipment can screen out legal images (namely, images containing target objects), and then the legal images and corresponding target information are encrypted. In this way, the overhead of cryptographic calculation can be reduced.
In summary, in the embodiment of the present application, after acquiring the to-be-processed image, the terminal device obtains at least one image block by dividing the to-be-processed image. Then, it acquires the target information of each image block of the at least one image block, and further encrypts the target information and the to-be-processed information of each image block to obtain encrypted data of the to-be-processed image. The target information of each image block includes a target feature, and the to-be-processed information includes the to-be-processed image or the at least one image block. It can be seen that in the technical solution of the embodiment of the present application, at least the target feature of each image block is used as a security factor and is encrypted together with the to-be-processed image or the at least one image block. Because the target feature of an image block is derived from that image block, the security factor is tightly coupled with the image block and follows no fixed value-taking rule, so it is not easy to decipher, and the security of image encryption can be improved.
The method 100 is but one implementation of an embodiment of the present application. In another embodiment of the present application, the terminal device and the server may perform an encryption operation on the image to be processed through signaling interaction. The following describes an image processing method according to an embodiment of the present application, with reference to a signaling interaction process between a terminal device and a server.
Referring to fig. 2B, fig. 2B illustrates an exemplary image processing method 200 (hereinafter referred to as method 200). Alternatively, the method 200 is implemented, for example, as an image encryption method. In this embodiment, the server stores the plurality of reference features. The server may be the server 12 shown in fig. 1. The method 200 includes the steps of:
in step S201, the terminal device acquires an image to be processed.
In step S202, the terminal device divides the image to be processed into at least one image block.
For details of the implementation manners of step S201 and step S202, reference may be made to the implementation manners of step S101 and step S102 in the method 100, and details are not described here again.
In step S203, the terminal device acquires an initial feature of each image block of the at least one image block.
The implementation of the initial feature of each image block is described in detail in the method 100, and is not described herein again.
In step S204, the terminal device sends the initial features of each image block of the at least one image block to the server.
In connection with the image processing system 10 illustrated in fig. 1, the terminal device may send the initial characteristics of the respective image blocks to the server over the network.
In step S205, the server obtains a target confidence value corresponding to each image block in the at least one image block according to the initial features of each image block in the at least one image block.
Optionally, for the initial feature of each image block, the server may determine confidence values of the initial feature and a plurality of reference features, and use the maximum confidence value as the target confidence value of the corresponding image block.
In step S206, if it is determined that the target object is included in the to-be-processed image, the server sends target information of each image block to the terminal device.
Optionally, the server may determine whether the image to be processed includes the target object according to the target confidence values of the respective image blocks. For example, the server may determine whether the target object is included in the image to be processed according to whether the number of target confidence values greater than a predetermined threshold among the respective target confidence values satisfies a condition.
The implementation manner in which the server determines whether the image to be processed includes the target object according to the number of target confidence values greater than the predetermined threshold is similar to the implementation manner in which the terminal device makes this determination; for details, see the relevant description in the method 100, which is not repeated herein.
And if the image to be processed comprises the target object, the server sends the target information of each image block to the terminal equipment. If it is determined that the image to be processed does not include the target object, the server may send a reminding message to the terminal device, so that the terminal device presents information that the acquired image is illegal (i.e., does not include the target object) to the user.
It should be noted that, in some embodiments, the target information related to this example includes a target feature corresponding to each image block, and the target feature refers to a reference feature corresponding to a target confidence value of a corresponding image block. In other embodiments, the object information in accordance with this example includes an object feature and an object confidence value corresponding to each image block.
In step S207, the terminal device encrypts the target information and the to-be-processed information of each image block in the at least one image block to obtain encrypted data of the to-be-processed image.
For details of the implementation manner of step S207, reference may be made to the implementation manner of step S104 in the method 100, and details are not described here again.
It should be noted that the method 200 is only a schematic illustration of the interaction between the terminal device and the server. In another implementation manner, the terminal device may send the image to be processed to the server after acquiring the image to be processed, so that the server divides the image to be processed, and acquires the initial features of each image block. The embodiment of the present application does not detail this implementation process.
By adopting the implementation mode, the server executes part of the calculation process in the image encryption process, thereby reducing the expense of the terminal equipment.
Optionally, in the process of encrypting the image to be processed by the terminal device, the image to be processed may also be tampered with. Based on this, on the basis of the method 100 or the method 200, the embodiment of the present application further includes a process of image verification. Illustratively, the terminal device (i.e., the sending end) encrypts the image to be processed and sends the encrypted data to the server (i.e., the receiving end), so that the receiving end can verify whether the image to be processed has been tampered with according to the target information contained in the encrypted data.
Referring to fig. 2C, an embodiment of the present application provides an image processing method 300 (hereinafter referred to as the method 300). Alternatively, method 300 is implemented, for example, as an image verification method. The embodiment takes the application of the image authentication method to a server, which may be the server 12 shown in fig. 1 as an example. The method 300 includes the steps of:
in step S301, the server acquires encrypted data corresponding to the image to be processed.
Optionally, the encrypted data is obtained by processing the image to be processed according to the method 100 or the method 200.
In step S302, the server decrypts the encrypted data to obtain the information to be verified and the at least one first target information.
The server side deploys a decryption algorithm, and the decryption algorithm corresponds to an encryption algorithm deployed on the terminal equipment side. For example, if the encryption algorithm deployed on the terminal device side is an asymmetric encryption algorithm, the decryption algorithm of the corresponding asymmetric encryption algorithm is deployed on the server side, and the key associated with the corresponding asymmetric encryption algorithm is pre-stored.
Optionally, the information to be verified is the image to be verified or at least one image block of the image to be verified. With reference to the description of the related embodiment in the method 100 or the method 200, if the terminal device encrypts the image to be processed, in this example, the information to be verified is the image to be verified; if the terminal device encrypts at least one image block of the image to be processed, in this example, the information to be verified is at least one image block of the image to be verified.
The at least one piece of first target information corresponds one-to-one to the at least one image block of the image to be verified and is obtained in the process in which the terminal device performs encryption. Referring to fig. 4A, fig. 4A illustrates an exemplary correspondence relationship between at least one image block and at least one piece of first target information. As shown in fig. 4A, the image to be verified 40 includes 9 image blocks, each solid-line square in fig. 4A represents one image block, the identifier "d1i" in each solid-line square represents the first target information of the corresponding image block, and i is any integer from 1 to 9.
In step S303, the server acquires second target information of each image block of the at least one image block. If the first target information and the second target information corresponding to each image block are the same, executing step S304, and verifying that the to-be-processed image is not tampered; if the first target information and the second target information corresponding to any image block in at least one image block are different, step S305 is executed, and the verification result is that the image to be processed is tampered.
When the information to be verified is an image to be verified, the server divides the image to be verified to obtain at least one image block of the image to be verified, and then extracts the initial features of each image block, and obtains second target information of each image block according to the initial features and the multiple reference features of each image block.
It should be noted that the resolution of the image blocks obtained by dividing the image to be processed by the server is the same as the resolution of the image blocks obtained by dividing the image to be processed by the terminal device, for example, both are 60P × 60P. In this way, the content corresponding to the initial feature of each image block in the verification process can be ensured to correspond to the content corresponding to the initial feature of the image block in the corresponding area in the encryption process, so that the accuracy of the verification result can be ensured.
When the information to be verified is at least one image block of the image to be verified, the server extracts the initial features of each image block, and obtains second target information of each image block according to the initial features and the multiple reference features of each image block.
Optionally, the implementation process of extracting the initial features of each image block by the server and obtaining the second target information of each image block according to the initial features and the multiple reference features of each image block may refer to an embodiment of a relevant implementation process in the method 100 or the method 200, and details are not repeated here.
The at least one piece of second target information corresponds one-to-one to the at least one image block of the image to be verified, and correspondingly, the at least one piece of second target information corresponds one-to-one to the at least one piece of first target information. Referring to fig. 4B, the identifier "d2i" in each solid-line square in fig. 4B denotes the second target information of the corresponding image block. That is, each image block of the image to be verified 40 corresponds to both "d1i" and "d2i".
For any image block, the first target information and the second target information of the image block are both obtained according to the initial feature of the image block, and the initial feature is extracted based on the content of the image block. Based on this, if the first target information and the second target information corresponding to the image block are the same, i.e., d1i = d2i, the initial feature of the image block in the encryption process is the same as the initial feature of the image block in the verification process; that is, the content of the image block in the encryption process is the same as its content in the verification process, which proves that the content of the image block has not been tampered with. Similarly, if the first target information and the second target information corresponding to the image block are different, i.e., d1i ≠ d2i, the initial feature of the image block in the encryption process differs from that in the verification process; that is, the content of the image block in the encryption process differs from its content in the verification process, which proves that the content of the image block has been tampered with after encryption. Based on this, if the first target information and the second target information corresponding to each image block are the same, the server obtains the verification result that the image to be processed has not been tampered with; if the first target information and the second target information corresponding to any image block differ, the server obtains the verification result that the image to be processed has been tampered with.
Furthermore, it is noted that if the first target information is implemented as a first target feature, the second target information is implemented as a second target feature. In this case, the first target information being the same as the second target information means that the first target feature is the same as the second target feature, and the two being different means that the first target feature differs from the second target feature. If the first target information is implemented as a first target feature and a first target confidence value, the second target information is implemented as a second target feature and a second target confidence value. In this case, the first target information being the same as the second target information means that the first target feature is the same as the second target feature and the first target confidence value is the same as the second target confidence value; the two being different means that the first target feature differs from the second target feature or that the first target confidence value differs from the second target confidence value.
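A minimal sketch of the block-by-block comparison in steps S303 to S305; the names are illustrative, and the target information here is assumed to be a (target feature, target confidence value) pair.

```python
def verify_blocks(first_infos, second_infos):
    """Return True (not tampered) iff the first and second target
    information match for every image block; any mismatch means the
    corresponding block content changed after encryption."""
    return all(d1 == d2 for d1, d2 in zip(first_infos, second_infos))

d1 = [("x3", 0.95), ("x7", 0.91)]
print(verify_blocks(d1, [("x3", 0.95), ("x7", 0.91)]))  # True
print(verify_blocks(d1, [("x3", 0.95), ("x2", 0.88)]))  # False
```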
Therefore, by adopting this implementation, after obtaining the encrypted data corresponding to the image to be processed, the server obtains the second target information of the at least one image block of the image to be verified, and then verifies whether the image to be processed has been tampered with by comparing whether the first target information and the second target information corresponding to each image block are the same. The first target information is the target information corresponding to the at least one image block of the image to be processed obtained in the process of encrypting the image to be processed. Because the target information is associated with the features of the image block and is used as the security factor for encryption and verification, the security factor is tightly coupled with the image features and follows no fixed value-taking rule, so it is not easy to decipher, and the security of image encryption can be improved.
Optionally, after acquiring a service processing request, the server generally responds to the request and invokes the image acquisition device of the terminal device to collect the image to be processed. On this basis, if the verification result is that the image to be processed has not been tampered with, the server processes the service corresponding to the service processing request; if the verification result is that the image to be processed has been tampered with, the server may feed back request-failure information to the terminal device. Optionally, the image to be processed is, for example, a face image, and the service processing request may include a face-brushing payment request, a face-brushing unlocking request, a face-brushing login request, and the like.
The foregoing methods 100 to 300 all describe the technical solutions of the present application from the perspective of the operations performed by each device. The following describes an image processing method according to an embodiment of the present application with reference to an application scenario.
Illustratively, the following describes a face-brushing payment scenario. In this example, the image to be processed is a face image, the terminal device is a smartphone, and the server is a cloud server.
Before the face-brushing payment in this example is performed, a reference feature library of face sample images may be configured. After the smartphone receives the target face image input by the user, the target face image is encrypted based on the reference feature library. After obtaining the face-brushing payment request, the cloud server verifies, based on the reference feature library, whether the face image corresponding to the encrypted data is the target face image, and determines whether to process the payment service based on the verification result.
Referring to fig. 5, fig. 5 is a flow chart illustrating a method for configuring a reference feature library of face sample images. Optionally, a plurality of sample images are obtained, including at least one image that contains a face sample and at least one image that does not. A HOG (histogram of oriented gradients) feature is extracted for each of the sample images. Then, k-means clustering is performed on the plurality of sample images based on the HOG features to obtain a face sample image set. Further, each image in the face sample image set is divided to obtain 5000 image blocks containing face sample features. HOG features are extracted from the 5000 image blocks to obtain 5000 reference features, and the 5000 reference features form the reference feature library.
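As an illustration of the library-construction flow above, the following sketch uses a simplified gradient-histogram descriptor and a minimal k-means in place of a full HOG implementation and a library clustering routine; the descriptor details, block grid, and cluster-selection rule are assumptions for illustration, not specified by the patent.

```python
import numpy as np

def toy_feature(img, cell=8, bins=9):
    """Simplified HOG-style descriptor (stand-in for a full HOG):
    per-cell, magnitude-weighted histograms of gradient orientations."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    h, w = img.shape
    hists = [
        np.histogram(ang[r:r + cell, c:c + cell], bins=bins,
                     range=(0.0, np.pi),
                     weights=mag[r:r + cell, c:c + cell])[0]
        for r in range(0, h - cell + 1, cell)
        for c in range(0, w - cell + 1, cell)
    ]
    v = np.concatenate(hists)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means (stand-in for a library implementation)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def build_reference_library(images, grid=(4, 4)):
    """Cluster sample images into two groups by their descriptors, keep
    the larger cluster (assumed here to hold the face samples), split each
    kept image into grid blocks, and store one descriptor per block."""
    feats = np.stack([toy_feature(img) for img in images])
    labels = kmeans(feats, k=2)
    face_label = np.bincount(labels).argmax()  # placeholder selection rule
    library = []
    for img, lab in zip(images, labels):
        if lab != face_label:
            continue
        bh, bw = img.shape[0] // grid[0], img.shape[1] // grid[1]
        for r in range(grid[0]):
            for c in range(grid[1]):
                library.append(toy_feature(
                    img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]))
    return np.stack(library)
```

In practice the clustering keyed on real HOG features separates face from non-face samples; the larger-cluster rule here is only a placeholder for that selection.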
Optionally, the process illustrated in fig. 5 may be executed on the smartphone side or on the cloud server side. If it is executed on the smartphone side, the smartphone may send the resulting reference feature library to the cloud server.
Further, the smartphone receives the user's payment operation and sends a face-brushing payment request to the cloud server. The cloud server triggers the smartphone to collect the target face image of the user. A scenario in which the smartphone collects the target face image may be as shown in fig. 6, where the face image illustrated in fig. 6 is the target face image. The smartphone and the cloud server then encrypt and verify the target face image.
Referring to fig. 7, fig. 7 is a signaling interaction diagram illustrating a face image processing method. After collecting the target face image, the smartphone obtains a first HOG feature of the target face image. The smartphone performs piecewise aggregate approximation (PAA) on the first HOG feature to obtain a second HOG feature of the target face image, divides the image corresponding to the second HOG feature into a plurality of image blocks, extracts the HOG feature of each image block, and then sends the HOG features of the image blocks to the cloud server.
From the 5000 reference features, the cloud server obtains, for each image block, the reference feature with the highest degree of matching with that block's HOG feature, together with a confidence value for the match. Then, for example, it is checked whether the proportion of confidence values greater than 0.95 reaches 90%. The cloud server then sends, for each image block, the best-matching reference feature and the corresponding confidence value back to the smartphone.
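The matching step above can be sketched as follows. Cosine similarity serves here both as the matching measure and as the confidence value, which is an assumption — the scenario fixes the 0.95 / 90% example values but not a particular similarity measure.

```python
import numpy as np

def match_to_library(block_feats, library):
    """For each image-block feature, return the index of the best-matching
    reference feature and the match confidence (cosine similarity)."""
    B = block_feats / np.maximum(
        np.linalg.norm(block_feats, axis=1, keepdims=True), 1e-12)
    L = library / np.maximum(
        np.linalg.norm(library, axis=1, keepdims=True), 1e-12)
    sims = B @ L.T                       # pairwise cosine similarities
    best = sims.argmax(axis=1)           # best reference per block
    conf = sims[np.arange(len(B)), best]
    return best, conf

def contains_target(conf, threshold=0.95, required_ratio=0.9):
    """Decide whether the image contains the target object: the proportion
    of confidence values above `threshold` must reach `required_ratio`
    (0.95 and 90% are the example values from the scenario above)."""
    return np.mean(conf > threshold) >= required_ratio
```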
The smartphone encrypts the collected image together with the received reference features and the corresponding confidence values to obtain encrypted data, and then sends the encrypted data to the cloud server.
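The encryption step can be sketched as below. The patent does not name a cipher, so this toy sketch merely bundles the image with the per-block reference-feature identifiers and confidence values and XOR-encrypts the bundle with a SHA-256-derived keystream; a real deployment would use an authenticated cipher such as AES-GCM.

```python
import hashlib
import json

def _keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from iterated SHA-256 (illustration only --
    not a substitute for a vetted cipher)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_payload(key: bytes, image_bytes: bytes,
                    ref_ids, confidences) -> bytes:
    """Bundle the image with its per-block target information (reference
    feature ids and confidence values) and XOR-encrypt the bundle."""
    plain = json.dumps({
        "image": image_bytes.hex(),
        "ref_ids": list(ref_ids),
        "conf": list(confidences),
    }).encode()
    return bytes(a ^ b for a, b in zip(plain, _keystream(key, len(plain))))

def decrypt_payload(key: bytes, cipher: bytes):
    """Invert encrypt_payload, recovering image bytes and target info."""
    plain = bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher))))
    obj = json.loads(plain)
    return bytes.fromhex(obj["image"]), obj["ref_ids"], obj["conf"]
```

The essential point mirrored from the text is that the target information travels inside the same ciphertext as the image, so tampering with either is detectable after decryption.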
The cloud server decrypts the encrypted data to obtain the image to be verified as well as the at least one reference feature and at least one confidence value used during encryption. The image to be verified is then divided to obtain at least one image block. The cloud server obtains the HOG feature of each image block and, according to these HOG features, retrieves from the 5000 reference features the reference feature with the highest matching degree for each image block. The reference feature obtained before encryption for each image block is then compared with the newly obtained reference feature. If the two reference features are the same for every image block, the cloud server determines that the target face image was not tampered with during encryption and triggers the payment operation. If the two reference features differ for any image block, the cloud server determines that the target face image was tampered with during encryption and sends a payment-failure reminder message to the terminal device.
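The final comparison performed by the cloud server reduces to a per-block equality check, which might look like:

```python
def verify_image(stored_ref_ids, recomputed_ref_ids) -> bool:
    """Server-side check: the image is considered untampered only if, for
    every block, the reference feature recovered from the encrypted data
    matches the one recomputed from the decrypted image. A length mismatch
    (different block count) is also treated as tampering."""
    if len(stored_ref_ids) != len(recomputed_ref_ids):
        return False
    return all(a == b for a, b in zip(stored_ref_ids, recomputed_ref_ids))
```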
It is to be understood that fig. 5 to 7 are only schematic illustrations and do not limit the embodiments of the present application. In practical implementation, the image to be processed may also be other images, and the image processing process may also include more or fewer processing steps. In addition, the devices shown in fig. 5 to 7 may also be other devices, for example, the terminal device may also be a tablet computer, and the server may also be an application server. The embodiments of the present application do not limit this.
In summary, after the image to be processed is acquired, it is divided to obtain at least one image block. Target information of each of the at least one image block is then acquired, and the target information and the to-be-processed information of each image block are encrypted to obtain the encrypted data of the image to be processed. The target information of each image block includes a target feature, and the to-be-processed information includes the image to be processed or the at least one image block. It can be seen that in the technical solution of the embodiments of the present application, at least the target feature of each image block is used as a security factor and is encrypted together with the image to be processed or the at least one image block. Because the target feature of an image block is derived from that image block, the security factor is tightly coupled with the image block and follows no fixed value-taking rule, so it is not easy to crack, which improves the security of image encryption.
The foregoing embodiments describe various embodiments of the image processing method provided in the embodiments of the present application in terms of operations performed by each device, such as dividing image blocks, acquiring target information of each image block, and the like. It should be understood that, in correspondence to the processing steps of dividing the image blocks, acquiring the target information of each image block, and the like, the embodiments of the present application may implement the above functions in a form of hardware or a combination of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
For example, if the above implementation steps implement the corresponding functions through software modules. As shown in fig. 8A, an image processing apparatus 80 is provided, and the image processing apparatus 80 may include an acquisition module 801, a division module 802, and an encryption module 803. The image processing apparatus 80 may be used to perform part or all of the operations of the terminal device in fig. 2A to 7.
For example: the acquisition module 801 may be used to acquire an image to be processed. The dividing module 802 may be configured to divide the image to be processed into at least one image block. The obtaining module 801 may further be configured to obtain target information of each image block of the at least one image block, where the target information of each image block includes a target feature. The encryption module 803 may be configured to encrypt target information and to-be-processed information of each image block in the at least one image block to obtain encrypted data of the to-be-processed image, where the to-be-processed information includes the to-be-processed image or the at least one image block.
Therefore, after obtaining the image to be processed, the image processing apparatus 80 provided in the embodiment of the present application uses the target feature of each image block in the at least one image block corresponding to the image to be processed as a security factor, and encrypts the target feature of each image block and the image to be processed or the at least one image block. Therefore, the security factor of the image to be processed is tightly coupled with the characteristics of the image to be processed, so that the security factor is not easy to decipher, and the security of the image to be processed can be improved.
Optionally, the image processing apparatus 80 further includes a feature extraction module, and the feature extraction module may be configured to extract initial features of each image block in the at least one image block. In this example, the obtaining module 801 is further configured to determine, for each image block, matching degrees of the initial features of the image block and a plurality of reference features, and use the reference feature with the highest matching degree as a target feature of the image block; the plurality of reference features are derived based on image features of a plurality of sample image blocks, the plurality of sample image blocks being derived by dividing a plurality of sample images, the sample images including sample objects.
Optionally, the target information of each image block further includes a target confidence value; and aiming at any image block, the target confidence value represents the matching degree of the initial characteristic of the image block and the target characteristic of the image block.
Optionally, the encryption module 803 is further configured to encrypt the target information of each image block in the at least one image block and each image block if the image to be processed includes a target object. In this example, the obtaining module 801 is further configured to obtain a target confidence value of each image block, where for any image block the target confidence value represents the matching degree between the initial feature of the image block and the target feature of the image block; and if the number of target confidence values greater than a preset threshold among the target confidence values satisfies a condition, it is determined that the image to be processed includes the target object.
Optionally, the obtaining module 801 is further configured to obtain a plurality of sample images, where the plurality of sample images includes at least one image containing a sample object and at least one image not containing a sample object. The image processing apparatus 80 further comprises a clustering module, which may be configured to cluster the plurality of sample images to obtain a target image set, where the target image set is an image set including the sample object. In this example, the dividing module 802 is further configured to divide each target image in the target image set to obtain a plurality of sample image blocks; the characteristic extraction module is further configured to extract a characteristic of each sample image block of the plurality of sample image blocks, and use the characteristic of each sample image block as a plurality of reference characteristics.
Optionally, the image processing apparatus 80 further includes a sending module, configured to send the encrypted data to a receiving end of the to-be-processed image, so that the receiving end verifies whether the to-be-processed image is tampered according to the target information included in the encrypted data.
Optionally, the image processing apparatus 80 further comprises a calculation module. In this example, the feature extraction module is further configured to extract a first feature of the image to be processed, where the first feature includes a first number of first feature values. The partitioning module 802 is further configured to divide the first feature equally into a second number of sub-features, wherein each sub-feature comprises at least two first feature values. The calculating module may be configured to calculate, corresponding to each sub-feature, an average value of at least two first feature values included in the sub-feature, use the average value as a second feature value, and use a feature composed of a second number of second feature values as a second feature of the image to be processed. The dividing module 802 is further configured to divide the image corresponding to the second feature to obtain at least one image block.
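The feature-aggregation step handled by these modules (and spelled out in claim 7) can be sketched as follows; the function name is illustrative.

```python
import numpy as np

def paa(first_feature, second_number):
    """Piecewise aggregate approximation as described above: divide the
    first feature equally into `second_number` sub-features and replace
    each sub-feature with the mean of its values, yielding the second
    feature. Requires the length to divide evenly (as the equal-division
    step in the text implies)."""
    values = np.asarray(first_feature, dtype=float)
    if len(values) % second_number != 0:
        raise ValueError("feature length must divide evenly into sub-features")
    return values.reshape(second_number, -1).mean(axis=1)
```

For example, a 6-value first feature reduced to 3 sub-features averages each consecutive pair of first feature values into one second feature value.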
It is understood that the above division of the modules is only a division of logical functions, and in actual implementation, the above functions of the modules may be integrated into a hardware entity, for example, the function of the obtaining module 801 may be integrated into a transceiver, the function of the dividing module 802 and the function of the encrypting module 803 may be integrated into a processor, and so on.
Referring to fig. 8B, fig. 8B provides a transmitting device 81, the transmitting device 81 comprising a processor 811, a transceiver 812, an image collector 813 and a memory 814, which are connected and communicate via a communication bus 815. The image collector 813 can be used to obtain an image to be processed. Transceiver 812 may be used for transceiving features and the like with a server. The memory 814 is used for storing an application program and data generated during image processing, and when the application program is called, the processor 811 is caused to perform a part or all of the operations of the terminal device in fig. 2A to 7 described above.
For a specific implementation process, refer to the description related to the terminal device in fig. 2A to fig. 7, which is not described herein again.
Accordingly, as shown in fig. 9A, an image processing apparatus 90 is provided, and the image processing apparatus 90 may include an acquisition module 901, a decryption module 902, and a verification module 903. The image processing apparatus 90 can be used to perform part or all of the operations of the servers in fig. 2B to 5 and fig. 7.
For example, the obtaining module 901 may be configured to obtain encrypted data corresponding to an image to be processed, where the encrypted data is obtained by processing according to the foregoing image processing method. The decryption module 902 may be configured to decrypt the encrypted data to obtain information to be verified and at least one piece of first target information, where the information to be verified is an image to be verified or at least one image block of the image to be verified. In this example, the obtaining module 901 may further be configured to obtain the second target information of each image block of the at least one image block. The verification module 903 may be configured to determine, as the verification result, that the image to be processed has not been tampered with if the first target information and the second target information corresponding to each image block are the same; and to determine, as the verification result, that the image to be processed has been tampered with if the first target information and the second target information corresponding to any image block of the at least one image block are different.
Optionally, the obtaining module 901 is further configured to obtain the service processing request. The image processing apparatus 90 further includes a calling module and a processing module, where the calling module may be configured to respond to a service processing request and call an image acquisition device to acquire a face image, where the face image is an image to be processed. And the processing module is used for processing the service corresponding to the service processing request if the verification result indicates that the face image is not tampered. The processing module is also used for feeding back information of request failure if the verification result is that the face image is tampered.
It is understood that the above division of the modules is only a division of logical functions, and in actual implementation, the above functions of the modules may be integrated into a hardware entity, for example, the function of the acquiring module 901 may be integrated into a transceiver implementation, the function of the decrypting module 902 and the function of the verifying module 903 may be integrated into a processor implementation, a plurality of reference features and applications related to image verification logic may be maintained in a memory, and the like.
Referring to fig. 9B, fig. 9B provides a receiving device 91, where the receiving device 91 can implement the functions of any one of the servers in fig. 2B to fig. 5 and fig. 7. Receiving device 91 may vary widely in configuration or performance and may include one or more Central Processing Units (CPUs) 911 (e.g., one or more processors) and memory 912, one or more storage media 913 (e.g., one or more mass storage devices) storing application programs 9131 or data 9132. Memory 912 and storage medium 913 may be, among other things, transient or persistent storage. The program stored on the storage medium 913 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processor 911 may be configured to communicate with the storage medium 913, and execute a series of instruction operations in the storage medium 913 on the reception apparatus 91.
The receiving device 91 may also include one or more power supplies 914, one or more wired or wireless network interfaces 915, one or more input-output interfaces 916, and/or one or more operating systems 917, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so forth.
The steps performed by the image processing apparatus 90 in the above-described embodiment may be implemented based on the server configuration shown in fig. 9B.
It should be appreciated that in some possible implementations, the processor illustrated in fig. 8B and 9B may be a Central Processing Unit (CPU), and the processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), field-programmable gate arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The memories illustrated in fig. 8B and 9B may include both read-only memory and random access memory, and provide instructions and data to the processor. The portion of memory may also include non-volatile random access memory. For example, the memory may also store device type information, and the like.
Also provided in embodiments of the present application is a computer-readable storage medium having stored therein instructions for image processing, which when executed on a computer, cause the computer to perform some or all of the steps of the method described in the foregoing embodiments shown in fig. 2A to 7.
Also provided in embodiments of the present application is a computer program product including instructions for image processing, which when executed on a computer, causes the computer to perform some or all of the steps of the method described in the embodiments of fig. 2A to 7.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a smart phone, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While alternative embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
The above embodiments further describe the objects, technical solutions and advantages of the present application in detail. It should be understood that the above embodiments are only examples of the present application and are not intended to limit its scope; any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present application shall fall within the scope of protection of the present application.

Claims (14)

1. An image processing method, characterized in that the method comprises:
acquiring an image to be processed;
dividing the image to be processed to obtain at least one image block;
acquiring target information of each image block in the at least one image block, wherein the target information of each image block comprises target characteristics;
and encrypting the target information and the information to be processed of each image block in the at least one image block to obtain encrypted data of the image to be processed, wherein the information to be processed comprises the image to be processed or the at least one image block.
2. The method according to claim 1, wherein the object information of each of the at least one image block is obtained by:
extracting initial features of each image block in the at least one image block;
for each image block, determining the matching degree of the initial features of the image block and a plurality of reference features, and taking the reference feature with the highest matching degree as the target feature of the image block; the plurality of reference features are derived based on image features of a plurality of sample image blocks, the plurality of sample image blocks being derived by dividing a plurality of sample images, the sample images comprising sample objects.
3. The method of claim 2, wherein the target information of each image block further comprises a target confidence value;
for any image block, the target confidence value represents the matching degree of the initial characteristic of the image block and the target characteristic of the image block.
4. The method according to claim 2, wherein the encrypting the object information of each image block of the at least one image block and each image block comprises:
if the image to be processed comprises a target object, encrypting target information of each image block in the at least one image block and each image block;
determining a mode of including a target object in the image to be processed, including:
acquiring target confidence values of the image blocks, wherein the target confidence values represent the matching degrees of the initial characteristics of the image blocks and the target characteristics of the image blocks aiming at any image block;
and if the number of the target confidence values which are larger than the preset threshold value in the target confidence values meets the condition, determining that the image to be processed comprises the target object.
5. The method of claim 2, wherein the plurality of reference features are obtained by:
obtaining the plurality of sample images, the plurality of sample images including at least one image containing the sample object and at least one image not containing a sample object;
clustering the plurality of sample images to obtain a target image set, wherein the target image set is an image set comprising the sample objects;
respectively dividing each target image in the target image set to obtain a plurality of sample image blocks;
and extracting the features of each sample image block in the plurality of sample image blocks, and taking the features of each sample image block as the plurality of reference features.
6. The method according to any one of claims 1-5, further comprising:
and sending the encrypted data to a receiving end of the image to be processed, so that the receiving end verifies whether the image to be processed is tampered according to target information contained in the encrypted data.
7. The method according to claim 1, wherein the dividing the image to be processed into at least one image block comprises:
extracting first features of the image to be processed, wherein the first features comprise a first number of first feature values;
equally dividing the first feature into a second number of sub-features, wherein each sub-feature comprises at least two first feature values;
corresponding to each sub-feature, calculating the mean value of at least two first feature values contained in the sub-feature, taking the mean value as a second feature value, and taking the feature formed by the second number of second feature values as the second feature of the image to be processed;
and dividing the image corresponding to the second characteristic to obtain the at least one image block.
8. An image processing method, comprising:
acquiring encrypted data corresponding to an image to be processed, wherein the encrypted data is obtained by processing according to any one of claims 1 to 7;
decrypting the encrypted data to obtain information to be verified and at least one first target information, wherein the information to be verified is an image to be verified or at least one image block of the image to be verified;
acquiring second target information of each image block in the at least one image block;
if the first target information and the second target information corresponding to each image block are the same, the verification result is that the image to be processed is not tampered;
and if the first target information and the second target information corresponding to any image block in the at least one image block are different, the verification result is that the image to be processed is tampered.
9. The method according to claim 8, characterized in that the image to be processed is acquired by:
acquiring a service processing request;
responding to the service processing request, calling image acquisition equipment to acquire a face image, wherein the face image is the image to be processed;
further comprising:
if the verification result is that the face image is not tampered, processing the service corresponding to the service processing request;
and if the verification result is that the face image is tampered, feeding back information of failure request.
10. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring an image to be processed;
the dividing module is used for dividing the image to be processed to obtain at least one image block;
the acquisition module is further used for acquiring target information of each image block in the at least one image block, wherein the target information of each image block comprises target characteristics;
and the encryption module is used for encrypting the target information and the information to be processed of each image block in the at least one image block to obtain the encrypted data of the image to be processed, wherein the information to be processed comprises the image to be processed or the at least one image block.
11. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring encrypted data corresponding to the image to be processed, wherein the encrypted data is obtained by processing according to any one of claims 1 to 7;
the decryption module is used for decrypting the encrypted data to obtain information to be verified and at least one piece of first target information, wherein the information to be verified is an image to be verified or at least one image block of the image to be verified;
the obtaining module is further configured to obtain second target information of each image block in the at least one image block;
the verification module is used for determining that the verification result is that the image to be processed has not been tampered with if the first target information and the second target information corresponding to each image block are the same, and determining that the verification result is that the image to be processed has been tampered with if the first target information and the second target information corresponding to any image block in the at least one image block are different.
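The receiver-side apparatus of claim 11 (decrypt, recompute second target information per block, compare against the carried first target information) can be sketched as below. This mirrors the illustrative assumptions of the sender sketch: SHA-256 as the target feature, a single-byte XOR as a cipher placeholder, and hypothetical function names not taken from the patent.

```python
import hashlib
import json


def decrypt_stub(data: bytes, key: int = 0x5A) -> bytes:
    # XOR placeholder matching an equivalent sender-side stub;
    # not a real cipher.
    return bytes(b ^ key for b in data)


def verify_payload(encrypted: bytes) -> str:
    """Decrypt the payload, then compare first (carried) and second
    (recomputed) target information block by block."""
    payload = json.loads(decrypt_stub(encrypted))
    blocks = [bytes.fromhex(h) for h in payload["blocks"]]
    for first_info, block in zip(payload["infos"], blocks):
        second_info = hashlib.sha256(block).hexdigest()
        # Any single mismatching block is enough to declare tampering.
        if first_info != second_info:
            return "tampered"
    return "not tampered"
```

Per-block comparison also localizes the tampering: the index of the first mismatch identifies which region of the image was modified, something a single whole-image hash cannot do.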
12. A transmitting device, characterized in that the transmitting device comprises a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 7.
13. A receiving device comprising a processor and a memory, said memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, said at least one instruction, said at least one program, said set of codes, or set of instructions being loaded and executed by said processor to implement the image processing method according to claim 8 or 9.
14. A computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the image processing method of any one of claims 1 to 7 or to implement the image processing method of claim 8 or 9.
CN202110535077.9A 2021-05-17 2021-05-17 Image processing method and related device Active CN113766085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110535077.9A CN113766085B (en) 2021-05-17 2021-05-17 Image processing method and related device


Publications (2)

Publication Number Publication Date
CN113766085A true CN113766085A (en) 2021-12-07
CN113766085B CN113766085B (en) 2023-03-03

Family

ID=78787085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110535077.9A Active CN113766085B (en) 2021-05-17 2021-05-17 Image processing method and related device

Country Status (1)

Country Link
CN (1) CN113766085B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6532541B1 (en) * 1999-01-22 2003-03-11 The Trustees Of Columbia University In The City Of New York Method and apparatus for image authentication
CN105046633A (en) * 2015-06-30 2015-11-11 合肥高维数据技术有限公司 Method for nondestructive image conformation
CN108711054A (en) * 2018-04-28 2018-10-26 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN111784614A (en) * 2020-07-17 2020-10-16 Oppo广东移动通信有限公司 Image denoising method and device, storage medium and electronic equipment
CN112784823A (en) * 2021-03-17 2021-05-11 中国工商银行股份有限公司 Face image recognition method, face image recognition device, computing equipment and medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023142440A1 (en) * 2022-01-28 2023-08-03 中国银联股份有限公司 Image encryption method and apparatus, image processing method and apparatus, and device and medium
CN115776410A (en) * 2023-01-29 2023-03-10 深圳汉德霍尔科技有限公司 Face data encryption transmission method for terminal identity authentication
CN115776410B (en) * 2023-01-29 2023-05-02 深圳汉德霍尔科技有限公司 Face data encryption transmission method for terminal identity authentication

Also Published As

Publication number Publication date
CN113766085B (en) 2023-03-03

Similar Documents

Publication Publication Date Title
JP7142778B2 (en) Identity verification method and its apparatus, computer program and computer equipment
US20210064900A1 (en) Id verification with a mobile device
TWI752418B (en) Server, client, user authentication method and system
CN111680672B (en) Face living body detection method, system, device, computer equipment and storage medium
CN113766085B (en) Image processing method and related device
EP3234904B1 (en) Method and apparatus for publishing locational copyrighted watermarking video
CN106503655A (en) A kind of electric endorsement method and sign test method based on face recognition technology
EP3655874B1 (en) Method and electronic device for authenticating a user
CN105518710A (en) Video detecting method, video detecting system and computer program product
CN112802138B (en) Image processing method and device, storage medium and electronic equipment
Stokkenes et al. Multi-biometric template protection—A security analysis of binarized statistical features for bloom filters on smartphones
CN112381000A (en) Face recognition method, device, equipment and storage medium based on federal learning
CN111783677B (en) Face recognition method, device, server and computer readable medium
WO2023142453A1 (en) Biometric identification method, server, and client
CN112597379B (en) Data identification method and device, storage medium and electronic device
CN113190858B (en) Image processing method, system, medium and device based on privacy protection
CN114612991A (en) Conversion method and device for attacking face picture, electronic equipment and storage medium
CN113762970A (en) Data processing method and device, computer readable storage medium and computer equipment
CN109450878B (en) Biological feature recognition method, device and system
CN113518061A (en) Data transmission method, device, apparatus, system and medium in face recognition
CN111049921A (en) Image processing system, method, device and storage medium based on block chain technology
CN111382296A (en) Data processing method, device, terminal and storage medium
CN115225869B (en) Directional processing method and device for monitoring data
CN117560455B (en) Image feature processing method, device, equipment and storage medium
CN116436619B (en) Method and device for verifying streaming media data signature based on cryptographic algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant