CN112446310A - Age identification system, method and device based on block chain - Google Patents


Info

Publication number
CN112446310A
Authority
CN
China
Prior art keywords: target, age, image, features, fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011303351.1A
Other languages
Chinese (zh)
Other versions
CN112446310B (en)
Inventor
李伟
张帅
Current Assignee
Hangzhou Qulian Technology Co Ltd
Original Assignee
Hangzhou Qulian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Qulian Technology Co Ltd filed Critical Hangzhou Qulian Technology Co Ltd
Priority to CN202011303351.1A priority Critical patent/CN112446310B/en
Publication of CN112446310A publication Critical patent/CN112446310A/en
Application granted granted Critical
Publication of CN112446310B publication Critical patent/CN112446310B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computer Security & Cryptography (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a blockchain-based age identification system, method and apparatus. The method comprises the following steps: acquiring a plurality of original image features of an object sent by at least one image acquisition device, wherein the original image features are extracted by the image acquisition device from a face image of the object, and the image acquisition device belongs to the distributed system in which the server is located; performing feature fusion on the plurality of original image features to obtain a target fusion feature; determining a target age corresponding to the target fusion feature according to an age identification model; and uploading the original image features, the target fusion feature and the target age to a blockchain to generate a target data block in which they are stored. By extracting multiple types of image features and fusing them, the accuracy of the age prediction result is improved. Moreover, because distributed devices collect the images and extract the features, only image features (rather than face images) are transmitted, which avoids disclosure of user privacy.

Description

Age identification system, method and device based on block chain
Technical Field
The present application relates to the field of blockchain technologies, and in particular, to a system, a method, and an apparatus for identifying an age based on a blockchain.
Background
Age estimation based on face images is an important biometric recognition technology and an important complement to face detection and recognition. However, the result of age estimation is often not accurate enough due to interference from factors such as angle changes, occlusion and lighting in the face image. Moreover, the face image of a user may be stolen during transmission over the network, so the privacy of the user is difficult to guarantee.
At present, it is difficult in the related art to obtain the image features of a face image comprehensively, so feature loss and feature omission make the age identification result inaccurate. As for the privacy and security of user images in transmission, the related art protects user privacy by encrypting data, but encryption alone cannot fully resolve the privacy and security problem.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The application provides an age identification system, method and device based on a block chain.
In a first aspect, the present application provides a distributed system for age identification, comprising at least one image acquisition device, a server and a blockchain network, wherein: the image acquisition device is configured to extract a plurality of original image features from a face image of an object and send the plurality of original image features to the server; the server is configured to determine the age of the object according to a fusion feature, the fusion feature being obtained by performing feature fusion on the plurality of original image features; and the blockchain network is configured to generate a data block in which the original image features, the fusion feature and the age sent by the server are stored.
In a second aspect, the present application provides an age identification method based on a blockchain, which is applied to a server, and includes: acquiring a plurality of original image characteristics of any object sent by at least one image acquisition device, wherein the original image characteristics are extracted from a face image of the object by the image acquisition device, and the image acquisition device is a device in a distributed system where a server is located; performing feature fusion on the plurality of original image features to obtain target fusion features; determining a target age corresponding to the target fusion feature according to the age identification model; and uploading the original image features, the target fusion features and the target age to a block chain to generate a target data block for storing the original image features, the target fusion features and the target age.
Optionally, performing feature fusion on the plurality of original image features to obtain the target fusion feature includes: acquiring a weight matrix for each of multiple types of original image features, wherein the weight matrix stores the association weights among sub-features in the original image features, different types of original image features are extracted by the image acquisition device using different target neural network models, and the target neural network model comprises at least one of a denoising auto-encoder, a convolutional neural network and a recurrent neural network; determining a similarity matrix of each type of original image feature using the weight matrix, wherein the similarity matrix stores the similarity coefficients among the sub-features; determining an adjacency matrix of each type of original image feature according to its similarity matrix, wherein the adjacency matrix stores the hyperedges of the hypergraph; and constructing a hypergraph using the weight matrices and adjacency matrices of the various original image features, and determining the target fusion feature using the hypergraph.
Optionally, constructing a hypergraph using the weight matrices and adjacency matrices of the various types of original image features and determining the target fusion feature using the hypergraph includes: splicing the adjacency matrices of the various original image features to obtain a fused adjacency matrix; determining an edge-degree matrix from the fused adjacency matrix, wherein the edge-degree matrix stores the number of vertices connected to each hyperedge of the hypergraph; determining a vertex-degree matrix from the weight matrix, wherein the vertex-degree matrix stores the number of edges connected to each vertex of the hypergraph; determining a Laplacian matrix from the edge-degree matrix and the vertex-degree matrix to obtain the hypergraph; and determining the eigenvalue matrix of the Laplacian matrix and taking the eigenvalue matrix as the target fusion feature.
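A minimal sketch of the construction described above, assuming the commonly used normalized hypergraph Laplacian L = I − D_v^(−1/2) H W D_e^(−1) Hᵀ D_v^(−1/2); the incidence matrix, weights and sizes are made up, and the patent does not specify which Laplacian variant is used, so this is illustrative only:

```python
import numpy as np

# H: |V| x |E| incidence matrix (vertices = sub-features; hyperedges come from
# the spliced adjacency matrices). Values here are toy data.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
w = np.array([1.0, 0.5, 2.0])           # made-up hyperedge weights

d_v = H @ w                              # vertex degrees d(v) = sum_e w(e) h(v,e)
d_e = H.sum(axis=0)                      # hyperedge degrees delta(e) = |e|
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
De_inv = np.diag(1.0 / d_e)
W = np.diag(w)

# Normalized hypergraph Laplacian (an assumed convention, not stated in the patent)
L = np.eye(H.shape[0]) - Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt

# Eigen-decomposition; the "eigenvalue matrix" of the claim is read here as the
# result of this decomposition, used as the fused feature representation.
eigvals, eigvecs = np.linalg.eigh(L)
fused = eigvecs
```

The eigenvalues of this Laplacian are non-negative, so `eigvals` can also serve as a sanity check that the matrix was assembled correctly.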
Optionally, determining the target age corresponding to the target fusion feature according to the age identification model includes: inputting the target fusion feature into the age identification model to obtain a plurality of first probability values produced after the age identification model identifies the target fusion feature, wherein each first probability value is the probability, predicted by the age identification model, that the object indicated by the face image has the corresponding age; and taking the age indicated by the maximum of the plurality of first probability values as the target age.
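The selection of the target age as the arg-max over the first probability values can be illustrated as follows (ages and probabilities are made up):

```python
# Hypothetical first-probability values from the age identification model,
# one per candidate age; the target age is the age with the highest probability.
probs = {18: 0.05, 25: 0.62, 40: 0.28, 60: 0.05}
target_age = max(probs, key=probs.get)
print(target_age)  # -> 25
```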
Optionally, before determining the target age corresponding to the target fusion feature according to the age identification model, the method further includes training the age identification model as follows: acquiring a training image, wherein a target object in the training image has a known target age; scaling the training image, and converting the scaled training image into a grayscale image; performing feature extraction on the grayscale image using the target neural network model to obtain training image features; performing feature fusion on the training image features to obtain a training fusion feature; inputting the training fusion feature into a regression model for training; and when the probability value with which the regression model predicts that the target object has the target age reaches a target threshold, taking the regression model as the age identification model.
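One preprocessing step above, converting the scaled training image to a grayscale image, might look like this (assuming the common BT.601 luma weights, which the patent does not specify):

```python
def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grayscale.
    The 0.299/0.587/0.114 luma weights are an assumed, conventional choice."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

gray = to_gray([[(255, 255, 255), (0, 0, 0)]])
```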
Optionally, after obtaining the age identification model, the method further comprises: determining a hash value of the age identification model by using a hash function; and uploading the age identification model to a block chain, and performing consensus on the age identification model by using the hash value.
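The hash-then-upload step above might be sketched as follows; the serialization format and parameter layout are assumptions (any deterministic serialization of the model would do), and the actual blockchain upload call is omitted:

```python
import hashlib
import pickle

# Stand-in for the trained age identification model's parameters; the real
# model structure is not specified here.
model_params = {"W": [[0.1, 0.2], [0.3, 0.4]], "b": [0.0, 0.1]}

payload = pickle.dumps(model_params)               # deterministic for this dict
model_hash = hashlib.sha256(payload).hexdigest()   # hash value used for consensus
```

Nodes that receive the model can recompute the SHA-256 digest of the serialized parameters and compare it with `model_hash` during consensus.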
In a third aspect, the present application provides an age identification method based on a blockchain, applied to a distributed edge computing device, including: acquiring a face image of a target object; extracting image characteristics of the face image by using a target neural network model, wherein the target neural network model comprises at least one of a denoising automatic encoder, a convolutional neural network and a recurrent neural network; the image features are sent to a server.
In a fourth aspect, the present application provides an age identifying apparatus based on a blockchain, applied to a server, including: the system comprises an image characteristic acquisition module, a server and a display module, wherein the image characteristic acquisition module is used for acquiring a plurality of original image characteristics of any object sent by at least one image acquisition device, the original image characteristics are extracted from a face image of the object by the image acquisition device, and the image acquisition device is a device in a distributed system where the server is located; the characteristic fusion module is used for carrying out characteristic fusion on the plurality of original image characteristics to obtain target fusion characteristics; the age prediction module is used for determining a target age corresponding to the target fusion characteristic according to the age identification model; and the data chaining module is used for uploading the original image features, the target fusion features and the target ages to the block chain so as to generate a target data block for storing the original image features, the target fusion features and the target ages.
In a fifth aspect, the present application provides an age identifying apparatus based on a blockchain, applied to a distributed edge computing device, including: the image acquisition module is used for acquiring a face image of a target object; the characteristic extraction module is used for extracting the image characteristics of the face image by utilizing a target neural network model, and the target neural network model comprises at least one of a denoising automatic encoder, a convolutional neural network and a recurrent neural network; and the sending module is used for sending the image characteristics to the server.
In a sixth aspect, the present application provides an electronic device, comprising: the system comprises a processor, a communication component, a memory and a communication bus, wherein the processor, the communication component and the memory are communicated with each other through the communication bus; the memory for storing a computer program; the processor is configured to execute the program stored in the memory to implement the method of the second aspect or the third aspect.
In a seventh aspect, the present application provides a computer-readable storage medium storing a computer program implementing the method of the second or third aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the method provided by the embodiment of the application, the image features of the face image are extracted through the denoising automatic encoder, the convolutional neural network and the recurrent neural network, and the features are fused, so that the face features are obtained more comprehensively and accurately, and the accuracy of the age prediction result is improved. In addition, the distributed edge computing equipment is used for collecting images and extracting the features, and the distributed edge computing equipment is used for sending the extracted image features to the server, so that only the image features are transmitted, and the problem of privacy disclosure of a user is completely avoided.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a block chain-based age identification system according to an embodiment of the present disclosure;
FIG. 2 is a block chain structure in the present application;
FIG. 3 is a block chain network functional structure diagram according to an embodiment of the present application;
fig. 4 is a hardware environment diagram of an alternative age identification method based on a blockchain according to an embodiment of the present disclosure;
fig. 5 is a flowchart of an alternative age identification method based on a blockchain according to an embodiment of the present disclosure;
fig. 6 is a flowchart of another alternative age identification method based on a blockchain according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an alternative age identification apparatus based on a blockchain according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of another alternative age identifying apparatus based on a blockchain according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the following description, references to "one embodiment" describe a subset of all possible embodiments; different occurrences of "one embodiment" may refer to the same subset or to different subsets of all possible embodiments, and the embodiments may be combined with each other where there is no conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before further detailed description of the embodiments of the present invention, terms and expressions referred to in the embodiments of the present invention are described, and the terms and expressions referred to in the embodiments of the present invention are applicable to the following explanations.
(1) Transactions (Transactions): equivalent to the computer term "transaction"; they include operations that need to be committed to the blockchain network for execution, and do not refer solely to transactions in the commercial context. In view of the convention in blockchain technology of colloquially using the term "transaction", embodiments of the present invention follow this usage.
For example, a deployment (deployment) transaction is used to install a specified smart contract to a node in a blockchain network and is ready to be invoked; the Invoke (Invoke) transaction is used to append records of the transaction in the blockchain by invoking the smart contract and to perform operations on the state database of the blockchain, including update operations (including adding, deleting, and modifying key-value pairs in the state database) and query operations (i.e., querying key-value pairs in the state database).
(2) A Block chain (Blockchain) is a storage structure for encrypted, chained transactions formed from blocks (blocks).
(3) A Blockchain Network (Blockchain Network) is a set of nodes that incorporate new blocks into a blockchain by consensus.
(4) Ledger (Ledger) is a general term for the blockchain (also called ledger data) and the state database synchronized with the blockchain. The blockchain records transactions in the form of files in a file system, while the state database records the transactions in the blockchain as different types of key-value pairs to support fast queries of transactions in the blockchain.
(5) Smart Contracts (Smart Contracts), also known as chain code (chaincode) or application code, are programs deployed in the nodes of a blockchain network; the nodes execute the smart contracts invoked in received transactions to update or query the key-value data of the state database.
(6) Consensus (Consensus), a process in a blockchain network, is used to reach agreement among the nodes involved on the transactions in a block; the agreed block is then appended to the end of the blockchain. Mechanisms for achieving consensus include Proof of Work (PoW), Proof of Stake (PoS), Delegated Proof of Stake (DPoS), Proof of Elapsed Time (PoET), and so on.
(7) Artificial Neural Networks (Artificial Neural Networks): an artificial neural network may be composed of neural units. A neural unit takes inputs x_s (s = 1, 2, …, n, where n is a natural number greater than 1) and an intercept b, and its output may be:

h_{W,b}(x) = f(Σ_{s=1}^{n} W_s · x_s + b)

where W_s is the weight of x_s and b is the bias of the neural unit. f is the activation function of the neural unit, used to introduce non-linear characteristics into the neural network by converting the input signal of the neural unit into an output signal. The output signal of the activation function may serve as the input of the next convolutional layer. The activation function may be a sigmoid function. A neural network is a network formed by joining many such single neural units together, i.e. the output of one neural unit may be the input of another neural unit. The input of each neural unit can be connected to the local receptive field of the previous layer to extract the features of that local receptive field; the local receptive field may be a region composed of several neural units.
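The single-unit computation above can be sketched as follows (a toy illustration; the weights, inputs and the tanh activation are arbitrary choices):

```python
import math

def neural_unit(x, W, b, f=math.tanh):
    """Output of one neural unit: f(sum over s of W_s * x_s + b).
    tanh is an arbitrary activation choice for this demo."""
    return f(sum(Ws * xs for Ws, xs in zip(W, x)) + b)

out = neural_unit(x=[1.0, -2.0, 0.5], W=[0.4, 0.1, -0.3], b=0.05)
```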
(8) Deep Neural Network (DNN): also called a multi-layer neural network, a DNN can be understood as a neural network with many hidden layers, where "many" has no particular threshold. Dividing a DNN by the position of its layers, the layers fall into three categories: the input layer, the hidden layers and the output layer. Generally the first layer is the input layer, the last layer is the output layer, and the layers in between are hidden layers. For example, a fully-connected neural network is fully connected between layers, that is, any neuron of the i-th layer is connected to every neuron of the (i+1)-th layer. Although a DNN looks complex, the work of each layer is not: it is simply the linear relation

y = α(W · x + b)

where x is the input vector, y is the output vector, b is the offset (bias) vector, W is the weight matrix (also called the coefficients) and α() is the activation function. Each layer merely performs this simple operation on the input vector x to obtain the output vector y. Because a DNN has many layers, there are many coefficients W and offset vectors b. These parameters are defined in a DNN as follows, taking the coefficient W as an example. Suppose that in a three-layer DNN, the linear coefficient from the 4th neuron of the second layer to the 2nd neuron of the third layer is defined as W^3_{24}: the superscript 3 denotes the layer in which the coefficient W is located, and the subscripts correspond to the output index 2 of the third layer and the input index 4 of the second layer. In general, the coefficient from the k-th neuron of layer L−1 to the j-th neuron of layer L is defined as W^L_{jk}. Note that the input layer has no W parameters. In deep neural networks, more hidden layers make the network better able to describe complex situations in the real world. In theory, a model with more parameters has higher complexity and larger "capacity", which means it can accomplish more complex learning tasks. Training the deep neural network is the process of learning the weight matrices; its final goal is to obtain the weight matrices of all layers of the trained deep neural network (the matrices formed by the vectors W of many layers).
(9) Convolutional Neural Network (CNN): a convolutional neural network is a deep neural network with a convolutional structure. The convolutional neural network includes a feature extractor consisting of convolutional layers and sub-sampling layers. The feature extractor may be viewed as a filter and the convolution process may be viewed as convolving an input image or convolved feature plane (feature map) with a trainable filter. The convolutional layer is a neuron layer for performing convolutional processing on an input signal in a convolutional neural network. In convolutional layers of convolutional neural networks, one neuron may be connected to only a portion of the neighbor neurons. In a convolutional layer, there are usually several characteristic planes, and each characteristic plane may be composed of several neural units arranged in a rectangular shape. The neural units of the same feature plane share weights, where the shared weights are convolution kernels. Sharing weights may be understood as the way image information is extracted is location independent. The underlying principle is: the statistics of a certain part of the image are the same as the other parts. Meaning that image information learned in one part can also be used in another part. The same learned image information can be used for all positions on the image. In the same convolution layer, a plurality of convolution kernels can be used to extract different image information, and generally, the greater the number of convolution kernels, the more abundant the image information reflected by the convolution operation. The convolution kernel can be initialized in the form of a matrix of random size, and can be learned to obtain reasonable weights in the training process of the convolutional neural network. In addition, sharing weights brings the direct benefit of reducing connections between layers of the convolutional neural network, while reducing the risk of overfitting.
The convolutional neural network takes ReLU as the activation function, and the mapping from features to labels is defined as:

h = f(X; ω)

where ω is the parameter of the mapping and f is the activation function (i.e. ReLU). For a convolutional network with l layers, the mapping can be described layer by layer as:

h_l = f(W_l · h_{l−1}; ω), with h_0 = X

The learning and update process of W is represented by ΔW:

W ← W + ΔW, ΔW = −η · ∂L/∂W

where η is the learning rate and the loss is

L = ‖y − h_l‖²

with y the label information. Through this formula, the difference between the label information and the predicted value is minimized; at the same time, the parameters are optimized through the back-propagation mechanism.
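As a toy illustration of the update rule ΔW = −η · ∂L/∂W with squared loss (a scalar example with made-up values, not the patent's actual training code):

```python
# Toy scalar model pred = W * x with squared loss L = (y - W * x)**2.
eta = 0.1            # learning rate (made-up value)
W, x, y = 0.5, 2.0, 3.0

for _ in range(50):
    pred = W * x
    grad = 2.0 * (pred - y) * x   # dL/dW
    W += -eta * grad              # delta_W = -eta * dL/dW

# After these updates, W * x is numerically very close to the label y.
```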
(10) A denoising auto-encoder (DAE) is a type of auto-encoder that accepts corrupted data as input and is trained to predict the original, uncorrupted data as its output; it is an unsupervised hidden-layer feature extraction method. The loss function is defined as:

L(W) = (1 / (m · n)) Σ_{i=1}^{m} Σ_{j=1}^{n} ‖X − X̂_{ij}‖²

where m is the number of cycles of adding noise, n is the number of layers stacked by the auto-encoder, and X̃ is the feature with added noise. This can further be rewritten as:

L(W) = ‖X − X̂‖²

where X̂ is the stacked X after auto-encoding, i.e.

X̂ = f(W · X̃)

To minimize the above equation, the derivative with respect to W is taken and set to zero:

∂L(W)/∂W = 0

from which W can be obtained.
(11) A Recurrent Neural Network (RNN) is an artificial neural network (ANN) with a tree-like hierarchical structure, in which network nodes process input information recursively according to their connection order; it is one of the deep learning algorithms. With tanh as the activation function, the parent node of features x_i and x_j is:

p_{i,j} = tanh(W[x_i; x_j])

where W is a weight matrix, typically solved by Stochastic Gradient Descent (SGD).
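A toy version of the parent-node computation p_{i,j} = tanh(W[x_i; x_j]); the feature dimensions and weights are made up:

```python
import math

def parent_node(x_i, x_j, W):
    """p_ij = tanh(W [x_i; x_j]); each row of W maps the concatenated
    vector [x_i; x_j] to one output dimension (sizes are illustrative)."""
    concat = x_i + x_j  # [x_i; x_j]: list concatenation
    return [math.tanh(sum(w * c for w, c in zip(row, concat))) for row in W]

# 2-d child features, W maps the 4-d concatenation back to 2 dimensions
p = parent_node([0.1, 0.2], [0.3, -0.1],
                W=[[0.5, 0.0, 0.2, 0.1],
                   [0.0, 0.4, -0.3, 0.2]])
```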
(12) Hypergraph: given a hypergraph G = ⟨V, E, W⟩, V is the finite vertex set of the hypergraph, E is the set of hyperedges of the hypergraph, and W is the set of hyperedge weights; any hyperedge e ∈ E is a subset of the vertex set V. In the hypergraph, the degree of a vertex v ∈ V is defined as follows:

$d(v) = \sum_{e \in E,\; v \in e} w(e)$

For a hyperedge e ∈ E, the degree of the hyperedge is defined as the number of vertices it contains:

$\delta(e) = |e|$

Two matrices $D_v$ and $D_e$ are defined, representing the degrees of the vertices and of the hyperedges of the hypergraph, respectively. As with the matrix representation of an ordinary graph, a hypergraph can be represented by a |V| × |E| vertex–hyperedge incidence matrix H. In the incidence matrix, h(v, e) = 1 if vertex v lies on hyperedge e, and h(v, e) = 0 otherwise. By the definition of the incidence matrix, the vertex degrees and the hyperedge degrees in the hypergraph can further be expressed as:

$d(v) = \sum_{e \in E} w(e)\, h(v, e)$

$\delta(e) = \sum_{v \in V} h(v, e)$
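The degree definitions can be checked on a toy incidence matrix; the values below are illustrative:

```python
import numpy as np

# Incidence matrix H (|V| x |E|): h(v, e) = 1 when vertex v lies on hyperedge e.
H = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
])
w = np.array([0.5, 1.0, 2.0])     # hyperedge weights w(e)

vertex_degrees = H @ w            # d(v) = sum_e w(e) * h(v, e)
edge_degrees = H.sum(axis=0)      # delta(e) = |e| = sum_v h(v, e)
```

Both degree formulas reduce to a single matrix product or column sum over the incidence matrix, which is why the later construction steps work entirely in matrix form.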
an exemplary application of the block chain network provided by the embodiment of the present invention is described below, as shown in fig. 1, fig. 1 is a schematic diagram of an age identification system provided by the embodiment of the present invention, and includes a block chain network 101, a consensus node 102, an authentication center 103, a service agent 104, a client node 104-1, a service agent 105, and a client node 105-1, which are described below respectively:
The type of blockchain network 101 is flexible; it may be, for example, any of a public chain, a private chain, or a federation chain. Taking a public chain as an example, electronic devices such as user terminals and servers of any business entity can access the blockchain network 101 without authorization; taking a federation chain as an example, an electronic device (e.g., a terminal/server) under the jurisdiction of a business entity may access the blockchain network 101 after obtaining authorization, at which point it becomes a client node in the blockchain network 101.
In some embodiments, the client node 104-1 may act as a mere observer of the blockchain network 101, i.e., provide only the functionality that lets business entities initiate transactions (e.g., uplink storage of image features and age prediction results, or queries of the on-chain data corresponding to the image features), while the functions of the consensus nodes 102 of the blockchain network 101, such as ordering, consensus services, and ledger functions, may be implemented by default or selectively (e.g., according to the specific business needs of the business entity). Thus the data and the business processing logic of the business entity can be migrated to the blockchain network 101 to the greatest extent, and the credibility and traceability of the data and of the business process are achieved through the blockchain network 101.
Consensus nodes in blockchain network 101 receive transactions submitted from different business entities, such as client node 104-1 of business entity 104 shown in fig. 1, perform the transactions to update the ledger or query the ledger, and various intermediate or final results of performing the transactions may be returned for display in client node 104-1 of business entity 104.
For example, client node 104-1 may subscribe to events of interest in blockchain network 101, such as transactions occurring in a particular organization/channel in blockchain network 101, and corresponding transaction notifications are pushed by consensus node 102 to client node 104-1, thereby triggering corresponding business logic in client node 104-1.
As an example of the blockchain, as shown in fig. 2, fig. 2 is a schematic structural diagram of a blockchain in the blockchain network 101 according to an embodiment of the present invention. The header of each block may include the hash values of all transactions in that block as well as the hash values of all transactions in the previous block. A record of a newly generated transaction is filled into a block and, after consensus among the nodes in the blockchain network, appended to the tail of the blockchain, forming chained growth; the hash-based chain structure between blocks ensures that the transactions in the blocks are tamper-proof and forgery-proof.
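The hash-linked structure described above can be sketched minimally in Python. The field names (such as `prev_hash`) and the use of JSON serialization are illustrative, not the patent's actual block format:

```python
import hashlib
import json

def block_hash(block):
    # Deterministically hash the block's contents (header + transactions).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(tx_batches):
    chain, prev = [], "0" * 64           # genesis predecessor hash
    for txs in tx_batches:
        block = {"prev_hash": prev, "transactions": txs}
        chain.append(block)
        prev = block_hash(block)         # the next block will commit to this one
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:   # a broken link reveals tampering
            return False
        prev = block_hash(block)
    return True

chain = build_chain([["tx1", "tx2"], ["tx3"]])
```

Altering any transaction in an earlier block changes that block's hash, so every later block's `prev_hash` no longer matches and verification fails, which is the tamper-resistance property the paragraph describes.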
An exemplary functional architecture of a blockchain network provided by the embodiment of the present invention is described below, as shown in fig. 3, fig. 3 is a schematic functional architecture diagram of a blockchain network 101 provided by the embodiment of the present invention, and includes an application layer 301, a consensus layer 302, a network layer 303, a data layer 304, and a resource layer 305, which are described below:
The application layer 301 encapsulates the various services that the blockchain network can implement, including transaction tracing, attestation, and verification.
The consensus layer 302 encapsulates the mechanism by which the nodes 102 in the blockchain network 101 agree on a block (i.e., the consensus mechanism), transaction management, and ledger management. The consensus mechanism includes consensus algorithms such as POS, POW, and DPOS, and pluggable consensus algorithms are supported. Transaction management is used to verify the digital signature carried in a transaction received by a node 102, verify the identity information of the business entity 104, and determine, according to the identity information, whether the business entity has the authority to perform the transaction (reading the relevant information from business-entity identity management). Every business entity authorized to access the blockchain network 101 has a digital certificate issued by the certificate authority, and a business entity signs its submitted transactions with the private key in its digital certificate, thereby declaring its legal identity. Ledger management is used to maintain the blockchain and the state database: a block that has reached consensus is appended to the tail of the blockchain; the transactions in the consensus block are executed, the key-value pairs in the state database are updated when a transaction includes an update operation, and the key-value pairs in the state database are queried, with the query result returned to the client node of the business entity, when a transaction includes a query operation.
Query operations on multiple dimensions of the state database are supported, including: querying a block by block sequence number; querying a block by block hash value; querying a block by transaction sequence number; querying a transaction by transaction sequence number; querying the account data of a business entity by the account (number) of the business entity; and querying the blockchain in a channel by channel name.
The network layer 303 encapsulates the functions of the point-to-point (P2P, Point-to-Point) network protocol, the data propagation and data verification mechanisms, the access authentication mechanism, and business-entity identity management.
The P2P network protocol implements communication between nodes 102 in the blockchain network 101, the data propagation mechanism ensures propagation of transactions in the blockchain network 101, and the data verification mechanism implements reliability of data transmission between the nodes 102 based on cryptography methods (e.g., digital certificates, digital signatures, public/private key pairs); the access authentication mechanism is used for authenticating the identity of a service subject added to the block chain network 101 according to an actual service scene, and endowing the service subject with the authority of accessing the block chain network 101 when the authentication is passed; the business entity 104 identity management is used to store the identity of the business entity 104 that is allowed to access the blockchain network 101, as well as the permissions (e.g., the types of transactions that can be initiated).
Data layer 304 encapsulates various data structures that implement ledgers, including blockchains implemented in files in a file system, key-value type state databases, and presence certificates (e.g., hash trees for transactions in blocks).
The resource layer 305 encapsulates the computing, storage, and communication resources that implement each node 102 in the blockchain network 101.
Based on the above architecture, the embodiments of the present invention provide the following implementation manners.
In the related art, it is difficult to obtain the image features of a face image comprehensively; feature loss and feature omission make the age identification result inaccurate; and, regarding the privacy of user images during transmission, user privacy is protected only by encrypting the data.
To solve the problems mentioned in the background, according to an aspect of embodiments of the present application, an embodiment of an age identification method based on a blockchain is provided.
Optionally, in the embodiment of the present application, the blockchain-based age identification method may be applied to a hardware environment formed by the terminal 401 and the server 403 as shown in fig. 4. The terminal may be a distributed edge computing device, such as a monitoring device or a vehicle event data recorder, and the server is a central server that may collect and process the data sent by the distributed edge computing devices. As shown in fig. 4, the server 403 is connected to the terminal 401 through a network and may be used to provide services for the terminal or for a client installed on the terminal. A database 405 may be provided on the server or separately from the server to provide data storage services for the server 403. The network includes but is not limited to a wide area network, a metropolitan area network, or a local area network, and the terminal 401 includes but is not limited to a PC, a mobile phone, a tablet computer, and the like.
An age identification method based on a block chain in an embodiment of the present application may be executed by a server, as shown in fig. 5, where the method includes the following steps:
step S502, a plurality of original image characteristics of any object sent by at least one image acquisition device are obtained, the original image characteristics are extracted from a face image of the object by the image acquisition device, and the image acquisition device is a device in a distributed system where a server is located.
In this embodiment, the image capturing device may be disposed in a distributed edge computing device, where the distributed edge computing device is a terminal device independent from a server, such as a monitoring device and a vehicle event data recorder. The distributed edge computing equipment can acquire the face image of the target object by means of the installed image acquisition equipment.
In the embodiment of the application, the target neural network model may be set in the distributed edge computing device, and the target neural network model may be executed by a processor of the distributed edge computing device, that is, after the distributed edge computing device collects an image, the image features of a face image are extracted by using the target neural network model, each neural network model may obtain one type of image features, and the distributed edge computing device may send various types of image features to the server, so that transmission of the face image of the user is not performed, privacy and security of the user are protected, the amount of transmitted data is reduced, and transmission efficiency is improved.
And step S504, performing feature fusion on the plurality of original image features to obtain target fusion features.
In the embodiment of the application, after the server acquires various original image features sent by the distributed edge computing device, the server can fuse the various original image features, so that the image features which are richer, more comprehensive and more accurate than a single feature are obtained. Preferably, the various image features may be fused based on a hypergraph construction method of low rank learning.
Optionally, fusing various image features based on the hypergraph construction method of low rank learning may include the following steps:
Step 1: acquire a weight matrix for each class of original image features. The weight matrix of the i-th class of original image features may be denoted $Q^{(i)}$; the weight matrix is used to hold the associated weights between the sub-features within that class of original image features. Original image features of different classes are extracted by the image acquisition device using different target neural network models, where the target neural network model includes at least one of a denoising autoencoder, a convolutional neural network, and a recurrent neural network. Specifically, within the i-th class of image features, the weights between the l-th sub-feature and the k-th sub-feature can be expressed as $Q^{(i)}_{lk}$ and $Q^{(i)}_{kl}$, where $Q^{(i)}_{lk}$ represents the associated weight of the l-th sub-feature to the k-th sub-feature and $Q^{(i)}_{kl}$ represents the associated weight of the k-th sub-feature to the l-th sub-feature. For an input image:

$X^{(i)} = X^{(i)} Q^{(i)} + E^{(i)}$

where i = 1, …, m, and m is the number of feature classes (here, 3). $Q^{(i)}$ can be obtained by minimizing a multi-modal low-rank learning loss function:

$\min_{\{Q^{(i)},\, E^{(i)}\}} \;\sum_{i=1}^{m} \left( \left\| Q^{(i)} \right\|_* + \lambda \left\| E^{(i)} \right\|_{2,1} \right) + \alpha \sum_{i \ne j} \left\| Q^{(i)} - Q^{(j)} \right\|_F^2$

where λ and α are weight parameters, and E is an initialization parameter that can generally be set to the identity matrix; $\| \cdot \|_*$ is the trace (nuclear) norm and $\| \cdot \|_{2,1}$ is the $\ell_{2,1}$ norm.
Step 2: determine a similarity matrix for each class of image features using the weight matrices, where features of different classes are extracted by different target neural network models and the similarity matrix is used to hold the similarity coefficients between sub-features. The similarity between the l-th and k-th sub-features of the i-th class of features is defined as:

$s(l, k) = \frac{1}{m} \sum_{i=1}^{m} \frac{Q^{(i)}_{lk} + Q^{(i)}_{kl}}{2}$

where m is the number of feature classes, i.e., 3. The similarity between every pair of sub-features is computed to form the similarity matrix. For example, if the i-th class of features has j sub-features, the similarity matrix of the i-th class is a matrix of j rows and j columns, in which the similarity between the l-th sub-feature and the k-th sub-feature is placed in row l, column k.
Step 3: determine an adjacency matrix U for each class of image features from its similarity matrix; the adjacency matrix is used to hold the hyperedges of the hypergraph. The matrix element in row l, column k can be expressed as:

$U_{lk} = \begin{cases} 1, & s(l, k) > \delta \\ 0, & \text{otherwise} \end{cases}$

where δ is a decision threshold, for example 0.5: a similarity greater than 0.5 indicates that the two vertices instantiated by the two features are connected by the same hyperedge, while a similarity less than 0.5 indicates that they are not.
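The thresholding of step 3 can be sketched in a few lines; the similarity values below are illustrative:

```python
import numpy as np

def adjacency_from_similarity(S, delta=0.5):
    # U_{lk} = 1 when the similarity s(l, k) exceeds the threshold delta, else 0.
    return (S > delta).astype(int)

S = np.array([
    [1.0, 0.8, 0.2],
    [0.8, 1.0, 0.6],
    [0.2, 0.6, 1.0],
])
U = adjacency_from_similarity(S)
```

With δ = 0.5, only the sufficiently similar sub-feature pairs (here 0.8 and 0.6) yield hyperedge connections, and the 0.2 pair is dropped.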
Step 4: splice the adjacency matrices of the various classes of image features to obtain the fused adjacency matrix $U = \left[ U^{(1)} \mid U^{(2)} \mid \cdots \mid U^{(m)} \right]$.
Step 5: determine an edge degree matrix from the fused adjacency matrix; the edge degree matrix is used to hold the number of vertices connected to each hyperedge of the hypergraph. The edge degree matrix $D_e$ of the hypergraph is computed by summing each row of U and placing the sum on the diagonal:

$D_{e,ll} = \sum_{k} U_{lk}$

where $D_{e,ll}$ denotes the value of the matrix $D_e$ at position (l, l).
Step 6: determine a vertex degree matrix $D_v$ from the weight matrix; the vertex degree matrix is used to hold the number of edges connected to each vertex of the hypergraph. The vertex degree matrix $D_v$ of the hypergraph is computed by summing each row of Q and placing the sum on the diagonal:

$D_{v,ll} = \sum_{k:\; U_{lk} = 1} Q_{lk}$

where $D_{v,ll}$ denotes the value of the matrix $D_v$ at position (l, l).
Step 7: determine the Laplacian matrix L from the edge degree matrix and the vertex degree matrix to obtain the hypergraph:

$L = I - D_v^{-1/2}\, U\, W\, D_e^{-1}\, U^{\top}\, D_v^{-1/2}$

where I denotes the identity matrix and W is the diagonal matrix of hyperedge weights. The result of hypergraph construction is essentially the Laplacian matrix L, so obtaining the Laplacian matrix completes the hypergraph construction.
Step 8: determine the eigenvalue matrix of the Laplacian matrix and take it as the target fusion feature. Eigenvalue decomposition is performed on the Laplacian matrix to obtain the eigenvalue matrix, i.e., the fused feature.
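Steps 4 through 8 can be sketched end to end. This sketch assumes the standard normalized hypergraph Laplacian with unit hyperedge weights, which is one plausible reading of the construction above; the small matrices are illustrative:

```python
import numpy as np

def hypergraph_fusion(adjacency_list, edge_weights=None):
    """Steps 4-8 sketch: concatenate per-class adjacency (incidence) matrices,
    build the degree matrices, form the normalized hypergraph Laplacian
    L = I - Dv^{-1/2} U W De^{-1} U^T Dv^{-1/2}, and eigendecompose it."""
    U = np.hstack(adjacency_list)                    # step 4: U = [U1 | U2 | ... | Um]
    w = np.ones(U.shape[1]) if edge_weights is None else edge_weights
    De = np.diag(U.sum(axis=0) * 1.0)                # step 5: vertices per hyperedge
    Dv = np.diag(U @ w)                              # step 6: weighted edges per vertex
    Dv_is = np.diag(1.0 / np.sqrt(np.diag(Dv)))
    L = np.eye(U.shape[0]) - Dv_is @ U @ np.diag(w) @ np.linalg.inv(De) @ U.T @ Dv_is
    eigvals, eigvecs = np.linalg.eigh(L)             # step 8: eigendecomposition
    return L, eigvals, eigvecs

U1 = np.array([[1, 0], [1, 1], [0, 1]])
U2 = np.array([[1], [0], [1]])
L, eigvals, fused = hypergraph_fusion([U1, U2])
```

For a connected hypergraph this Laplacian is symmetric and positive semi-definite with a zero eigenvalue, which makes its eigenvectors a well-behaved fused representation.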
Step S506, determining the target age corresponding to the target fusion feature according to the age identification model.
Alternatively, in step S506, specifically, the target fusion feature may be input into an age identification model, and a plurality of first probability values obtained after the age identification model identifies the target fusion feature are obtained, where the first probability values are probabilities that the age identification model predicts an age to which a target object indicated by the face image belongs; the age indicated by the maximum value of the plurality of first probability values is taken as the target age.
In this embodiment of the application, the age identification model may be obtained by training on the fusion features with a softmax normalization model. After the target fusion feature is input to the trained age identification model, the fully connected layer in the model predicts the age of the target object and outputs a predicted probability value for each age, i.e., the first probability values. Among the first probability values there is a maximum value, and the age corresponding to that maximum value is taken as the target age, i.e., the prediction result of the age identification model.
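The softmax-then-argmax rule can be sketched as follows; the 1 to 100 year label space is an assumption for illustration, not stated by the patent:

```python
import numpy as np

def predict_age(logits, ages):
    # Softmax over the fully connected layer's outputs, then take the age
    # with the largest probability (the "target age").
    z = logits - logits.max()               # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return ages[int(np.argmax(probs))], probs

ages = np.arange(1, 101)                    # assumed label space: 1..100 years
logits = np.zeros(100)
logits[24] = 5.0                            # the 25th class dominates the others
age, probs = predict_age(logits, ages)      # age == 25
```

Subtracting the maximum logit before exponentiating leaves the probabilities unchanged but prevents overflow for large logits.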
In the embodiment of the present application, the training process of the age identification model is as follows:
step 1, obtaining a training image, wherein a target object in the training image has a target age.
And 2, carrying out scaling processing on the training image, and converting the scaled training image into a gray image. The scaling process may be performed by upsampling or downsampling, such as resizing the training images all to 64 x 64 to fit the model input. The training image is converted to a grayscale image to make the face texture more prominent.
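The scaling and grayscale conversion of step 2 can be sketched without an imaging library; nearest-neighbour sampling and the usual luminance weights stand in for a real library resize (e.g., PIL), and are assumptions for illustration:

```python
import numpy as np

def preprocess(img_rgb, size=64):
    # Nearest-neighbour rescale to size x size, then luminance grayscale
    # (0.299 R + 0.587 G + 0.114 B), a minimal stand-in for a library resize.
    h, w, _ = img_rgb.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = img_rgb[rows][:, cols]
    return small @ np.array([0.299, 0.587, 0.114])

img = np.random.default_rng(2).integers(0, 256, size=(128, 96, 3)).astype(float)
gray64 = preprocess(img)   # 64 x 64 grayscale image
```

The single-channel output keeps the intensity texture of the face while discarding color, matching the stated goal of making the facial texture more prominent.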
And 3, extracting the characteristics of the gray level image by using the target neural network model to obtain the characteristics of the training image. And extracting the features of the training image by adopting a denoising automatic encoder, a convolutional neural network and a recurrent neural network.
And 4, performing feature fusion on the training image features to obtain training fusion features. The feature fusion method may adopt the hypergraph construction method based on low rank learning in the above embodiment to perform feature fusion on various training image features, which is not described herein again.
Step 5, inputting the training fusion characteristics into a regression model for training; when the probability value that the regression model predicts that the target object has the target age reaches the target threshold value, the regression model is set as the age recognition model.
Optionally, after the age identification model is obtained, the trained model may be uploaded to the blockchain to ensure that it is not tampered with. A hash function is used to determine the hash value of the age identification model, the model is uploaded to the blockchain, and consensus on the model is reached among the consensus nodes using the hash value. The hash value can also be used to verify that a model is legitimate: for example, when a user obtains a model, its hash value can be extracted and compared with the one in the blockchain network, and if the two are the same, the source of the model is legitimate.
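The hash-based model verification can be sketched with the standard library; the function names and the byte strings are illustrative:

```python
import hashlib

def model_hash(model_bytes):
    # SHA-256 digest pinned on-chain when the model is uploaded.
    return hashlib.sha256(model_bytes).hexdigest()

def is_legitimate(candidate_bytes, on_chain_digest):
    # A copy of the model is legitimate iff its digest matches the on-chain one.
    return model_hash(candidate_bytes) == on_chain_digest

published = model_hash(b"serialized-age-model-v1")
```

Because the digest is stored in an append-only ledger, anyone holding a copy of the model can independently recompute its hash and detect a swapped or modified model.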
Step S508, uploading the original image feature, the target fusion feature and the target age to a block chain to generate a target data block storing the original image feature, the target fusion feature and the target age.
In the embodiment of the application, the image characteristics and the age prediction result can be stored in the block chain to prevent tampering, and an inquiry function can be provided for a user.
By adopting the technical scheme, the image features of the face image are extracted by the denoising autoencoder, the convolutional neural network, and the recurrent neural network, and the features are fused, so that the face features are obtained more comprehensively and accurately and the accuracy of the age prediction result is improved. In addition, the distributed edge computing device collects the images, extracts the features, and sends only the extracted image features to the server; since only image features are transmitted, the problem of user privacy disclosure is completely avoided.
An embodiment of the present application further provides an age identification method based on a blockchain, which may be executed by a distributed edge computing device, as shown in fig. 6, where the method includes the following steps:
step S602, collecting a face image of a target object;
step S604, extracting image characteristics of the face image by using a target neural network model, wherein the target neural network model comprises at least one of a denoising automatic encoder, a convolutional neural network and a recurrent neural network;
step S606, the image feature is sent to the server.
In this embodiment of the application, the distributed edge computing device may be a monitoring device, a vehicle event data recorder, a mobile phone, a tablet computer, a PC, or the like. The denoising autoencoder, the convolutional neural network, and the recurrent neural network can be deployed in the processor of the distributed edge computing device to extract features from the face images collected by the device; the extracted image features are then sent to the server for age prediction. This localized, on-device processing can completely block leakage of the user's private data, and transmitting only the image features greatly reduces the data volume.
In the embodiment of the present application, the storage of the age identification model, the image characteristics, and the age prediction result in the block chain is performed according to the rules of an intelligent contract.
In the following, for the sake of clarity of the present application, the working principle of the intelligent contract is first briefly described:
constructing an intelligent contract: the intelligent contract is made by a plurality of users in the block chain, and can be used for any transaction between any users. The agreement defines the rights and obligations of the parties to the transaction, which are programmed electronically by the developer, the code containing conditions that trigger the automatic execution of the contract.
Storing the intelligent contract: once the encoding is completed, the intelligent contract is uploaded to the blockchain network, that is, each node of the whole network can receive the intelligent contract.
Executing the intelligent contract: the intelligent contract periodically checks whether related events and trigger conditions exist, and events that meet the conditions are pushed into a queue to be verified. The verification nodes on the blockchain first verify the signatures of these events to ensure their validity; once most verification nodes agree on an event, the intelligent contract is executed successfully and the user is notified of the successful execution.
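The execution flow can be sketched abstractly; the quorum fraction, the validator callables, and the return values are illustrative stand-ins, not an actual contract framework:

```python
def run_contract(event, validators, quorum=2 / 3, action=None):
    # Each validating node checks the event (a stand-in for signature
    # verification); the contract body executes only once more than a
    # quorum of nodes agrees.
    approvals = sum(1 for validate in validators if validate(event))
    if approvals / len(validators) > quorum:
        return action(event) if action else "executed"
    return "rejected"

always_valid = lambda event: True
always_invalid = lambda event: False
```

With three validators, unanimous approval clears the two-thirds quorum and the contract runs, while a single approval does not.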
The embodiment of the present application further provides an age identification apparatus based on a block chain, which is applied to a server, and the specific implementation of the apparatus may refer to the description of the method embodiment, and repeated parts are not described again, as shown in fig. 7, the apparatus mainly includes: the image feature acquisition module 701 is configured to acquire a plurality of original image features of any object sent by at least one image acquisition device, where the original image features are extracted from a face image of the object by the image acquisition device, and the image acquisition device is a device in a distributed system where a server is located; the feature fusion module 703 is configured to perform feature fusion on the multiple original image features to obtain a target fusion feature; the age prediction module 705 is used for determining a target age corresponding to the target fusion feature according to the age identification model; the data chaining module 707 is configured to upload the original image feature, the target fusion feature, and the target age to a block chain to generate a target data block storing the original image feature, the target fusion feature, and the target age.
It should be noted that the image feature obtaining module 701 in this embodiment may be configured to execute step S502 in this embodiment, the feature fusing module 703 in this embodiment may be configured to execute step S504 in this embodiment, the age predicting module 705 in this embodiment may be configured to execute step S506 in this embodiment, and the data uplink module 707 in this embodiment may be configured to execute step S508 in this embodiment.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 4, and may be implemented by software or hardware.
Optionally, the feature fusion module is specifically configured to: acquiring a weight matrix of multiple types of original image features, wherein the weight matrix is used for storing the associated weight among sub-features in the original image features, the different types of original image features are extracted by image acquisition equipment by utilizing different target neural network models, and the target neural network model comprises at least one of a denoising automatic encoder, a convolutional neural network and a recurrent neural network; determining a similarity matrix of each type of original image characteristics by using the weight matrix, wherein the similarity matrix is used for storing similarity coefficients among the sub-characteristics; determining an adjacent matrix of each type of original image characteristics according to the similarity matrix of each type of original image characteristics, wherein the adjacent matrix is used for storing edges in the hypergraph; and constructing a hypergraph by using the weight matrix and the adjacency matrix of various original image characteristics, and determining target fusion characteristics by using the hypergraph.
Optionally, the feature fusion module is further configured to: splicing the adjacent matrixes of various original image characteristics to obtain a fused adjacent matrix; determining an edge degree matrix by using the fused adjacency matrix, wherein the edge degree matrix is used for storing the number of vertexes connected with each edge in the hypergraph; determining a vertex degree matrix by using the weight matrix, wherein the vertex degree matrix is used for storing the number of edges connected with each vertex in the hypergraph; determining a Laplace matrix by utilizing the edge degree matrix and the vertex degree matrix to obtain a hypergraph; and determining an eigenvalue matrix of the Laplace matrix, and taking the eigenvalue matrix as a target fusion characteristic.
Optionally, the age prediction module is specifically configured to: inputting the target fusion characteristics into an age identification model to obtain a plurality of first probability values obtained after the age identification model identifies the target fusion characteristics, wherein the first probability values are the probability of the age identification model for predicting the age of the object indicated by the face image; the age indicated by the maximum value of the plurality of first probability values is taken as the target age.
Optionally, the apparatus for identifying age based on blockchain further includes a model training module, configured to: acquiring a training image, wherein a target object in the training image has a target age; carrying out scaling processing on the training image, and converting the scaled training image into a gray image; performing feature extraction on the gray level image by using a target neural network model to obtain training image features; performing feature fusion on the training image features to obtain training fusion features; inputting the training fusion characteristics into a regression model for training; when the probability value that the regression model predicts that the target object has the target age reaches the target threshold value, the regression model is set as the age recognition model.
Optionally, the apparatus for identifying age based on blockchain further includes a model chaining module, configured to: determining a hash value of the age identification model by using a hash function; and uploading the age identification model to a block chain, and performing consensus on the age identification model by using the hash value.
The embodiment of the present application further provides an age identifying apparatus based on a block chain, which is applied to a distributed edge computing device, and the specific implementation of the apparatus may refer to the description of the method embodiment, and repeated details are not repeated, as shown in fig. 8, the apparatus mainly includes: an image acquisition module 801, configured to acquire a face image of a target object; a feature extraction module 803, configured to extract image features of the face image by using a target neural network model, where the target neural network model includes at least one of a denoising auto-encoder, a convolutional neural network, and a recurrent neural network; a sending module 805, configured to send the image feature to a server.
It should be noted that the image acquisition module 801 in this embodiment may be configured to execute step S602 in this embodiment, the feature extraction module 803 in this embodiment may be configured to execute step S604 in this embodiment, and the sending module 805 in this embodiment may be configured to execute step S606 in this embodiment.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 4, and may be implemented by software or hardware.
According to another aspect of the embodiments of the present application, an electronic device is provided, as shown in fig. 9, and includes a memory 901, a processor 903, a communication interface 905, and a communication bus 907, where a computer program operable on the processor 903 is stored in the memory 901, the memory 901 and the processor 903 communicate through the communication interface 905 and the communication bus 907, and the steps of the method are implemented when the processor 903 executes the computer program.
The memory and the processor in the electronic device communicate with the communication interface through the communication bus. The communication bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
There is also provided, in accordance with yet another aspect of an embodiment of the present application, a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the steps of any of the embodiments described above.
Optionally, in an embodiment of the present application, the computer program product or the computer program includes program code for causing a processor to execute the following steps:
acquiring image features sent by a distributed edge computing device, wherein the image features are obtained by performing feature extraction, using a target neural network model, on a face image acquired by the distributed edge computing device, and the target neural network model includes at least one of a denoising auto-encoder, a convolutional neural network, and a recurrent neural network;
performing feature fusion on the image features to obtain target fusion features;
inputting the target fusion features into an age identification model to obtain an age prediction result output by the age identification model;
and uploading the image characteristics and the age prediction result to a block chain to generate a target data block.
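The four server-side steps above (acquire features, fuse, predict an age, chain the result) can be sketched end to end. This is a minimal illustrative sketch, not the patented implementation: feature fusion is reduced to plain concatenation (the embodiments describe a hypergraph-based fusion), `model` is a hypothetical callable returning a mapping from candidate ages to probabilities, and the "block chain" is a bare hash-linked list of dictionaries.

```python
import hashlib
import json

def fuse_features(feature_sets):
    # Illustrative fusion: simple concatenation stands in for the
    # hypergraph-based fusion described in the embodiments.
    fused = []
    for features in feature_sets:
        fused.extend(features)
    return fused

def predict_age(fused, model):
    # `model` is a hypothetical callable returning {age: probability};
    # the age with the highest probability is the prediction result.
    probs = model(fused)
    return max(probs, key=probs.get)

def append_block(chain, features, age):
    # Store the image features and the age prediction result in a new
    # data block whose hash covers the previous block's hash.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"features": features, "age": age, "prev_hash": prev_hash}
    block_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append({**payload, "hash": block_hash})
    return chain[-1]
```

Linking each block to the hash of its predecessor is what makes later tampering with a stored age prediction detectable.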
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
When the embodiments of the present application are specifically implemented, reference may be made to the above embodiments, and corresponding technical effects are achieved.
Optionally, in an embodiment of the present application, the computer program product or the computer program is further used for the processor to execute the following steps:
acquiring a face image of a target object;
extracting image features of the face image by using a target neural network model, wherein the target neural network model includes at least one of a denoising auto-encoder, a convolutional neural network, and a recurrent neural network;
the image features are sent to a server.
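A corresponding sketch of the edge-device side, under stated assumptions: `extract_features` stands in for the target neural network model with a single linear layer over the flattened image (the embodiments use a denoising auto-encoder, CNN, or RNN), and `transport` is a hypothetical callable abstracting the actual network send to the server.

```python
import json

def extract_features(image, weights):
    # Stand-in for the target neural network model: one linear layer
    # applied to the flattened face image. `weights` is a list of
    # per-feature weight vectors (hypothetical, for illustration).
    flat = [px for row in image for px in row]
    return [sum(w * x for w, x in zip(ws, flat)) for ws in weights]

def send_to_server(features, transport):
    # `transport` is a hypothetical callable, e.g. an HTTP POST
    # wrapper; only the extracted features leave the edge device.
    return transport(json.dumps({"features": features}))
```

Sending only extracted features, rather than the raw face image, is what lets the edge device keep the original image local.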
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
When the embodiments of the present application are specifically implemented, reference may be made to the above embodiments, and corresponding technical effects are achieved.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that includes one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, or tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives).
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, described to enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A distributed system for age identification, comprising:
the system comprises at least one image acquisition device, a server, and a block chain network, wherein the image acquisition device is used for extracting a plurality of original image features from a face image of any object and sending the original image features to the server;
the server is used for determining the age of the object according to a fusion feature, wherein the fusion feature is obtained by performing feature fusion on the plurality of original image features;
and the block chain network is used for generating a data block, wherein the original image features, the fusion feature, and the age sent by the server are stored in the data block.
2. An age identification method based on a block chain, applied to a server, comprising:
acquiring a plurality of original image features of any object sent by at least one image acquisition device, wherein the original image features are extracted from a face image of the object by the image acquisition device, and the image acquisition device is a device in a distributed system where the server is located;
performing feature fusion on the plurality of original image features to obtain target fusion features;
determining a target age corresponding to the target fusion feature according to an age identification model;
uploading the original image features, the target fusion features and the target age to a block chain to generate a target data block storing the original image features, the target fusion features and the target age.
3. The method of claim 2, wherein performing feature fusion on the plurality of original image features to obtain a target fusion feature comprises:
acquiring weight matrices of multiple types of the original image features, wherein each weight matrix is used for storing association weights among the sub-features in the original image features, different types of the original image features are extracted by the image acquisition device using different target neural network models, and the target neural network model comprises at least one of a denoising auto-encoder, a convolutional neural network and a recurrent neural network;
determining a similarity matrix of each type of the original image features by using the weight matrix, wherein the similarity matrix is used for storing similarity coefficients among the sub-features;
determining an adjacency matrix of each type of the original image features according to the similarity matrix of each type of the original image features, wherein the adjacency matrix is used for storing edges in a hypergraph;
constructing a hypergraph by using the weight matrix and the adjacency matrix of each type of original image features, and determining the target fusion features by using the hypergraph.
4. The method of claim 3, wherein constructing a hypergraph using the weight matrix and the adjacency matrix for each type of the original image feature, and determining the target fusion feature using the hypergraph comprises:
concatenating the adjacency matrices of the original image features to obtain a fused adjacency matrix;
determining an edge degree matrix by using the fused adjacency matrix, wherein the edge degree matrix is used for storing the number of vertices connected with each edge in the hypergraph;
determining a vertex degree matrix by using the weight matrix, wherein the vertex degree matrix is used for storing the number of edges connected with each vertex in the hypergraph;
determining a Laplacian matrix by using the edge degree matrix and the vertex degree matrix to obtain the hypergraph;
and determining an eigenvalue matrix of the Laplacian matrix, and taking the eigenvalue matrix as the target fusion feature.
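Claims 3 and 4 build a hypergraph from per-type weight and adjacency matrices, derive edge-degree and vertex-degree matrices, form a Laplacian, and take its eigen-decomposition as the fused feature. The translated claim language is terse, so the sketch below follows the standard normalized hypergraph Laplacian formulation (Zhou et al., 2006, "Learning with Hypergraphs") as one plausible reading; the incidence matrix `H` and edge weights `w` are assumed inputs, and the claim's "eigenvalue matrix" is read here as the matrix of leading eigenvectors, a common convention in spectral methods.

```python
import numpy as np

def hypergraph_laplacian(H, w):
    # H: |V| x |E| incidence matrix; w: hyperedge weights.
    # Normalized hypergraph Laplacian: L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
    W = np.diag(w)
    De = np.diag(H.sum(axis=0))   # edge degree matrix: vertices per hyperedge
    Dv_diag = H @ w               # vertex degrees: weighted edges per vertex
    Dv_isqrt = np.diag(1.0 / np.sqrt(Dv_diag))
    theta = Dv_isqrt @ H @ W @ np.linalg.inv(De) @ H.T @ Dv_isqrt
    return np.eye(H.shape[0]) - theta

def fusion_features(H, w, k):
    # Eigen-decomposition of the Laplacian; the k eigenvectors with the
    # smallest eigenvalues serve as the fused representation.
    L = hypergraph_laplacian(H, w)
    vals, vecs = np.linalg.eigh(L)  # eigh returns ascending eigenvalues
    return vecs[:, :k]
```

For a connected hypergraph the smallest eigenvalue is 0, and the spread of the remaining spectrum reflects how strongly the hyperedges tie the per-type features together.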
5. The method of claim 2, wherein determining the target age corresponding to the target fusion feature from an age identification model comprises:
inputting the target fusion feature into the age identification model, and obtaining a plurality of first probability values obtained after the age identification model identifies the target fusion feature, wherein the first probability values are probabilities that the age identification model predicts an age of the object indicated by the face image;
taking the age indicated by the maximum value of the plurality of first probability values as the target age.
6. The method of any one of claims 2 to 5, wherein prior to determining the target age corresponding to the target fusion feature based on an age identification model, the method further comprises training the age identification model as follows:
acquiring a training image, wherein a target object in the training image has a target age;
scaling the training image, and converting the scaled training image into a grayscale image;
performing feature extraction on the grayscale image by using a target neural network model to obtain training image features;
performing feature fusion on the training image features to obtain training fusion features;
inputting the training fusion features into a regression model for training;
and when the probability value of the target age predicted by the regression model reaches a target threshold, taking the regression model as the age identification model.
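The training procedure of claim 6 — grayscale conversion, feature extraction and fusion, then training a regression model until the predicted probability of the target age reaches a target threshold — can be sketched with a toy example. Everything here is illustrative: the grayscale weights are the standard ITU-R BT.601 luminance coefficients (an assumption; the claim only says "converting into a gray image"), the flattened grayscale pixels stand in for the extracted-and-fused training features, and a small softmax regression plays the role of the regression model.

```python
import numpy as np

def to_gray(rgb):
    # Scale/convert step of claim 6: luminance grayscale using the
    # standard BT.601 weights (illustrative assumption).
    return rgb @ np.array([0.299, 0.587, 0.114])

def train_age_model(features, target_idx, n_classes, threshold=0.9, lr=0.5):
    # Toy softmax regression standing in for the regression model;
    # training stops once the predicted probability of the target
    # age reaches the target threshold.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(features.size, n_classes))
    p = np.full(n_classes, 1.0 / n_classes)
    for _ in range(5000):
        logits = features @ W
        p = np.exp(logits - logits.max())
        p /= p.sum()
        if p[target_idx] >= threshold:
            break
        grad = np.outer(features, p)      # softmax cross-entropy gradient
        grad[:, target_idx] -= features
        W -= lr * grad
    return W, p[target_idx]
```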
7. An age identification method based on a block chain, applied to a distributed edge computing device, comprising:
acquiring a face image of a target object;
extracting image features of the face image by using a target neural network model, wherein the target neural network model comprises at least one of a denoising auto-encoder, a convolutional neural network and a recurrent neural network;
and sending the image features to a server.
8. An age identification device based on a block chain, applied to a server, comprising:
an image feature acquisition module, used for acquiring a plurality of original image features of any object sent by at least one image acquisition device, wherein the original image features are extracted from a face image of the object by the image acquisition device, and the image acquisition device is a device in a distributed system where the server is located;
a feature fusion module, used for performing feature fusion on the plurality of original image features to obtain target fusion features;
an age prediction module, used for determining a target age corresponding to the target fusion features according to an age identification model;
and a data uplink module, used for uploading the original image features, the target fusion features, and the target age to a block chain, so as to generate a target data block storing the original image features, the target fusion features, and the target age.
9. An age identifying apparatus based on a block chain, applied to a distributed edge computing device, comprising:
an image acquisition module, used for acquiring a face image of a target object;
a feature extraction module, used for extracting image features of the face image by using a target neural network model, wherein the target neural network model comprises at least one of a denoising auto-encoder, a convolutional neural network and a recurrent neural network;
and a sending module, used for sending the image features to a server.
10. An electronic device, comprising: a processor, a communication component, a memory, and a communication bus, wherein the processor, the communication component, and the memory communicate with each other through the communication bus;
the memory for storing a computer program;
the processor, configured to execute the program stored in the memory, to implement the method of any one of claims 2 to 6 or 7.
CN202011303351.1A 2020-11-19 2020-11-19 Age identification system, method and device based on block chain Active CN112446310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011303351.1A CN112446310B (en) 2020-11-19 2020-11-19 Age identification system, method and device based on block chain

Publications (2)

Publication Number Publication Date
CN112446310A true CN112446310A (en) 2021-03-05
CN112446310B CN112446310B (en) 2024-09-24

Family

ID=74738833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011303351.1A Active CN112446310B (en) 2020-11-19 2020-11-19 Age identification system, method and device based on block chain

Country Status (1)

Country Link
CN (1) CN112446310B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239935A (en) * 2021-04-15 2021-08-10 广州广电运通金融电子股份有限公司 Image feature extraction method, device, equipment and medium based on block chain
CN113395491A (en) * 2021-06-11 2021-09-14 上海海事大学 Remote monitoring and alarming system for marine engine room
CN113505765A (en) * 2021-09-09 2021-10-15 北京轻松筹信息技术有限公司 Age prediction method and device based on user head portrait and electronic equipment
CN113517057A (en) * 2021-09-10 2021-10-19 南通剑烽机械有限公司 Medical image information identification and storage method based on data representation and neural network
CN113516065A (en) * 2021-07-03 2021-10-19 北京中建建筑科学研究院有限公司 Data weight measuring and calculating method and device based on block chain, server and storage medium
ES2928611A1 (en) * 2021-05-18 2022-11-21 Univ Leon METHOD AND AUTOMATED SYSTEM FOR GENERATION OF A DIGITAL SIGNATURE FOR VERIFICATION OF A FACE (Machine-translation by Google Translate, not legally binding)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899921A (en) * 2015-06-04 2015-09-09 杭州电子科技大学 Single-view video human body posture recovery method based on multi-mode self-coding model
CN106295506A (en) * 2016-07-25 2017-01-04 华南理工大学 A kind of age recognition methods based on integrated convolutional neural networks
CN106447625A (en) * 2016-09-05 2017-02-22 北京中科奥森数据科技有限公司 Facial image series-based attribute identification method and device
CN106778584A (en) * 2016-12-08 2017-05-31 南京邮电大学 A kind of face age estimation method based on further feature Yu shallow-layer Fusion Features
CN111401344A (en) * 2020-06-04 2020-07-10 腾讯科技(深圳)有限公司 Face recognition method and device and training method and device of face recognition system
US20200285836A1 (en) * 2019-03-05 2020-09-10 Jpmorgan Chase Bank, N.A. Systems and methods for secure user logins with facial recognition and blockchain
CN111783593A (en) * 2020-06-23 2020-10-16 中国平安人寿保险股份有限公司 Human face recognition method and device based on artificial intelligence, electronic equipment and medium
CN111931693A (en) * 2020-08-31 2020-11-13 平安国际智慧城市科技股份有限公司 Traffic sign recognition method, device, terminal and medium based on artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PENG, Yao: "Research on Hypergraph-Based Multimodal Feature Selection and Classification Methods" (基于超图的多模态特征选择及分类方法研究), China Master's Theses Full-text Database, 15 February 2020 (2020-02-15), page 4 *

Also Published As

Publication number Publication date
CN112446310B (en) 2024-09-24

Similar Documents

Publication Publication Date Title
CN112446310B (en) Age identification system, method and device based on block chain
US11436615B2 (en) System and method for blockchain transaction risk management using machine learning
CN113688855B (en) Data processing method, federal learning training method, related device and equipment
CN111816252B (en) Drug screening method and device and electronic equipment
CN111401558A (en) Data processing model training method, data processing device and electronic equipment
CN111681091B (en) Financial risk prediction method and device based on time domain information and storage medium
CN111695415A (en) Construction method and identification method of image identification model and related equipment
US20230342846A1 (en) Micro-loan system
CN111860865B (en) Model construction and analysis method, device, electronic equipment and medium
CN113011387B (en) Network training and human face living body detection method, device, equipment and storage medium
CN113822315A (en) Attribute graph processing method and device, electronic equipment and readable storage medium
CN111081337A (en) Collaborative task prediction method and computer readable storage medium
US11989276B2 (en) Intelligent authentication of users in Metaverse leveraging non-fungible tokens and behavior analysis
CN114240659A (en) Block chain abnormal node identification method based on dynamic graph convolutional neural network
CN114119997A (en) Training method and device for image feature extraction model, server and storage medium
CN117834175A (en) Method and system for detecting and classifying DDoS attack of integrated multi-model block chain
WO2023185541A1 (en) Model training method and related device
Xiao et al. CTDM: Cryptocurrency abnormal transaction detection method with spatio-temporal and global representation
Prem Kumar et al. Metaheuristics with Optimal Deep Transfer Learning Based Copy-Move Forgery Detection Technique.
US20230138780A1 (en) System and method of training heterogenous models using stacked ensembles on decentralized data
Cristin et al. Image tampering detection in image forensics using earthworm‐rider optimization
CN114418767A (en) Transaction intention identification method and device
Ghosh et al. A deep learning‐based SAR image change detection using spatial intuitionistic fuzzy C‐means clustering
Zhang A novel data preprocessing solution for large scale digital forensics investigation on big data
US20240311658A1 (en) Dynamic prototype learning framework for non-homophilous graphs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant