CN110909195A - Picture labeling method and device based on block chain, storage medium and server - Google Patents


Info

Publication number
CN110909195A
CN110909195A
Authority
CN
China
Prior art keywords
picture
features
information
block chain
labeled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910969216.1A
Other languages
Chinese (zh)
Inventor
王建华
何四燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910969216.1A priority Critical patent/CN110909195A/en
Priority to PCT/CN2019/118384 priority patent/WO2021068349A1/en
Publication of CN110909195A publication Critical patent/CN110909195A/en
Pending legal-status Critical Current

Classifications

    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06F  ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00  Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50  Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58  Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583  Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06F  ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00  Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60  Protecting data
    • G06F 21/62  Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218  Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06N  COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00  Computing arrangements based on biological models
    • G06N 3/02  Neural networks
    • G06N 3/04  Architecture, e.g. interconnection topology
    • G06N 3/045  Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical fields of image detection, image classification and neural networks, and provides a blockchain-based picture labeling method comprising the following steps: acquiring a labeling request for labeling a picture to be labeled, and, according to the request, extracting hierarchical features of the picture to be labeled through the convolution layers of different scales of a deep convolutional neural network in a blockchain network, the hierarchical features comprising geometric features, texture features and semantic features; determining picture information of the picture to be labeled according to the hierarchical features, the picture information comprising semantic information, geometric information and texture information; and labeling the picture to be labeled according to the picture information to obtain a labeled picture. Because finely divided and coarsely divided features are fused together, the information about the objects in the picture is more accurate, and because the deep convolutional neural network is trained on the big data of the blockchain, the error of the network during computation is reduced.

Description

Picture labeling method and device based on block chain, storage medium and server
Technical Field
The invention relates to the technical field of image detection, image classification and neural networks, in particular to a picture labeling method and device based on a block chain, a storage medium and a server.
Background
With the development of artificial intelligence in recent years, the demand for image annotation tasks has kept increasing. A number of picture labeling tools have appeared, most of them desktop offline applications. The picture labeling tools in the prior art mainly fall into the following categories. The first is tag-based picture labeling, in which keywords are attached to a picture: a picture is input, and several keywords are output as the labeling result. The second is desktop picture labeling applications. Such labeling is generally used to provide training data for deep-learning-based target detection and recognition algorithms; for example, each airplane in a picture is marked with a rectangular bounding box, and the corresponding information such as length, width and coordinates is recorded. The third is online picture annotation websites, which at present only provide a single type of annotation. During product development, developers need other personnel to assist in labeling pictures, but those personnel cannot label the pictures correctly without understanding the developers' products, which greatly wastes the developers' time and energy.
Disclosure of Invention
In order to overcome the above technical problems, in particular the single labeling form, low labeling precision and large labeling error of current picture labeling, the following technical solutions are provided:
the image labeling method based on the block chain provided by the embodiment of the invention comprises the following steps:
acquiring a labeling request for labeling a picture to be labeled, and extracting layered features of the picture to be labeled through each convolution layer with different scales of a deep convolutional neural network in a block chain network according to the labeling request, wherein the layered features comprise bottom layer geometric features, middle layer texture features and high layer semantic features of the picture to be labeled in the convolution layers;
determining the picture information of the picture to be labeled according to the hierarchical characteristics, wherein the picture information comprises: semantic information, geometric information, texture information;
and marking the picture to be marked according to the picture information to obtain a marked picture.
Optionally, before the obtaining of the annotation request for annotating the to-be-annotated picture, the method further includes:
responding to a training request initiated by a node where a deep convolutional neural network in a block chain network is located, and acquiring a sample picture in the block chain network;
and training the deep convolutional neural network through the sample picture, wherein the sample picture is a labeled picture provided by different sample nodes in the block chain network.
Optionally, before responding to a training request initiated by a node where the deep convolutional neural network is located in the blockchain network, the method further includes:
and according to a preset training triggering operation, the node where the deep convolutional neural network is located sends the training request.
Optionally, before the obtaining of the annotation request for annotating the to-be-annotated picture, the method further includes:
the method comprises the steps that a block chain network acquires registered account information of a user, and an account key pair of the user logging in the block chain network is generated according to the registered account information, wherein the account key pair comprises a private key and a public key;
and storing the public key in the blockchain, and sending the private key to the user.
optionally, the obtaining a labeling request for labeling a picture to be labeled, and extracting layered features of the picture to be labeled through each convolution layer with different scales of a deep convolutional neural network in a block chain network according to the labeling request includes:
acquiring the information of the user processed by the private key in the annotation request;
verifying the information of the user by using the public key, and determining whether the user is a blockchain network user;
and when the user is a user of the block chain network, extracting the layered features of the picture to be labeled through the convolution layers with different scales of the deep convolutional neural network in the block chain network according to the labeling request.
Optionally, the extracting, according to the labeling request, the layered features of the to-be-labeled picture through each convolution layer of the deep convolutional neural network in the block chain network with different scales includes:
inputting the picture to be labeled into the deep convolutional neural network in the block chain network according to the labeling request, wherein the deep convolutional neural network sequentially executes convolution of each scale on the picture to be labeled from a first scale to obtain convolution characteristics under different scales;
obtaining the convolution characteristic under the last scale, and performing deconvolution on the convolution characteristic to obtain a reduction characteristic in an adjacent scale which is adjacent to the scale and is one scale larger than the scale;
fusing the reduction features and the convolution features which are positioned in the same scale with the reduction features to obtain fused features which are positioned in the same scale with the reduction features;
counting convolution times and deconvolution times, and when the deconvolution times are smaller than the convolution times, performing deconvolution on the fusion feature and the convolution feature which is located at the same scale as the fusion feature; and when the deconvolution times are equal to the convolution times, determining the fusion features as the hierarchical features of the picture to be labeled.
Optionally, after the labeling of the picture to be labeled according to the picture information to obtain a labeled picture, the method further includes:
and determining an object occupying the largest area of the marked picture in the marked picture according to the picture information of the marked picture, and determining the category of the marked picture based on the object.
The image labeling device based on the block chain provided by the embodiment of the invention comprises:
the acquisition module is used for acquiring a labeling request for labeling a picture to be labeled, and extracting layered features of the picture to be labeled through various convolution layers with different scales of a deep convolutional neural network in a block chain network according to the labeling request, wherein the layered features comprise bottom layer geometric features, middle layer texture features and high layer semantic features of the picture to be labeled in the convolution layers;
a determining module, configured to determine, according to the hierarchical feature, picture information of the picture to be labeled, where the picture information includes: semantic information, geometric information, texture information;
and the marking module is used for marking the picture to be marked according to the picture information to obtain a marked picture.
Optionally, the method further comprises:
and the training module is used for acquiring a sample picture through the block chain network and training the deep convolutional neural network through the sample picture.
Optionally, the method further comprises:
and the generation module is used for generating an account key pair by using the block chain network, storing the public key in the block chain and sending the private key to a user, wherein the account key pair comprises a private key and a public key.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the image annotation method based on the block chain according to any technical solution is implemented.
An embodiment of the present invention further provides a server, including:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the steps of the blockchain-based picture labeling method according to any of the above technical solutions.
Compared with the prior art, the invention has the following beneficial effects:
1. The blockchain-based picture labeling method provided by the embodiment of the application includes the following steps: acquiring a labeling request for labeling a picture to be labeled, and extracting, according to the labeling request, hierarchical features of the picture to be labeled through the convolution layers of different scales of a deep convolutional neural network in a blockchain network, the hierarchical features comprising bottom-layer geometric features, middle-layer texture features and high-level semantic features of the picture to be labeled in the convolution layers; determining picture information of the picture to be labeled according to the hierarchical features, the picture information comprising semantic information, geometric information and texture information; and labeling the picture to be labeled according to the picture information to obtain a labeled picture. The features of all scales, that is the finely divided and the coarsely divided features, are fused together, so that the information about the objects in the picture is more accurate. A deep neural network in the blockchain network is used to extract the features of the picture at different scales, and because the amount of sample data available for training that network is large, its error rate is lower and its accuracy in feature extraction and classification is higher.
2. Before acquiring the labeling request for labeling the picture to be labeled, the blockchain-based picture labeling method provided by the embodiment of the application further includes: the blockchain network acquires the registration account information of a user and generates, according to that registration account information, an account key pair with which the user logs in to the blockchain network, the account key pair comprising a private key and a public key; the public key is stored in the blockchain, and the private key is sent to the user. On the basis of big data, and in order to guarantee the privacy of products, the data of users on the platform that uses the blockchain is encrypted based on blockchain technology, so that the labeled pictures are prevented from being stolen or maliciously tampered with.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart illustrating an implementation manner of an exemplary embodiment of a method for tagging a picture based on a block chain according to the present invention;
FIG. 2 is a schematic structural diagram of an exemplary embodiment of a device for labeling pictures based on a block chain according to the present invention;
fig. 3 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps or operations, but do not preclude the presence or addition of one or more other features, integers, steps, operations or groups thereof.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It will be appreciated by those skilled in the art that the terms "application," "application program," "application software," and the like, as used herein, refer to a computer software product that consists of a collection of computer instructions and associated data resources and is constructed electronically in accordance with the principles of the present invention. Unless otherwise specified, such naming is not limited by the programming language or its level, nor by the operating system or platform on which it runs. Of course, such concepts are not limited to any particular type of terminal.
In an implementation manner of the picture labeling method based on the block chain provided in the embodiment of the present application, as shown in fig. 1, the method includes: s100, S200 and S300.
S100: acquiring a labeling request for labeling a picture to be labeled, and extracting layered features of the picture to be labeled through each convolution layer with different scales of a deep convolutional neural network in a block chain network according to the labeling request, wherein the layered features comprise bottom layer geometric features, middle layer texture features and high layer semantic features of the picture to be labeled in the convolution layers;
S200: determining the picture information of the picture to be labeled according to the hierarchical features, wherein the picture information comprises: semantic information, geometric information, texture information;
S300: labeling the picture to be labeled according to the picture information to obtain a labeled picture.
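By way of a non-limiting illustration only, the flow of steps S100 to S300 can be sketched in Python as follows; every function name and the record layout here are assumptions made for this example and are not defined by the present application.

```python
# Illustrative sketch of the S100 -> S200 -> S300 flow described above.
# Every name here is an assumption made for this example, not part of the application.
from dataclasses import dataclass, field

@dataclass
class HierarchicalFeatures:          # bottom / middle / high level features (S100)
    geometric: dict = field(default_factory=dict)
    texture: dict = field(default_factory=dict)
    semantic: dict = field(default_factory=dict)

def extract_hierarchical_features(picture) -> HierarchicalFeatures:
    # Placeholder for the multi-scale deep CNN in the blockchain network.
    return HierarchicalFeatures(
        geometric={"building": {"width_px": 320, "height_px": 540}},
        texture={"building": "brick"},
        semantic={"building": {"area_px": 172800}},
    )

def derive_picture_info(feats: HierarchicalFeatures) -> dict:
    # S200: merge the three feature levels into per-object picture information.
    objects = set(feats.geometric) | set(feats.texture) | set(feats.semantic)
    return {o: {"geometry": feats.geometric.get(o),
                "texture": feats.texture.get(o),
                "semantics": feats.semantic.get(o)} for o in objects}

def apply_labels(picture_info: dict) -> list:
    # S300: produce one label per detected object.
    return [f"{name}: {info}" for name, info in picture_info.items()]

info = derive_picture_info(extract_hierarchical_features(picture=None))
print(apply_labels(info))
```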
A blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. With distributed accounting and storage there is no centralized hardware or management authority, the rights and obligations of every node are equal, and the data blocks in the system are jointly maintained by all nodes in the system that have a maintenance function.
The hierarchical features comprise bottom-layer geometric features, middle-layer texture features and high-level semantic features of the picture in the convolution layers. The bottom-layer geometric features describe the geometric shape and size of each object in the picture; the middle-layer texture features are used to distinguish the category of each object, such as plant, animal or building; and the high-level semantic features are used for matting according to the meaning expressed by each object in the picture, that is, for separating out instances of the same object. By extracting these hierarchical features, the object types in the picture can be expressed and distinguished more accurately, and the picture is then labeled on the basis of those object types. For example, a picture may include the ground, traffic lines, sidewalks, pedestrians, buildings, trees and other infrastructure. The geometric features are, for example, the geometry and size of the ground, the shape and size of the traffic lines, and the shape and size of the trees; the texture features are, for example, the textures of the traffic lines, the ground and the trees.
The picture to be labeled is input into the deep convolutional neural network. After receiving the picture, the network convolves it and extracts the hierarchical features of the picture through the convolution layers and pooling layers at the different scales, the hierarchical features comprising the features produced by each convolution layer and pooling layer at each scale. In the network structure there are, at the same scale, one pooling layer and more than one convolution layer. A convolution layer performs feature extraction on its input; a pooling layer compresses the input feature map, making it smaller and simplifying the network computation while retaining the main features. After a convolution layer, the pooling layer of the next scale downsamples the features extracted by the convolution layer of the previous scale, producing the same number of smaller features; the convolution layers at that scale then refine the pooled features, yielding more accurate and/or additional features. As the convolution deepens, the extracted features therefore express the corresponding target objects in the picture better and better, so that the deep convolutional neural network can distinguish the object types in the picture to be labeled more accurately. Furthermore, because different features are extracted at different scales, hierarchical features are obtained. To give the same target object in the picture more corresponding features, the hierarchical features extracted by the convolution layers at different scales are fused, and the picture information is obtained from the fused result. The picture information comprises one or more of the bottom-layer geometric information, middle-layer texture information and high-level semantic information of a target, so the information about each target is rich; the deep convolutional neural network can integrate this picture information to accurately classify each object in the picture to be labeled and then label the picture according to that classification. This avoids the influence of noise and similar disturbances in the picture on the labeling, yields accurate picture information as reference data for the labeling, and thereby makes the labeling accurate.
In combination with the foregoing process, in order to label the picture clearly, the objects in the picture are determined from the hierarchical features described above, the information about those objects is then determined, and the picture is labeled on the basis of that information. For example, an object with a regular geometric shape is identified from the geometric features and its size information is obtained, its texture is identified as that of a building from the texture features, and the region it occupies in the picture is determined from the semantic features; the object is then labeled as a building in the picture according to these features, including the region and the size of the building. If plants are also present in the picture, they can be labeled in the same way as the building. Through the extraction of the hierarchical features at every level of the deep convolutional neural network, the classification of object categories and of their size regions in the picture is achieved in one step, so the information in the labeled picture is more accurate and finer; and because the same object in the picture carries fused features, its features are richer, so the object can be described accurately through those rich features.
Further, because the deep convolutional neural network has been trained on sample pictures, the hierarchical features it extracts can be converted into the text information corresponding to each feature level: for example, the size component of the geometric features is taken as the size of an object, the text information corresponding to the texture features is determined, and the precise outline of the object in the picture is determined from the semantic features. The specific category of each object in the picture is obtained from the information corresponding to the hierarchical features, and the picture is then labeled according to those categories. The label may take one or more forms, such as text labeling, covering objects of the same category with a colour, or dividing categories with coloured lines, and the label content may include hierarchical feature information such as the geometric size, the texture and the corresponding region. Combining these three kinds of information realizes the accurate labeling of each object in the picture, and because the features involved are numerous and multi-level, the accuracy of the picture labeling is improved.
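As a purely illustrative sketch of the labeling forms just described (text labels, coloured outlines, and label content carrying the geometric size, texture and region of an object), the following example uses the Pillow library to draw one such label; the object record layout and the drawing choices are assumptions made for this example.

```python
# Minimal sketch: draw one object's label (bounding box + text) onto a picture.
# The object record layout is an assumption for illustration, not the application's format.
from PIL import Image, ImageDraw

def draw_label(picture: Image.Image, obj: dict) -> Image.Image:
    out = picture.copy()
    draw = ImageDraw.Draw(out)
    x0, y0, x1, y1 = obj["box"]                      # geometric information: position and size
    draw.rectangle([x0, y0, x1, y1], outline="red")  # coloured outline around the object region
    caption = f'{obj["category"]} {x1 - x0}x{y1 - y0}px, texture: {obj["texture"]}'
    draw.text((x0, max(0, y0 - 12)), caption, fill="red")  # character (text) labeling
    return out

img = Image.new("RGB", (640, 480), "white")
labeled = draw_label(img, {"category": "building", "box": (40, 60, 360, 420), "texture": "brick"})
labeled.save("labeled_example.png")
```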
Optionally, before the obtaining of the annotation request for annotating the to-be-annotated picture, the method further includes:
responding to a training request initiated by a node where a deep convolutional neural network in a block chain network is located, and acquiring a sample picture in the block chain network;
and training the deep convolutional neural network through the sample picture, wherein the sample picture is a labeled picture provided by different sample nodes in the block chain network.
Sample pictures are acquired through the blockchain network, and the deep convolutional neural network is trained with those sample pictures. Specifically, a training request initiated by the node where the deep convolutional neural network is located is responded to; sample pictures in the blockchain network are acquired according to the training request, and the deep convolutional neural network is trained with them, the sample pictures being labeled pictures provided by different sample nodes in the blockchain network. The blockchain network comprises a plurality of nodes, and the deep convolutional neural network is located at one of them. When the network needs to be trained, a training request is sent out through the node where it is located, and the blockchain network acquires sample pictures according to that request so that the deep convolutional neural network can be trained with them; the sample pictures at the different sample nodes come from labeled pictures provided by users in the blockchain network. Because the number of sample pictures collected through the blockchain network is large, and the deep convolutional neural network is retrained from time to time with more sample pictures, the loss of the network during information extraction and classification is reduced and the accuracy of its information recognition and classification is improved. In one embodiment, in order to let the deep convolutional neural network update more quickly, improve the accuracy of its hierarchical feature extraction and reduce the extraction error, the node where the deep convolutional neural network is located sends out a training request whenever a preset training trigger operation occurs in the blockchain network.
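A minimal sketch of this sample collection and training step is given below; the node and model interfaces (`get_labeled_pictures`, `fit_step`) are assumptions made for this example and are not part of the described blockchain network.

```python
# Sketch of gathering labeled sample pictures from blockchain nodes and running one
# training pass; the node/model interfaces are assumptions made for this example.
def collect_samples(nodes):
    samples = []
    for node in nodes:
        # each sample node contributes labeled pictures it has stored on the chain
        samples.extend(node.get_labeled_pictures())
    return samples

def train_on_request(model, nodes, epochs: int = 1):
    samples = collect_samples(nodes)
    for _ in range(epochs):
        for picture, labels in samples:
            model.fit_step(picture, labels)   # one supervised update per labeled picture
    return model
```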
In one embodiment, the preset training trigger operation mentioned above includes a preset training period: for example, at a preset interval of one day or one week, the node where the deep convolutional neural network is located sends a training request. In another embodiment the preset training trigger operation depends on the number of sample pictures: the blockchain network counts the sample pictures provided by each sample node, and when the number of new sample pictures reaches a certain threshold, or the total number of sample pictures reaches a certain threshold, the node where the deep convolutional neural network is located sends out a training request. Through this process the training effect of the deep convolutional neural network can be improved quickly, that is, its training precision is raised, the hierarchical features it extracts become more accurate, and the picture to be labeled can then be labeled accurately and effectively on the basis of those accurate hierarchical features, reducing the labeling error. In yet another embodiment the preset training trigger operation includes a training operation triggered by a user: if a user enters the blockchain network and clicks a training trigger module in it, the preset training trigger operation is triggered and the node where the deep convolutional neural network is located sends a training request. Training the deep convolutional neural network according to user demand ensures that the feature extraction of the trained network meets the user's requirements and improves the user experience.
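The three trigger conditions described above can be sketched as a single check, as in the following example; the interval of one week and the threshold of 1000 new samples are assumed values used only for illustration.

```python
# Sketch of the three trigger conditions described above: a time interval, a sample-count
# threshold, and a manual user trigger. Thresholds and field names are assumed values.
import time

TRAIN_INTERVAL_S = 7 * 24 * 3600     # e.g. retrain at least once a week (assumed value)
NEW_SAMPLE_THRESHOLD = 1000          # e.g. retrain after 1000 newly contributed samples

def should_send_training_request(last_train_ts: float, new_samples: int,
                                 user_triggered: bool) -> bool:
    if user_triggered:                                   # user clicked the training trigger
        return True
    if new_samples >= NEW_SAMPLE_THRESHOLD:              # enough new labeled pictures arrived
        return True
    if time.time() - last_train_ts >= TRAIN_INTERVAL_S:  # the preset period has elapsed
        return True
    return False
```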
Optionally, before the obtaining of the annotation request for annotating the to-be-annotated picture, the method further includes:
the method comprises the steps that a block chain network acquires registered account information of a user, and an account key pair of the user logging in the block chain network is generated according to the registered account information, wherein the account key pair comprises a private key and a public key;
and storing the public key in the block chain, and sending the private key to a user.
An account key pair is generated by the blockchain network, the public key is stored in the blockchain, and the private key is sent to the user, the account key pair comprising a private key and a public key. In the embodiment of the application, the method provided by this technical solution can be applied to a picture labeling platform located in a certain public blockchain. In practice the deep convolutional neural network and the picture labeling platform may be in the same blockchain or in different blockchains; when they are located in different blockchains, cross-chain picture labeling can be realized. For example, before using the picture labeling provided by this application, a user can send a storage request for pictures to be labeled through an interface provided in advance by the picture labeling platform, so that all of the user's pictures to be labeled are stored in the blockchain. In practice, the user's information can be collected when a system account or a picture labeling system account is registered, or it can be collected just before picture labeling is needed. Further, in order to ensure the security of the user information and of the data associated with it (the pictures to be labeled and the labeled pictures), when the user registers an account the blockchain network obtains the registration account information and, as described above, generates from it an account key pair with which the user logs in to the blockchain network, the account key pair comprising a private key and a public key. Specifically, the account key pair is generated by the blockchain network, and the private key is sent to the user side so that the user information is processed, that is signed, with the private key; the public key is stored in the labeling platform applying the method provided by this technical solution so that verification can be performed with it. The users of the pictures to be labeled are determined through this verification, which makes it possible to distinguish users in different blockchain networks and prevents the security problems that tampered information would cause for product pictures during research and development.
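A minimal sketch of the account key pair generation is given below, assuming an elliptic-curve scheme implemented with the Python `cryptography` package; the application does not fix a particular signature algorithm, so this choice is an assumption for illustration.

```python
# Sketch of generating an account key pair at registration, assuming an elliptic-curve
# scheme via the `cryptography` package; the application does not fix the algorithm.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

def register_account(account_info: dict) -> tuple[bytes, bytes]:
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    private_pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    )
    # public_pem would be stored on the chain together with account_info;
    # private_pem would be returned to the user and never stored on the chain.
    return public_pem, private_pem
```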
Optionally, the obtaining a labeling request for labeling a picture to be labeled, and extracting layered features of the picture to be labeled through each convolution layer with different scales of a deep convolutional neural network in a block chain network according to the labeling request includes:
acquiring the information of the user processed by the private key in the annotation request;
verifying the information of the user by using the public key, and determining whether the user is a blockchain network user;
and when the user is a user of the block chain network, extracting the layered features of the picture to be labeled through the convolution layers with different scales of the deep convolutional neural network in the block chain network according to the labeling request.
In the embodiment provided by the application, after the blockchain network receives a user's request to label a picture to be labeled, and in order to ensure the security of every user's pictures in the blockchain network and to prevent the data in it from being maliciously tampered with by users with other intentions, the user information processed with the private key is obtained from the labeling request. The blockchain network then verifies that information with the public key; when the user is confirmed to be a user of the blockchain network, the subsequent picture information extraction and related processing can be performed on the picture to be labeled on the basis of the request, and the hierarchical features of the picture to be labeled are extracted through the convolution layers of different scales of the deep convolutional neural network in the blockchain network.
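Continuing the assumption of an elliptic-curve key pair from the sketch above, the verification of a labeling request can be illustrated as follows; the message layout is again an assumption made for this example.

```python
# Sketch of checking a labeling request: the user information in the request is signed
# with the private key and verified here with the public key stored on the chain.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def is_blockchain_user(public_pem: bytes, user_info: bytes, signature: bytes) -> bool:
    public_key = serialization.load_pem_public_key(public_pem)
    try:
        public_key.verify(signature, user_info, ec.ECDSA(hashes.SHA256()))
        return True           # signature matches: proceed with feature extraction
    except InvalidSignature:
        return False          # not a registered user of the blockchain network
```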
Optionally, the extracting, according to the labeling request, the layered features of the to-be-labeled picture through each convolution layer of the deep convolutional neural network in the block chain network with different scales includes:
inputting the picture to be labeled into the deep convolutional neural network in the block chain network according to the labeling request, wherein the deep convolutional neural network sequentially executes convolution of each scale on the picture to be labeled from a first scale to obtain convolution characteristics under different scales;
obtaining the convolution characteristic under the last scale, and performing deconvolution on the convolution characteristic to obtain a reduction characteristic in an adjacent scale which is adjacent to the scale and is one scale larger than the scale;
fusing the reduction features and the convolution features which are positioned in the same scale with the reduction features to obtain fused features which are positioned in the same scale with the reduction features;
counting convolution times and deconvolution times, and when the deconvolution times are smaller than the convolution times, performing deconvolution on the fusion feature and the convolution feature which is located at the same scale as the fusion feature; and when the deconvolution times are equal to the convolution times, determining the fusion features as the hierarchical features of the picture to be labeled.
In the embodiment provided by the application, when hierarchical feature extraction is performed by the deep convolutional neural network, the picture to be labeled is first input into the network. The network performs a first-scale convolution on the picture to obtain a first convolution feature, then performs a second-scale convolution on the result of the first-scale convolution to obtain a second convolution feature, then a third-scale convolution on the result of the second-scale convolution to obtain a third convolution feature, and so on until the last scale of the deep convolutional neural network is reached. In order to improve the accuracy of the hierarchical feature extraction, each convolution feature also needs to be deconvolved. Correspondingly, after the convolution at the last scale is completed, the convolution feature of the last scale is deconvolved and thereby restored, giving a restoration feature in the adjacent scale one scale larger than the last; this restoration feature is fused with the convolution feature at the same scale, yielding a fusion feature located at that scale. The fusion feature stands in for the feature at that scale on which convolution has not yet been performed, but it is not necessarily identical or equivalent to it; rather, the fusion feature is more accurate and finer than that not-yet-convolved feature at the same scale. In order to prevent the deep convolutional neural network from deconvolving indefinitely, the numbers of deconvolutions and convolutions are counted; when the number of deconvolutions equals the number of convolutions, the fusion feature has reached the first scale, and at that point the fusion feature at the first scale is determined to be the hierarchical features, which include the geometric features, texture features and semantic features. Because the fusion features are more accurate and finer, the features corresponding to the same object are richer, the finer characteristics of that object can be expressed, and the object in the picture can be labeled more accurately.
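Read in this way, the procedure resembles an encoder-decoder network with skip connections. The following is a minimal PyTorch sketch under that reading; the channel sizes, the use of strided convolutions for down-scaling, and concatenation as the fusion operation are assumptions made for this example rather than details fixed by the application.

```python
# Minimal PyTorch sketch of the multi-scale convolution / deconvolution fusion described
# above (convolve down scale by scale, then deconvolve back up, fusing each restored
# feature with the convolution feature of the same scale until the counts are equal).
# Channel sizes and concatenation as the fusion operation are assumptions.
import torch
import torch.nn as nn

class MultiScaleExtractor(nn.Module):
    def __init__(self, channels=(3, 16, 32, 64)):
        super().__init__()
        self.downs = nn.ModuleList(
            nn.Conv2d(channels[i], channels[i + 1], 3, stride=2, padding=1)
            for i in range(len(channels) - 1))
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(channels[i + 1], channels[i], 4, stride=2, padding=1)
            for i in reversed(range(len(channels) - 1)))
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * channels[i], channels[i], 1)
            for i in reversed(range(len(channels) - 1)))

    def forward(self, x):
        convs = []
        for down in self.downs:                 # convolution at each successive scale
            convs.append(x)
            x = torch.relu(down(x))
        for up, fuse, skip in zip(self.ups, self.fuse, reversed(convs)):
            x = up(x)                           # deconvolution: restore the adjacent larger scale
            x = torch.relu(fuse(torch.cat([x, skip], dim=1)))  # fuse with same-scale conv feature
        return x                                # fused (hierarchical) features at the first scale

features = MultiScaleExtractor()(torch.randn(1, 3, 64, 64))
print(features.shape)  # torch.Size([1, 3, 64, 64])
```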
and determining an object occupying the largest area of the marked picture in the marked picture according to the picture information of the marked picture, and determining the category of the marked picture based on the object.
In the embodiment provided by the application, in order to classify the labeled pictures, the labeled pictures are classified on the basis of their picture information. Correspondingly, a picture can be classified according to the area of each object in the labeled picture, so that the content of the labeled picture is reflected: for example, if a picture contains a street view and public transport vehicles, and the public transport vehicles occupy the largest area of the whole labeled picture, the labeled picture is classified as a vehicle picture. Pictures can further be classified according to the category of each object in them, so that one picture may be assigned to multiple categories. After the picture labeling is finished, the labeled picture is sent to the user so that the user can use it in the research and development process. Further, the picture category can be determined on the basis of preset picture classification rules, which can extract the key information from the labels in order to classify the pictures. On this basis, an overall classification is performed from the various object categories in the picture: for example, when the picture contains information such as the ground, trees, vehicles, buildings and zebra crossings, it can be classified as a street picture; by combining the various kinds of information in the picture, one picture can be assigned to multiple categories; and if only people appear in the picture, it can be classified as a person picture. If the picture contains words and sentences, then after the hierarchical features are extracted, the picture information obtained from them includes character information, and the picture is further classified comprehensively on the basis of that character information together with the picture information of the other structures; the character information may be object annotation information (such as a watermark), text captured in the photographed scene, or text added artificially.
In the embodiment provided by the application, in combination with the foregoing process, it is further determined whether the labeled picture contains an object whose area corresponding to the semantic information matches an area corresponding to preset semantic information; if so, the object corresponding to the semantic information in the labeled picture is determined to be the largest object.
If not, it is determined whether the labeled picture contains an object whose area corresponding to the texture information matches an area corresponding to preset texture information; if so, that object is determined to be the largest object. If not, it is determined whether the labeled picture contains an object whose area corresponding to the geometric information matches an area corresponding to preset geometric information; if so, that object is determined to be the largest object; and if not, the object corresponding to the semantic information in the labeled picture is determined to be the largest object. For example, the semantic information is checked first and matched against the objects in the labeled picture; if a matched object occupies one quarter of the area of the labeled picture, that object is the largest object. If no object is matched, the texture information is checked, and an object occupying one third of the area of the labeled picture is the largest object. If still no object is matched, the geometric information is checked, and an object occupying one half of the area of the labeled picture is the largest object. If none of these matches exists, the object matched against the semantic information is taken as the standard.
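The matching cascade described above can be sketched as follows; the area shares of one quarter, one third and one half follow the example in the text, while the object record layout is an assumption made for this illustration.

```python
# Sketch of the matching cascade described above: try semantic, then texture, then
# geometric information to find the object occupying the largest share of the labeled
# picture, and use that object to set the picture category. The thresholds (1/4, 1/3,
# 1/2) follow the example in the text; the record layout is an assumption.
def largest_object(objects: list[dict], picture_area: float) -> dict:
    cascade = [("semantic_area", 1 / 4), ("texture_area", 1 / 3), ("geometric_area", 1 / 2)]
    for key, min_share in cascade:
        matches = [o for o in objects if o.get(key, 0) / picture_area >= min_share]
        if matches:
            return max(matches, key=lambda o: o[key])
    # fall back to the object with the largest semantic area
    return max(objects, key=lambda o: o.get("semantic_area", 0))

def picture_category(objects: list[dict], picture_area: float) -> str:
    return largest_object(objects, picture_area)["category"]

objs = [{"category": "bus", "semantic_area": 90000}, {"category": "tree", "semantic_area": 20000}]
print(picture_category(objs, picture_area=640 * 480))  # -> "bus"
```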
An embodiment of the present invention further provides a device for labeling a picture based on a block chain, in one implementation manner, as shown in fig. 2, the device includes: the acquisition module 100, the determination module 200 and the labeling module 300:
the acquiring module 100 is configured to acquire a labeling request for labeling a picture to be labeled, and extract layered features of the picture to be labeled through each convolutional layer of a deep convolutional neural network in a block chain network, where the convolutional layers are different in scale, according to the labeling request, where the layered features include a bottom layer geometric feature, a middle layer texture feature, and a high layer semantic feature of the picture to be labeled in the convolutional layers;
a determining module 200, configured to determine, according to the hierarchical feature, picture information of the picture to be annotated, where the picture information includes: semantic information, geometric information, texture information;
and the labeling module 300 is configured to label the to-be-labeled picture according to the picture information to obtain a labeled picture.
Further, as shown in fig. 2, the image annotation apparatus based on a block chain provided in the embodiment of the present invention further includes: the response module 101 is configured to respond to a training request initiated by a node where a deep convolutional neural network in a blockchain network is located, and obtain a sample picture in the blockchain network; the training module 102 is configured to train the deep convolutional neural network through the sample picture, where the sample picture is a labeled picture provided by different sample nodes in the block chain network. A request sending module 1011, configured to trigger operation according to preset training, where a node where the deep convolutional neural network is located sends the training request. The generation module 103 is configured to acquire registration account information of a user in a blockchain network, and generate an account key pair for the user to log in the blockchain network according to the registration account information, where the account key pair includes a private key and a public key. A private key sending module 104, configured to store the public key in the block chain, and send the private key to the user. An obtaining unit 110, configured to obtain information of the user, which is processed by using the private key, in the annotation request; a determining unit 120, configured to verify information of the user by using the public key, and determine whether the user is a blockchain network user; an extracting unit 130, configured to extract, when the user is a user of the blockchain network, the hierarchical features of the picture to be labeled through each convolution layer of the deep convolutional neural network in the blockchain network with different scales according to the labeling request. The convolution unit 131 is configured to input the picture to be labeled into the deep convolution neural network in the block chain network according to the labeling request, where the deep convolution neural network sequentially performs convolution of each scale on the picture to be labeled from a first scale to obtain convolution features under different scales; a deconvolution unit 132, configured to obtain the convolution feature in the last scale, perform deconvolution on the convolution feature, and obtain a reduction feature in an adjacent scale that is adjacent to the scale and is one-size larger than the scale; a fusion unit 133, configured to fuse the reduction feature and the convolution feature located in the same scale as the reduction feature, so as to obtain a fusion feature located in the same scale as the reduction feature; a counting unit 134, configured to count a convolution number and a deconvolution number, and when the deconvolution number is smaller than the convolution number, perform a deconvolution step on the fusion feature and the convolution feature that is located at the same scale as the fusion feature; and when the deconvolution times are equal to the convolution times, determining the fusion features as the hierarchical features of the picture to be labeled. The category determining module 400 is configured to determine, according to the image information of the labeled image, an object occupying the largest area of the labeled image in the labeled image, and determine a category of the labeled image based on the object.
The image labeling device based on the block chain according to the embodiment of the present invention can implement the above-mentioned embodiment of the image labeling method based on the block chain, and for specific function implementation, please refer to the description in the embodiment of the method, which is not described herein again.
In the computer-readable storage medium provided in the embodiment of the present invention, a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the image annotation method based on a block chain according to any technical scheme is implemented. The computer-readable storage medium includes, but is not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks, ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memories, magnetic cards, or optical cards. That is, a storage device includes any medium that stores or transmits information in a form readable by a device (e.g., a computer, a cellular phone), and may be a read-only memory, a magnetic or optical disk, or the like.
According to the computer-readable storage medium provided by the embodiment of the invention, the embodiment of the picture labeling method based on the block chain can be realized, the features of the picture with different scales are extracted by adopting the deep neural network in the block chain network, and the sample data volume of deep neural network training is larger, so that the feature extraction and classification process has higher accuracy and lower error rate. The features of all scales are fused together, namely the features of fine division and rough division are fused together, so that the information of the object in the obtained picture is more accurate. Meanwhile, the deep convolutional neural network is trained by combining big data of the block chain, so that the error of the deep neural network in the calculation process is reduced; the image labeling method based on the block chain provided by the embodiment of the application comprises the following steps: acquiring a labeling request for labeling a picture to be labeled, and extracting layered features of the picture to be labeled through each convolution layer with different scales of a deep convolutional neural network in a block chain network according to the labeling request, wherein the layered features comprise bottom layer geometric features, middle layer texture features and high layer semantic features of the picture to be labeled in the convolution layers; determining the picture information of the picture to be labeled according to the hierarchical characteristics, wherein the picture information comprises: semantic information, geometric information, texture information; and marking the picture to be marked according to the picture information to obtain a marked picture. The deep convolutional neural network in the block chain network has larger sample pictures, so that the deep convolutional neural network trained by a large number of sample pictures is more accurate in feature extraction and classification, the error rate is lower, the bottom layer geometric features are the geometric shapes and the geometric sizes of all objects in the picture, the middle layer texture features are used for distinguishing the categories of all the objects, such as plants, animals, buildings and the like, and the high layer semantic features are matting according to the meaning expressed by the objects in the picture, namely, the same object in the picture is distinguished. The object types in the pictures can be more accurately expressed and distinguished by extracting the hierarchical features in the pictures, and the pictures are labeled based on the object types in the pictures. For example, a picture includes: ground, traffic lines, sidewalks, pedestrians, buildings, trees, and other infrastructure. Geometric features such as ground geometry and size, traffic line shape size, tree shape and size. The textural features are the shape and size of the traffic line, ground, tree. And inputting the picture to be marked into a deep convolutional neural network, after the deep convolutional neural network acquires the picture to be marked, carrying out convolution on the picture to be marked, and extracting the hierarchical features of the picture to be marked in each convolutional layer and each pooling layer through convolution, wherein the hierarchical features comprise features passing through each convolutional layer and each pooling layer under different scales. 
Because in the neural network result, under the same scale, there is one pooling layer and more than one convolution layer. In combination with the above, different features are extracted at different scales, and thus, hierarchical features can be obtained. In order to enable more features corresponding to the same target object in the picture to be marked, the layered features extracted from the convolutional layers under different scales are fused, and feature information of each object in the picture can be obtained. After the picture information is obtained, the picture information comprises one or more of bottom layer geometric information, middle layer texture information and high layer semantic information of a target, so that the information of the target is rich, the deep convolutional neural network can integrate the picture information to accurately classify each object in the picture to be labeled, and then label the picture to be labeled according to the classification, so that the influence of noise and the like in the picture on labeling of the picture is avoided, the picture information is accurately obtained, the picture labeling provides reference data, and the accuracy of the picture labeling is further realized. In combination with the foregoing process, in order to clearly label the picture, the object in the picture is determined based on the foregoing layered features, and then the object information in the picture is determined, so that the picture is labeled based on the information. For example, determining an object having a regular geometric shape in a graph by using a geometric feature, and obtaining size information of the object, determining a texture of the object as a building by using a texture feature, and determining an occupied area of the object in the graph by using a semantic feature, and labeling the object as a building in the graph according to the foregoing features, includes: the area of the building, the size of the building, etc. If plants are also included in the picture, this can be done based on the building labeling process as well. Through the extraction of the hierarchical features of all levels of the deep convolutional neural network, the classification of object categories and size regions in the picture is realized in one step, so that the information in the marked picture is more accurate and fine, and after the same object in the picture has the fusion features, the features of the same object are more abundant, so that the object in the picture can be accurately described through the abundant features. The specific category of each object in the picture can be obtained according to the information corresponding to the hierarchical features, and then the picture is labeled according to the category, wherein the label can be one or more of character labeling, color coverage of the same category, colored lines and the same category division, the label content can comprise the hierarchical feature information such as geometric dimension, texture and corresponding area, the information of the three is combined together to realize accurate labeling of each object in the picture, and the accuracy of the picture labeling is improved due to the fact that the related features are multiple and have multi-level features.
In addition, in another embodiment, the present invention further provides a server. As shown in fig. 3, the server includes a processor 503, a memory 505, an input unit 507 and a display unit 509. Those skilled in the art will appreciate that the structure shown in fig. 3 does not limit the server, which may include more or fewer components than those shown, or combine certain components. The memory 505 may be used to store the application 501 and various functional modules, and the processor 503 runs the application 501 stored in the memory 505, thereby performing the various functional applications and data processing of the device. The memory 505 may be internal memory or external memory, or include both. The internal memory may comprise read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or random access memory (RAM). The external memory may include a hard disk, a floppy disk, a ZIP disk, a USB flash drive, a magnetic tape, and so on. The memory disclosed herein includes, but is not limited to, these types, and the memory 505 is described by way of example rather than limitation.
The input unit 507 is used for receiving signal input, account registration information entered by a user, and related picture information. The input unit 507 may include a touch panel and other input devices. The touch panel can collect touch operations performed by the user on or near it (for example, operations performed with a finger, a stylus or any other suitable object or accessory) and drive the corresponding connected device according to a preset program; the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as play control keys and switch keys), a trackball, a mouse and a joystick. The display unit 509 may be used to display information input by the user, information provided to the user, and the various menus of the computer device, and may take the form of a liquid crystal display, an organic light-emitting diode display, or the like. The processor 503 is the control center of the computer device: it connects the various parts of the entire computer through various interfaces and lines, and performs the various functions of the device and processes its data by running or executing the software programs and/or modules stored in the memory 505 and calling the data stored in the memory. The one or more processors 503 shown in fig. 3 are capable of executing and implementing the functions of the acquisition module 100, the determination module 200, the labeling module 300, the response module 101, the training module 102, the request transmission module 1011, the generation module 103, the private key transmission module 104, the acquisition unit 110, the determination unit 120, the extraction unit 130, the convolution unit 131, the deconvolution unit 132, the fusion unit 133, the statistics unit 134 and the category determination module 400 shown in fig. 2.
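Purely as an illustration of how the acquisition, determination and labeling modules listed above could cooperate, the sketch below composes them into one pipeline. The patent does not prescribe these interfaces; the callables passed in stand for the feature-extraction, classification and rendering components described elsewhere and are placeholders.

```python
# Hypothetical composition of the acquisition, determination and labeling
# modules; interfaces are assumptions for the example, not the patent's design.
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class AcquisitionModule:
    """Receives the labeling request and extracts layered features."""
    extract_features: Callable[[Any], Dict[str, Any]]

    def run(self, picture: Any) -> Dict[str, Any]:
        return self.extract_features(picture)


@dataclass
class DeterminationModule:
    """Turns layered features into picture information (semantic, geometric, texture)."""
    classify: Callable[[Dict[str, Any]], Dict[str, Any]]

    def run(self, features: Dict[str, Any]) -> Dict[str, Any]:
        return self.classify(features)


@dataclass
class LabelingModule:
    """Renders the labeled picture from the picture information."""
    render: Callable[[Any, Dict[str, Any]], Any]

    def run(self, picture: Any, info: Dict[str, Any]) -> Any:
        return self.render(picture, info)


def label_picture(picture: Any,
                  acquisition: AcquisitionModule,
                  determination: DeterminationModule,
                  labeling: LabelingModule) -> Any:
    """End-to-end flow: acquire features, determine picture information, label."""
    features = acquisition.run(picture)
    info = determination.run(features)
    return labeling.run(picture, info)
```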
In one embodiment, the server includes one or more processors 503, one or more memories 505 and one or more applications 501, wherein the one or more applications 501 are stored in the memory 505 and configured to be executed by the one or more processors 503, and the one or more applications 501 are configured to perform the picture labeling method based on the block chain described in the above embodiment.
According to the server provided by the embodiment of the invention, the embodiment of the picture labeling method based on the block chain can likewise be realized: features of the picture are extracted at different scales by the deep neural network in the block chain network, and because the sample data volume available for training is large, feature extraction and classification are more accurate and the error rate is lower; the finely divided and coarsely divided features of all scales are fused, so the information obtained for the objects in the picture is more accurate; and training the deep convolutional neural network on the big data of the block chain reduces its error in the calculation process. The steps of the method, the layered features extracted at each scale, the fusion of the deconvolved features with the same-scale convolution features, and the labeling of each object according to the resulting picture information are the same as those described above for the computer-readable storage medium embodiment and are not repeated here.
The server provided by the embodiment of the present invention can implement the embodiment of the picture labeling method based on the block chain; for the specific function implementation, reference is made to the description in the method embodiment, which is not repeated here.
The foregoing is only a partial embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A picture labeling method based on a block chain, characterized by comprising the following steps:
acquiring a labeling request for labeling a picture to be labeled, and extracting layered features of the picture to be labeled through each convolution layer with different scales of a deep convolutional neural network in a block chain network according to the labeling request, wherein the layered features comprise bottom layer geometric features, middle layer texture features and high layer semantic features of the picture to be labeled in the convolution layers;
determining the picture information of the picture to be labeled according to the layered features, wherein the picture information comprises: semantic information, geometric information, texture information;
and labeling the picture to be labeled according to the picture information to obtain a labeled picture.
2. The method for labeling a picture based on a block chain according to claim 1, wherein before the obtaining of the labeling request for labeling the picture to be labeled, the method further comprises:
responding to a training request initiated by a node where a deep convolutional neural network in a block chain network is located, and acquiring a sample picture in the block chain network;
and training the deep convolutional neural network through the sample picture, wherein the sample picture is a labeled picture provided by different sample nodes in the block chain network.
3. The method for labeling a picture based on a block chain according to claim 2, wherein the responding to the training request initiated by the node where the deep convolutional neural network in the block chain network is located further comprises:
and according to a preset training triggering operation, the node where the deep convolutional neural network is located sends the training request.
4. The method for labeling a picture based on a block chain according to claim 1, wherein before the obtaining of the labeling request for labeling the picture to be labeled, the method further comprises:
the method comprises the steps that a block chain network acquires registered account information of a user, and an account key pair of the user logging in the block chain network is generated according to the registered account information, wherein the account key pair comprises a private key and a public key;
and storing the public key in the block chain, and sending the private key to a user.
5. The block chain-based picture labeling method according to claim 4, wherein the obtaining of a labeling request for labeling a picture to be labeled and the extracting of the layered features of the picture to be labeled through each convolutional layer of a deep convolutional neural network in a block chain network at different scales according to the labeling request comprises:
acquiring the information of the user processed by the private key in the annotation request;
verifying the information of the user by using the public key, and determining whether the user is a blockchain network user;
and when the user is a user of the block chain network, extracting the layered features of the picture to be labeled through the convolution layers with different scales of the deep convolutional neural network in the block chain network according to the labeling request.
6. The method according to claim 5, wherein the extracting layered features of the to-be-labeled picture through convolution layers of different scales of a deep convolutional neural network in a block chain network according to the labeling request comprises:
inputting the picture to be labeled into the deep convolutional neural network in the block chain network according to the labeling request, wherein the deep convolutional neural network sequentially executes convolution of each scale on the picture to be labeled from a first scale to obtain convolution characteristics under different scales;
obtaining the convolution characteristic under the last scale, and performing deconvolution on the convolution characteristic to obtain a reduction characteristic at the adjacent scale that is one scale larger;
fusing the reduction features and the convolution features which are positioned in the same scale with the reduction features to obtain fused features which are positioned in the same scale with the reduction features;
counting convolution times and deconvolution times, and when the deconvolution times are smaller than the convolution times, performing deconvolution on the fusion feature and the convolution feature which is located at the same scale as the fusion feature; and when the deconvolution times are equal to the convolution times, determining the fusion features as the hierarchical features of the picture to be labeled.
7. The method according to any one of claims 1 to 4, wherein the labeling the picture to be labeled according to the picture information to obtain a labeled picture comprises:
and determining, according to the picture information of the labeled picture, the object occupying the largest area of the labeled picture, and determining the category of the labeled picture based on that object.
8. A picture labeling device based on a block chain, characterized by comprising:
the acquisition module is used for acquiring a labeling request for labeling a picture to be labeled, and extracting layered features of the picture to be labeled through various convolution layers with different scales of a deep convolutional neural network in a block chain network according to the labeling request, wherein the layered features comprise bottom layer geometric features, middle layer texture features and high layer semantic features of the picture to be labeled in the convolution layers;
a determining module, configured to determine, according to the hierarchical feature, picture information of the picture to be labeled, where the picture information includes: semantic information, geometric information, texture information;
and the labeling module is used for labeling the picture to be labeled according to the picture information to obtain a labeled picture.
9. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the program is executed by a processor, the method for labeling a picture based on a block chain according to any one of claims 1 to 7 is implemented.
10. A server, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the steps of the block chain based picture annotation method according to any one of claims 1 to 7.
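Claims 4 and 5 above describe generating an account key pair at registration, keeping the public key in the block chain, returning the private key to the user, and later verifying the private-key-processed information in a labeling request against the stored public key. A hedged sketch of that flow is given below using the Python cryptography package; the curve, hash and message layout are assumptions for the example and are not specified by the patent.

```python
# Illustrative sketch of the key-pair registration and request verification flow
# of claims 4 and 5; curve, hash and message contents are assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def register_account():
    """Generate the account key pair; the public key would be stored on-chain,
    and the private key would be sent to the user."""
    private_key = ec.generate_private_key(ec.SECP256K1())
    return private_key, private_key.public_key()


def sign_request(private_key: ec.EllipticCurvePrivateKey, request: bytes) -> bytes:
    """User side: process the request information with the private key."""
    return private_key.sign(request, ec.ECDSA(hashes.SHA256()))


def is_blockchain_user(public_key: ec.EllipticCurvePublicKey,
                       request: bytes, signature: bytes) -> bool:
    """Node side: verify the user's information with the stored public key."""
    try:
        public_key.verify(signature, request, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    priv, pub = register_account()
    req = b"label-request:picture-123"
    sig = sign_request(priv, req)
    print(is_blockchain_user(pub, req, sig))          # True
    print(is_blockchain_user(pub, b"tampered", sig))  # False
```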
CN201910969216.1A 2019-10-12 2019-10-12 Picture labeling method and device based on block chain, storage medium and server Pending CN110909195A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910969216.1A CN110909195A (en) 2019-10-12 2019-10-12 Picture labeling method and device based on block chain, storage medium and server
PCT/CN2019/118384 WO2021068349A1 (en) 2019-10-12 2019-11-14 Blockchain-based picture labelling method and apparatus, storage medium and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910969216.1A CN110909195A (en) 2019-10-12 2019-10-12 Picture labeling method and device based on block chain, storage medium and server

Publications (1)

Publication Number Publication Date
CN110909195A (en) 2020-03-24

Family

ID=69815369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910969216.1A Pending CN110909195A (en) 2019-10-12 2019-10-12 Picture labeling method and device based on block chain, storage medium and server

Country Status (2)

Country Link
CN (1) CN110909195A (en)
WO (1) WO2021068349A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111581671A (en) * 2020-05-11 2020-08-25 笵成科技南京有限公司 Digital passport protection method combining deep neural network and block chain
CN112418263A (en) * 2020-10-10 2021-02-26 上海鹰瞳医疗科技有限公司 Medical image focus segmentation and labeling method and system
CN113744848A (en) * 2021-08-02 2021-12-03 中山大学中山眼科中心 Method and system for realizing medical image labeling management

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419199B (en) * 2021-12-20 2023-11-07 北京百度网讯科技有限公司 Picture marking method and device, electronic equipment and storage medium
CN114462020B (en) * 2022-04-11 2022-07-12 广州卓远虚拟现实科技有限公司 Software authorization method and software authorization system based on block chain

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180247399A1 (en) * 2017-02-27 2018-08-30 Aniket Bharat Parikh System, Method and Computer Program Product for Security Analysis of Jewellery Items
CN108898219A (en) * 2018-06-07 2018-11-27 广东工业大学 A kind of neural network training method based on block chain, device and medium
CN109801293A (en) * 2019-01-08 2019-05-24 平安科技(深圳)有限公司 Remote Sensing Image Segmentation, device and storage medium, server
WO2019144353A1 (en) * 2018-01-25 2019-08-01 深圳前海达闼云端智能科技有限公司 Blockchain-based data training method and device, storage medium and blockchain node
CN110188787A (en) * 2019-04-11 2019-08-30 淮阴工学院 It is a kind of mutually to be demonstrate,proved based on block chain and the voucher formula bookkeeping methods of convolutional neural networks

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9767381B2 (en) * 2015-09-22 2017-09-19 Xerox Corporation Similarity-based detection of prominent objects using deep CNN pooling layers as features
CN107292181B (en) * 2017-06-20 2020-05-19 无锡井通网络科技有限公司 Database system based on block chain and using method using system

Also Published As

Publication number Publication date
WO2021068349A1 (en) 2021-04-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200324