CN113032843B - Method and apparatus for obtaining and processing tensor data with digital signature information - Google Patents


Publication number
CN113032843B
Authority
CN
China
Prior art keywords
convolution
digital signature
signature information
data
tensor data
Prior art date
Legal status
Active
Application number
CN202110339163.2A
Other languages
Chinese (zh)
Other versions
CN113032843A (en
Inventor
黄畅
李建军
谭洪贺
钟思志
凌坤
Current Assignee
Beijing Horizon Information Technology Co Ltd
Original Assignee
Beijing Horizon Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Horizon Information Technology Co Ltd filed Critical Beijing Horizon Information Technology Co Ltd
Priority to CN202110339163.2A
Publication of CN113032843A
Application granted
Publication of CN113032843B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/64 Protecting data integrity, e.g. using checksums, certificates or signatures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Security & Cryptography (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computer Hardware Design (AREA)
  • Bioethics (AREA)
  • Image Analysis (AREA)

Abstract

Methods and apparatus for obtaining, eliminating, and verifying tensor data with digital signature information are disclosed. The method for obtaining tensor data with digital signature information may include: determining at least a portion of the input data of a specified layer of the neural network; determining original convolution parameters of the specified layer, which include one or more first convolution kernels, and predetermined parameters related to the digital signature information, which include one or more second convolution kernels and one or more corresponding first offsets; and performing a convolution operation based on at least a portion of the input data, the predetermined parameters, and the original convolution parameters to obtain at least a portion of the tensor data with the digital signature information. Related computer-readable storage media and electronic devices are also disclosed. With the method and apparatus, a watermark can be added while the conventional convolution operation is performed, thereby ensuring the security of the convolutional network's computation process.

Description

Method and apparatus for obtaining and processing tensor data with digital signature information
Technical Field
The present disclosure relates generally to the field of deep learning technology, and in particular to methods and apparatus for obtaining, eliminating, and verifying tensor data with digital signature information.
Background
Deep learning techniques based on neural networks have been widely applied in various fields such as image recognition, video analysis, natural language processing, driving assistance, and the like. For example, a pre-designed neural network model may be compiled into executable code, and the obtained executable code is implemented in a device such as an artificial intelligence chip. The device may then be activated to run the neural network model against the input data to perform predetermined tasks such as image recognition, video analysis, and the like.
Disclosure of Invention
In one aspect, a method of obtaining tensor data with digital signature information is disclosed. The method may include: determining at least a portion of input data for a specified layer of the neural network; determining an original convolution parameter of the designated layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and performing a convolution operation based on at least a portion of the input data, the predetermined parameters, and the original convolution parameters to obtain at least a portion of the tensor data with digital signature information.
In another aspect, a method of eliminating digital signature information in tensor data is disclosed. The method may include: acquiring at least a portion of tensor data with digital signature information output by a designated layer of the neural network; determining an original convolution parameter of the designated layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and extracting data on each channel corresponding to the position of each first convolution kernel in the convolution kernel sequence from at least a portion of the tensor data with digital signature information according to a position parameter indicating the position of each first convolution kernel in the convolution kernel sequence including each first convolution kernel and each second convolution kernel.
In another aspect, a method of verifying digital signature information in tensor data is disclosed. The method may include: acquiring at least a portion of tensor data with digital signature information output by a designated layer of the neural network; determining an original convolution parameter of the designated layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; extracting data on each channel corresponding to the position of each second convolution kernel in the sequence of convolution kernels from at least a portion of the tensor data with digital signature information; and verifying, based on the one or more first offsets and the extracted data, whether at least a portion of the tensor data with digital signature information includes expected digital signature information.
In another aspect, an apparatus for obtaining tensor data with digital signature information is disclosed. The apparatus may include: a first data determination unit configured to determine at least a part of input data of a specified layer of the neural network; a second data determination unit configured to determine an original convolution parameter of the specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and a control unit configured to control performing a convolution operation based on at least a portion of the input data, the predetermined parameter, and the original convolution parameter to obtain at least a portion of the tensor data with digital signature information.
In another aspect, an apparatus for eliminating digital signature information in tensor data is disclosed. The apparatus may include: a data acquisition unit configured to acquire at least a part of tensor data with digital signature information output by a specified layer of the neural network; a second data determination unit configured to determine an original convolution parameter of the specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and a digital signature information elimination unit configured to extract data on each channel corresponding to a position of each first convolution kernel in the convolution kernel sequence including each second convolution kernel and each first convolution kernel from at least a part of the tensor data with digital signature information, based on a position parameter indicating the position of each first convolution kernel in the convolution kernel sequence.
In another aspect, an apparatus for verifying digital signature information in tensor data is disclosed. The apparatus may include: a data acquisition unit configured to acquire at least a part of tensor data with digital signature information output by a specified layer of the neural network; a second data determination unit configured to determine an original convolution parameter of the specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; a digital signature information extraction unit configured to extract data on each channel corresponding to a position of each second convolution kernel in the sequence of convolution kernels from at least a portion of the tensor data with digital signature information; and a digital signature information verification unit configured to verify whether at least a part of the tensor data with digital signature information includes expected digital signature information based on the one or more first offsets and the extracted data.
In another aspect, a computer-readable storage medium storing a computer program for performing any one or more of the methods described above is disclosed.
In another aspect, an electronic device is disclosed. The electronic device includes a processor and a memory for storing instructions executable by the processor. The processor is configured to read the executable instructions from the memory and execute the executable instructions to perform any one or more of the methods described above.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing embodiments thereof in more detail with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 illustrates an example method 100 for obtaining tensor data with digital signature information according to an embodiment of this disclosure.
Fig. 2 illustrates one example of obtaining tensor data with digital signature information by the example method 100.
Fig. 3 illustrates an example method 300 for eliminating digital signature information in tensor data according to an embodiment of this disclosure.
Fig. 4 illustrates one example of eliminating digital signature information in tensor data by the example method 300.
Fig. 5 illustrates one example of eliminating digital signature information in tensor data by the example method 300.
Fig. 6 illustrates one example of eliminating digital signature information in tensor data by the example method 300.
Fig. 7 illustrates an example method 700 for verifying digital signature information in tensor data according to an embodiment of this disclosure.
Fig. 8 illustrates one example of verifying digital signature information in tensor data by the example method 700.
Fig. 9 illustrates one example of verifying digital signature information in tensor data by the example method 700.
Fig. 10 illustrates an example apparatus 1000 for obtaining tensor data with digital signature information according to an embodiment of this disclosure.
Fig. 11 illustrates an example apparatus 1100 for eliminating digital signature information in tensor data according to an embodiment of this disclosure.
Fig. 12 illustrates an example apparatus 1200 for verifying digital signature information in tensor data according to an embodiment of this disclosure.
Fig. 13 illustrates an electronic device 1300 according to an embodiment of the disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
SUMMARY
As described previously, a pre-designed neural network model may be compiled into executable code and the resulting executable code implemented in a device such as an artificial intelligence chip. The device may then be activated to run the neural network model against the input data to perform predetermined tasks such as image recognition, video analysis, and the like. An effective technical means is therefore desirable to ensure that the neural network model being run is a legitimate model supplied by its provider and that the model execution process is secure.
Exemplary method
Fig. 1 illustrates an example method 100 for obtaining tensor data with digital signature information according to an embodiment of this disclosure. The example method 100 may include:
step 110, determining at least a portion of the input data of a designated layer of the neural network;
step 120, determining an original convolution parameter of the designated layer and a predetermined parameter related to the digital signature information, wherein the original convolution parameter comprises one or more first convolution kernels, and the predetermined parameter comprises one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and
step 130, performing a convolution operation based on at least a portion of the input data, the predetermined parameters, and the original convolution parameters to obtain at least a portion of the tensor data with digital signature information.
In this disclosure, a "neural network" may be a convolutional neural network that includes at least one convolutional layer. As is generally understood in the technical fields of deep learning, artificial intelligence, and the like, each convolutional layer performs a convolution operation on its input data with a predetermined stride, using the convolution parameters of that layer. The specified layer of the neural network may be any convolutional layer of the neural network, designated at any suitable stage such as the design stage or the operation stage, and the input data of the specified layer may be any type of tensor data in the neural network that can be processed by a convolution operation. The present disclosure does not limit the location of the specified layer within the neural network.
In this disclosure, "tensor data" has the meaning generally understood in the technical fields of deep learning, artificial intelligence, and the like. For example, tensor data may include at least one channel, where each channel may represent one or more types of information. Tensor data may be an image or image frame viewable by the human eye, such as a color image (e.g., including color channels representing different colors such as red, yellow, and blue) or a grayscale image (e.g., including a grayscale channel representing grayscale information), or any other type of data not viewable by the human eye, such as data including channels representing sharpness, channels representing contours, and so on. The present disclosure does not limit the type and form of the input data of the specified layer.
In this disclosure, the "digital signature information" may be predetermined digital signature information of any type and form. For example, it may include any suitable visible or invisible, encrypted or unencrypted information, such as text, symbols, character strings, images, and the like. For example, the digital signature information may include a visible or invisible watermark.
In the example method 100, the convolution parameters of the specified layer include the original convolution parameters of that layer. For example, the original convolution parameters may be the parameters designed, at the design stage of the neural network, for the convolution operation of the specified layer so that the neural network can perform a predetermined task (e.g., a specific task such as image recognition, or a general task). In addition to the original convolution parameters, in the example method 100 the convolution parameters of the specified layer also include newly introduced predetermined parameters associated with the digital signature information. The predetermined parameters may be independent of the predetermined task itself; they are used to add the digital signature information to the output data obtained from the original convolution parameters and the input data of the specified layer.
A subsequent layer of the specified layer, or a check on the final output of the neural network, can then detect whether the output data of the specified layer carries the preset digital signature information. Based on the digital signature information, it can be judged whether the running neural network model is the original model provided by its provider, or whether the model has been stolen or is being used without authorization. This secures the model execution process and protects the rights and interests of the neural network model provider and the artificial intelligence solution provider.
In addition, since both the original convolution parameters and the predetermined parameters associated with the digital signature information are convolution parameters usable in the convolution operation of the specified layer, step 130 of the example method 100 can add the digital signature information directly while completing, in a conventional manner (e.g., using conventional convolution operation circuitry), the convolution operation associated with the predetermined task of the neural network: the convolution is simply performed on at least a portion of the input data of the specified layer using the "extended" convolution parameters that include both the original convolution parameters and the predetermined parameters. No additional processing or hardware cost is incurred. That is, with the example method 100, digital signature information can be added without increasing cost or reducing execution efficiency.
In one embodiment, the output channel of digital signature information in the tensor data with digital signature information may depend on the position of the one or more second convolution kernels in a sequence of convolution kernels comprising the one or more first convolution kernels and the one or more second convolution kernels, and the digital signature information on the output channel corresponding to any one second convolution kernel may depend on a first offset corresponding to that second convolution kernel.
Thus, digital signature information can be added on a desired channel by adjusting the position of each second convolution kernel within the "extended" convolution parameters, and various desired digital signature information can be added by adjusting the first offset corresponding to each second convolution kernel. Tensor data with digital signature information can therefore be obtained in a simple and flexible manner by the example method 100.
In addition, in this embodiment, each element value in each second convolution kernel may be 0, so that the digital signature information in the tensor data depends only on the respective first offsets in the predetermined parameters related to the digital signature information. This enables the digital signature information to be set in a simple and efficient manner.
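To make the mechanism concrete, the following is a minimal numpy sketch of this embodiment (not from the patent; all names, shapes, and offset values are illustrative). The "extended" parameters concatenate the original first kernels with all-zero second kernels, and a single ordinary convolution pass produces task channels plus constant signature channels equal to the first offsets. A 1x1 (pointwise) convolution keeps the sketch short; the same idea applies to any kernel size.

```python
import numpy as np

def conv2d_1x1(x, kernels, biases):
    """Pointwise (1x1) convolution. x: (C, H, W), kernels: (K, C),
    biases: (K,). Returns (K, H, W)."""
    return np.einsum('kc,chw->khw', kernels, x) + biases[:, None, None]

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4, 4))                  # input data of the specified layer

first_kernels = rng.normal(size=(2, 3))         # original convolution parameters
second_kernels = np.zeros((2, 3))               # all-zero second (signature) kernels
first_offsets = np.array([72.0, 79.0])          # first offsets carrying the signature

# "Extended" convolution parameters: one ordinary convolution pass does both jobs.
kernels = np.concatenate([first_kernels, second_kernels])
biases = np.concatenate([np.zeros(2), first_offsets])

y = conv2d_1x1(x, kernels, biases)

# Channels 0-1 carry task data; channels 2-3 are constant, equal to the offsets.
assert np.allclose(y[2], 72.0) and np.allclose(y[3], 79.0)
```

Because the signature kernels are all zero, the task channels are bit-identical to what the original convolution alone would produce.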
In further embodiments, the digital signature information may depend on each second convolution kernel and each first offset corresponding to each second convolution kernel in a predetermined parameter associated with the digital signature information. For example, in a convolution kernel sequence including the one or more first convolution kernels and the one or more second convolution kernels, one second convolution kernel may be set to be the same as one first convolution kernel, and a portion of predetermined digital signature information may be set according to a difference between a first offset corresponding to the second convolution kernel and a second offset corresponding to the first convolution kernel.
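A small numpy sketch of this further embodiment (illustrative, not from the patent): the second convolution kernel duplicates a first convolution kernel, so a fragment of the signature can be encoded as the difference between the two offsets and recovered as a constant difference between the two output channels.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(3, 4, 4))        # (C, H, W) input data

w = rng.normal(size=(3,))             # shared 1x1 kernel: second kernel == first kernel
second_offset = 2.0                   # offset corresponding to the first (task) kernel
first_offset = 5.0                    # first offset of the duplicated second kernel

ch_first = np.einsum('c,chw->hw', w, x) + second_offset   # task output channel
ch_second = np.einsum('c,chw->hw', w, x) + first_offset   # signature output channel

# The signature fragment is the constant difference between the two channels.
delta = ch_second - ch_first
assert np.allclose(delta, first_offset - second_offset)
```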
Fig. 2 shows an example of obtaining tensor data with digital signature information by the example method 100, where CONV denotes the convolution operation of a specified layer of a neural network. The number of channels of the input data 200 of the specified layer (shown as a data cube in fig. 2) is the same as the number of channels of each first convolution kernel 211, 212, and 213 of the original convolution parameters and of each second convolution kernel 214, 215, 216, 217, and 218 of the predetermined parameters related to the digital signature information (each shown as a data cube in fig. 2). All the first convolution kernels 211, 212, and 213 and the second convolution kernels 214, 215, 216, 217, and 218 have the same width and height.
In step 110 of the example method 100, the input data 200 of the specified layer may be determined by, for example, reading it from memory or receiving it via an interface. Depending on the processing and buffering capacity of the processing hardware used to run the neural network or to perform its convolution operations (e.g., a dedicated convolution operation circuit or a general-purpose central processor), step 110 may acquire only a portion of the input data 200 at a time, the subsequent steps may process the acquired portion, and the example method 100 may then be repeated for the other portions of the input data 200.
Then, in step 120, the "extended" convolution parameters 210 of the specified layer may be read or loaded. In the example of fig. 2, the "extended" convolution parameters 210 may include, in the illustrated top-down order, the first convolution kernels 211, 212, and 213 of the original convolution parameters and the second convolution kernels 214, 215, 216, 217, and 218 of the predetermined parameters related to the digital signature information. Every element value of each of the second convolution kernels 214 to 218 is 0, and the first offsets corresponding to the second convolution kernels 214, 215, 216, 217, and 218 are the ASCII (American Standard Code for Information Interchange) values 72, 79, 66, 79, and 84, respectively. Thus, the ASCII value sequence formed by the first offset (72) corresponding to the second convolution kernel 214, the first offset (79) corresponding to the second convolution kernel 215, the first offset (66) corresponding to the second convolution kernel 216, the first offset (79) corresponding to the second convolution kernel 217, and the first offset (84) corresponding to the second convolution kernel 218 corresponds to the character string "HOBOT", the watermark information to be added.
As shown in fig. 2, after step 130 of the example method 100, the obtained output tensor 220 includes output channels 221, 222, 223, 224, 225, 226, 227, and 228 corresponding to the convolution kernels 211, 212, 213, 214, 215, 216, 217, and 218, respectively, and the digital signature information corresponding to the character string "HOBOT" is contained in the output channels 224, 225, 226, 227, and 228, which correspond to the second convolution kernels 214 to 218. For example, for the second convolution kernel 214, since all of its element values are 0 and its corresponding first offset is the ASCII value 72, performing the convolution operation CONV on the input data 200 with the second convolution kernel 214 yields the integer value 72 for every element in channel 224 of the output data 220.
Thus, by way of the example method 100, in accordance with the arrangement of the convolution kernels in the "extended" convolution parameters 210 in the example of fig. 2, the watermark information "HOBOT" may be carried in the last 5 output channels 224, 225, 226, 227, and 228 of the output data 220, while the first 3 output channels 221, 222, and 223 still carry the data relevant to performing the neural network's tasks.
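The watermark channels in the fig. 2 example can be sketched as follows (a hypothetical numpy illustration; the standard ASCII codes for "HOBOT" are 72, 79, 66, 79, 84). Because the second convolution kernels are all zero, each signature channel is constant and equal to its first offset, so the string can be decoded by reading a single element from each channel:

```python
import numpy as np

watermark = "HOBOT"
first_offsets = np.array([ord(c) for c in watermark], dtype=float)  # 72, 79, 66, 79, 84

# With all-zero second kernels, each signature channel of the output tensor is
# constant and equal to its first offset, regardless of the input data:
H, W = 4, 4
signature_channels = np.broadcast_to(first_offsets[:, None, None], (5, H, W))

# Decode the watermark by reading a single element from each signature channel.
decoded = ''.join(chr(int(ch[0, 0])) for ch in signature_channels)
assert decoded == "HOBOT"
```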
It should be appreciated that the example method 100 of the present disclosure may not be limited to the example of fig. 2. In other embodiments, the second convolution kernel and the first offset value may be arranged in other desired ways.
Fig. 3 illustrates an example method 300 for eliminating digital signature information in tensor data according to an embodiment of this disclosure. The example method 300 may include:
step 310, acquiring at least a portion of the tensor data with digital signature information output by a designated layer of the neural network;
step 320 of determining an original convolution parameter of the specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter comprising one or more first convolution kernels, the predetermined parameter comprising one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and
step 330 of extracting data on each channel corresponding to the position of each first convolution kernel in the convolution kernel sequence from at least a portion of the tensor data with digital signature information according to a position parameter indicating the position of each first convolution kernel in the convolution kernel sequence comprising each first convolution kernel and each second convolution kernel.
For tensor data with digital signature information obtained by the example method 100, the digital signature information in the tensor data may be eliminated by the example method 300.
Fig. 4 illustrates one example of eliminating digital signature information in tensor data by the example method 300, wherein tensor data 400 is tensor data with digital signature information output by a designated layer of a neural network by the example method 100, including channels 401, 402, 403, 404, 405, 406, 407, and 408.
In step 310 of the example method 300, the tensor data 400 may be acquired by, for example, reading it from memory or receiving it via an interface. Depending on the processing and buffering capacity of the processing hardware used to run the neural network or to perform its convolution operations (e.g., a dedicated convolution operation circuit or a general-purpose central processor), step 310 may acquire only a portion of the tensor data 400 at a time, the subsequent steps may process the acquired portion, and the example method 300 may then be repeated for the other portions of the tensor data 400.
Then, in step 320, the "extended" convolution kernel sequence 410 of the specified layer may be read or loaded. In the example of fig. 4, the "extended" convolution kernel sequence 410 may include, in the illustrated top-down order, a first convolution kernel 411 of the original convolution parameters, a second convolution kernel 412 of the predetermined parameters related to the digital signature information, a first convolution kernel 413 of the original convolution parameters, second convolution kernels 414 and 415 of the predetermined parameters, a first convolution kernel 416 of the original convolution parameters, and second convolution kernels 417 and 418 of the predetermined parameters.
Then, as shown in fig. 4, in step 330, a position parameter indicating the position of each of the first convolution kernels 411, 413, and 416 in the "extended" convolution kernel sequence may be obtained. For example, each first convolution kernel may be represented by a binary "1" and each second convolution kernel by a binary "0"; then, from the binary sequence "10100100", the position of each of the first convolution kernels 411, 413, and 416 in the "extended" convolution kernel sequence may be determined. Alternatively, corresponding index values 0 to 7 may be set for the convolution kernels 411 to 418, in which case the position parameter may be the index-value set {0, 2, 5}, likewise indicating the first convolution kernels 411, 413, and 416. A storage address of each convolution kernel may also be used instead of an index value.
The order of the channels of the tensor data is identical to the order of the convolution kernels used to obtain the tensor data. Thus, in step 330, it may further be determined from the position parameter described above that the information carried in channels 401, 403, and 406 of tensor data 400 relates to the neural network's own task, while the information carried in channels 402, 404, 405, 407, and 408 is the digital signature information added by the example method 100. The data on channels 401, 403, and 406 may then be extracted from the tensor data 400 to obtain tensor data 420 without digital signature information, thereby eliminating the digital signature information from the tensor data 400.
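As an illustrative sketch (not the patented implementation), stripping the signature channels by plain channel indexing might look as follows in NumPy, assuming a channels-first 8-channel array standing in for tensor data 400:

```python
import numpy as np

# Hypothetical sketch: removing signature channels by direct indexing.
# tensor_400 stands in for tensor data 400 (8 channels, channels-first);
# channels 0, 2, 5 (i.e., 401, 403, 406) carry the task-related data.
C, H, W = 8, 4, 4
tensor_400 = np.arange(C * H * W, dtype=np.float32).reshape(C, H, W)

task_channels = [0, 2, 5]               # from the position parameter
tensor_420 = tensor_400[task_channels]  # tensor data 420, signature removed
print(tensor_420.shape)  # (3, 4, 4)
```

This direct-indexing form assumes the downstream consumer can address channels arbitrarily; the embodiment below instead realizes the same selection with a convolution so that an existing convolution circuit can do the work.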
In one embodiment, the position parameters may include one or more third convolution kernels corresponding to the one or more first convolution kernels, each third convolution kernel having a unit height and a unit width, and for any third convolution kernel, the third convolution kernel has a unique non-zero channel whose position may depend on the position of the first convolution kernel corresponding to the third convolution kernel in the sequence of convolution kernels. Accordingly, step 330 may include: a convolution operation is performed based on at least a portion of the tensor data with digital signature information and the one or more third convolution kernels.
As shown in fig. 5, one convolution layer may be added after the specified layer that outputs the tensor data 400 with digital signature information, and the position parameters may include three third convolution kernels 510, 520, and 530 corresponding to the first convolution kernels 411, 413, and 416, each of which has the same number of channels as the tensor data 400. Each channel of each of the third convolution kernels 510, 520, and 530 contains only one element, whose value is 0 or 1. The third convolution kernel 510 may correspond to the first convolution kernel 411 in the convolution kernel sequence 410 of fig. 4; thus, among the channels 511 to 518 of the third convolution kernel 510, only channel 511 includes the non-zero element 1, while the elements in the other channels are all 0, the relative position of channel 511 among the channels 511 to 518 being the same as the relative position of the first convolution kernel 411 in the convolution kernel sequence 410. Similarly, the third convolution kernel 520 may correspond to the first convolution kernel 413; thus, among the channels 521 to 528 of the third convolution kernel 520, only channel 523 includes the non-zero element 1, while the elements in the other channels are all 0, the relative position of channel 523 among the channels 521 to 528 being the same as the relative position of the first convolution kernel 413 in the convolution kernel sequence 410. Likewise, the third convolution kernel 530 may correspond to the first convolution kernel 416; thus, among the channels 531 to 538 of the third convolution kernel 530, only channel 536 includes the non-zero element 1, while the elements in the other channels are all 0, the relative position of channel 536 among the channels 531 to 538 being the same as the relative position of the first convolution kernel 416 in the convolution kernel sequence 410.
Then, as shown in fig. 5, a convolution operation CONV' may be performed on the tensor data 400 using the third convolution kernels 510, 520, and 530, thereby obtaining the tensor data 420 without digital signature information.
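Because each third kernel is 1x1 with a single one-hot channel, CONV' reduces to channel selection. A minimal NumPy sketch (illustrative names and shapes, not the patented circuit) can verify this equivalence:

```python
import numpy as np

# Hypothetical sketch of CONV': each "third" kernel is a 1x1 kernel whose
# only non-zero channel selects one task channel of tensor data 400.
C, H, W = 8, 4, 4
rng = np.random.default_rng(0)
tensor_400 = rng.random((C, H, W)).astype(np.float32)

def one_hot_kernel(channel, num_channels=C):
    """1x1 kernel with a single 1 on the given channel, 0 elsewhere."""
    k = np.zeros((num_channels, 1, 1), dtype=np.float32)
    k[channel, 0, 0] = 1.0
    return k

# Third kernels 510, 520, 530 select channels 0, 2, 5 (i.e., 401, 403, 406).
third_kernels = [one_hot_kernel(c) for c in (0, 2, 5)]

# A 1x1 convolution with a one-hot kernel is a weighted sum over channels,
# which here reduces to copying the selected channel.
tensor_420 = np.stack(
    [np.tensordot(k[:, 0, 0], tensor_400, axes=1) for k in third_kernels]
)
assert np.allclose(tensor_420, tensor_400[[0, 2, 5]])
```

The assertion confirms that the one-hot 1x1 convolution produces exactly the task channels, which is why an unmodified convolution engine can eliminate the signature.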
In this embodiment, one convolution layer is added after the specified layer that outputs the tensor data with digital signature information, and a convolution operation is performed at the added convolution layer using the third convolution kernels and the tensor data with digital signature information, thereby eliminating the digital signature information from the tensor data. The digital signature information can thus be eliminated efficiently by an existing convolution operation circuit, without modifying the convolution parameters of the convolution layers of the designed neural network.
In further embodiments, the tensor data with digital signature information may also be provided directly to another convolution layer immediately following the specified layer that generated the tensor data. However, the original convolution parameters of the other convolution layer may need to be modified, because the number of channels of the input tensor data is not consistent with the number of channels of the convolution kernels in the original convolution parameters of the other convolution layer.
For example, for the tensor data 400 in fig. 4, the channels in which the digital signature information is located may be determined to be channels 402, 404, 405, 407, and 408 according to the aforementioned position parameter indicating the position of each of the first convolution kernels 411, 413, and 416 in the convolution kernel sequence 410, or according to a position parameter indicating the position of each of the second convolution kernels 412, 414, 415, 417, and 418 in the convolution kernel sequence 410.
Then, as shown in fig. 6, for the original convolution kernels 600 and 610 of the other convolution layer, the convolution kernel 600 includes channels 601, 602, and 603 and the convolution kernel 610 includes channels 611, 612, and 613 according to the design of the neural network. The number of channels of the tensor data 400 (8 channels) differs from the number of channels of the convolution kernels 600 and 610 (3 channels), and thus the convolution operation CONV of the other convolution layer cannot be performed based on the tensor data 400 and the convolution kernels 600 and 610.
To this end, in this embodiment, as shown in fig. 6, the convolution kernel 600 may be filled with all-zero channels 621 to 625 (i.e., channels in which every element has the value 0), and the convolution kernel 610 may be filled with all-zero channels 631 to 635, such that the positions of the all-zero channels 621 to 625 in the filled convolution kernel 620 correspond, respectively, to the positions in the tensor data 400 of the channels 402, 404, 405, 407, and 408 carrying digital signature information, and the positions of the all-zero channels 631 to 635 in the filled convolution kernel 630 likewise correspond, respectively, to the positions in the tensor data 400 of the channels 402, 404, 405, 407, and 408 carrying digital signature information. The convolution operation CONV of the other convolution layer may then be performed based on the tensor data 400 and the convolution kernels 620 and 630. The tensor data output by the convolution operation CONV does not include digital signature information.
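The zero-filling trick can be checked with a small NumPy sketch (illustrative only; a naive "valid" convolution stands in for the convolution circuit, and all names and shapes are assumed): padding a 3-channel kernel with zero channels at the signature-channel positions yields the same output as convolving the original kernel over the task channels alone.

```python
import numpy as np

# Hypothetical sketch: padding an original 3-channel kernel (like 600) with
# all-zero channels at the signature-channel positions, so that a standard
# convolution over the 8-channel tensor ignores the signature channels.
C_in, H, W = 8, 5, 5
rng = np.random.default_rng(1)
tensor_400 = rng.random((C_in, H, W)).astype(np.float32)

task_idx = [0, 2, 5]  # positions of channels 401, 403, 406
kernel_600 = rng.random((3, 3, 3)).astype(np.float32)  # 3 channels, 3x3

# Kernel 620: insert zero channels so only the task channels contribute.
kernel_620 = np.zeros((C_in, 3, 3), dtype=np.float32)
kernel_620[task_idx] = kernel_600

def conv2d_valid(x, k):
    """Naive single-output-channel 'valid' convolution (cross-correlation)."""
    kh, kw = k.shape[1:]
    out = np.zeros((x.shape[1] - kh + 1, x.shape[2] - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kh, j:j + kw] * k)
    return out

# Convolving the padded kernel over all 8 channels equals convolving the
# original kernel over the 3 task channels only.
out_padded = conv2d_valid(tensor_400, kernel_620)
out_task = conv2d_valid(tensor_400[task_idx], kernel_600)
assert np.allclose(out_padded, out_task, atol=1e-5)
```

The zero channels multiply the signature data by 0 everywhere, which is why the output of CONV carries no trace of the signature.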
This embodiment allows the tensor data with digital signature information output by the specified layer to be supplied directly to another convolution layer immediately following the specified layer that generated the tensor data, and to be used in the convolution operation of the other convolution layer.
Fig. 7 illustrates an example method 700 for verifying digital signature information in tensor data according to an embodiment of this disclosure. The example method 700 may include:
step 710, obtaining at least a portion of tensor data with digital signature information output by a specified layer of the neural network;
step 720, determining an original convolution parameter of the specified layer and a predetermined parameter related to the digital signature information, wherein the original convolution parameter includes one or more first convolution kernels, and the predetermined parameter includes one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels;
step 730, extracting data on each channel corresponding to the position of each second convolution kernel in the sequence of convolution kernels from at least a portion of the tensor data with digital signature information; and
step 740, verifying whether at least a portion of the tensor data with digital signature information includes expected digital signature information based on the one or more first offsets and the extracted data.
For tensor data with digital signature information obtained by the example method 100, the digital signature information in the tensor data may be verified by the example method 700.
Fig. 8 illustrates one example of verifying digital signature information in tensor data by the example method 700, where the tensor data 800 is tensor data with digital signature information output by a specified layer of a neural network via the example method 100, and includes channels 801, 802, 803, 804, 805, 806, 807, and 808.
In step 710 of the example method 700, tensor data 800 may be acquired. For example, the implementation of step 710 may be similar to the implementation of step 310 of the example method 300 described previously.
Then, in step 720, for example, the "extended" convolution kernel sequence 810 of the specified layer may be read or loaded in a manner similar to step 320 of the example method 300. In the example of fig. 8, the "extended" convolution kernel sequence 810 may include, in top-down order as illustrated, a first convolution kernel 811 in the original convolution parameters, a second convolution kernel 812 in the predetermined parameters related to the digital signature information, a first convolution kernel 813 in the original convolution parameters, second convolution kernels 814 and 815 in the predetermined parameters related to the digital signature information, a first convolution kernel 816 in the original convolution parameters, and second convolution kernels 817 and 818 in the predetermined parameters related to the digital signature information.
Then, in step 730, for example, a position parameter indicating the position of each of the second convolution kernels 812, 814, 815, 817, and 818 in the convolution kernel sequence 810 may be obtained. For example, each first convolution kernel may be represented using a binary "1" and each second convolution kernel using a binary "0"; the position of each of the second convolution kernels 812, 814, 815, 817, and 818 in the convolution kernel sequence 810 may then be determined from the binary number sequence "10100100". Alternatively, each first convolution kernel may be represented using a binary "0" and each second convolution kernel using a binary "1", in which case the positions of the second convolution kernels may be determined from the binary number sequence "01011011". In addition, corresponding index values 0 to 7 may be set for the convolution kernels 811 to 818, and the position parameter may then be the set of index values {0,2,5} associated with the first convolution kernels or the set of index values {1,3,4,6,7} associated with the second convolution kernels, either of which indicates the position of each second convolution kernel in the convolution kernel sequence 810. A storage address of each convolution kernel may also be used instead of the index values described above, and so on.
The order of the channels of the tensor data is identical to the order of the convolution kernels used to obtain the tensor data. Thus, in step 730, it may further be determined from the position parameter described above that the information carried in channels 801, 803, and 806 of tensor data 800 relates to the neural network's own task, while the information carried in channels 802, 804, 805, 807, and 808 is the digital signature information added by the example method 100.
For example, as shown in fig. 9, for the tensor data 800, fourth convolution kernels 910 to 950 may be set according to a position parameter (e.g., the aforementioned set of index values {1,3,4,6,7}). Each of the fourth convolution kernels 910 to 950 has a unit height and a unit width, i.e., each channel of each of the fourth convolution kernels 910 to 950 includes only one element, and for each output channel of the tensor data 800 carrying digital signature information, the relative position of the unique non-zero channel of the fourth convolution kernel corresponding to that output channel depends on the relative position of that output channel in the tensor data 800. For example, the fourth convolution kernel 910, corresponding to the output channel 802 of the tensor data 800, includes channels 911 through 918 and has an element value of 1 on channel 912 and 0 on the other channels. Similarly, the fourth convolution kernel 920, corresponding to the output channel 804 of the tensor data 800, includes channels 921 through 928 and has an element value of 1 on channel 924 and 0 on the other channels; the fourth convolution kernel 940, corresponding to the output channel 807 of the tensor data 800, includes channels 941 through 948 and has an element value of 1 on channel 947 and 0 on the other channels; and the fourth convolution kernel 950, corresponding to the output channel 808 of the tensor data 800, includes channels 951 through 958 and has an element value of 1 on channel 958 and 0 on the other channels.
Then, a convolution operation CONV''' may be performed based on the tensor data 800 and the fourth convolution kernels 910 to 950, thereby obtaining tensor data 960. As shown in fig. 9, the obtained tensor data 960 includes only the output channels 802, 804, 805, 807, and 808 of the tensor data 800 that carry the digital signature information.
Information carried in channels 802, 804, 805, 807, and 808 may then be extracted from the tensor data 800. In the example of fig. 8, the value of each element in channel 802 is 110, the value of each element in channel 804 is 117, the value of each element in channel 805 is 102, the value of each element in channel 807 is 117, and the value of each element in channel 808 is 124. Thus, in step 730, one element value may be extracted from each of the channels 802, 804, 805, 807, and 808, e.g., by a pooling operation, to obtain the integer sequence {110,117,102,117,124}.
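As an illustrative NumPy sketch (names, shapes, and layout are assumptions, not the patented implementation), the per-channel extraction via pooling might look as follows, with each signature channel filled with a constant value as in fig. 8:

```python
import numpy as np

# Hypothetical sketch: reading the signature values out of tensor data 800.
# Each signature channel is constant (every element equals its offset value),
# so a global pooling operation over the channel recovers one value.
C, H, W = 8, 4, 4
signature_values = {1: 110, 3: 117, 4: 102, 6: 117, 7: 124}  # channels 802..808

tensor_800 = np.zeros((C, H, W), dtype=np.int32)
for ch, val in signature_values.items():
    tensor_800[ch] = val  # signature channels filled with their constant value

signature_idx = [1, 3, 4, 6, 7]
# Global max pooling per channel; mean pooling would work equally well here
# because each signature channel is constant.
extracted = [int(tensor_800[ch].max()) for ch in signature_idx]
print(extracted)  # [110, 117, 102, 117, 124]
```

A single pooled value per channel is sufficient precisely because the signature channels are designed to be spatially constant.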
Then, in step 740, for example, in the case where each element value in each second convolution kernel is 0, the above integer sequence {110,117,102,117,124} may be compared with the integer sequence formed by the first offsets corresponding to the convolution kernels 812, 814, 815, 817, and 818. If the two are consistent, it may be determined that the tensor data 800 includes the expected digital signature information; otherwise, it may be determined that the tensor data 800 does not include the expected digital signature information.
As described above, the elements in the second convolution kernels used to generate the tensor data 800 may also be non-zero; for example, a second convolution kernel may be set to be the same as a first convolution kernel. In such a case, in step 740, the difference between the first offset corresponding to the second convolution kernel and the second offset corresponding to the first convolution kernel may be used, instead of the first offset corresponding to the second convolution kernel, as the element in the integer sequence compared with the integer sequence {110,117,102,117,124}.
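A minimal Python sketch of the comparison in step 740 (assuming, as in the all-zero-kernel case above, that each signature channel holds exactly its first offset; the offset values and variable names are illustrative):

```python
# Hypothetical sketch of step 740 for the case where every element of each
# second convolution kernel is 0: each signature channel should then hold
# exactly its first offset, so verification reduces to a sequence comparison.
# The offset values below mirror the example of fig. 8.
expected_offsets = [110, 117, 102, 117, 124]  # first offsets, kernels 812..818

def verify_signature(extracted, expected):
    """Return True iff the extracted channel values match the expected offsets."""
    return list(extracted) == list(expected)

print(verify_signature([110, 117, 102, 117, 124], expected_offsets))  # True
print(verify_signature([110, 117, 999, 117, 124], expected_offsets))  # False
```

For the non-zero-kernel variant described above, `expected` would instead hold the offset differences, but the comparison itself is unchanged.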
Exemplary apparatus
Fig. 10 illustrates an example apparatus 1000 for obtaining tensor data with digital signature information according to an embodiment of this disclosure. As shown in fig. 10, the example apparatus 1000 may include a first data determination unit 1010, a second data determination unit 1020, and a control unit 1030.
In one embodiment, the first data determination unit 1010 may be configured to determine at least a portion of input data for a specified layer of the neural network, thereby implementing step 110 of the example method 100. For example, the first data determination unit 1010 may be configured to read at least a portion of input data of a specified layer of the neural network from a memory according to a predetermined instruction, or to receive or acquire at least a portion of input data of a specified layer of the neural network from outside the example apparatus 1000 according to a predetermined instruction.
In one embodiment, the first data determination unit 1010 may include a data access controller of a memory to control data access to the memory storing at least a portion of the input data of the specified layer, and may further include a buffer memory to buffer at least a portion of the acquired or determined input data of the specified layer. In further embodiments, the first data determination unit 1010 may include a data port for receiving data or instructions, and such a data port may be a wired or wireless interface.
In one embodiment, the second data determination unit 1020 may be configured to determine an original convolution parameter of the specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels, thereby implementing step 120 of the example method 100. For example, the second data determination unit 1020 may be configured to read convolution parameters related to a specified layer from a memory according to a predetermined instruction or to receive or retrieve related parameters of the specified layer from outside the example apparatus 1000 according to a predetermined instruction.
Similar to the first data determination unit 1010, in one embodiment, the second data determination unit 1020 may include a data access controller of a memory to control data access to the memory storing the relevant parameters of the specified layer, and may further include a buffer memory to buffer the retrieved or determined relevant parameters of the specified layer. In further embodiments, the second data determination unit 1020 may include a data port for receiving data or instructions, and such a data port may be a wired or wireless interface. In one embodiment, the second data determination unit 1020 may be implemented integrally with the first data determination unit 1010, e.g., the buffer memories in the second data determination unit 1020 and the first data determination unit 1010 may be two different buffers of the same buffer memory.
In one embodiment, the control unit 1030 may be configured to control performing a convolution operation based on at least a portion of the input data, the predetermined parameters, and the original convolution parameters to obtain at least a portion of the tensor data with digital signature information, thereby implementing step 130 of the example method 100. For example, the control unit 1030 may be configured to control, in accordance with a predetermined instruction, the supply of at least a part of the input data of the specified layer determined by the first data determination unit 1010 and the relevant parameters of the specified layer determined by the second data determination unit 1020 to, for example, a convolution operation circuit (e.g., a convolution operation engine or a convolution operation acceleration core) dedicated to performing convolution operations. Such a convolution operation circuit may include a multiply-add unit array, and may be included in the control unit 1030 or be independent of the control unit 1030. In further embodiments, the control unit 1030 may also be a processor, or a portion of a processor, developed based on, for example, a field programmable gate array or the like.
By way of example apparatus 1000, example method 100 may be implemented to obtain tensor data with digital signature information.
Fig. 11 illustrates an example apparatus 1100 for eliminating digital signature information in tensor data according to an embodiment of this disclosure. As shown in fig. 11, the example apparatus 1100 may include a first data determination unit 1110, a second data determination unit 1120, and a digital signature information elimination unit 1130.
In one embodiment, the first data determination unit 1110 may be configured to determine at least a portion of tensor data with digital signature information, thereby implementing step 310 of the example method 300. For example, the first data determining unit 1110 may be configured to read at least a portion of the tensor data with digital signature information output by the specified layer of the neural network from the memory according to a predetermined instruction, or to receive or acquire at least a portion of the tensor data with digital signature information output by the specified layer of the neural network from outside the example apparatus 1100 according to a predetermined instruction.
In one embodiment, the first data determination unit 1110 may include a data access controller of a memory to control data access to the memory storing the tensor data with digital signature information, and may further include a buffer memory to buffer at least a portion of the acquired or determined tensor data with digital signature information. In further embodiments, the first data determination unit 1110 may include a data port for receiving data or instructions, and such a data port may be a wired or wireless interface.
In one embodiment, the second data determination unit 1120 may be configured to determine an original convolution parameter of the specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels, thereby implementing step 320 of the example method 300. For example, the second data determining unit 1120 may be configured to read convolution parameters related to a specified layer from a memory according to a predetermined instruction, or to receive or acquire related parameters of the specified layer from outside the example apparatus 1100 according to a predetermined instruction.
Similar to the first data determination unit 1110, in one embodiment, the second data determination unit 1120 may include a data access controller of a memory to control data access to the memory storing the relevant parameters of the specified layer, and may further include a buffer memory to buffer the retrieved or determined relevant parameters of the specified layer. In further embodiments, the second data determination unit 1120 may include a data port for receiving data or instructions, and such a data port may be a wired or wireless interface. In one embodiment, the second data determination unit 1120 may be implemented integrally with the first data determination unit 1110, e.g., the buffer memories in the second data determination unit 1120 and the first data determination unit 1110 may be two different buffers of the same buffer memory.
In one embodiment, the digital signature information elimination unit 1130 may be configured to extract, from at least a portion of the tensor data with digital signature information, the data on each channel corresponding to the position of each first convolution kernel in the sequence of convolution kernels (which includes each second convolution kernel and each first convolution kernel), according to a position parameter indicating the position of each first convolution kernel in that sequence, thereby implementing step 330 of the example method 300.
In one embodiment, digital signature information cancellation unit 1130 may also be a processor or a portion of such a processor developed based on, for example, a field programmable gate array.
In another embodiment, the position parameters may include one or more third convolution kernels corresponding to the one or more first convolution kernels, each third convolution kernel having a unit height and a unit width, and for any third convolution kernel, the third convolution kernel has a unique non-zero channel whose position may depend on the position of the first convolution kernel corresponding to the third convolution kernel in the sequence of convolution kernels. Accordingly, the digital signature information cancellation unit 1130 may be configured to control, in accordance with a predetermined instruction, the supply of the determined one or more third convolution kernels and at least a portion of the tensor data determined by the first data determination unit 1110 to, for example, a convolution operation circuit (e.g., a convolution operation engine or a convolution operation acceleration kernel) dedicated to performing a convolution operation. For example, such a convolution operation circuit may include a multiply-add unit array, and may be included in the digital signature information elimination unit 1130 or independent of the digital signature information elimination unit 1130.
With the example apparatus 1100, the example method 300 may be implemented to eliminate digital signature information in tensor data.
Fig. 12 illustrates an example apparatus 1200 for verifying digital signature information in tensor data according to an embodiment of this disclosure. As shown in fig. 12, the example apparatus 1200 may include a first data determining unit 1210, a second data determining unit 1220, a digital signature information extracting unit 1230, and a digital signature information verifying unit 1240.
In one embodiment, the first data determination unit 1210 may be configured to determine at least a portion of tensor data with digital signature information, thereby implementing step 710 of the example method 700.
Similar to the first data determination unit 1110 of the example apparatus 1100, the first data determination unit 1210 may be configured to read at least a portion of the tensor data with digital signature information output by a specified layer of the neural network from a memory according to a predetermined instruction, or to receive or acquire at least a portion of such tensor data from outside the example apparatus 1200 according to a predetermined instruction. Also similar to the first data determination unit 1110 of the example apparatus 1100, the first data determination unit 1210 may include a data access controller of a memory to control data access to the memory storing the tensor data with digital signature information, and may further include a buffer memory to buffer at least a portion of the acquired or determined tensor data with digital signature information. In further embodiments, the first data determination unit 1210 may include a data port for receiving data or instructions, and such a data port may be a wired or wireless interface.
In one embodiment, the second data determination unit 1220 may be configured to determine an original convolution parameter of the specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels, thereby implementing step 720 of the example method 700.
Similar to the second data determination unit 1120 of the example apparatus 1100, the second data determination unit 1220 may include a data access controller of a memory to control data access to the memory storing the relevant parameters of the specified layer, and may further include a buffer memory to buffer the obtained or determined relevant parameters of the specified layer. In further embodiments, the second data determination unit 1220 may include a data port for receiving data or instructions, and such a data port may be a wired or wireless interface. In one embodiment, the second data determination unit 1220 may be implemented integrally with the first data determination unit 1210; for example, the buffer memories in the second data determination unit 1220 and the first data determination unit 1210 may be two different buffers of the same buffer memory.
In one embodiment, the digital signature information extraction unit 1230 may be configured to extract data on each channel corresponding to the position of each second convolution kernel in the sequence of convolution kernels from at least a portion of the tensor data with digital signature information, thereby implementing step 730 of the example method 700.
In one embodiment, digital signature information extraction unit 1230 may also be a processor or a portion of such a processor developed based on, for example, a field programmable gate array.
In another embodiment, referring to fig. 9, suitable fourth convolution kernels may be provided, and the data on each channel corresponding to the position of each second convolution kernel in the sequence of convolution kernels may be extracted using convolution operations (possibly together with pooling operations). In this case, the digital signature information extraction unit 1230 may be configured to control, according to a predetermined instruction, the supply of the determined one or more fourth convolution kernels and at least a part of the tensor data determined by the first data determination unit 1210 to, for example, a convolution operation circuit (e.g., a convolution operation engine or a convolution operation acceleration core) dedicated to performing convolution operations. Such a convolution operation circuit may include a multiply-add unit array, and may be included in the digital signature information extraction unit 1230 or be independent of the digital signature information extraction unit 1230. In addition, the digital signature information extraction unit 1230 may further include a circuit for performing a pooling operation.
In one embodiment, the digital signature information verification unit 1240 may be configured to verify, from the one or more first offsets and the extracted data, whether at least a portion of the tensor data with digital signature information includes the expected digital signature information, thereby implementing step 740 of the example method 700. For example, the digital signature information verification unit 1240 may include one or more numerical value or data comparison units. Such one or more comparison units may be formed, for example, by a logic circuit including a plurality of exclusive-OR gate elements. In addition, the digital signature information verification unit 1240 may also be a processor developed based on, for example, a field programmable gate array, or a part of such a processor.
The example method 700 may be implemented by the example apparatus 1200 to verify digital signature information in tensor data.
It should be appreciated that an apparatus according to an embodiment of the present disclosure is not limited to the above examples. The various blocks in the example apparatus shown may be connected or coupled together in any suitable manner, with the arrows between the blocks being used only to indicate the flow of data or signals of interest, and not to indicate that the flow of data or signals between the blocks may only be in the direction of the arrows.
Exemplary electronic device
Fig. 13 illustrates an electronic device 1300 according to an embodiment of the present disclosure. The electronic device 1300 may include one or more processors 1310 and a memory 1320.
The processor 1310 may be a central processing unit (CPU) or another form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 1300 to perform desired functions.
The memory 1320 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, a random access memory (RAM) and/or a cache memory (cache). The non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and may be executed by the processor 1310 to implement the example method 200 described above and/or other desired functions.
In one example, the electronic device 1300 may also include an input device 1330 and an output device 1340, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown). The input device 1330 may include, for example, a keyboard, a mouse, and the like. The output device 1340 may include, for example, a display, speakers, a printer, and a communication network and the remote output devices connected thereto. In addition, the electronic device 1300 may also include any other suitable components or modules.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the example methods described in the "example methods" section above.
Program code for performing operations of embodiments of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform steps in the respective example methods described in the "example methods" section above in this specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not intended to be limiting, and these advantages, benefits, effects, etc. are not to be considered as essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, and systems referred to in the present application are only illustrative examples and are not intended to require or imply that connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," and "having" are open-ended, mean "including but not limited to," and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatuses, devices, and methods of the present application, components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A method of obtaining tensor data with digital signature information, comprising:
determining at least a portion of input data for a specified layer of the neural network;
determining an original convolution parameter of the designated layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and
a convolution operation is performed based on at least a portion of the input data, the predetermined parameter, and the original convolution parameter to obtain at least a portion of the tensor data with digital signature information.
2. The method of claim 1, wherein the output channels of digital signature information in the tensor data with digital signature information depend on the position of the one or more second convolution kernels in a sequence of convolution kernels comprising the one or more first convolution kernels and the one or more second convolution kernels, and the digital signature information on the output channel corresponding to any one second convolution kernel depends on a first offset corresponding to that second convolution kernel.
3. A method of eliminating digital signature information in tensor data, comprising:
acquiring at least a portion of tensor data with digital signature information output by a designated layer of the neural network;
determining an original convolution parameter of the designated layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and
data on each channel corresponding to the position of each first convolution kernel in the convolution kernel sequence is extracted from at least a portion of the tensor data with digital signature information according to a position parameter indicating the position of each first convolution kernel in the convolution kernel sequence including each first convolution kernel and each second convolution kernel.
4. The method of claim 3, wherein,
the position parameters include one or more third convolution kernels corresponding to the one or more first convolution kernels, each third convolution kernel having a unit height and a unit width; for any third convolution kernel, that third convolution kernel has a unique non-zero channel, and the position of the unique non-zero channel depends on the position, in the sequence of convolution kernels, of the first convolution kernel corresponding to that third convolution kernel, and
extracting the data on each channel corresponding to the position of each first convolution kernel in the sequence of convolution kernels comprises: performing a convolution operation based on at least a portion of the tensor data with digital signature information and the one or more third convolution kernels.
5. A method of verifying digital signature information in tensor data, comprising:
acquiring at least a portion of tensor data with digital signature information output by a designated layer of the neural network;
determining an original convolution parameter of the designated layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels;
extracting data on each channel corresponding to the position of each second convolution kernel in the sequence of convolution kernels from at least a portion of the tensor data with digital signature information; and
based on the one or more first offsets and the extracted data, it is verified whether at least a portion of the tensor data with digital signature information includes expected digital signature information.
6. An apparatus for obtaining tensor data with digital signature information, comprising:
a first data determination unit configured to determine at least a part of input data of a specified layer of the neural network;
a second data determination unit configured to determine an original convolution parameter of the specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and
a control unit configured to control performing a convolution operation based on at least a portion of the input data, the predetermined parameter, and the original convolution parameter to obtain at least a portion of the tensor data with digital signature information.
7. An apparatus for removing digital signature information from tensor data, comprising:
a first data determination unit configured to determine at least a part of tensor data with digital signature information;
a second data determination unit configured to determine an original convolution parameter of a specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels; and
a digital signature information elimination unit configured to extract, according to a position parameter indicating the position of each first convolution kernel in a convolution kernel sequence including each first convolution kernel and each second convolution kernel, the data on each channel corresponding to the position of each first convolution kernel in the convolution kernel sequence from at least a portion of the tensor data with digital signature information.
8. An apparatus for verifying digital signature information in tensor data, comprising:
a first data determination unit configured to determine at least a part of tensor data with digital signature information;
a second data determination unit configured to determine an original convolution parameter of a specified layer and a predetermined parameter related to the digital signature information, the original convolution parameter including one or more first convolution kernels, the predetermined parameter including one or more second convolution kernels and one or more first offsets corresponding to the one or more second convolution kernels;
a digital signature information extraction unit configured to extract data on each channel corresponding to a position of each second convolution kernel in the sequence of convolution kernels from at least a portion of the tensor data with digital signature information; and
a digital signature information verification unit configured to verify, based on the one or more first offsets and the extracted data, whether at least a portion of the tensor data with digital signature information includes expected digital signature information.
9. A computer readable storage medium storing a computer program for executing the method according to any one of claims 1 to 5.
10. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement performing the method according to any one of claims 1 to 5.
CN202110339163.2A 2021-03-30 2021-03-30 Method and apparatus for obtaining and processing tensor data with digital signature information Active CN113032843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110339163.2A CN113032843B (en) 2021-03-30 2021-03-30 Method and apparatus for obtaining and processing tensor data with digital signature information


Publications (2)

Publication Number Publication Date
CN113032843A CN113032843A (en) 2021-06-25
CN113032843B true CN113032843B (en) 2023-09-15

Family

ID=76452935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110339163.2A Active CN113032843B (en) 2021-03-30 2021-03-30 Method and apparatus for obtaining and processing tensor data with digital signature information

Country Status (1)

Country Link
CN (1) CN113032843B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023065A (en) * 2016-05-13 2016-10-12 中国矿业大学 Tensor hyperspectral image spectrum-space dimensionality reduction method based on deep convolutional neural network
CN108226889A (en) * 2018-01-19 2018-06-29 中国人民解放军陆军装甲兵学院 A kind of sorter model training method of radar target recognition
CN110188865A (en) * 2019-05-21 2019-08-30 深圳市商汤科技有限公司 Information processing method and device, electronic equipment and storage medium
CN110399972A (en) * 2019-07-22 2019-11-01 上海商汤智能科技有限公司 Data processing method, device and electronic equipment
CN112116083A (en) * 2019-06-20 2020-12-22 地平线(上海)人工智能技术有限公司 Neural network accelerator and detection method and device thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10963787B2 (en) * 2018-05-31 2021-03-30 Neuralmagic Inc. Systems and methods for generation of sparse code for convolutional neural networks




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant