WO2022228105A1 - Image data processing method and apparatus, storage medium, and electronic device - Google Patents


Info

Publication number
WO2022228105A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
data
data set
weight
weight data
Prior art date
Application number
PCT/CN2022/086217
Other languages
English (en)
French (fr)
Inventor
艾通
李峰
李昊沅
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP22794576.3A (published as EP4296891A4)
Priority to JP2023524148A (published as JP2023547831A)
Publication of WO2022228105A1
Priority to US17/991,416 (published as US20230083565A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/063 Physical realisation of neural networks using electronic means
    • G06N 3/08 Learning methods
    • G06T 1/60 Memory management
    • G06T 9/00 Image coding

Definitions

  • the present application relates to the field of computers, and in particular, to a method and apparatus for processing image data, a storage medium, and an electronic device.
  • In the traditional computing mode, each output is generally handled by one thread group, and SIMD (Single Instruction, Multiple Data) is used to process image data, for example when performing convolution operations with SIMD.
  • Cout represents the number of output channels of the convolution kernel, Cin represents the number of input channels of the convolution kernel, kernel_h represents the height of the convolution kernel, and kernel_w represents the width of the convolution kernel.
  • In existing technical solutions, data is usually arranged only along the [N, C, H, W] dimensions. For the convolution operation, given the characteristics of computer memory layout, if the convolution kernel is small and the input spatial size is large, padding operations must be performed on the input in order to preserve the completeness of the information. Moreover, fetching data across channels causes cache misses and additional data-copy overhead, which seriously degrades the computing performance of the device and further reduces the efficiency of processing image data.
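The cross-channel access cost described above can be illustrated with a small NumPy sketch (illustrative only; the shapes and variable names are not from the patent): in an [N, C, H, W] layout, the values of different channels at the same pixel are H*W elements apart, so gathering a pixel across channels strides through memory in large jumps.

```python
import numpy as np

# In an [N, C, H, W] layout, adjacent channels at a fixed pixel are
# H * W elements apart in the flat memory buffer.
N, C, H, W = 1, 16, 32, 32
x = np.arange(N * C * H * W, dtype=np.float32).reshape(N, C, H, W)

flat = x.reshape(-1)
# Element (n=0, c, h=0, w=0) lives at flat index c * H * W:
assert flat[0] == x[0, 0, 0, 0]
assert flat[H * W] == x[0, 1, 0, 0]

# The byte stride between adjacent channels at a fixed pixel:
channel_stride_bytes = x.strides[1]
print(channel_stride_bytes)  # H * W * 4 bytes for float32
```

Each per-pixel gather across C channels therefore touches C widely separated cache lines, which is the source of the cache misses the patent is addressing.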
  • Embodiments of the present application provide an image data processing method and apparatus, a storage medium, and an electronic device, so as to at least solve the technical problem of low efficiency in processing image data in the related art.
  • A method for processing image data, performed by an electronic device, includes: acquiring a first image data set to be processed, wherein the image data in the first image data set are arranged according to a first data format; interleaving and rearranging the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set are arranged according to a second data format, the interleaving-and-rearranging manner matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format; and performing a convolution operation on the second image data set and a pre-acquired second weight data set to obtain a target output result.
  • An apparatus for processing image data includes: an acquisition module configured to acquire a first image data set to be processed, wherein the image data in the first image data set are arranged according to a first data format; a processing module configured to interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set are arranged according to a second data format, the interleaving-and-rearranging manner matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format; and an execution module configured to perform a convolution operation on the second image data set and a pre-acquired second weight data set to obtain a target output result.
  • A computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to execute the above image data processing method when run.
  • An electronic device includes a memory and a processor; the memory stores a computer program, and the processor is configured to execute the above-mentioned image data processing method through the computer program.
  • a computer program product including computer instructions, which, when the computer instructions are read and executed by a processor of a computer device, cause the computer device to execute the above-mentioned image data processing method.
  • The input image data, output image data, and computation weights in the computation process are correspondingly rearranged. Compared with the traditional computing mode, this reduces the additional data-copy overhead and the probability of cache misses, thereby achieving the technical effect of optimizing the computing performance of the device and improving the efficiency of processing image data, and thus solving the technical problem of the relatively low efficiency of processing image data in the related art.
  • FIG. 1 is a schematic diagram of an application environment of an image data processing method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for processing image data according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of a method for processing image data according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of an apparatus for processing image data according to an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • CNN: Convolutional Neural Network;
  • SIMD: Single Instruction, Multiple Data;
  • GPU: graphics processing unit;
  • Metal Buffer: Metal's representation of conventional memory;
  • Metal Texture: Metal's representation of texture memory;
  • Cin: the number of input channels of the convolution kernel;
  • kernel_h: the height of the convolution kernel;
  • kernel_w: the width of the convolution kernel.
  • a method for processing image data is provided.
  • the above-mentioned image data processing method can be applied to the hardware environment formed by the server 101 and the user terminal 103 as shown in FIG. 1 .
  • the server 101 is connected to the terminal 103 through the network, and can be used to provide services for the user terminal or the client installed on the user terminal.
  • The client can be a video client, an instant messaging client, a browser client, an education client, a game client, etc.
  • the database 105 may be provided on the server or independent of the server for providing the server 101 with data storage services, eg, image data storage services.
  • the above-mentioned networks may include, but are not limited to, wired networks and wireless networks, wherein the wired networks include local area networks, metropolitan area networks, and wide area networks, and the wireless networks include Bluetooth, WIFI, and other networks that implement wireless communication.
  • the user terminal 103 may be a terminal configured with an application program 107, and may include, but is not limited to, at least one of the following: a mobile phone (such as an Android mobile phone, an iOS mobile phone, etc.), a notebook computer, a tablet computer, a handheld computer, MID (Mobile Internet Devices, mobile Internet equipment), PAD, desktop computer, smart TV and other computer equipment.
  • the above-mentioned server may be a single server, a server cluster composed of multiple servers, or a cloud server, and the application 107 using the above-mentioned image data processing method is displayed through the user terminal 103.
  • the above-mentioned image data processing method can be implemented in the user terminal 103 through the following steps:
  • S1: The first image data set to be processed is acquired in the application program 107 of the user terminal 103; S2: the data in the first image data set is interleaved and rearranged to obtain a second image data set, wherein the image data in the second image data set are arranged according to the second data format, the interleaving-and-rearranging manner matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format;
  • S3: a convolution operation is performed on the second image data set and the pre-acquired second weight data set to obtain a target output result.
  • the above-mentioned image data processing method may also include, but is not limited to, being used by a client configured in the server.
  • the above-mentioned image data processing method may include, but is not limited to, asynchronous use by the user terminal 103 and a client set on the server 101.
  • For example, the application program 107 of the user terminal 103 executes the above steps S1 and S2, while the above step S3 is executed by the client set in the server 101. The above is only an example, and this embodiment does not impose a specific limitation.
  • the above-mentioned processing method of image data includes:
  • S204: Interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set are arranged according to the second data format, the interleaving-and-rearranging manner matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format;
  • The image data in the first data format may include, but are not limited to, data arranged in the format N1 × C1 × H1 × W1, where N1 represents the number of image data subsets included in the first image data set, C1 represents the number of channels in each image data subset, H1 represents the data height in each image data subset of the first image data set, and W1 represents the data width in each image data subset of the first image data set.
  • FIG. 3 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
  • The image data in the above-mentioned first image data set are arranged in the data format N1 × C1 × H1 × W1, which may include, but is not limited to, the example shown in FIG. 3, where N1 represents the number of image data subsets included in the first image data set.
  • The application scenarios of the above-mentioned image data processing method may include, but are not limited to, any scenario that requires image data processing, such as medical treatment, finance, credit reporting, banking, games, energy, education, construction, transportation, the Internet of Things, industry, and artificial intelligence. The above application scenarios may also include, but are not limited to, application in a neural network forward computing library. Since a neural network forward computing library provides the computing power for all neural network algorithms, the application scenarios of this application can cover all scenarios that use such a library, including but not limited to AI algorithm applications related to cloud technology, such as virtual backgrounds.
  • Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and network in a wide area network or a local area network to realize the calculation, storage, processing and sharing of data.
  • A cloud conference is an efficient, convenient, and low-cost conference form based on cloud computing technology. Users only need to perform simple, easy-to-use operations through an Internet interface to quickly and efficiently share voice, data files, and video with teams and customers around the world, while the cloud conference service provider handles complex technologies such as data transmission and processing in the conference on the user's behalf.
  • cloud conferences mainly focus on the service content of SaaS (Software as a Service) mode, including telephone, network, video and other service forms.
  • Video conferences based on cloud computing are called cloud conferences.
  • the cloud conference system supports multi-server dynamic cluster deployment and provides multiple high-performance servers, which greatly improves the stability, security and availability of conferences.
  • Video conferencing has been welcomed by many users because it can greatly improve communication efficiency, continuously reduce communication costs, and upgrade internal management levels, and it has been widely used in various fields such as transportation, finance, operators, education, and enterprises.
  • With the application of cloud computing to video conferencing, it will become more attractive in terms of convenience, speed, and ease of use, which will surely stimulate a new upsurge in video conferencing applications.
  • FIG. 4 is a schematic diagram of an image data processing method according to an embodiment of the present application. As shown in FIG. 4, the method specifically includes, but is not limited to, the following steps:
  • the user terminal 402 obtains the first image data set to be processed
  • the processor 404 located inside the user terminal 402 or connected to the user terminal 402 interleaves and rearranges the data in the first image data set to obtain a second image data set;
  • The above-mentioned first image data set may include, but is not limited to, the first image data set to be processed stored in the database shown in FIG. 4, the virtual background displayed in the virtual background display area 408 of the application 406, or other image data obtained after processing with the above-mentioned image data processing method.
  • Performing the above-mentioned convolution operation on the second image data set and the pre-acquired second weight data set to obtain the target output result may include, but is not limited to, performing a convolution operation on the second image data set and the second weight data set to obtain a third image data set, wherein the target output result includes but is not limited to the third image data set. The second image data set is the image data set obtained by interleaving and rearranging the image data of the M1 channels in each group of the S1 groups of image data, and the S1 groups of image data are obtained by dividing the image data of every M1 channels in the first image data set into one group, where M1 ≤ C1.
  • a first image data set to be processed is acquired, wherein the image data in the first image data set are arranged in a data format of N 1 ⁇ C 1 ⁇ H 1 ⁇ W 1 , and the first image data set is arranged in a data format of N 1 ⁇ C 1 ⁇ H 1 ⁇ W 1.
  • the data is interleaved and rearranged to obtain a second image data set, wherein the image data in the second image data set is arranged according to the data format of N 1 ⁇ H 2 ⁇ W 2 , and the way of interleaving and rearrangement matches the convolution operation.
  • the input image data, output image data and calculation weights in the calculation process are correspondingly rearranged, so as to rearrange higher-dimensional data into lower-dimensional data.
  • The data of multiple channels can be grouped so that data from different channels are extracted within a group, which can effectively reduce the number of cross-channel data extractions.
  • Compared with the traditional computing mode, the technical solution described in this application reduces the extra data-copy overhead and the probability of cache misses, thereby achieving the technical effect of optimizing the computing performance of the device and improving the efficiency of processing image data, and thus solving the technical problem of the low efficiency of processing image data in the related art.
  • The image data in the first image data set are arranged according to the first data format, i.e., in the data format N1 × C1 × H1 × W1, where N1 represents the number of image data subsets included in the first image data set, C1 represents the number of channels in each image data subset, H1 represents the data height in each image data subset of the first image data set, and W1 represents the data width in each image data subset of the first image data set. The image data in the second image data set are arranged according to the second data format: the data in the first image data set are interleaved and rearranged to obtain the second image data set, wherein the image data in the second image data set are arranged according to the data format N1 × H2 × W2, H2 represents the data height in each image data subset of the second image data set, and W2 represents the data width in each image data subset of the second image data set.
  • Interleaving and rearranging the data in the first image data set to obtain the second image data set may include, but is not limited to, interleaving and rearranging the image data in the first image data set so as to reduce the dimension of the image data, which is convenient for subsequent convolution operations.
  • FIG. 5 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
  • The image data are divided into C1/4 groups, four channels per group, and then the image data of the 4 channel dimensions in each group are rearranged in interleaved (mixed) form. As shown in Figure 5, the second image data set has the data structure [N, H, C/4, W, C4], where "A, B, C" represent image data within different channels.
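The rearrangement just described can be sketched with NumPy (a minimal illustration assuming the channel count is already a multiple of 4; the function name nchw_to_nhc4w4 is hypothetical, not from the patent):

```python
import numpy as np

def nchw_to_nhc4w4(x: np.ndarray) -> np.ndarray:
    """Rearrange [N, C, H, W] into the interleaved [N, H, C/4, W, 4] layout."""
    n, c, h, w = x.shape
    assert c % 4 == 0, "pad the channel count to a multiple of 4 first"
    # Split channels into groups of 4, then move the group-of-4 axis innermost.
    x = x.reshape(n, c // 4, 4, h, w)   # [N, C/4, 4, H, W]
    return x.transpose(0, 3, 1, 4, 2)   # [N, H, C/4, W, 4]

x = np.random.rand(1, 8, 5, 6).astype(np.float32)
y = nchw_to_nhc4w4(x)
assert y.shape == (1, 5, 2, 6, 4)
# The 4 channels of one group now sit next to each other for each pixel:
assert np.array_equal(y[0, 2, 1, 3], x[0, 4:8, 2, 3])
```

Because the group-of-4 axis is innermost, the four channel values of one group are contiguous in memory, which is what makes a 4-wide SIMD load of one pixel's group possible.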
  • the data in the first image data set is interleaved and rearranged to obtain a second image data set, including:
  • The image data of the M1 channels in each group of the S1 groups of image data are interleaved and rearranged to obtain a second image data set.
  • Dividing the image data of every M1 channels in the first image data set into one group to obtain S1 groups of image data may include, but is not limited to: in the case that C1 is an integer multiple of M1, dividing the image data of every M1 channels in the first image data set into one group to obtain S1 groups of image data; in the case that C1 is not an integer multiple of M1, increasing the number of channels in the first image data set from C1 to C2 to obtain a third image data set, where C2 is an integer multiple of M1 and the image data on the channels added in the third image data set are 0 (that is, the number of channels is padded to an integer multiple of M1), and then dividing the image data of every M1 channels in the third image data set into one group to obtain S1 groups of image data.
  • FIG. 6 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
  • The above-mentioned second image data set may include, but is not limited to, the example shown in FIG. 6, where "A, B, C" represent image data in different channels. A(1,1), B(1,1), C(1,1), and D(1,1) are interleaved and rearranged into image data of the same height, and the data of different channels are ordered consecutively, which improves the locality of data access and can greatly reduce the probability of cache misses.
  • Since the data to be processed are re-interleaved and rearranged in the embodiment of the present application, the number of edge-padding operations can be reduced when the convolution kernel extracts boundary data, and data in different channels can be placed in the same dimension, thereby saving the extra overhead of data copying.
  • The image data of every M1 channels in the first image data set are divided into one group to obtain the S1 groups of image data, where M1 ≤ C1, and the image data of the M1 channels in each group of the S1 groups are interleaved and rearranged to obtain the second image data set.
  • the second image data set is obtained by interleaving and rearranging the data in the first image data set.
  • Dividing the image data of every M1 channels in the first image data set into one group to obtain S1 groups of image data includes:
  • In the case that C1 is not an integer multiple of M1, the number of channels in the first image data set is increased from C1 to C2 to obtain a third image data set, where C2 is an integer multiple of M1 and the image data on the channels added in the third image data set are 0; the image data of every M1 channels in the third image data set are divided into one group to obtain S1 groups of image data.
  • Increasing the number of channels in the first image data set from C1 to C2 may include, but is not limited to, C2 = ceil(C1 / M1) × M1, i.e., rounding up and multiplying by M1; of course, it can also be rounded down or rounded in other ways.
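The zero-padding step can be sketched as follows (a minimal illustration assuming M1 = 4; the helper name pad_channels is hypothetical):

```python
import numpy as np

def pad_channels(x: np.ndarray, m1: int = 4) -> np.ndarray:
    """Round the channel count up to a multiple of m1, filling new channels with 0."""
    n, c1, h, w = x.shape
    c2 = -(-c1 // m1) * m1              # ceil(C1 / M1) * M1
    if c2 == c1:
        return x
    padded = np.zeros((n, c2, h, w), dtype=x.dtype)
    padded[:, :c1] = x                  # original channels unchanged
    return padded

x = np.ones((1, 6, 3, 3), dtype=np.float32)
y = pad_channels(x)
assert y.shape == (1, 8, 3, 3)
assert np.all(y[:, 6:] == 0)            # the added channels are zero
```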
  • Interleaving and rearranging the image data of the M1 channels in each group of the S1 groups of image data to obtain a second image data set includes:
  • SIMD can be used to speed up the convolution operation without performing edge padding on the data, so as to avoid the additional data-copy overhead caused by edge padding during the convolution operation.
  • the method further includes:
  • The data in the first weight data set are interleaved and rearranged to obtain a second weight data set, wherein the weight data in the second weight data set are arranged according to the data format H4 × W4, H4 represents the data height of the weight data in the second weight data set, and W4 represents the data width of the weight data in the second weight data set.
  • the above-mentioned first weight data set may include, but is not limited to, weight data used when using convolution kernels to process image data during convolution calculation.
  • Taking the application of the above-mentioned image data processing method to a cloud conference scene as an example, FIG. 7 is a schematic diagram of another image data processing method according to an embodiment of the present application. As shown in FIG. 7, the method specifically includes, but is not limited to, the following steps:
  • the processor 704 located inside the user terminal 702 or connected to the user terminal 702 obtains a preset first weight data set
  • the processor 704 located inside the user terminal 702 or connected to the user terminal 702 interleaves and rearranges the data in the first weight data set to obtain a second weight data set.
  • the above-mentioned first weight data set may include but is not limited to being stored in the database as shown in FIG. 7
  • The above-mentioned second weight data set may include, but is not limited to, being used in association with the second image data set to be processed to generate a virtual background in the virtual background display area 708 of the cloud conference application 706 shown in FIG. 7.
  • FIG. 8 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
  • In FIG. 8, H3 corresponds to kernel_h, and W3 represents the width of each weight data subset.
  • The second weight data set is obtained by interleaving and rearranging the data in the first weight data set.
  • This embodiment reduces the probability of cache misses, thereby achieving the technical effect of optimizing the computing performance of the device and improving the efficiency of processing image data, and thus solving the technical problem of the low efficiency of processing image data in the related art.
  • the data in the first weight data set is interleaved and rearranged to obtain a second weight data set, including:
  • FIG. 9 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
  • Interleaving and rearranging the data in the first weight data set to obtain the second weight data set may include, but is not limited to, the example shown in FIG. 9: the weight data of every M2 weight data subsets in the first weight data set are divided into one group to obtain S2 groups of weight data, and the M2 weight data subsets in each group of the S2 groups are interleaved and rearranged to obtain the second weight data set.
  • The weight data are divided into Cout/4 groups, four output channels per group. If the number of output channels is not divisible by 4, the number of channels is padded to an integer multiple of 4, and all the added values are filled with 0.
  • The weight data of the 4 output-channel dimensions in each group are rearranged in interleaved (mixed) form, and the input-channel dimension is ordered in the subsequent dimensions, yielding the data structure [Cout/4, kernel_h, kernel_w, Cin, Cout4], where OC4 denotes Cout4 and IC denotes Cin.
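The weight rearrangement can be sketched in the same way (a minimal illustration assuming Cout is already a multiple of 4 and that the source weights are laid out as [Cout, Cin, kernel_h, kernel_w]; the function name rearrange_weights is hypothetical):

```python
import numpy as np

def rearrange_weights(w: np.ndarray) -> np.ndarray:
    """Rearrange [Cout, Cin, kh, kw] into [Cout/4, kh, kw, Cin, Cout4]."""
    cout, cin, kh, kw = w.shape
    assert cout % 4 == 0, "pad the output channels to a multiple of 4 first"
    w = w.reshape(cout // 4, 4, cin, kh, kw)   # [Cout/4, 4, Cin, kh, kw]
    return w.transpose(0, 3, 4, 2, 1)          # [Cout/4, kh, kw, Cin, Cout4]

w = np.random.rand(8, 3, 3, 3).astype(np.float32)
v = rearrange_weights(w)
assert v.shape == (2, 3, 3, 3, 4)
# One group holds 4 consecutive output channels for each (kh, kw, cin):
assert np.array_equal(v[1, 0, 0, 2], w[4:8, 2, 0, 0])
```

As with the image data, the innermost Cout4 axis keeps the four output channels of a group contiguous, so one SIMD multiply-accumulate can update four outputs at once.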
  • Dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain S2 groups of weight data includes, but is not limited to: in the case that N2 is an integer multiple of M2, dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain S2 groups of weight data; in the case that N2 is not an integer multiple of M2, increasing the number of weight data subsets in the first weight data set from N2 to N3 to obtain a third weight data set, where N3 is an integer multiple of M2 and the weight data in the weight data subsets added in the third weight data set are 0, and then dividing the weight data of every M2 weight data subsets in the third weight data set into one group to obtain S2 groups of weight data.
  • Dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain S2 groups of weight data includes: in the case that N2 is an integer multiple of M2, dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain S2 groups of weight data; in the case that N2 is not an integer multiple of M2, increasing the number of weight data subsets in the first weight data set from N2 to N3, from which the third weight data set can be obtained.
  • Obtaining the third weight data set may include, but is not limited to, N3 = ceil(N2 / M2) × M2, i.e., rounding up and multiplying by M2; of course, it can also be rounded in other ways.
  • The added values (corresponding to the aforementioned image data) are all filled with 0; as shown in Figure 9, the fourth column, the eighth column, the twelfth column, and so on are all "0.0f".
  • The weight data of every M2 weight data subsets in the first weight data set are divided into one group to obtain S2 groups of weight data. In the case that N2 is not an integer multiple of M2, the number of weight data subsets in the first weight data set is increased from N2 to N3 to obtain a third weight data set, where N3 is an integer multiple of M2; the weight data of every M2 weight data subsets in the third weight data set are then divided into one group to obtain S2 groups of weight data. By interleaving and rearranging the data in the first weight data set to obtain the second weight data set, the cache misses and additional data-copy overhead easily caused in the traditional computing mode are reduced, thereby achieving the technical effect of optimizing the computing performance of the device and improving the efficiency of processing image data.
  • M 2 pieces of weight data in each group of weight data in the S 2 groups of weight data are interleaved and rearranged to obtain a second weight data set, including:
  • N 2 is an integer multiple of M 2
  • N 2 is not an integer multiple of M 2 .
  • H 4 = H 3 × W 3;
  • H 3 × W 3 = kernel_h*kernel_w;
  • W 4 = IC*OC 4;
  • the height of each group of weight data is kernel_h*kernel_w;
  • the width is OC 4.
  • the value of N 2 is the number of output channels of the convolution kernel
  • the value of C 2 is the number of input channels of the convolution kernel
  • the convolution operation is the convolution operation performed using the convolution kernel
  • Each subset of weight data includes weight data on C 2 input channels
  • each output channel includes C 2 input channels.
  • each of the above-mentioned weight data subsets includes weight data on C 2 input channels, and a convolution operation is performed on C 2 second image data sets to be processed based on the second weight data set using a convolution kernel , to get the target output result.
  • a convolution operation is performed on the second image data set and the pre-acquired second weight data set to obtain a target output result, including:
  • the target output result includes a fourth image data set
  • the second image data set is an image data set obtained by interleaving and rearranging the image data of M 1 channels in each group of the S 1 groups of image data;
  • the S 1 groups of image data are obtained by dividing the image data of every M 1 channels of the first image data set into one group, M 1 ≤ C 1.
  • performing the above-mentioned convolution operation on the second image data set and the second weight data set includes, but is not limited to, acquiring C 2 groups of image data in the second image data set, wherein each group of image data includes a plurality of image data located in the same channel of the first image data set, and each group of image data is obtained by offsetting the storage address of the previous group of image data in the C 2 groups of image data by 1 address.
  • a convolution operation is performed on the C 2 groups of image data and the N 2 × C 2 groups of weight data in the second weight data set to obtain N 2 groups of image data in the fourth image data set, wherein each group of weight data has the same data structure as each group of image data.
  • a convolution operation is performed on the second image dataset and the second weight dataset to obtain a third image dataset, including:
  • each group of image data includes a plurality of image data located in the same channel of the first image data set, and each group of image data is obtained by offsetting the storage address of the previous group of image data in the C 2 groups of image data by 1 address;
  • each group of weight data has the same data structure as each group of image data;
  • the second weight data set is obtained by interleaving and rearranging the data in the first weight data set;
  • the weight data in the first weight data set are arranged according to the data format of N 2 × C 2 × H 3 × W 3;
  • N 2 represents the number of weight data subsets included in the first weight data set
  • C 2 represents the number of channels in each weight data subset
  • H 3 represents the data height in each weight data subset
  • W 3 represents the data width in each weight data subset.
  • the manner in which each group of image data is obtained by offsetting the storage address of the previous group of image data in the C 2 groups by 1 address may include, but is not limited to, offsetting according to a predetermined step size.
  • each group of weight data having the same data structure as each group of image data may include, but is not limited to, the above M 1 and M 2 being the same.
  • each group of image data is obtained by offsetting the storage address of the previous group of image data in the C 2 groups of image data by 1 address.
  • the frequency of acquiring data across channels during the convolution calculation process is reduced.
  • the probability of cache misses is reduced, thereby realizing the technical effect of optimizing the computing performance of the device and improving the processing efficiency of image data, and solving the technical problem in the related art that the efficiency of processing image data is relatively low.
  • a convolution operation is performed on the C 2 groups of image data and the N 2 ⁇ C 2 groups of weight data in the second weight data set to obtain N 2 groups of image data in the third image data set, including:
  • a weighted sum operation is performed on each C 2 group of weight data in the N 2 ⁇ C 2 groups of weight data and the C 2 groups of image data, respectively, to obtain N 2 groups of image data.
  • the method may include, but is not limited to, using the convolution kernel one window at a time according to a predetermined sliding step size to perform a weighted sum operation on each C 2 groups of weight data in the N 2 × C 2 groups of weight data with the C 2 groups of image data respectively, so as to obtain N 2 groups of image data.
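As a rough illustration of the per-group weighted sum described above (hypothetical Python, with scalars standing in for whole groups of data): each consecutive chunk of C 2 weights is combined with the same C 2 image groups to produce one of the N 2 output groups.

```python
# Sketch of the per-output-channel weighted sum: each of the N2 chunks of
# C2 weight groups is combined with the same C2 image groups to produce one
# output group. Scalars stand in for whole data groups here.
def weighted_sums(weight_groups, image_groups):
    """weight_groups: flat list of N2*C2 weights; image_groups: C2 values."""
    c2 = len(image_groups)
    assert len(weight_groups) % c2 == 0
    outputs = []
    for i in range(0, len(weight_groups), c2):   # one pass per output group
        chunk = weight_groups[i:i + c2]
        outputs.append(sum(w * x for w, x in zip(chunk, image_groups)))
    return outputs

# N2 = 2, C2 = 3: two output groups from six weights and three image groups
out = weighted_sums([1, 0, 2, 0, 1, 1], [10, 20, 30])
# out == [1*10 + 0*20 + 2*30, 0*10 + 1*20 + 1*30] == [70, 50]
```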
  • FIG. 10 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
  • Taking a convolution kernel of size 3x3 as an example, the convolution kernel performs a weighted sum, based on the corresponding weight parameters recorded in the second weight data set, on the data at the same position in the second image data set, so as to obtain one group of image data among the N 2 groups of image data; processing then continues with a sliding window of step size 1 to obtain the above N 2 groups of image data.
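A minimal sketch of the 3x3 sliding-window computation that Figure 10 walks through, assuming stride 1 and "valid" window positions only (illustrative Python with toy values; not the patent's GPU implementation):

```python
# Minimal 3x3 single-channel convolution with a stride-1 sliding window.
def conv3x3(image, kernel):
    h, w = len(image), len(image[0])
    out = []
    for r in range(h - 2):                 # slide the window with step size 1
        row = []
        for c in range(w - 2):
            acc = 0.0
            for kr in range(3):
                for kc in range(3):
                    acc += image[r + kr][c + kc] * kernel[kr][kc]
            row.append(acc)
        out.append(row)
    return out

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # picks each window's center
img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
result = conv3x3(img, identity)
# result == [[6.0, 7.0], [10.0, 11.0]] - the interior of the input
```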
  • the method further includes: storing the first image data set and the second image data set in the first memory space;
  • the second weight data set is stored in the second memory space, wherein the first memory space and the second memory space are mutually independent memory spaces.
  • the above-mentioned first memory space may include, but is not limited to, a storage space for storing image data, such as a Texture resource;
  • the above-mentioned second memory space may include, but is not limited to, a storage space for storing weight data, such as a Buffer resource.
  • FIG. 11 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
  • the existing technical solution generally only uses one type of memory (Buffer/Texture) as the data loading/storing space when using Metal for GPU computing.
  • the current model design is getting lighter and lighter.
  • the access limit of the memory bandwidth often becomes the bottleneck of the final performance.
  • the Data Buffer resource and the Texture resource in Metal are independent memory spaces. Therefore, compared with the traditional approach of expressing data with only one memory structure (Buffer/Texture), storing input/output data in Texture and storing weight/bias parameters in Buffer can obtain higher memory bandwidth by distinguishing between Texture and Buffer, which reduces the probability of cache misses and improves memory access performance.
  • an image data processing apparatus for implementing the above image data processing method.
  • the device includes:
  • an acquisition module 1202 configured to acquire a first image data set to be processed, wherein the image data in the first image data set is arranged according to a first data format;
  • the processing module 1204 is configured to interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set are arranged according to the second data format, the interleaving and rearranging manner matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format;
  • the execution module 1206 is configured to perform a convolution operation on the second image data set and the pre-acquired second weight data set to obtain a target output result.
  • the obtaining module includes: an obtaining unit, configured to obtain a first image data set to be processed, wherein the image data in the first image data set is according to N 1 ⁇ C 1 ⁇ H 1 ⁇ W 1 Arranged in the data format, N1 represents the number of image data subsets included in the first image data set, C1 represents the number of channels in each image data subset, H1 represents each image data subset in the first image data set The data height, W 1 represents the data width in each image data subset in the first image data set; the processing module includes: a processing unit for interleaving and rearranging the data in the first image data set to obtain a second An image data set, wherein the image data in the second image data set are arranged according to the data format of N 1 ⁇ H 2 ⁇ W 2 , H 2 represents the data height in each image data subset in the second image data set, and W 2 represents The width of the data in each subset of image data in the second image dataset.
  • the processing module includes: a grouping unit, configured to divide the image data of every M 1 channel in the first image data set into one group to obtain S 1 groups of image data, where M 1 ⁇ C 1 ; an arrangement unit, configured to interleave and rearrange the image data of M 1 channels in each group of image data in the S 1 groups of image data to obtain a second image data set.
  • the grouping unit is configured to divide the image data of every M 1 channels in the first image data set into one group in the following manner to obtain the S 1 groups of image data: in the case that C 1 is not an integer multiple of M 1, the number of channels in the first image data set is increased from C 1 to C 2 to obtain a third image data set, where C 2 is an integer multiple of M 1 and the image data on the added channels in the third image data set are 0; the image data of every M 1 channels in the third image data set are divided into one group to obtain the S 1 groups of image data, where S 1 = C 2/M 1.
  • the arranging unit is configured to interleave and rearrange the image data of M 1 channels in each group of image data in the S 1 groups of image data in the following manner to obtain a second image data set:
  • in the case that C 1 is an integer multiple of M 1, the image data of M 1 channels in each group of the S 1 groups of image data are interleaved and rearranged to obtain a second image data set;
  • in the case that C 1 is not an integer multiple of M 1, the image data of M 1 channels in each group of the S 1 groups of image data are interleaved and rearranged to obtain a second image data set.
  • the apparatus is further configured to: acquire a preset first weight data set, wherein the weight data in the first weight data set is arranged in a data format of N 2 ⁇ C 2 ⁇ H 3 ⁇ W 3 , N 2 represents the number of weight data subsets included in the first weight data set, C 2 represents the number of channels in each weight data subset, H 3 represents the data height in each weight data subset, W 3 represents each weight The data width in the data subset; the data in the first weight data set is interleaved and rearranged to obtain a second weight data set, wherein the weight data in the second weight data set are arranged according to the data format of H 4 ⁇ W 4 , H4 represents the data height of the weight data in the second weight data set, and W4 represents the data width of the weight data in the second weight data set.
  • the apparatus is further configured to interleave and rearrange the data in the first weight data set in the following manner to obtain the second weight data set: dividing the weight data of every M 2 weight data subsets in the first weight data set into one group to obtain S 2 groups of weight data, where M 2 ≤ N 2; and interleaving and rearranging the M 2 pieces of weight data in each group of the S 2 groups of weight data to obtain the second weight data set.
  • the device is further configured to divide the weight data of every M 2 weight data subsets in the first weight data set into one group in the following manner to obtain S 2 groups of weight data, including:
  • in the case that N 2 is an integer multiple of M 2, the weight data of every M 2 weight data subsets in the first weight data set are divided into one group to obtain the S 2 groups of weight data, where S 2 = N 2/M 2;
  • N 2 is not an integer multiple of M 2
  • the apparatus is further configured to interleave and rearrange M 2 weight data in each group of weight data in the S 2 groups of weight data in the following manner to obtain a second weight data set, including:
  • N 2 is an integer multiple of M 2
  • N 2 is not an integer multiple of M 2 .
  • H 4 = H 3 × W 3.
  • the value of N 2 is the number of output channels of the convolution kernel
  • the value of C 2 is the number of input channels of the convolution kernel
  • the convolution operation is the convolution operation performed using the convolution kernel
  • Each subset of weight data includes weight data on C 2 input channels
  • each output channel includes C 2 input channels.
  • the apparatus is further configured to perform the convolution operation on the second image data set and the pre-acquired second weight data set in the following manner to obtain the target output result, including:
  • the second image data set is an image data set obtained by interleaving and rearranging the image data of M 1 channels in each group of image data; the S 1 groups of image data are obtained by dividing the image data of every M 1 channels in the first image data set into one group, M 1 ≤ C 1.
  • the device is further configured to perform a convolution operation on the second image dataset and the second weight dataset to obtain a fourth image dataset, including:
  • each group of image data includes a plurality of image data located in the same channel of the first image data set, and each group of image data is obtained by offsetting the storage address of the previous group of image data in the C 2 groups of image data by 1 address;
  • each group of weight data has the same data structure as each group of image data;
  • the second weight data set is obtained by interleaving and rearranging the data in the first weight data set;
  • the weight data in the first weight data set are arranged according to the data format of N 2 × C 2 × H 3 × W 3;
  • N 2 represents the number of weight data subsets included in the first weight data set
  • C 2 represents the number of channels in each weight data subset
  • H 3 represents the data height in each weight data subset
  • W 3 represents the data width in each weight data subset.
  • the apparatus is further configured to perform a convolution operation on the C 2 sets of image data and the N 2 ⁇ C 2 sets of weight data in the second weight data set in the following manner to obtain N in the third image data set 2 sets of image data, including:
  • a weighted sum operation is performed on each C 2 group of weight data in the N 2 ⁇ C 2 groups of weight data and the C 2 groups of image data, respectively, to obtain N 2 groups of image data.
  • the apparatus is further configured to: store the first image data set and the second image data set in the first memory space; store the second weight data set in the second memory space, wherein the first memory The space and the second memory space are independent memory spaces.
  • an electronic device for implementing the above image data processing method is also provided, where the electronic device may be the terminal device or the server shown in FIG. 1 .
  • This embodiment is described by taking the electronic device as a server as an example.
  • the electronic device includes a memory 1302 and a processor 1304, where a computer program is stored in the memory 1302, and the processor 1304 is configured to execute the steps in any of the above method embodiments by running the computer program.
  • the above electronic device may be located in at least one network device among multiple network devices of a computer network.
  • the above-mentioned processor may be configured to execute the following steps through a computer program:
  • S1: Acquire a first image data set to be processed, wherein the image data in the first image data set are arranged according to the data format of N 1 × C 1 × H 1 × W 1, N 1 represents the number of image data subsets included in the first image data set, C 1 represents the number of channels in each image data subset, H 1 represents the data height in each image data subset, and W 1 represents the data width in each image data subset;
  • S2: Interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set are arranged according to the data format of N 1 × H 2 × W 2, and the interleaving and rearranging manner matches the convolution operation;
  • FIG. 13 is for illustration only; the electronic device can also be a smartphone (such as an Android phone, an iOS phone, etc.), a tablet computer, a handheld computer, a Mobile Internet Device (MID), a PAD, or other terminal equipment.
  • FIG. 13 does not limit the structure of the above-mentioned electronic device.
  • the electronic device may also include more or fewer components than those shown in FIG. 13 (eg, network interfaces, etc.), or have a different configuration than that shown in FIG. 13 .
  • the memory 1302 may be used to store software programs and modules, such as program instructions/modules corresponding to the image data processing method and apparatus in the embodiments of the present application; the processor 1304 runs the software programs and modules stored in the memory 1302, thereby executing various functional applications and data processing, that is, implementing the above-described image data processing method.
  • Memory 1302 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, memory 1302 may further include memory located remotely from processor 1304, and these remote memories may be connected to the terminal through a network.
  • the memory 1302 can be specifically used for, but not limited to, storing information such as image data to be processed.
  • the above-mentioned memory 1302 may include, but is not limited to, the acquisition module 1202 , the processing module 1204 , and the execution module 1206 in the above-mentioned image data processing apparatus.
  • it may also include but not limited to other modules or units in the above-mentioned image data processing apparatus, which will not be repeated in this example.
  • the above-mentioned transmission means 1306 is used to receive or transmit data via a network.
  • Specific examples of the above-mentioned networks may include wired networks and wireless networks.
  • the transmission device 1306 includes a network adapter (Network Interface Controller, NIC), which can be connected with other network devices and routers through a network cable, so as to communicate with the Internet or a local area network.
  • the transmission device 1306 is a radio frequency (RF) module, which is used for wirelessly communicating with the Internet.
  • the above electronic device further includes: a display 1308 for displaying image data; and a connection bus 1310 for connecting various module components in the above electronic device.
  • the above-mentioned terminal device or server may be a node in a distributed system, wherein the distributed system may be a blockchain system, and the blockchain system may be communicated by the multiple nodes through a network A distributed system formed by connection in the form of.
  • a peer-to-peer (P2P, Peer To Peer) network can be formed between nodes, and any form of computing equipment, such as servers, terminals and other electronic devices can become a node in the blockchain system by joining the peer-to-peer network.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods provided in the various implementations of the processing aspects of the image data described above.
  • the computer program is configured to execute the steps in any one of the above method embodiments when running.
  • the above-mentioned computer-readable storage medium may be configured to store a computer program for performing the following steps:
  • S1: Acquire a first image data set to be processed, wherein the image data in the first image data set are arranged according to the data format of N 1 × C 1 × H 1 × W 1, N 1 represents the number of image data subsets included in the first image data set, C 1 represents the number of channels in each image data subset, H 1 represents the data height in each image data subset, and W 1 represents the data width in each image data subset;
  • S2: Interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set are arranged according to the data format of N 1 × H 2 × W 2, and the interleaving and rearranging manner matches the convolution operation;
  • the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
  • the integrated units in the above-mentioned embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above-mentioned computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause one or more computer devices (which may be personal computers, servers, or network devices, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the disclosed clients may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to implement the technical solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.


Abstract

This application discloses an image data processing method and apparatus, a storage medium, and an electronic device. The method includes: acquiring a first image data set to be processed, where the image data in the first image data set are arranged according to a first data format; interleaving and rearranging the data in the first image data set to obtain a second image data set, where the image data in the second image data set are arranged according to a second data format, the interleaving and rearranging manner matches a convolution operation, and the dimension of the second data format is smaller than that of the first data format; and performing the convolution operation on the second image data set and a pre-acquired second weight data set to obtain a target output result.

Description

Image data processing method and apparatus, storage medium, and electronic device
This application claims priority to Chinese Patent Application No. 202110451609.0, entitled "Image data processing method and apparatus, storage medium, and electronic device" and filed with the China National Intellectual Property Administration on April 26, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computers, and in particular to an image data processing method and apparatus, a storage medium, and an electronic device.
Background
In current related art, the traditional computing mode generally treats each output as one thread group and processes image data using SIMD (Single Instruction Multiple Data). For example, in a SIMD convolution, the input dimension is [N,C,H,W]=[1,1,5,5] and the kernel dimension is [Cout,Cin,kernel_h,kernel_w]=[1,1,3,3], and the convolution finally produces an output of dimension [N,C,H,W]=[1,1,5,5], where N, C, H, and W denote the batch, number of channels, height, and width respectively; Cout is the number of output channels of the convolution kernel, Cin is the number of input channels, kernel_h is the kernel height, and kernel_w is the kernel width.
The data arrangement of existing technical solutions usually targets only the [N,C,H,W] dimensions. Compared with the traditional data representation, for a convolution operation, given how computer memory is laid out, if the kernel is small while the input spatial size is large, operations such as border padding are needed on the input to keep the information complete. Moreover, fetching data across channels causes cache misses (Cache Miss) and extra data-copy overhead, which severely degrades device performance during computation and thus lowers the efficiency of processing image data.
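The shape arithmetic behind the background example (a 3x3 kernel over a 5x5 input producing a 5x5 output only when the input is border-padded) can be sketched as follows; the helper name is hypothetical:

```python
# Output-size arithmetic for a convolution: a 3x3 kernel over a 5x5 input
# yields a 5x5 output only if the input is padded ("same" convolution),
# which is exactly the border-filling overhead the text describes.
def conv_out_size(in_size, kernel, stride=1, pad=0):
    return (in_size + 2 * pad - kernel) // stride + 1

# Without padding the output shrinks; padding by 1 restores the 5x5 size.
assert conv_out_size(5, 3, pad=0) == 3
assert conv_out_size(5, 3, pad=1) == 5
```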
No effective solution to the above problems has been proposed so far.
Summary
Embodiments of this application provide an image data processing method and apparatus, a storage medium, and an electronic device, so as to at least solve the technical problem in the related art that the efficiency of processing image data is relatively low.
According to one aspect of the embodiments of this application, an image data processing method performed by an electronic device is provided, including: acquiring a first image data set to be processed, where the image data in the first image data set are arranged according to a first data format; interleaving and rearranging the data in the first image data set to obtain a second image data set, where the image data in the second image data set are arranged according to a second data format, the interleaving and rearranging manner matches a convolution operation, and the dimension of the second data format is smaller than that of the first data format; and performing the convolution operation on the second image data set and a pre-acquired second weight data set to obtain a target output result.
According to another aspect of the embodiments of this application, an image data processing apparatus is further provided, including: an acquisition module configured to acquire a first image data set to be processed, where the image data in the first image data set are arranged according to a first data format; a processing module configured to interleave and rearrange the data in the first image data set to obtain a second image data set, where the image data in the second image data set are arranged according to a second data format, the interleaving and rearranging manner matches a convolution operation, and the dimension of the second data format is smaller than that of the first data format; and an execution module configured to perform the convolution operation on the second image data set and a pre-acquired second weight data set to obtain a target output result.
According to yet another aspect of the embodiments of this application, a computer-readable storage medium is further provided, in which a computer program is stored, where the computer program is configured to execute the above image data processing method when run.
According to yet another aspect of the embodiments of this application, an electronic device is further provided, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to execute the above image data processing method through the computer program.
According to yet another aspect of the embodiments of this application, a computer program product is further provided, including computer instructions which, when read and executed by a processor of a computer device, cause the computer device to execute the above image data processing method.
In the embodiments of this application, the computation in image data processing is deeply optimized: the input image data, output image data, and computation weights are correspondingly rearranged. Compared with the traditional computing mode, this reduces extra data-copy overhead and lowers the probability of cache misses, thereby achieving the technical effects of optimizing device computing performance and improving the efficiency of processing image data, and thus solving the technical problem in the related art that the efficiency of processing image data is relatively low.
Brief Description of the Drawings
The drawings described here are used to provide further understanding of this application and constitute a part of this application; the exemplary embodiments of this application and their description are used to explain this application and do not constitute an improper limitation of this application. In the drawings:
FIG. 1 is a schematic diagram of an application environment of an image data processing method according to an embodiment of this application;
FIG. 2 is a schematic flowchart of an image data processing method according to an embodiment of this application;
FIG. 3 is a schematic diagram of an image data processing method according to an embodiment of this application;
FIG. 4 is a schematic diagram of still another image data processing method according to an embodiment of this application;
FIG. 5 is a schematic diagram of still another image data processing method according to an embodiment of this application;
FIG. 6 is a schematic diagram of still another image data processing method according to an embodiment of this application;
FIG. 7 is a schematic diagram of still another image data processing method according to an embodiment of this application;
FIG. 8 is a schematic diagram of still another image data processing method according to an embodiment of this application;
FIG. 9 is a schematic diagram of still another image data processing method according to an embodiment of this application;
FIG. 10 is a schematic diagram of still another image data processing method according to an embodiment of this application;
FIG. 11 is a schematic diagram of still another image data processing method according to an embodiment of this application;
FIG. 12 is a schematic structural diagram of an image data processing apparatus according to an embodiment of this application;
FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed Description
To help those skilled in the art better understand the technical solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of this application rather than all of them. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application described here can be implemented in an order other than those illustrated or described here. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
First, some of the nouns or terms appearing in the description of the embodiments of this application apply to the following explanations:
CNN: Convolutional Neural Network;
SIMD: Single Instruction Multiple Data;
Metal: an abstraction framework that provides access to the hardware graphics processing unit (GPU);
Buffer: Metal Buffer (Metal's representation of regular memory);
Texture: Metal Texture (Metal's representation of texture memory);
[N,C,H,W]: a way of expressing data dimensions, [batch, channel, height, width], respectively denoting [batch, number of channels, height, width];
Cout: the number of output channels of the convolution kernel;
Cin: the number of input channels of the convolution kernel;
kernel_h: the height of the convolution kernel;
kernel_w: the width of the convolution kernel.
The technical solutions of this application are described below with reference to the embodiments:
According to one aspect of the embodiments of this application, an image data processing method is provided. In this embodiment, the above image data processing method may be applied to the hardware environment composed of the server 101 and the user terminal 103 shown in FIG. 1. As shown in FIG. 1, the server 101 is connected to the terminal 103 through a network and may be used to provide services for the user terminal or for a client installed on the user terminal; the client may be a video client, an instant messaging client, a browser client, an education client, a game client, and so on. A database 105 may be set up on the server or independently of the server to provide data storage services for the server 101, for example, an image data storage service. The network may include, but is not limited to, wired and wireless networks, where the wired network includes local area networks, metropolitan area networks, and wide area networks, and the wireless network includes Bluetooth, WIFI, and other networks implementing wireless communication. The user terminal 103 may be a terminal configured with an application 107 and may include, but is not limited to, at least one of the following computer devices: a mobile phone (such as an Android phone, an iOS phone, etc.), a notebook computer, a tablet computer, a handheld computer, an MID (Mobile Internet Device), a PAD, a desktop computer, or a smart TV. The server may be a single server, a server cluster composed of multiple servers, or a cloud server; the application 107 using the above image data processing method is displayed through the user terminal 103.
With reference to FIG. 1, the above image data processing method may be implemented on the user terminal 103 through the following steps:
S1: Acquire, in the application 107 of the user terminal 103, a first image data set to be processed, where the image data in the first image data set are arranged according to a first data format;
S2: Interleave and rearrange, in the application 107 of the user terminal 103, the data in the first image data set to obtain a second image data set, where the image data in the second image data set are arranged according to a second data format, the interleaving and rearranging manner matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format;
S3: Perform, in the application 107 of the user terminal 103, the convolution operation on the second image data set and the pre-acquired second weight data set to obtain a target output result.
In this embodiment, the above image data processing method may also include, but is not limited to, being used by a client configured on the server.
In this embodiment, the above image data processing method may include, but is not limited to, asynchronous use by the user terminal 103 and a client deployed on the server 101; for example, steps S1 and S2 are executed by the application 107 of the user terminal 103, and step S3 is executed by the client deployed on the server 101. The above is merely an example, and this embodiment imposes no specific limitation.
As an implementation, as shown in FIG. 2, the above image data processing method includes:
S202: Acquire a first image data set to be processed, where the image data in the first image data set are arranged according to a first data format;
S204: Interleave and rearrange the data in the first image data set to obtain a second image data set, where the image data in the second image data set are arranged according to a second data format, the interleaving and rearranging manner matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format;
S206: Perform the convolution operation on the second image data set and the pre-acquired second weight data set to obtain a target output result.
In this embodiment, the image data of the first data format may include, but are not limited to, being arranged according to the data format N 1 × C 1 × H 1 × W 1, where N 1 represents the number of image data subsets included in the first image data set, C 1 represents the number of channels in each image data subset, H 1 represents the data height in each image data subset of the first image data set, and W 1 represents the data width in each image data subset of the first image data set.
For example, FIG. 3 is a schematic diagram of still another image data processing method according to an embodiment of this application. Arranging the image data in the first image data set according to the data format N 1 × C 1 × H 1 × W 1 may include, but is not limited to, the example shown in FIG. 3, where N 1 represents the number of image data subsets included in the first image data set. FIG. 3 shows N 1 = 2; in other words, the image data processing shown in FIG. 3 involves two batches of data to be processed, where one batch equals one image data subset. C 1 represents the number of channels in each image data subset; FIG. 3 shows C 1 = 5, meaning that each batch of data to be processed includes data on 5 channels. H 1 represents the data height in each image data subset, shown in FIG. 3 as H 1 = h; W 1 represents the data width in each image data subset, shown in FIG. 3 as W 1 = w. "A, B, C" denote image data in different channels.
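The flat-memory addressing implied by the [N,C,H,W] layout above can be sketched as follows (illustrative Python; `nchw_offset` is a hypothetical helper). It shows why moving to the same pixel on the next channel jumps a whole H×W plane in memory, the access pattern that causes cross-channel cache misses.

```python
# Flat NCHW addressing: consecutive elements of the same row are adjacent,
# but the same pixel on the next channel lies a whole H*W plane away.
def nchw_offset(n, c, h, w, C, H, W):
    return ((n * C + c) * H + h) * W + w

C, H, W = 5, 4, 4
a = nchw_offset(0, 0, 1, 1, C, H, W)   # pixel (1,1) on channel 0
b = nchw_offset(0, 1, 1, 1, C, H, W)   # same pixel on channel 1
# b - a == H * W == 16: a channel hop skips a full plane in memory
```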
In this embodiment, the application scenarios of the above image data processing method may include, but are not limited to, medical care, finance, credit reporting, banking, games, energy, education, buildings, transportation, the Internet of Things, industry, artificial intelligence, and many other scenarios that require image data processing. These scenarios may include, but are not limited to, application in a neural network forward computing library. Since a neural network forward computing library provides the computing capability for all neural network algorithms, the application scenarios of this application can cover all scenarios that use a neural network forward library, for example, including but not limited to AI-algorithm applications associated with cloud technology, such as virtual backgrounds.
Cloud technology refers to a hosting technology that unifies series of resources such as hardware, software, and networks within a wide area network or a local area network to realize the computation, storage, processing, and sharing of data.
Among these, a cloud conference is an efficient, convenient, and low-cost conference form based on cloud computing technology. Users only need to perform simple and easy operations through an Internet interface to quickly and efficiently share voice, data files, and video with teams and customers around the world, while complex technologies such as the transmission and processing of data in the conference are handled for the user by the cloud conference service provider.
At present, cloud conferencing mainly focuses on service content centered on the SaaS (Software as a Service) model, including telephone, network, video, and other service forms; a video conference based on cloud computing is called a cloud conference.
In the cloud conference era, data transmission, processing, and storage are all handled by the computer resources of the video conference vendor. Users no longer need to purchase expensive hardware or install cumbersome software; they only need to open a browser and log in to the corresponding interface to hold an efficient remote conference.
The cloud conference system supports dynamic multi-server cluster deployment and provides multiple high-performance servers, which greatly improves conference stability, security, and availability. In recent years, video conferencing has been welcomed by many users because it can greatly improve communication efficiency, continuously reduce communication costs, and upgrade internal management, and it has been widely used in transportation, finance, operators, education, enterprises, and many other fields. Without doubt, after video conferencing adopts cloud computing, it is more attractive in terms of convenience, speed, and ease of use, which will surely stimulate a new climax of video conference applications.
For example, taking the application of the above image data processing method in a cloud conference scenario as an example, FIG. 4 is a schematic diagram of an image data processing method according to an embodiment of this application. As shown in FIG. 4, the method specifically includes, but is not limited to, the following steps:
S1: The user terminal 402 acquires a first image data set to be processed;
S2: A processor 404 located inside the user terminal 402 or connected to the user terminal 402 interleaves and rearranges the data in the first image data set to obtain a second image data set;
S3: A convolution operation is performed on the second image data set and a pre-acquired second weight data set to obtain a target output result.
The above first image data set may include, but is not limited to, the first image data set to be processed stored in the database shown in FIG. 4, and the above target output result may include, but is not limited to, a virtual background to be displayed in the virtual background display area 408 of the cloud conference application 406 shown in FIG. 4, or other image data obtained by processing with the above image data processing method.
The above is merely an example, and this embodiment imposes no specific limitation.
In this embodiment, performing the convolution operation on the second image data set and the pre-acquired second weight data set to obtain the target output result may include, but is not limited to, performing the convolution operation on the second image data set and the second weight data set to obtain a third image data set, where the target output result includes, but is not limited to, the third image data set. The second image data set is an image data set obtained by interleaving and rearranging the image data of M 1 channels in each of the S 1 groups of image data; the S 1 groups of image data are obtained by dividing the image data of every M 1 channels in the first image data set into one group, M 1 ≤ C 1.
In this embodiment, a first image data set to be processed is acquired, where the image data in the first image data set are arranged according to the data format N 1 × C 1 × H 1 × W 1; the data in the first image data set are interleaved and rearranged to obtain a second image data set, where the image data in the second image data set are arranged according to the data format N 1 × H 2 × W 2, and the interleaving and rearranging manner matches the convolution operation. By deeply optimizing the computation in image data processing and correspondingly rearranging the input image data, output image data, and computation weights, higher-dimensional data are rearranged into lower-dimensional data. In the subsequent convolution computation, when data of different channels are processed, the data of multiple channels can be grouped so that data can be fetched from different channels across groups, which effectively reduces the number of cross-channel data fetches. In the existing technology, every fetch from a different channel requires a cross-channel access; compared with the traditional computing mode, the technical solution described in this application therefore reduces extra data-copy overhead and lowers the probability of cache misses, thereby achieving the technical effects of optimizing device computing performance and improving the efficiency of processing image data, and solving the technical problem in the related art that the efficiency of processing image data is relatively low.
作为一种实施例,所述第一图像数据集中的图像数据按照第一数据格式进行排列,包括:第一图像数据集中的图像数据按照N₁×C₁×H₁×W₁的数据格式进行排列,N₁表示第一图像数据集包括的图像数据子集的数量,C₁表示每个图像数据子集中的通道数量,H₁表示第一图像数据集中每个图像数据子集中的数据高度,W₁表示第一图像数据集中每个图像数据子集中的数据宽度;
所述第二图像数据集中的图像数据按照第二数据格式进行排列,包括:将第一图像数据集中的数据进行交织重排,得到第二图像数据集,其中,第二图像数据集中的图像数据按照N 1×H 2×W 2的数据格式进行排列,H 2表示第二图像数据集中每个图像数据子集中的数据高度,W 2表示第二图像数据集中每个图像数据子集中的数据宽度。
在本实施例中,上述将第一图像数据集中的数据进行交织重排,得到第二图像数据集可以包括但不限于将第一图像数据集中的图像数据进行交织重排,以降低图像数据的维度,便于后续卷积操作。
图5是根据本申请实施例的又一种图像数据的处理方法的示意图。上述将第一图像数据集中的数据进行交织重排,得到第二图像数据集可以包括但不限于如图5所示的示例,将第一图像数据集中每M 1个通道的图像数据分成一组,其中,M 1≤C 1,将每组图像数据中的M 1个通道的图像数据进行交织重排,得到第二图像数据集,以M 1=4为例,则如图5中示出的将每个图像数据按每四个通道为一组分成C 1/4个组,再将每组内的4个通道维度的图像数据按照交织(混合)的形式重新排列,得到如图5所示的[N,H,C/4,W,C 4]的数据结构的第二图像数据集,其中,“A,B,C”表示不同通道内的图像数据。
例如,以图5中的示例进行如下说明:
位于第一图像数据集左上角的A(1,1)数据,其位于第一个通道内的第一行、第一列,而B(1,1)数据,其位于第二个通道内的第一行、第一列。当进行卷积运算时,在提取A(1,1)数据后,需要提取B(1,1)数据的情况下,现有技术中需要由提取A通道数据的模式切换为提取B通道数据的模式,也就是说,需要跨通道来进行数据提取,而采用本实施例,通过将A(1,1)的数据排列在第二图像数据集中的第一行、第一列,B(1,1)的数据排列在第二图像数据集中的第一行、第二列,在提取了A(1,1)数据之后,直接可以提取B(1,1)的数据,避免了跨通道提取数据造成的cache miss等问题。
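上述将相邻通道同位置数据排到一起的交织重排,可以用如下NumPy片段作一个最小示意(此处假设C₁恰为M₁=4,reshape/transpose的写法仅为一种可能实现,并非本申请限定的方式;补齐通道的情形见后文):

```python
import numpy as np

# 假设的输入:N1=1 个批次,C1=4 个通道,每个通道 H1 x W1 = 2 x 3
N1, C1, H1, W1, M1 = 1, 4, 2, 3, 4
x = np.arange(N1 * C1 * H1 * W1, dtype=np.float32).reshape(N1, C1, H1, W1)

# 每 M1 个通道分为一组并沿宽度方向交织:
# [N, C, H, W] -> [N, C/M1, M1, H, W] -> [N, (C/M1)*H, W*M1]
S1 = C1 // M1
interleaved = (x.reshape(N1, S1, M1, H1, W1)
                .transpose(0, 1, 3, 4, 2)
                .reshape(N1, S1 * H1, W1 * M1))

# A(1,1) 与 B(1,1)(第0、1通道的第一行第一列)在重排后相邻
print(interleaved[0, 0, 0] == x[0, 0, 0, 0])  # True
print(interleaved[0, 0, 1] == x[0, 1, 0, 0])  # True
```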
作为一种实施例,将第一图像数据集中的数据进行交织重排,得到第二图像数据集,包括:
将第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,M 1≤C 1
将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到第二图像数据集。
在本实施例中,上述将第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据可以包括但不限于,在C 1为M 1的整数倍的情况下,将第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,
S₁ = C₁/M₁;
在C 1不为M 1的整数倍的情况下,将第一图像数据集中的通道数量从C 1增加到C 2,得到第三图像数据集,其中,C 2为M 1的整数倍,第三图像数据集中增加的通道上的图像数据为0(即,将通道数量补齐为M 1的整数倍);将第三图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,
S₁ = C₂/M₁。
以N=1为例,图6是根据本申请实施例的又一种图像数据的处理方法的示意图。上述第二图像数据集可以包括但不限于如图6所示的示例,其中,“A,B,C”表示不同通道内的图像数据,将A(1,1)、B(1,1)、C(1,1)、D(1,1)交织重排为同一高度的图像数据,图像数据中不同通道的数据连续排布,提高了数据访问的局部性,可以极大地降低Cache Miss的概率。在现有技术中,在卷积核的尺寸为非1x1尺寸的情况下,在对输入的图片进行特征提取时,对于边界区域的特征提取,需要将一些额外的区域配置为0,进而实现对边界区域的特征提取。而在本实施例中,因为将M₁个通道打包处理,因此在使用SIMD进行卷积运算时,不需要对边界区域进行补边操作,从而节省了数据拷贝的额外开销。
以图5为例,在卷积核的尺寸为3x3的情况下,在提取B(2,1)为中心点的特征数据时,现有技术需要在第一图像数据集中B(2,1)的左侧,补入三个数值为0的特征点,以实现B(2,1)数据的特征提取。而在本实施例中重新排列数据之后,第二图像数据集中B(2,1)左侧的数据为A(2,1)的数据,因此,无需进行补边操作,直接基于B(2,1)进行特征提取即可。也就是说,由于在本申请实施例中对待处理数据进行了重新交织排布,在使用卷积核提取边界数据的过程中,可以减少补边操作的次数,将不同通道内的数据置于同一维度,从而能够达到节省数据拷贝的额外开销的效果。
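上述“B(2,1)左侧即为A(2,1)、无需补边”的观察,可以用如下NumPy片段验证(通道0、1分别充当A、B,各维度取值均为假设):

```python
import numpy as np

# 假设的输入:1 个批次,4 个通道(分别充当 A、B、C、D),每通道 3x3
N1, C1, H1, W1, M1 = 1, 4, 3, 3, 4
x = np.arange(N1 * C1 * H1 * W1, dtype=np.float32).reshape(N1, C1, H1, W1)
y = (x.reshape(N1, C1 // M1, M1, H1, W1)
      .transpose(0, 1, 3, 4, 2)
      .reshape(N1, H1 * C1 // M1, W1 * M1))

b21 = y[0, 1, 1]   # B(2,1):重排后第2行(下标1)的第2个元素(通道1)
left = y[0, 1, 0]  # 它在交织行中的左侧相邻元素
print(left == x[0, 0, 1, 0])  # True:左侧正是 A(2,1)
print(b21 == x[0, 1, 1, 0])   # True
```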
在本实施例中,将第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,M 1≤C 1,将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到第二图像数据集。通过将第一图像数据集中的数据进行交织重排,得到第二图像数据集,相比传统计算模式下容易造成Cache Miss以及额外的数据拷贝开销的情况,本实施例降低了出现Cache Miss的概率,从而实现了优化设备计算性能,提高图像数据的处理效率的技术效果,进而解决了相关技术中存在的处理图像数据的效率比较低的技术问题。
作为一种实施例,将第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,包括:
在C 1为M 1的整数倍的情况下,将第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,
S₁ = C₁/M₁;
在C 1不为M 1的整数倍的情况下,将第一图像数据集中的通道数量从C 1增加到C 2,得到第三图像数据集,其中,C 2为M 1的整数倍,第三图像数据集中增加的通道上的图像数据为0;将第三图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,
S₁ = C₂/M₁。
在本实施例中,上述在C₁不为M₁的整数倍的情况下,将第一图像数据集中的通道数量从C₁增加到C₂可以包括但不限于C₂=⌈C₁/M₁⌉×M₁,即对C₁/M₁向上取整后再乘以M₁,当然也可以以其他方式进行取整。
以N=1,C 1=5,M 1=4为例,如果通道数量C 1不能整除4则将通道数量补齐到4的整倍数,补上的激活值(对应于前述的图像数据)全部填0,如图5所示,第二行第二列至第四列均为“0.0f”。
上述仅是一种示例,本实施例不做任何具体限定。
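上述通道数补齐可以用如下NumPy片段作一个最小示意(取C₁=5、M₁=4,补齐的具体写法与各维度数值均为假设):

```python
import numpy as np

# 假设的输入:C1 = 5 个通道,需补齐到 M1 = 4 的整数倍
N1, C1, H1, W1, M1 = 1, 5, 2, 2, 4
x = np.ones((N1, C1, H1, W1), dtype=np.float32)

C2 = -(-C1 // M1) * M1                 # 对 C1/M1 向上取整再乘以 M1 -> 8
padded = np.zeros((N1, C2, H1, W1), dtype=np.float32)
padded[:, :C1] = x                     # 原通道保留,补上的通道全部为 0.0f

print(C2)                              # 8
print(float(padded[0, C1:].sum()))     # 0.0
```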
作为一种实施例,将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到第二图像数据集,包括:
在C 1为M 1的整数倍的情况下,将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到第二图像数据集,其中,W 2=M 1×W 1
H₂ = H₁×C₁/M₁;
在C 1不为M 1的整数倍的情况下,将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到第二图像数据集,其中,W 2=M 1×W 1
H₂ = H₁×C₂/M₁。
在本实施例中,如图5所示,以N=1,C 1=5,M 1=4为例,则上述W 2=M 1×W 1即为图5所示的W 2=w*C 4
H₂ = H₁×C₁/M₁
即为图5所示的H 2=H*C/4。
通过本实施例,将数据结构进行重排为N 1×H 2×W 2后,不需要对数据进行补边也可以利用SIMD来加速卷积运算,避免卷积运算时进行补边而导致的数据拷贝的额外开销。
作为一种实施例,该方法还包括:
获取预设的第一权重数据集,其中,第一权重数据集中的权重数据按照N 2×C 2×H 3×W 3的数据格式进行排列,N 2表示第一权重数据集包括的权重数据子集的数量,C 2表示每个权重数据子集中的通道数量,H 3表示每个权重数据子集中的数据高度,W 3表示每个权重数据子集中的数据宽度;
将第一权重数据集中的数据进行交织重排,得到第二权重数据集,其中,第二权重数据集中的权重数据按照H 4×W 4的数据格式进行排列,H4表示第二权重数据集中的权重数据的数据高度,W4表示第二权重数据集中的权重数据的数据宽度。
在本实施例中,上述第一权重数据集可以包括但不限于进行卷积计算过程中,使用卷积核处理图像数据时所采用的权重数据,例如,以上述图像数据的处理方法应用在云会议 场景中为例,图7是根据本申请实施例的又一种图像数据的处理方法的示意图,如图7所示,该方法具体包括但不限于如下步骤:
S1,位于用户终端702内部或与用户终端702相连接的处理器704获取预设的第一权重数据集;
S2,位于用户终端702内部或与用户终端702相连接的处理器704将第一权重数据集中的数据进行交织重排,得到第二权重数据集。
其中,上述第一权重数据集可以包括但不限于在如图7所示数据库中存储,上述第二权重数据集可以包括但不限于用于与待处理的第二图像数据集关联使用,以在图7所示的云会议应用706的虚拟背景显示区域708中生成虚拟背景。
图8是根据本申请实施例的又一种图像数据的处理方法的示意图。在本实施例中,上述第一权重数据集中的权重数据按照N₂×C₂×H₃×W₃的数据格式进行排列可以包括但不限于如图8所示的示例,N₂表示第一权重数据集包括的权重数据子集的数量,图8中示出的N₂=3,也即N₂=Cout,C₂表示每个权重数据子集中的通道数量,图8中示出的C₂=5,也即C₂=Cin,H₃表示每个权重数据子集中的数据高度,图8中示出的H₃=4,也即H₃=kernel_h,W₃表示每个权重数据子集中的数据宽度,图8中示出的W₃=3,也即W₃=kernel_w,“A,B,C”表示不同通道内的权重数据。
上述仅是一种示例,本实施例不做任何具体的限定。
在本实施例中,通过将第一权重数据集中的数据进行交织重排,得到第二权重数据集,相比传统计算模式下容易造成Cache Miss以及额外的数据拷贝开销的情况,本实施例降低了出现Cache Miss的概率,从而实现了优化设备计算性能,提高图像数据的处理效率的技术效果,进而解决了相关技术中存在的处理图像数据的效率比较低的技术问题。
作为一种实施例,将第一权重数据集中的数据进行交织重排,得到第二权重数据集,包括:
将第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,其中,M 2≤N 2
将S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到第二权重数据集。
图9是根据本申请实施例的又一种图像数据的处理方法的示意图。在本实施例中,上述将第一权重数据集中的数据进行交织重排,得到第二权重数据集可以包括但不限于如图9所示的示例,以N 2=1为例,将第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,将S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到第二权重数据集,以M 2=4为例,则如图9中示出的将权重数据按每四个输出通道为一组分成C/4个组,如果输出通道数量不能整除4则将通道数量补齐到4的整倍数,补上的激活值全部填0,每组内的4个通道维度的权重数据按照交织(混合)的形式重新排列,另外在随后的维度将输入通道维度顺序排列,即可得到[Cout/4,kernel_h,kernel_w,Cin,Cout4]的数据结构,OC4即为Cout4,IC即为Cin。
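上述将权重重排为[Cout/4, kernel_h, kernel_w, Cin, Cout4]数据结构的过程,可以用如下NumPy片段作一个最小示意(此处假设Cout恰为M₂=4的整数倍,各维度数值与reshape/transpose写法均为假设):

```python
import numpy as np

# 假设的第一权重数据集:[Cout, Cin, kernel_h, kernel_w]
Cout, Cin, kh, kw, M2 = 4, 5, 3, 3, 4
wts = np.arange(Cout * Cin * kh * kw, dtype=np.float32).reshape(Cout, Cin, kh, kw)

# 每 M2 个输出通道分为一组并交织到最内侧维度:
# [Cout, Cin, kh, kw] -> [Cout/M2, kh, kw, Cin, M2]
regrouped = (wts.reshape(Cout // M2, M2, Cin, kh, kw)
                .transpose(0, 3, 4, 2, 1))
print(regrouped.shape)  # (1, 3, 3, 5, 4)

# 同一 (kh, kw, Cin) 位置上 4 个输出通道的权重在最内维连续存放
print(regrouped[0, 0, 0, 0, 1] == wts[1, 0, 0, 0])  # True
```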
在本实施例中,上述将第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,包括但不限于在N 2为M 2的整数倍的情况下,将第一权重数据集中每M 2 个权重数据子集的权重数据分成一组,得到S 2组权重数据,其中,
S₂ = N₂/M₂;
在N 2不为M 2的整数倍的情况下,将第一权重数据集中的权重数据子集的数量从N 2增加到N 3,得到第三权重数据集,其中,N 3为M 2的整数倍,第三权重数据集中增加的权重数据子集中的权重数据为0;将第三权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,其中,
S₂ = N₃/M₂。
作为一种实施例,将第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,包括:
在N 2为M 2的整数倍的情况下,将第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,其中,
S₂ = N₂/M₂;
在N₂不为M₂的整数倍的情况下,将第一权重数据集中的权重数据子集的数量从N₂增加到N₃,得到第三权重数据集,其中,N₃为M₂的整数倍,第三权重数据集中增加的权重数据子集中的权重数据为0;将第三权重数据集中每M₂个权重数据子集的权重数据分成一组,得到S₂组权重数据,其中,
S₂ = N₃/M₂。
在本实施例中,上述在N₂不为M₂的整数倍的情况下,将第一权重数据集中的权重数据子集的数量从N₂增加到N₃,得到第三权重数据集可以包括但不限于N₃=⌈N₂/M₂⌉×M₂,即对N₂/M₂向上取整后再乘以M₂,当然也可以以其他方式进行取整。
以N2=3,M2=4为例,如果N2不能整除4则补齐到4的倍数,补上的激活值(对应于前述的图像数据)全部填0,如图9所示,第四列、第八列、第十二列,以此类推均为“0.0f”。
上述仅是一种示例,本实施例不做任何具体限定。
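上述将N₂=3补齐到M₂=4的整数倍的过程,可以用如下NumPy片段作一个最小示意(补上的权重数据子集全部为0,对应图9中的“0.0f”列;各维度数值均为假设):

```python
import numpy as np

# 假设的第一权重数据集:N2 = 3 个权重数据子集,需补齐到 M2 = 4 的整数倍
N2, Cin, kh, kw, M2 = 3, 2, 3, 3, 4
wts = np.ones((N2, Cin, kh, kw), dtype=np.float32)

N3 = -(-N2 // M2) * M2                 # 对 N2/M2 向上取整再乘以 M2 -> 4
padded = np.zeros((N3, Cin, kh, kw), dtype=np.float32)
padded[:N2] = wts                      # 补上的子集中的权重全部为 0.0f

print(N3)                              # 4
print(float(padded[N2:].sum()))        # 0.0
```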
通过本实施例,采用在N 2为M 2的整数倍的情况下,将第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,其中,
S₂ = N₂/M₂;
在N₂不为M₂的整数倍的情况下,将第一权重数据集中的权重数据子集的数量从N₂增加到N₃,得到第三权重数据集,其中,N₃为M₂的整数倍,第三权重数据集中增加的权重数据子集中的权重数据为0;将第三权重数据集中每M₂个权重数据子集的权重数据分成一组,得到S₂组权重数据,其中,
S₂ = N₃/M₂
的方式,通过将第一权重数据集中的数据进行交织重排,得到第二权重数据集,降低了传统计算模式下容易造成的Cache Miss以及额外的数据拷贝开销,达到了降低出现Cache Miss的概率的目的,从而实现了优化设备计算性能,提高图像数据的处理效率的技术效果,进而解决了相关技术中存在的处理图像数据的效率比较低的技术问题。
作为一种实施例,将S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到第二权重数据集,包括:
在N 2为M 2的整数倍的情况下,将S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到第二权重数据集,其中,
W₄ = C₂×M₂,
H 4=H 3×W 3
在N 2不为M 2的整数倍的情况下,将S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到第二权重数据集,其中,
W₄ = C₂×M₂,
H 4=H 3×W 3
在本实施例中,以M₂=4为例,则如图9所示,H₄=kernel_h×kernel_w,W₄=IC×OC₄,即每组权重数据的高为kernel_h×kernel_w,宽为IC×OC₄。
上述仅是一种示例,本实施例不做任何具体限定。
作为一种实施例,N 2的取值为卷积核的输出通道的数量,C 2的取值为卷积核的输入通道的数量,卷积操作为使用卷积核执行的卷积操作,每个权重数据子集包括C 2个输入通道上的权重数据,每个输出通道包括C 2个输入通道。
在本实施例中,上述每个权重数据子集包括C₂个输入通道上的权重数据,使用卷积核基于第二权重数据集对包含C₂个通道的待处理的第二图像数据集执行卷积操作,得到目标输出结果。
作为一种实施例,对第二图像数据集和预先获取的第二权重数据集执行卷积操作,得到目标输出结果,包括:
对第二图像数据集和第二权重数据集执行卷积操作,得到第四图像数据集,其中,所述目标输出结果包括第四图像数据集,所述第二图像数据集是将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到的图像数据集,所述S 1组图像数据是将第一图像数据集中每M 1个通道的图像数据分成一组,得到的图像数据,M 1≤C 1
在本实施例中,上述对第二图像数据集和第二权重数据集执行卷积操作包括但不限于,在第二图像数据集中获取C 2组图像数据,其中,每组图像数据包括第一图像数据集中位于同一个通道的多个图像数据,每组图像数据是从C 2组图像数据中的上一组图像数据的存储地址偏移1个地址得到的图像数据,对C 2组图像数据和第二权重数据集中的N 2×C 2组权重数据执行卷积操作,得到第四图像数据集中的N 2组图像数据,其中,每组权重数据与每组图像数据具有相同的数据结构。
在本实施例中,通过将第一权重数据集中的数据以及待处理的第一图像数据集中的数据进行交织重排,相比传统计算模式下容易造成Cache Miss以及额外的数据拷贝开销的情况,降低了出现Cache Miss的概率,从而实现了优化设备计算性能,提高图像数据的处理效率的技术效果,进而解决了相关技术中存在的处理图像数据的效率比较低的技术问题。
作为一种实施例,对第二图像数据集和第二权重数据集执行卷积操作,得到第四图像数据集,包括:
在第二图像数据集中获取C 2组图像数据,其中,每组图像数据包括第一图像数据集中位于同一个通道的多个图像数据,每组图像数据是从C 2组图像数据中的上一组图像数据的存储地址偏移1个地址得到的图像数据;
对C₂组图像数据和第二权重数据集中的N₂×C₂组权重数据执行卷积操作,得到第四图像数据集中的N₂组图像数据,其中,每组权重数据与每组图像数据具有相同的数据结构;
其中,第二权重数据集是将第一权重数据集中的数据进行交织重排,得到第二权重数据集,第一权重数据集中的权重数据按照N 2×C 2×H 3×W 3的数据格式进行排列,N 2表示第一权重数据集包括的权重数据子集的数量,C 2表示每个权重数据子集中的通道数量,H 3 表示每个权重数据子集中的数据高度,W 3表示每个权重数据子集中的数据宽度。
在本实施例中,上述每组图像数据是从C 2组图像数据中的上一组图像数据的存储地址偏移1个地址得到的图像数据可以包括但不限于,按照预定步长的滑动窗口处理图像数据,也即,从C 2组图像数据中的上一组图像数据的存储地址偏移1个地址即为步长=1。
在本实施例中,上述每组权重数据与每组图像数据具有相同的数据结构可以包括但不限于上述M 1与M 2相同。
在本实施例中,从C 2组图像数据中的上一组图像数据的存储地址偏移1个地址得到图像数据,以得到每组图像数据。通过将第一权重数据集中的数据以及待处理的第一图像数据集中的数据进行交织重排,降低了卷积计算过程中,跨通道获取数据的频率。相比传统计算模式下容易造成Cache Miss以及额外的数据拷贝开销的情况,降低了出现Cache Miss的概率,从而实现了优化设备计算性能,提高图像数据的处理效率的技术效果,进而解决了相关技术中存在的处理图像数据的效率比较低的技术问题。
作为一种实施例,对C₂组图像数据和第二权重数据集中的N₂×C₂组权重数据执行卷积操作,得到第四图像数据集中的N₂组图像数据,包括:
将N 2×C 2组权重数据中的每C 2组权重数据分别与C 2组图像数据执行加权求和操作,得到N 2组图像数据。
在本实施例中,该方法可以包括但不限于,按照预定的滑动步长逐个使用卷积核将N 2×C 2组权重数据中的每C 2组权重数据分别与C 2组图像数据执行加权求和操作,得到N 2组图像数据。
图10是根据本申请实施例的又一种图像数据的处理方法的示意图。如图10所示,以卷积核尺寸为3x3为例,通过使用卷积核基于第二权重数据集中记录的对应的权重参数,对相同位置的第二图像数据集中的数据进行加权求和,得到N 2组图像数据中的一组图像数据,按照步长为1的滑动窗口继续处理,以得到上述N 2组图像数据。
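上述按步长为1滑动、对相同位置数据加权求和的过程,可以用如下单通道NumPy片段作一个最小示意(3x3卷积核与各数值均为假设,仅演示加权求和本身,不涉及上文的交织排布):

```python
import numpy as np

# 假设的单通道输入与 3x3 卷积核
h, w, k = 4, 4, 3
image = np.arange(h * w, dtype=np.float32).reshape(h, w)
kernel = np.ones((k, k), dtype=np.float32)

# 步长为 1 的滑动窗口:每个输出点是 3x3 邻域与对应权重的加权求和
out = np.zeros((h - k + 1, w - k + 1), dtype=np.float32)
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)

print(out[0, 0])  # 45.0:即 0+1+2+4+5+6+8+9+10
```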
作为一种实施例,该方法还包括:在第一内存空间中存储第一图像数据集和第二图像数据集;
在第二内存空间中存储第二权重数据集,其中,第一内存空间与第二内存空间为相互独立的内存空间。
在本实施例中,上述第一内存空间可以包括但不限于用于存储图像数据的存储空间,例如,Texture资源,上述第二内存空间可以包括但不限于用于存储权重数据的存储空间,例如,Buffer资源。
图11是根据本申请实施例的又一种图像数据的处理方法的示意图。如图11所示,现有的技术方案在使用例如Metal做GPU运算的时候,一般只会使用一种内存(Buffer/Texture)作为数据加载/存储的空间,然而在目前模型设计越来越轻量级的计算模式下,内存带宽的访问限制往往会成为最终性能的瓶颈。而在本申请实施例中,Metal中Data Buffer资源和Texture资源是独立的内存空间。因此,与传统的数据只使用一种内存结构(Buffer/Texture)的表达方式相比,输入/输出使用Texture保存数据,权重/偏置参数使用Buffer来存储,区分开使用Texture和Buffer可以获取到更高的内存带宽,降低Cache Miss的概率,使内存访问的性能得到提升。
对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,所涉及的动作和模块并不一定是本申请所必须的。
根据本申请实施例的另一个方面,还提供了一种用于实施上述图像数据的处理方法的图像数据的处理装置。如图12所示,该装置包括:
获取模块1202,用于获取待处理的第一图像数据集,其中,第一图像数据集中的图像数据按照第一数据格式进行排列;
处理模块1204,用于将第一图像数据集中的数据进行交织重排,得到第二图像数据集,其中,第二图像数据集中的图像数据按照第二数据格式进行排列,交织重排的方式与卷积操作匹配,第二数据格式的维度小于第一数据格式;
执行模块1206,用于对第二图像数据集和预先获取的第二权重数据集执行卷积操作,得到目标输出结果。
作为一种实施例,所述获取模块,包括:获取单元,用于获取待处理的第一图像数据集,其中,第一图像数据集中的图像数据按照N 1×C 1×H 1×W 1的数据格式进行排列,N 1表示第一图像数据集包括的图像数据子集的数量,C 1表示每个图像数据子集中的通道数量,H 1表示第一图像数据集中每个图像数据子集中的数据高度,W 1表示第一图像数据集中每个图像数据子集中的数据宽度;所述处理模块,包括:处理单元,用于将第一图像数据集中的数据进行交织重排,得到第二图像数据集,其中,第二图像数据集中的图像数据按照N 1×H 2×W 2的数据格式进行排列,H 2表示第二图像数据集中每个图像数据子集中的数据高度,W 2表示第二图像数据集中每个图像数据子集中的数据宽度。
作为一种实施例,所述处理模块,包括:分组单元,用于将第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,M 1≤C 1;排列单元,用于将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到第二图像数据集。
作为一种实施例,所述分组单元用于通过如下方式将第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据:在C 1不为M 1的整数倍的情况下,将第一图像数据集中的通道数量从C 1增加到C 2,得到第三图像数据集,其中,C 2为M 1的整数倍,第三图像数据集中增加的通道上的图像数据为0;将第三图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,
S₁ = C₂/M₁。
作为一种实施例,所述排列单元用于通过如下方式将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到第二图像数据集:在C 1为M 1的整数倍的情况下,将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到第二图像数据集,其中,
W 2=M 1×W 1
H₂ = H₁×C₁/M₁;
在C 1不为M 1的整数倍的情况下,将S 1组图像数据中的每组图像数据中的M 1个通道 的图像数据进行交织重排,得到第二图像数据集,其中,
W 2=M 1×W 1
H₂ = H₁×C₂/M₁。
作为一种实施例,所述装置还用于:获取预设的第一权重数据集,其中,第一权重数据集中的权重数据按照N 2×C 2×H 3×W 3的数据格式进行排列,N 2表示第一权重数据集包括的权重数据子集的数量,C 2表示每个权重数据子集中的通道数量,H 3表示每个权重数据子集中的数据高度,W 3表示每个权重数据子集中的数据宽度;将第一权重数据集中的数据进行交织重排,得到第二权重数据集,其中,第二权重数据集中的权重数据按照H 4×W 4的数据格式进行排列,H4表示第二权重数据集中的权重数据的数据高度,W4表示第二权重数据集中的权重数据的数据宽度。
作为一种实施例,所述装置还用于通过如下方式将第一权重数据集中的数据进行交织重排,得到第二权重数据集:将第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,其中,M 2≤N 2;将S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到第二权重数据集。
作为一种实施例,所述装置还用于通过如下方式将第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,包括:
在N 2为M 2的整数倍的情况下,将第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到所述S 2组权重数据,其中,
S₂ = N₂/M₂;
在N₂不为M₂的整数倍的情况下,将第一权重数据集中的权重数据子集的数量从N₂增加到N₃,得到第三权重数据集,其中,N₃为M₂的整数倍,第三权重数据集中增加的权重数据子集中的权重数据为0;将第三权重数据集中每M₂个权重数据子集的权重数据分成一组,得到S₂组权重数据,其中,
S₂ = N₃/M₂。
作为一种实施例,所述装置还用于通过如下方式将S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到第二权重数据集,包括:
在N 2为M 2的整数倍的情况下,将S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到第二权重数据集,其中,
W₄ = C₂×M₂,
H 4=H 3×W 3
在N 2不为M 2的整数倍的情况下,将S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到第二权重数据集,其中,
W₄ = C₂×M₂,
H 4=H 3×W 3
作为一种实施例,N 2的取值为卷积核的输出通道的数量,C 2的取值为卷积核的输入通道的数量,卷积操作为使用卷积核执行的卷积操作,每个权重数据子集包括C 2个输入通道上的权重数据,每个输出通道包括C 2个输入通道。
作为一种实施例,所述装置还用于通过如下方式对第二图像数据集和预先获取的第二权重数据集执行所述卷积操作,得到目标输出结果,包括:
对第二图像数据集和第二权重数据集执行卷积操作,得到第四图像数据集,其中,目 标输出结果包括第四图像数据集,第二图像数据集是将S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到的图像数据集,S 1组图像数据是将第一图像数据集中每M 1个通道的图像数据分成一组,得到的图像数据,M 1≤C 1
作为一种实施例,所述装置还用于通过如下方式对第二图像数据集和第二权重数据集执行卷积操作,得到第四图像数据集,包括:
在第二图像数据集中获取C 2组图像数据,其中,每组图像数据包括第一图像数据集中位于同一个通道的多个图像数据,每组图像数据是从C 2组图像数据中的上一组图像数据的存储地址偏移1个地址得到的图像数据;
对C 2组图像数据和第二权重数据集中的N 2×C 2组权重数据执行卷积操作,得到第四图像数据集中的N 2组图像数据,其中,每组权重数据与每组图像数据具有相同的数据结构;
其中,第二权重数据集是将第一权重数据集中的数据进行交织重排,得到第二权重数据集,第一权重数据集中的权重数据按照N 2×C 2×H 3×W 3的数据格式进行排列,N 2表示第一权重数据集包括的权重数据子集的数量,C 2表示每个权重数据子集中的通道数量,H 3表示每个权重数据子集中的数据高度,W 3表示每个权重数据子集中的数据宽度。
作为一种实施例,所述装置还用于通过如下方式对C₂组图像数据和第二权重数据集中的N₂×C₂组权重数据执行卷积操作,得到第四图像数据集中的N₂组图像数据,包括:
将N 2×C 2组权重数据中的每C 2组权重数据分别与C 2组图像数据执行加权求和操作,得到N 2组图像数据。
作为一种实施例,所述装置还用于:在第一内存空间中存储第一图像数据集和第二图像数据集;在第二内存空间中存储第二权重数据集,其中,第一内存空间与第二内存空间为相互独立的内存空间。
根据本申请实施例的又一个方面,还提供了一种用于实施上述图像数据的处理方法的电子设备,该电子设备可以是图1所示的终端设备或服务器。本实施例以该电子设备为服务器为例来说明。如图13所示,该电子设备包括存储器1302和处理器1304,该存储器1302中存储有计算机程序,该处理器1304被设置为通过运行计算机程序执行上述任一项方法实施例中的步骤。
在本实施例中,上述电子设备可以位于计算机网络的多个网络设备中的至少一个网络设备。
在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:
S1,获取待处理的第一图像数据集,其中,第一图像数据集中的图像数据按照N 1×C 1×H 1×W 1的数据格式进行排列,N 1表示第一图像数据集包括的图像数据子集的数量,C 1表示每个图像数据子集中的通道数量,H 1表示每个图像数据子集中的数据高度,W 1表示每个图像数据子集中的数据宽度;
S2,将第一图像数据集中的数据进行交织重排,得到第二图像数据集,其中,第二图像数据集中的图像数据按照N 1×H 2×W 2的数据格式进行排列,交织重排的方式与卷积操作匹配;
S3,对第二图像数据集和预先获取的第二权重数据集执行卷积操作,得到目标输出结果。
本领域普通技术人员可以理解,图13所示的结构仅为示意,电子设备也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图13并不对上述电子设备的结构造成限定。例如,电子设备还可包括比图13中所示的更多或者更少的组件(如网络接口等),或者具有与图13所示的不同的配置。
其中,存储器1302可用于存储软件程序以及模块,如本申请实施例中的图像数据的处理方法和装置对应的程序指令/模块,处理器1304通过运行存储在存储器1302内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的图像数据的处理方法。存储器1302可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器1302可进一步包括相对于处理器1304远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。其中,存储器1302具体可以用于但不限于存储待处理的图像数据等信息。作为一种示例,如图13所示,上述存储器1302中可以包括但不限于上述图像数据的处理装置中的获取模块1202、处理模块1204以及执行模块1206。此外,还可以包括但不限于上述图像数据的处理装置中的其他模块或单元,本示例中不再赘述。
在实施例中,上述的传输装置1306用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置1306包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连,从而可与互联网或局域网进行通讯。在一个实例中,传输装置1306为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
此外,上述电子设备还包括:显示器1308,用于显示图像数据;和连接总线1310,用于连接上述电子设备中的各个模块部件。
在其他实施例中,上述终端设备或者服务器可以是一个分布式系统中的一个节点,其中,该分布式系统可以为区块链系统,该区块链系统可以是由该多个节点通过网络通信的形式连接形成的分布式系统。其中,节点之间可以组成点对点(P2P,Peer To Peer)网络,任意形式的计算设备,比如服务器、终端等电子设备都可以通过加入该点对点网络而成为该区块链系统中的一个节点。
根据本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述图像数据的处理方面的各种实现方式中提供的方法。其中,该计算机程序被设置为运行时执行上述任一项方法实施例中的步骤。
在本实施例中,上述计算机可读的存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,获取待处理的第一图像数据集,其中,第一图像数据集中的图像数据按照N 1×C 1×H 1×W 1的数据格式进行排列,N 1表示第一图像数据集包括的图像数据子集的数量,C 1表示每个图像数据子集中的通道数量,H 1表示每个图像数据子集中的数据高度,W 1表 示每个图像数据子集中的数据宽度;
S2,将第一图像数据集中的数据进行交织重排,得到第二图像数据集,其中,第二图像数据集中的图像数据按照N 1×H 2×W 2的数据格式进行排列,交织重排的方式与卷积操作匹配;
S3,对第二图像数据集和预先获取的第二权重数据集执行卷积操作,得到目标输出结果。
在本实施例中,本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过程序指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,该存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解,所揭露的客户端可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口、单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例的技术方案。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的几个实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (16)

  1. 一种图像数据的处理方法,由电子设备执行,包括:
    获取待处理的第一图像数据集,其中,所述第一图像数据集中的图像数据按照第一数据格式进行排列;
    将所述第一图像数据集中的数据进行交织重排,得到第二图像数据集,其中,所述第二图像数据集中的图像数据按照第二数据格式进行排列,所述交织重排的方式与卷积操作匹配,所述第二数据格式的维度小于所述第一数据格式;
    对所述第二图像数据集和预先获取的第二权重数据集执行所述卷积操作,得到目标输出结果。
  2. 根据权利要求1所述的方法,其中,
    所述第一图像数据集中的图像数据按照第一数据格式进行排列,包括:所述第一图像数据集中的图像数据按照N₁×C₁×H₁×W₁的数据格式进行排列,N₁表示所述第一图像数据集包括的图像数据子集的数量,C₁表示每个所述图像数据子集中的通道数量,H₁表示所述第一图像数据集中每个所述图像数据子集中的数据高度,W₁表示所述第一图像数据集中每个所述图像数据子集中的数据宽度;
    所述第二图像数据集中的图像数据按照第二数据格式进行排列,包括:所述第二图像数据集中的图像数据按照N 1×H 2×W 2的数据格式进行排列,H 2表示所述第二图像数据集中每个所述图像数据子集中的数据高度,W 2表示所述第二图像数据集中每个所述图像数据子集中的数据宽度。
  3. 根据权利要求2所述的方法,其中,所述将所述第一图像数据集中的数据进行交织重排,得到第二图像数据集,包括:
    将所述第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,M 1≤C 1
    将所述S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到所述第二图像数据集。
  4. 根据权利要求3所述的方法,其中,所述将所述第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,包括:
    在C 1为M 1的整数倍的情况下,将所述第一图像数据集中每M 1个通道的图像数据分成一组,得到S 1组图像数据,其中,
    S₁ = C₁/M₁;
    在C₁不为M₁的整数倍的情况下,将所述第一图像数据集中的通道数量从C₁增加到C₂,得到第三图像数据集,其中,C₂为M₁的整数倍,所述第三图像数据集中增加的通道上的图像数据为0;将所述第三图像数据集中每M₁个通道的图像数据分成一组,得到S₁组图像数据,其中,
    S₁ = C₂/M₁。
  5. 根据权利要求4所述的方法,其中,所述将所述S 1组图像数据中的每组图像数据 中的M 1个通道的图像数据进行交织重排,得到所述第二图像数据集,包括:
    在C 1为M 1的整数倍的情况下,将所述S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到所述第二图像数据集,其中,
    W₂ = M₁×W₁,H₂ = H₁×C₁/M₁;
    在C 1不为M 1的整数倍的情况下,将所述S 1组图像数据中的每组图像数据中的M 1个通道的图像数据进行交织重排,得到所述第二图像数据集,其中,
    W₂ = M₁×W₁,H₂ = H₁×C₂/M₁。
  6. 根据权利要求2所述的方法,其中,所述方法还包括:
    获取预设的第一权重数据集,其中,所述第一权重数据集中的权重数据按照N 2×C 2×H 3×W 3的数据格式进行排列,N 2表示所述第一权重数据集包括的权重数据子集的数量,C 2表示每个所述权重数据子集中的通道数量,H 3表示每个所述权重数据子集中的数据高度,W 3表示每个所述权重数据子集中的数据宽度;
    将所述第一权重数据集中的数据进行交织重排,得到所述第二权重数据集,其中,所述第二权重数据集中的权重数据按照H 4×W 4的数据格式进行排列,H4表示第二权重数据集中的权重数据的数据高度,W4表示第二权重数据集中的权重数据的数据宽度。
  7. 根据权利要求6所述的方法,其中,所述将所述第一权重数据集中的数据进行交织重排,得到所述第二权重数据集,包括:
    将所述第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,其中,M 2≤N 2
    将所述S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到所述第二权重数据集。
  8. 根据权利要求7所述的方法,其中,所述将所述第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到S 2组权重数据,包括:
    在N 2为M 2的整数倍的情况下,将所述第一权重数据集中每M 2个权重数据子集的权重数据分成一组,得到所述S 2组权重数据,其中,
    S₂ = N₂/M₂;
    在N₂不为M₂的整数倍的情况下,将所述第一权重数据集中的权重数据子集的数量从N₂增加到N₃,得到第三权重数据集,其中,N₃为M₂的整数倍,所述第三权重数据集中增加的权重数据子集中的权重数据为0;将所述第三权重数据集中每M₂个权重数据子集的权重数据分成一组,得到所述S₂组权重数据,其中,
    S₂ = N₃/M₂。
  9. 根据权利要求8所述的方法,其中,所述将所述S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到所述第二权重数据集,包括:
    在N 2为M 2的整数倍的情况下,将所述S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到所述第二权重数据集,其中,
    H₄ = H₃×W₃,W₄ = C₂×M₂;
    在N 2不为M 2的整数倍的情况下,将所述S 2组权重数据中的每组权重数据中的M 2个权重数据进行交织重排,得到所述第二权重数据集,其中,
    H₄ = H₃×W₃,W₄ = C₂×M₂。
  10. 根据权利要求6所述的方法,其中,所述N 2的取值为卷积核的输出通道的数量,所述C 2的取值为所述卷积核的输入通道的数量,所述卷积操作为使用所述卷积核执行的卷积操作,每个所述权重数据子集包括C 2个所述输入通道上的权重数据。
  11. 根据权利要求1至10中任一项所述的方法,其中,所述对所述第二图像数据集和预先获取的第二权重数据集执行所述卷积操作,得到目标输出结果,包括:
    对所述第二图像数据集和所述第二权重数据集执行所述卷积操作,得到第四图像数据集,其中,所述目标输出结果包括所述第四图像数据集,所述第二图像数据集是将S₁组图像数据中的每组图像数据中的M₁个通道的图像数据进行交织重排得到的图像数据集,所述S₁组图像数据是将所述第一图像数据集中每M₁个通道的图像数据分成一组得到的图像数据,M₁≤C₁。
  12. 根据权利要求11所述的方法,其中,所述对所述第二图像数据集和所述第二权重数据集执行卷积操作,得到第四图像数据集,包括:
    在所述第二图像数据集中获取C 2组图像数据,其中,每组图像数据包括所述第一图像数据集中位于同一个通道的多个图像数据,所述每组图像数据是从所述C 2组图像数据中的上一组图像数据的存储地址偏移1个地址得到的图像数据;
    对所述C 2组图像数据和所述第二权重数据集中的N 2×C 2组权重数据执行卷积操作,得到所述第四图像数据集中的N 2组图像数据,其中,每组权重数据与所述每组图像数据具有相同的数据结构;
    其中,所述第二权重数据集是将第一权重数据集中的数据进行交织重排,得到第二权重数据集,所述第一权重数据集中的权重数据按照N 2×C 2×H 3×W 3的数据格式进行排列,N 2表示所述第一权重数据集包括的权重数据子集的数量,C 2表示每个所述权重数据子集中的通道数量,H 3表示每个所述权重数据子集中的数据高度,W 3表示每个所述权重数据子集中的数据宽度。
  13. 根据权利要求1至10中任一项所述的方法,其中,所述方法还包括:
    在第一内存空间中存储所述第一图像数据集和所述第二图像数据集;
    在第二内存空间中存储所述第二权重数据集,其中,所述第一内存空间与所述第二内存空间为相互独立的内存空间。
  14. 一种计算机可读的存储介质,所述计算机可读的存储介质包括存储的程序,其中,所述程序运行时执行所述权利要求1至13任一项中所述的方法。
  15. 一种电子设备,包括存储器和处理器,其中,所述存储器中存储有计算机程序,所述处理器被设置为通过所述计算机程序执行所述权利要求1至13任一项中所述的方法。
  16. 一种计算机程序产品,包括计算机指令,当所述计算机指令由计算机设备的处理器读取并执行时,使得所述计算机设备执行权利要求1至13任一项中所述的方法。
PCT/CN2022/086217 2021-04-26 2022-04-12 图像数据的处理方法和装置、存储介质及电子设备 WO2022228105A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP22794576.3A EP4296891A4 (en) 2021-04-26 2022-04-12 IMAGE DATA PROCESSING METHOD AND DEVICE, STORAGE MEDIUM AND ELECTRONIC DEVICE
JP2023524148A JP2023547831A (ja) 2021-04-26 2022-04-12 画像データの処理方法及び装置並びに電子機器及びコンピュータプログラム
US17/991,416 US20230083565A1 (en) 2021-04-26 2022-11-21 Image data processing method and apparatus, storage medium, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110451609.0A CN112990370B (zh) 2021-04-26 2021-04-26 图像数据的处理方法和装置、存储介质及电子设备
CN202110451609.0 2021-04-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/991,416 Continuation US20230083565A1 (en) 2021-04-26 2022-11-21 Image data processing method and apparatus, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2022228105A1 true WO2022228105A1 (zh) 2022-11-03

Family

ID=76340137

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/086217 WO2022228105A1 (zh) 2021-04-26 2022-04-12 图像数据的处理方法和装置、存储介质及电子设备

Country Status (5)

Country Link
US (1) US20230083565A1 (zh)
EP (1) EP4296891A4 (zh)
JP (1) JP2023547831A (zh)
CN (1) CN112990370B (zh)
WO (1) WO2022228105A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990370B (zh) * 2021-04-26 2021-09-10 腾讯科技(深圳)有限公司 图像数据的处理方法和装置、存储介质及电子设备

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309837A (zh) * 2019-07-05 2019-10-08 北京迈格威科技有限公司 基于卷积神经网络特征图的数据处理方法及图像处理方法
CN112215754A (zh) * 2020-10-26 2021-01-12 北京达佳互联信息技术有限公司 图像放大方法、装置、电子设备和存储介质
CN112990370A (zh) * 2021-04-26 2021-06-18 腾讯科技(深圳)有限公司 图像数据的处理方法和装置、存储介质及电子设备

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489680B2 (en) * 2016-10-04 2019-11-26 Magic Leap, Inc. Efficient data layouts for convolutional neural networks
CN106779057B (zh) * 2016-11-11 2020-04-17 北京旷视科技有限公司 基于gpu的计算二值神经网络卷积的方法及装置
CN109426858B (zh) * 2017-08-29 2021-04-06 京东方科技集团股份有限公司 神经网络、训练方法、图像处理方法及图像处理装置
CN111860815A (zh) * 2017-08-31 2020-10-30 中科寒武纪科技股份有限公司 一种卷积运算方法及装置
CN108875904A (zh) * 2018-04-04 2018-11-23 北京迈格威科技有限公司 图像处理方法、图像处理装置和计算机可读存储介质
CN110557579B (zh) * 2018-05-31 2021-11-02 杭州海康威视数字技术股份有限公司 一种图像处理方法、装置及设备、可读介质
CN110163790B (zh) * 2018-06-11 2024-08-16 腾讯科技(深圳)有限公司 图像处理方法、装置、系统、存储介质和计算机设备
WO2020069449A1 (en) * 2018-09-27 2020-04-02 Deepmind Technologies Limited Image generation using subscaling and depth up-scaling
CN111695682B (zh) * 2019-03-15 2022-11-01 上海寒武纪信息科技有限公司 数据处理方法及装置
US11645512B2 (en) * 2019-04-30 2023-05-09 Baidu Usa Llc Memory layouts and conversion to improve neural network inference performance
CN111310115B (zh) * 2020-01-22 2024-05-24 深圳市商汤科技有限公司 数据处理方法、装置及芯片、电子设备、存储介质

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309837A (zh) * 2019-07-05 2019-10-08 北京迈格威科技有限公司 基于卷积神经网络特征图的数据处理方法及图像处理方法
CN112215754A (zh) * 2020-10-26 2021-01-12 北京达佳互联信息技术有限公司 图像放大方法、装置、电子设备和存储介质
CN112990370A (zh) * 2021-04-26 2021-06-18 腾讯科技(深圳)有限公司 图像数据的处理方法和装置、存储介质及电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4296891A4 *

Also Published As

Publication number Publication date
EP4296891A1 (en) 2023-12-27
EP4296891A4 (en) 2024-09-11
US20230083565A1 (en) 2023-03-16
CN112990370A (zh) 2021-06-18
CN112990370B (zh) 2021-09-10
JP2023547831A (ja) 2023-11-14


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22794576; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2023524148; Country of ref document: JP)
WWE Wipo information: entry into national phase (Ref document number: 2022794576; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2022794576; Country of ref document: EP; Effective date: 20230918)
NENP Non-entry into the national phase (Ref country code: DE)