WO2022228105A1 - Image data processing method and apparatus, storage medium, and electronic device - Google Patents
Image data processing method and apparatus, storage medium, and electronic device
- Publication number
- WO2022228105A1 (PCT/CN2022/086217)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- data
- data set
- weight
- weight data
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
Definitions
- the present application relates to the field of computers, and in particular, to a method and apparatus for processing image data, a storage medium, and an electronic device.
- In the traditional computing mode, each output generally corresponds to one thread group, and SIMD (Single Instruction Multiple Data) is used to process image data, for example when performing convolution operations with SIMD.
- Cout represents the number of output channels of the convolution kernel, Cin represents the number of input channels of the convolution kernel, kernel_h represents the height of the convolution kernel, and kernel_w represents the width of the convolution kernel.
- The data arrangement in existing technical solutions is usually limited to the [N, C, H, W] dimensions. For the convolution operation, given the characteristics of computer memory layout, if the convolution kernel is small and the input spatial size is large, operations such as edge padding need to be performed on the input in order to preserve the completeness of the information. Moreover, acquiring data across channels causes cache misses and additional data copy overhead, which seriously degrades device performance during computation and further reduces the efficiency of processing image data.
- Embodiments of the present application provide an image data processing method and apparatus, a storage medium, and an electronic device, so as to at least solve the technical problem of low efficiency in processing image data in the related art.
- A method for processing image data, performed by an electronic device, includes: acquiring a first image data set to be processed, wherein the image data in the first image data set is arranged according to a first data format; interleaving and rearranging the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set is arranged according to a second data format, the manner of interleaving and rearranging matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format; and performing the convolution operation on the second image data set and a pre-acquired second weight data set to obtain a target output result.
- An apparatus for processing image data includes: an acquisition module configured to acquire a first image data set to be processed, wherein the image data in the first image data set is arranged according to a first data format; a processing module configured to interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set is arranged according to a second data format, the manner of interleaving and rearranging matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format; and an execution module configured to perform the convolution operation on the second image data set and a pre-acquired second weight data set to obtain a target output result.
- A computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to execute the above image data processing method when run.
- An electronic device is provided, including a memory and a processor, where the memory stores a computer program and the processor is configured to execute the above-mentioned image data processing method through the computer program.
- A computer program product is provided, including computer instructions which, when read and executed by a processor of a computer device, cause the computer device to execute the above-mentioned image data processing method.
- In the embodiments of the present application, the input image data, output image data and calculation weights used in the calculation process are correspondingly rearranged. Compared with the traditional calculation mode, this reduces the additional data copy overhead and the probability of a cache miss, thereby optimizing the computing performance of the device and improving the efficiency of processing image data, and thus solving the technical problem of relatively low image data processing efficiency in the related art.
- FIG. 1 is a schematic diagram of an application environment of an image data processing method according to an embodiment of the present application
- FIG. 2 is a schematic flowchart of a method for processing image data according to an embodiment of the present application
- FIG. 3 is a schematic diagram of a method for processing image data according to an embodiment of the present application.
- FIG. 4 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
- FIG. 6 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
- FIG. 7 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
- FIG. 8 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
- FIG. 9 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
- FIG. 10 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
- FIG. 11 is a schematic diagram of yet another image data processing method according to an embodiment of the present application.
- FIG. 12 is a schematic structural diagram of an apparatus for processing image data according to an embodiment of the present application.
- FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
- CNN: Convolutional Neural Network;
- SIMD: Single Instruction Multiple Data;
- GPU: graphics processing unit;
- Metal Buffer: conventional memory as represented in Metal;
- Metal Texture: texture memory as represented in Metal;
- Cin: the number of input channels of the convolution kernel;
- kernel_h: the height of the convolution kernel;
- kernel_w: the width of the convolution kernel.
- a method for processing image data is provided.
- the above-mentioned image data processing method can be applied to the hardware environment formed by the server 101 and the user terminal 103 as shown in FIG. 1 .
- the server 101 is connected to the terminal 103 through the network, and can be used to provide services for the user terminal or the client installed on the user terminal.
- the client can be a video client, an instant messaging client, a browser client, Education client, game client, etc.
- the database 105 may be provided on the server or independent of the server for providing the server 101 with data storage services, eg, image data storage services.
- the above-mentioned networks may include, but are not limited to, wired networks and wireless networks, wherein the wired networks include local area networks, metropolitan area networks, and wide area networks, and the wireless networks include Bluetooth, WIFI, and other networks that implement wireless communication.
- the user terminal 103 may be a terminal configured with an application program 107, and may include, but is not limited to, at least one of the following: a mobile phone (such as an Android mobile phone, an iOS mobile phone, etc.), a notebook computer, a tablet computer, a handheld computer, MID (Mobile Internet Devices, mobile Internet equipment), PAD, desktop computer, smart TV and other computer equipment.
- the above-mentioned server may be a single server, a server cluster composed of multiple servers, or a cloud server, and the application 107 using the above-mentioned image data processing method is displayed through the user terminal 103.
- The above-mentioned image data processing method can be implemented in the user terminal 103 through the following steps:
- S1 A first image data set to be processed is acquired in the application program 107 of the user terminal 103, wherein the image data in the first image data set is arranged according to the first data format;
- S2 The data in the first image data set is interleaved and rearranged in the application program 107 of the user terminal 103 to obtain a second image data set, wherein the image data in the second image data set is arranged according to the second data format, the manner of interleaving and rearranging matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format;
- S3 A convolution operation is performed on the second image data set and the pre-acquired second weight data set to obtain a target output result.
- the above-mentioned image data processing method may also include, but is not limited to, being used by a client configured in the server.
- the above-mentioned image data processing method may include, but is not limited to, asynchronous use by the user terminal 103 and a client set on the server 101.
- For example, the application program 107 of the user terminal 103 executes the above steps S1 and S2, and the above step S3 is executed by the client provided on the server 101; the above is only an example, and this embodiment does not impose a specific limitation.
- the above-mentioned processing method of image data includes:
- S204 Interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set is arranged according to the second data format, and the interleaving and rearrangement method matches the convolution operation , the dimension of the second data format is smaller than that of the first data format;
- The image data in the first data format may include, but is not limited to, image data arranged in a data format of N1×C1×H1×W1, where N1 represents the number of image data subsets included in the first image data set, C1 represents the number of channels in each image data subset, H1 represents the data height of each image data subset in the first image data set, and W1 represents the data width of each image data subset in the first image data set.
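- To make the first data format concrete, the following is a minimal sketch (not part of the original disclosure) of how an element at position (n, c, h, w) is located in a contiguous N1×C1×H1×W1 (NCHW) buffer; the array shape and values are illustrative assumptions only.

```python
import numpy as np

# Illustrative NCHW tensor: N1=1 image, C1=8 channels, H1=4, W1=4.
N1, C1, H1, W1 = 1, 8, 4, 4
first_image_data_set = np.arange(N1 * C1 * H1 * W1, dtype=np.float32).reshape(N1, C1, H1, W1)

# In a row-major NCHW layout the flat offset of element (n, c, h, w) is:
#   offset = ((n * C1 + c) * H1 + h) * W1 + w
n, c, h, w = 0, 3, 2, 1
offset = ((n * C1 + c) * H1 + h) * W1 + w
assert first_image_data_set.reshape(-1)[offset] == first_image_data_set[n, c, h, w]
```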
- FIG. 3 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
- The image data in the above-mentioned first image data set is arranged in a data format of N1×C1×H1×W1, which may include, but is not limited to, the example shown in FIG. 3, where N1 represents the number of image data subsets included in the first image data set.
- The application scenarios of the above-mentioned image data processing method may include, but are not limited to, any scenario that requires image data processing, such as medical treatment, finance, credit reporting, banking, games, energy, education, buildings, transportation, IoT, industry, and artificial intelligence. The above application scenarios may also include, but are not limited to, application in a neural network forward computing library. Since a neural network forward computing library provides the computing power for all neural network algorithms, the application scenarios of this application can cover all scenarios that use such a library, for example, including but not limited to AI algorithm applications related to cloud technology, such as virtual backgrounds.
- Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and network in a wide area network or a local area network to realize the calculation, storage, processing and sharing of data.
- A cloud conference is an efficient, convenient and low-cost conference form based on cloud computing technology. Users only need to perform simple and easy-to-use operations through an Internet interface to quickly and efficiently share voice, data files and videos with teams and customers around the world, while complex technologies such as data transmission and processing within the conference are handled for the user by the cloud conference service provider.
- cloud conferences mainly focus on the service content of SaaS (Software as a Service) mode, including telephone, network, video and other service forms.
- Video conferences based on cloud computing are called cloud conferences.
- the cloud conference system supports multi-server dynamic cluster deployment and provides multiple high-performance servers, which greatly improves the stability, security and availability of conferences.
- Video conferencing has been welcomed by many users because it can greatly improve communication efficiency, continuously reduce communication costs, and upgrade internal management, and has been widely used in fields such as transportation, finance, operators, education, and enterprises.
- With the application of cloud computing to video conferencing, video conferencing becomes more attractive in terms of convenience, speed, and ease of use, which will surely stimulate a new wave of video conferencing applications.
- FIG. 4 is a schematic diagram of an image data processing method according to an embodiment of the present application. As shown in FIG. 4, the method specifically includes, but is not limited to, the following steps:
- the user terminal 402 obtains the first image data set to be processed
- the processor 404 located inside the user terminal 402 or connected to the user terminal 402 interleaves and rearranges the data in the first image data set to obtain a second image data set;
- The above-mentioned first image data set may include, but is not limited to, the first image data set to be processed stored in the database as shown in FIG. 4, the virtual background displayed in the virtual background display area 408 of the application 406, or other image data obtained after processing with the above-mentioned image data processing method.
- Performing the above-mentioned convolution operation on the second image data set and the pre-acquired second weight data set to obtain the target output result may include, but is not limited to, performing a convolution operation on the second image data set and the second weight data set to obtain a third image data set, wherein the target output result includes but is not limited to the third image data set, the second image data set is an image data set obtained by interleaving and rearranging the image data of the M1 channels in each group of image data in the S1 groups of image data, and the S1 groups of image data are obtained by dividing the image data of every M1 channels in the first image data set into one group, where M1 ≤ C1.
- In the embodiments of the present application, a first image data set to be processed is acquired, wherein the image data in the first image data set is arranged in a data format of N1×C1×H1×W1; the data in the first image data set is interleaved and rearranged to obtain a second image data set, wherein the image data in the second image data set is arranged according to the data format of N1×H2×W2, and the manner of interleaving and rearranging matches the convolution operation.
- the input image data, output image data and calculation weights in the calculation process are correspondingly rearranged, so as to rearrange higher-dimensional data into lower-dimensional data.
- In addition, the data of multiple channels can be grouped, so that data from different channels can be extracted through interleaved grouping, which effectively reduces the number of cross-channel data extractions.
- In other words, the technical solution described in this application reduces the extra data copy overhead and the probability of a cache miss compared to the traditional computing mode, thereby optimizing the computing performance of the device, improving the efficiency of processing image data, and solving the technical problem of low image data processing efficiency in the related art.
- In some embodiments, the image data in the first image data set being arranged according to the first data format includes: the image data in the first image data set is arranged according to the data format of N1×C1×H1×W1, where N1 represents the number of image data subsets included in the first image data set, C1 represents the number of channels in each image data subset, H1 represents the data height of each image data subset in the first image data set, and W1 represents the data width of each image data subset in the first image data set;
- the image data in the second image data set being arranged according to the second data format includes: the image data in the second image data set is arranged according to the data format of N1×H2×W2, where H2 represents the data height of each image data subset in the second image data set, and W2 represents the data width of each image data subset in the second image data set.
- The above-mentioned interleaving and rearranging of the data in the first image data set to obtain the second image data set may include, but is not limited to, interleaving and rearranging the image data in the first image data set so as to reduce the dimension of the image data, which facilitates the subsequent convolution operation.
- FIG. 5 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
- For example, the image data is divided into C1/4 groups of four channels each, and the image data of the 4 channel dimensions in each group is then rearranged in an interleaved (mixed) form; the resulting second image data set shown in Figure 5 has a data structure of [N, H, C/4, W, C4], where "A, B, C" represent image data in different channels.
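- A minimal NumPy sketch of this rearrangement is given below, assuming C1 is already an integer multiple of 4 and that the target layout is [N, H, C/4, W, C4] as described above; the function name and shapes are illustrative assumptions, not the patent's reference implementation.

```python
import numpy as np

def rearrange_nchw_to_nhc4wc4(x: np.ndarray, group: int = 4) -> np.ndarray:
    """Interleave an [N, C, H, W] tensor into [N, H, C//group, W, group].

    Assumes C is an integer multiple of `group` (padding is handled separately).
    """
    n, c, h, w = x.shape
    assert c % group == 0, "channel count must already be padded to a multiple of group"
    # [N, C, H, W] -> [N, C//group, group, H, W]
    x = x.reshape(n, c // group, group, h, w)
    # -> [N, H, C//group, W, group]: the `group` channels of one group become the
    # innermost (fastest-varying) dimension, so the A, B, C, D channel values are interleaved.
    return x.transpose(0, 3, 1, 4, 2)

x = np.random.rand(1, 8, 5, 5).astype(np.float32)
y = rearrange_nchw_to_nhc4wc4(x)
print(y.shape)  # (1, 5, 2, 5, 4)
```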
- In some embodiments, the data in the first image data set is interleaved and rearranged to obtain the second image data set by: dividing the image data of every M1 channels in the first image data set into one group to obtain S1 groups of image data, where M1 ≤ C1; and interleaving and rearranging the image data of the M1 channels in each group of image data in the S1 groups of image data to obtain the second image data set.
- Dividing the image data of every M1 channels in the first image data set into one group to obtain the S1 groups of image data may include, but is not limited to: when C1 is an integer multiple of M1, dividing the image data of every M1 channels in the first image data set into one group to obtain the S1 groups of image data; and, when C1 is not an integer multiple of M1, increasing the number of channels in the first image data set from C1 to C2 to obtain a third image data set, where C2 is an integer multiple of M1 and the image data on the channels added in the third image data set is 0 (that is, the number of channels is padded up to an integer multiple of M1), and then dividing the image data of every M1 channels in the third image data set into one group to obtain the S1 groups of image data.
- FIG. 6 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
- The above-mentioned second image data set may include, but is not limited to, the example shown in FIG. 6, where "A, B, C" represent image data in different channels; A(1,1), B(1,1), C(1,1), D(1,1) are interleaved and rearranged into image data of the same height, and the data of different channels are ordered consecutively, which improves the locality of data access and can greatly reduce the probability of a cache miss.
- Since the data to be processed is interleaved and rearranged in the embodiments of the present application, the number of edge-padding operations can be reduced when the convolution kernel extracts boundary data, and data in different channels can be placed in the same dimension, thereby saving the extra overhead of data copying.
- In this way, the image data of every M1 channels in the first image data set is divided into one group to obtain S1 groups of image data, where M1 ≤ C1, and the image data of the M1 channels in each group of image data in the S1 groups of image data is interleaved and rearranged to obtain the second image data set; that is, the second image data set is obtained by interleaving and rearranging the data in the first image data set.
- In some embodiments, dividing the image data of every M1 channels in the first image data set into one group to obtain the S1 groups of image data includes: when C1 is not an integer multiple of M1, increasing the number of channels in the first image data set from C1 to C2 to obtain a third image data set, where C2 is an integer multiple of M1 and the image data on the channels added in the third image data set is 0; and dividing the image data of every M1 channels in the third image data set into one group to obtain the S1 groups of image data.
- Increasing the number of channels in the first image data set from C1 to C2 may include, but is not limited to, rounding up, for example C2 = ceil(C1 / M1) × M1; of course, rounding down or rounding in other ways may also be used.
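- The rounding-up and zero-padding step can be sketched as follows; the helper name and the choice of M1 = 4 are assumptions used only for illustration.

```python
import numpy as np

def pad_channels_to_multiple(x: np.ndarray, m1: int = 4) -> np.ndarray:
    """Pad the channel dimension of an [N, C1, H, W] tensor with zeros up to
    C2 = ceil(C1 / m1) * m1, so the data can be split into C2 // m1 groups."""
    n, c1, h, w = x.shape
    c2 = -(-c1 // m1) * m1          # ceil(C1 / m1) * m1
    if c2 == c1:
        return x
    padded = np.zeros((n, c2, h, w), dtype=x.dtype)
    padded[:, :c1] = x              # the added channels stay 0
    return padded

x = np.random.rand(1, 6, 5, 5).astype(np.float32)
print(pad_channels_to_multiple(x).shape)  # (1, 8, 5, 5)
```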
- After the image data of the M1 channels in each group of image data in the S1 groups of image data is interleaved and rearranged to obtain the second image data set, SIMD can be used to speed up the convolution operation without performing edge padding on the data, which avoids the additional data copy overhead caused by edge padding during the convolution operation.
- In some embodiments, the method further includes: acquiring a preset first weight data set, wherein the weight data in the first weight data set is arranged in a data format of N2×C2×H3×W3; and interleaving and rearranging the data in the first weight data set to obtain a second weight data set, wherein the weight data in the second weight data set is arranged according to the data format of H4×W4, H4 represents the data height of the weight data in the second weight data set, and W4 represents the data width of the weight data in the second weight data set.
- the above-mentioned first weight data set may include, but is not limited to, weight data used when using convolution kernels to process image data during convolution calculation.
- Taking the application of the above-mentioned image data processing method to a cloud conference scenario as an example, FIG. 7 is a schematic diagram of another image data processing method according to an embodiment of the present application. As shown in FIG. 7, the method specifically includes, but is not limited to, the following steps:
- the processor 704 located inside the user terminal 702 or connected to the user terminal 702 obtains a preset first weight data set
- the processor 704 located inside the user terminal 702 or connected to the user terminal 702 interleaves and rearranges the data in the first weight data set to obtain a second weight data set.
- The above-mentioned first weight data set may include, but is not limited to, a weight data set stored in the database as shown in FIG. 7, and the above-mentioned second weight data set may include, but is not limited to, being used together with the second image data set to be processed to generate a virtual background in the virtual background display area 708 of the cloud conference application 706 shown in FIG. 7.
- FIG. 8 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
- In the example shown in FIG. 8, H3 corresponds to kernel_h (the height of the convolution kernel), and W3 represents the data width of each weight data subset, corresponding to kernel_w.
- The second weight data set is obtained by interleaving and rearranging the data in the first weight data set. This embodiment reduces the probability of a cache miss, thereby optimizing the computing performance of the device and improving the efficiency of processing image data, and thus solving the technical problem of low image data processing efficiency in the related art.
- In some embodiments, the data in the first weight data set is interleaved and rearranged to obtain the second weight data set in the following manner.
- FIG. 9 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
- The above-mentioned interleaving and rearranging of the data in the first weight data set to obtain the second weight data set may include, but is not limited to, the example shown in FIG. 9: the weight data of every M2 weight data subsets in the first weight data set is divided into one group to obtain S2 groups of weight data, and the M2 pieces of weight data in each group of weight data in the S2 groups of weight data are interleaved and rearranged to obtain the second weight data set.
- For example, the weight data is divided into Cout/4 groups of four output channels each. If the number of output channels is not divisible by 4, the number of channels is padded up to an integer multiple of 4 and all of the added values are filled with 0. The weight data of the 4 output-channel dimensions in each group is then rearranged in an interleaved (mixed) form, with the input channel dimension ordered in the subsequent dimension, yielding the data structure [Cout/4, kernel_h, kernel_w, Cin, Cout4], where OC4 denotes Cout4 and IC denotes Cin.
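- A hedged NumPy sketch of the weight rearrangement described above (again assuming the output-channel count has already been padded to an integer multiple of 4) could look like the following; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def rearrange_weights(w: np.ndarray, oc4: int = 4) -> np.ndarray:
    """Rearrange convolution weights from [Cout, Cin, kernel_h, kernel_w]
    to [Cout//oc4, kernel_h, kernel_w, Cin, oc4].

    Assumes Cout is already padded to an integer multiple of oc4 with zeros."""
    cout, cin, kh, kw = w.shape
    assert cout % oc4 == 0
    # [Cout, Cin, kh, kw] -> [Cout//oc4, oc4, Cin, kh, kw]
    w = w.reshape(cout // oc4, oc4, cin, kh, kw)
    # -> [Cout//oc4, kh, kw, Cin, oc4]: the 4 output channels of one group are
    # interleaved in the innermost dimension, with Cin ordered just outside it.
    return w.transpose(0, 3, 4, 2, 1)

w = np.random.rand(8, 3, 3, 3).astype(np.float32)  # Cout=8, Cin=3, 3x3 kernel
print(rearrange_weights(w).shape)  # (2, 3, 3, 3, 4)
```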
- Dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain S2 groups of weight data includes, but is not limited to: when N2 is an integer multiple of M2, dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain the S2 groups of weight data; and, when N2 is not an integer multiple of M2, increasing the number of weight data subsets in the first weight data set from N2 to N3 to obtain a third weight data set, where N3 is an integer multiple of M2 and the weight data in the weight data subsets added in the third weight data set is 0, and then dividing the weight data of every M2 weight data subsets in the third weight data set into one group to obtain the S2 groups of weight data.
- In some embodiments, dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain the S2 groups of weight data includes: when N2 is an integer multiple of M2, dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain the S2 groups of weight data; and, when N2 is not an integer multiple of M2, increasing the number of weight data subsets in the first weight data set from N2 to N3 to obtain a third weight data set. Increasing N2 to N3 may include, but is not limited to, rounding the number of subsets to an integer multiple of M2, for example N3 = ceil(N2 / M2) × M2; of course, other rounding methods may also be used.
- When the number of weight data subsets is increased from N2 to N3, the added activation values (corresponding to the aforementioned image data) are all filled with 0; as shown in Figure 9, the fourth column, the eighth column, the twelfth column, and so on are all "0.0f".
- In this way, the weight data of every M2 weight data subsets in the first weight data set is divided into one group to obtain the S2 groups of weight data; when N2 is not an integer multiple of M2, the number of weight data subsets in the first weight data set is increased from N2 to N3 to obtain a third weight data set, where N3 is an integer multiple of M2 and the weight data in the added weight data subsets in the third weight data set is 0, and the weight data of every M2 weight data subsets in the third weight data set is divided into one group to obtain the S2 groups of weight data. By interleaving and rearranging the data in the first weight data set to obtain the second weight data set, the cache misses and additional data copy overhead that easily arise in the traditional computing mode are reduced, thereby optimizing the computing performance of the device and improving the efficiency of processing image data.
- The M2 pieces of weight data in each group of weight data in the S2 groups of weight data are interleaved and rearranged to obtain the second weight data set, both when N2 is an integer multiple of M2 and when N2 is not an integer multiple of M2.
- H 4 H 3 ⁇ W 3 .
- H 3 kernel_h*kernel_w
- W 3 IC*OC 4
- the height of each set of weight data is kernel_w
- the width is OC 4 .
- The value of N2 is the number of output channels of the convolution kernel, the value of C2 is the number of input channels of the convolution kernel, the convolution operation is the convolution operation performed using the convolution kernel, each weight data subset includes weight data on the C2 input channels, and each output channel includes C2 input channels.
- Each of the above-mentioned weight data subsets includes weight data on C2 input channels, and the convolution kernel is used to perform, based on the second weight data set, a convolution operation on the second image data set to be processed, to obtain the target output result.
- In some embodiments, performing the convolution operation on the second image data set and the pre-acquired second weight data set to obtain the target output result includes: performing the convolution operation on the second image data set and the second weight data set to obtain a fourth image data set, wherein the target output result includes the fourth image data set, the second image data set is an image data set obtained by interleaving and rearranging the image data of the M1 channels in each group of image data in the S1 groups of image data, and the S1 groups of image data are obtained by dividing the image data of every M1 channels in the first image data set into one group, where M1 ≤ C1.
- Performing the above-mentioned convolution operation on the second image data set and the second weight data set includes, but is not limited to: acquiring C2 groups of image data from the second image data set, wherein each group of image data includes a plurality of image data located in the same channel of the first image data set, and each group of image data is obtained by offsetting the storage address of the previous group of image data in the C2 groups of image data by 1 address; and performing a convolution operation on the C2 groups of image data and the N2×C2 groups of weight data in the second weight data set to obtain N2 groups of image data in the fourth image data set, wherein each group of weight data has the same data structure as each group of image data.
- In some embodiments, performing the convolution operation on the second image data set and the second weight data set to obtain a third image data set includes: acquiring C2 groups of image data from the second image data set, wherein each group of image data includes multiple image data located in the same channel of the first image data set, and each group of image data is obtained by offsetting the storage address of the previous group of image data in the C2 groups of image data by 1 address; and performing the convolution operation on the C2 groups of image data and the N2×C2 groups of weight data in the second weight data set, wherein each group of weight data has the same data structure as each group of image data. The second weight data set is obtained by interleaving and rearranging the data in the first weight data set, the weight data in the first weight data set is arranged according to the data format of N2×C2×H3×W3, N2 represents the number of weight data subsets included in the first weight data set, C2 represents the number of channels in each weight data subset, H3 represents the data height in each weight data subset, and W3 represents the data width in each weight data subset.
- Obtaining each group of image data by offsetting the storage address of the previous group of image data in the C2 groups of image data by 1 address may include, but is not limited to, offsetting according to a predetermined step size. Each group of weight data having the same data structure as each group of image data may include, but is not limited to, M1 and M2 above being equal. By obtaining each group of image data in this way, the frequency of acquiring data across channels during the convolution calculation is reduced and the probability of a cache miss is reduced, thereby optimizing the computing performance of the device and improving the efficiency of processing image data, and thus solving the technical problem of relatively low image data processing efficiency in the related art.
- In some embodiments, performing the convolution operation on the C2 groups of image data and the N2×C2 groups of weight data in the second weight data set to obtain the N2 groups of image data in the third image data set includes: performing a weighted-sum operation between each set of C2 groups of weight data among the N2×C2 groups of weight data and the C2 groups of image data, respectively, to obtain the N2 groups of image data.
- The method may include, but is not limited to, sliding the convolution kernel according to a predetermined step size and performing a weighted-sum operation between each set of C2 groups of weight data among the N2×C2 groups of weight data and the C2 groups of image data respectively, to obtain the N2 groups of image data.
- FIG. 10 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
- Taking a convolution kernel of size 3x3 as an example, the convolution kernel performs a weighted sum, based on the corresponding weight parameters recorded in the second weight data set, over the data at the same position in the second image data set, so that one group of image data among the N2 groups of image data is obtained; the processing then continues with a sliding window of step size 1, so as to obtain the above N2 groups of image data.
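- The weighted-sum step of the convolution can be illustrated with the following naive sketch, which works on the plain [N, C, H, W] layout for readability; it is only meant to show the sliding-window accumulation with step size 1, not the interleaved SIMD kernel of the embodiments, and the names are illustrative assumptions.

```python
import numpy as np

def naive_conv2d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: [N, Cin, H, W], w: [Cout, Cin, kh, kw]; stride 1, no padding."""
    n, cin, h, wd = x.shape
    cout, _, kh, kw = w.shape
    out = np.zeros((n, cout, h - kh + 1, wd - kw + 1), dtype=x.dtype)
    for oy in range(out.shape[2]):
        for ox in range(out.shape[3]):
            # Weighted sum over the kh x kw window and all input channels.
            window = x[:, :, oy:oy + kh, ox:ox + kw]          # [N, Cin, kh, kw]
            out[:, :, oy, ox] = np.tensordot(window, w, axes=([1, 2, 3], [1, 2, 3]))
    return out

x = np.random.rand(1, 3, 6, 6).astype(np.float32)
w = np.random.rand(8, 3, 3, 3).astype(np.float32)   # 3x3 kernel as in the example
print(naive_conv2d(x, w).shape)  # (1, 8, 4, 4)
```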
- the method further includes: storing the first image data set and the second image data set in the first memory space;
- the second weight data set is stored in the second memory space, wherein the first memory space and the second memory space are mutually independent memory spaces.
- The above-mentioned first memory space may include, but is not limited to, a storage space for storing image data, such as a Texture resource, and the above-mentioned second memory space may include, but is not limited to, a storage space for storing weight data, such as a Buffer resource.
- FIG. 11 is a schematic diagram of still another image data processing method according to an embodiment of the present application.
- the existing technical solution generally only uses one type of memory (Buffer/Texture) as the data loading/storing space when using Metal for GPU computing.
- As current model designs become lighter and lighter, the memory bandwidth limit often becomes the bottleneck of the final performance.
- The Buffer resource and the Texture resource in Metal are independent memory spaces. Therefore, compared with the traditional approach that uses only one memory structure (Buffer/Texture) to express data, storing the input/output data in Texture and the weight/bias parameters in Buffer allows higher memory bandwidth to be obtained by distinguishing between Texture and Buffer, which reduces the probability of a cache miss and improves memory access performance.
- an image data processing apparatus for implementing the above image data processing method.
- the device includes:
- an acquisition module 1202 configured to acquire a first image data set to be processed, wherein the image data in the first image data set is arranged according to a first data format;
- the processing module 1204 is configured to interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set is arranged according to the second data format, the interleaving and rearranging manner matches the convolution operation, and the dimension of the second data format is smaller than that of the first data format;
- the execution module 1206 is configured to perform a convolution operation on the second image data set and the pre-acquired second weight data set to obtain a target output result.
- In some embodiments, the obtaining module includes: an obtaining unit, configured to obtain a first image data set to be processed, wherein the image data in the first image data set is arranged according to the data format of N1×C1×H1×W1, N1 represents the number of image data subsets included in the first image data set, C1 represents the number of channels in each image data subset, H1 represents the data height of each image data subset in the first image data set, and W1 represents the data width of each image data subset in the first image data set; and the processing module includes: a processing unit, configured to interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set is arranged according to the data format of N1×H2×W2, H2 represents the data height of each image data subset in the second image data set, and W2 represents the data width of each image data subset in the second image data set.
- the processing module includes: a grouping unit, configured to divide the image data of every M 1 channel in the first image data set into one group to obtain S 1 groups of image data, where M 1 ⁇ C 1 ; an arrangement unit, configured to interleave and rearrange the image data of M 1 channels in each group of image data in the S 1 groups of image data to obtain a second image data set.
- The grouping unit is configured to divide the image data of every M1 channels in the first image data set into one group to obtain the S1 groups of image data in the following manner: when C1 is not an integer multiple of M1, increasing the number of channels in the first image data set from C1 to C2 to obtain a third image data set, where C2 is an integer multiple of M1 and the image data on the channels added in the third image data set is 0; and dividing the image data of every M1 channels in the third image data set into one group to obtain the S1 groups of image data.
- The arranging unit is configured to interleave and rearrange the image data of the M1 channels in each group of image data in the S1 groups of image data in the following manner to obtain the second image data set: when C1 is an integer multiple of M1, the image data of the M1 channels in each group of image data in the S1 groups of image data is interleaved and rearranged to obtain the second image data set; and when C1 is not an integer multiple of M1, the image data of the M1 channels in each group of image data in the S1 groups of image data is likewise interleaved and rearranged to obtain the second image data set.
- the apparatus is further configured to: acquire a preset first weight data set, wherein the weight data in the first weight data set is arranged in a data format of N 2 ⁇ C 2 ⁇ H 3 ⁇ W 3 , N 2 represents the number of weight data subsets included in the first weight data set, C 2 represents the number of channels in each weight data subset, H 3 represents the data height in each weight data subset, W 3 represents each weight The data width in the data subset; the data in the first weight data set is interleaved and rearranged to obtain a second weight data set, wherein the weight data in the second weight data set are arranged according to the data format of H 4 ⁇ W 4 , H4 represents the data height of the weight data in the second weight data set, and W4 represents the data width of the weight data in the second weight data set.
- In some embodiments, the apparatus is further configured to interleave and rearrange the data in the first weight data set to obtain the second weight data set in the following manner: dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain S2 groups of weight data, where M2 ≤ N2; and interleaving and rearranging the M2 pieces of weight data in each group of weight data in the S2 groups of weight data to obtain the second weight data set.
- In some embodiments, the apparatus is further configured to divide the weight data of every M2 weight data subsets in the first weight data set into one group in the following manner to obtain the S2 groups of weight data: when N2 is an integer multiple of M2, dividing the weight data of every M2 weight data subsets in the first weight data set into one group to obtain the S2 groups of weight data; and, when N2 is not an integer multiple of M2, increasing the number of weight data subsets in the first weight data set from N2 to N3 to obtain a third weight data set, where N3 is an integer multiple of M2, and dividing the weight data of every M2 weight data subsets in the third weight data set into one group to obtain the S2 groups of weight data.
- In some embodiments, the apparatus is further configured to interleave and rearrange the M2 pieces of weight data in each group of weight data in the S2 groups of weight data to obtain the second weight data set, both when N2 is an integer multiple of M2 and when N2 is not an integer multiple of M2.
- H 4 H 3 ⁇ W 3 .
- In some embodiments, the value of N2 is the number of output channels of the convolution kernel, the value of C2 is the number of input channels of the convolution kernel, the convolution operation is the convolution operation performed using the convolution kernel, each weight data subset includes weight data on the C2 input channels, and each output channel includes C2 input channels.
- In some embodiments, the apparatus is further configured to perform the convolution operation on the second image data set and the pre-acquired second weight data set in the following manner to obtain the target output result: performing the convolution operation on the second image data set and the second weight data set to obtain a fourth image data set, wherein the target output result includes the fourth image data set, the second image data set is an image data set obtained by interleaving and rearranging the image data of the M1 channels in each group of image data in the S1 groups of image data, and the S1 groups of image data are obtained by dividing the image data of every M1 channels in the first image data set into one group, where M1 ≤ C1.
- In some embodiments, the apparatus is further configured to perform the convolution operation on the second image data set and the second weight data set in the following manner to obtain the fourth image data set: acquiring C2 groups of image data from the second image data set, wherein each group of image data includes multiple image data located in the same channel of the first image data set, and each group of image data is obtained by offsetting the storage address of the previous group of image data in the C2 groups of image data by 1 address; and performing the convolution operation on the C2 groups of image data and the N2×C2 groups of weight data in the second weight data set, wherein each group of weight data has the same data structure as each group of image data. The second weight data set is obtained by interleaving and rearranging the data in the first weight data set, the weight data in the first weight data set is arranged according to the data format of N2×C2×H3×W3, N2 represents the number of weight data subsets included in the first weight data set, C2 represents the number of channels in each weight data subset, H3 represents the data height in each weight data subset, and W3 represents the data width in each weight data subset.
- In some embodiments, the apparatus is further configured to perform the convolution operation on the C2 groups of image data and the N2×C2 groups of weight data in the second weight data set in the following manner to obtain the N2 groups of image data in the third image data set: performing a weighted-sum operation between each set of C2 groups of weight data among the N2×C2 groups of weight data and the C2 groups of image data, respectively, to obtain the N2 groups of image data.
- the apparatus is further configured to: store the first image data set and the second image data set in the first memory space; store the second weight data set in the second memory space, wherein the first memory The space and the second memory space are independent memory spaces.
- An electronic device for implementing the above image data processing method is also provided, where the electronic device may be the terminal device or the server shown in FIG. 1. This embodiment is described by taking the electronic device as a server as an example.
- the electronic device includes a memory 1302 and a processor 1304, where a computer program is stored in the memory 1302, and the processor 1304 is configured to execute the steps in any of the above method embodiments by running the computer program.
- the above electronic device may be located in at least one network device among multiple network devices of a computer network.
- the above-mentioned processor may be configured to execute the following steps through a computer program:
- S1 Acquire a first image data set to be processed, wherein the image data in the first image data set are arranged according to the data format of N 1 ⁇ C 1 ⁇ H 1 ⁇ W 1 , and N 1 indicates that the first image data set includes the number of image data subsets, C 1 represents the number of channels in each image data subset, H 1 represents the data height in each image data subset, and W 1 represents the data width in each image data subset;
- S2 Interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set is arranged according to the data format of N1×H2×W2, and the manner of interleaving and rearranging matches the convolution operation;
- S3 Perform the convolution operation on the second image data set and the pre-acquired second weight data set to obtain a target output result.
- FIG. 13 is for illustration only, and the electronic device can also be a smartphone (such as an Android mobile phone, an iOS mobile phone, etc.), a tablet computer, a handheld computer, a Mobile Internet Device (MID), a PAD, or other terminal equipment.
- FIG. 13 does not limit the structure of the above-mentioned electronic device.
- the electronic device may also include more or fewer components than those shown in FIG. 13 (eg, network interfaces, etc.), or have a different configuration than that shown in FIG. 13 .
- The memory 1302 may be used to store software programs and modules, such as the program instructions/modules corresponding to the image data processing method and apparatus in the embodiments of the present application; the processor 1304 runs the software programs and modules stored in the memory 1302, thereby executing various functional applications and data processing, that is, implementing the above-described image data processing method.
- Memory 1302 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, memory 1302 may further include memory located remotely from processor 1304, and these remote memories may be connected to the terminal through a network.
- the memory 1302 can be specifically used for, but not limited to, storing information such as image data to be processed.
- the above-mentioned memory 1302 may include, but is not limited to, the acquisition module 1202 , the processing module 1204 , and the execution module 1206 in the above-mentioned image data processing apparatus.
- it may also include but not limited to other modules or units in the above-mentioned image data processing apparatus, which will not be repeated in this example.
- the above-mentioned transmission means 1306 is used to receive or transmit data via a network.
- Specific examples of the above-mentioned networks may include wired networks and wireless networks.
- the transmission device 1306 includes a network adapter (Network Interface Controller, NIC), which can be connected with other network devices and routers through a network cable, so as to communicate with the Internet or a local area network.
- the transmission device 1306 is a radio frequency (RF) module, which is used for wirelessly communicating with the Internet.
- the above electronic device further includes: a display 1308 for displaying image data; and a connection bus 1310 for connecting various module components in the above electronic device.
- the above-mentioned terminal device or server may be a node in a distributed system, wherein the distributed system may be a blockchain system, and the blockchain system may be communicated by the multiple nodes through a network A distributed system formed by connection in the form of.
- a peer-to-peer (P2P, Peer To Peer) network can be formed between nodes, and any form of computing equipment, such as servers, terminals and other electronic devices can become a node in the blockchain system by joining the peer-to-peer network.
- A computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium.
- The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the image data processing methods provided in the various implementations described above.
- the computer program is configured to execute the steps in any one of the above method embodiments when running.
- the above-mentioned computer-readable storage medium may be configured to store a computer program for performing the following steps:
- S1 Acquire a first image data set to be processed, wherein the image data in the first image data set are arranged according to the data format of N 1 ⁇ C 1 ⁇ H 1 ⁇ W 1 , and N 1 indicates that the first image data set includes the number of image data subsets, C 1 represents the number of channels in each image data subset, H 1 represents the data height in each image data subset, and W 1 represents the data width in each image data subset;
- S2 Interleave and rearrange the data in the first image data set to obtain a second image data set, wherein the image data in the second image data set is arranged according to the data format of N1×H2×W2, and the manner of interleaving and rearranging matches the convolution operation;
- S3 Perform the convolution operation on the second image data set and the pre-acquired second weight data set to obtain a target output result.
- The storage medium may include: a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, and the like.
- If the integrated units in the above-mentioned embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above-mentioned computer-readable storage medium.
- The technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, or network devices, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
- the disclosed clients may be implemented in other manners.
- The apparatus embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other division methods in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the shown or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
- the units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to implement the technical solution of this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Neurology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Processing (AREA)
- Complex Calculations (AREA)
Abstract
Description
Claims (16)
- An image data processing method, performed by an electronic device, comprising: acquiring a first image data set to be processed, wherein image data in the first image data set are arranged according to a first data format; interleaving and rearranging data in the first image data set to obtain a second image data set, wherein image data in the second image data set are arranged according to a second data format, the manner of interleaving and rearrangement matches a convolution operation, and the dimension of the second data format is smaller than that of the first data format; and performing the convolution operation on the second image data set and a pre-acquired second weight data set to obtain a target output result.
- The method according to claim 1, wherein the image data in the first image data set being arranged according to the first data format comprises: the image data in the first image data set being arranged according to a data format of N1×C1×H1×W1, where N1 represents the number of image data subsets included in the first image data set, C1 represents the number of channels in each image data subset, H1 represents the data height of each image data subset in the first image data set, and W1 represents the data width of each image data subset in the first image data set; and the image data in the second image data set being arranged according to the second data format comprises: the image data in the second image data set being arranged according to a data format of N1×H2×W2, where H2 represents the data height of each image data subset in the second image data set, and W2 represents the data width of each image data subset in the second image data set.
- The method according to claim 2, wherein the interleaving and rearranging the data in the first image data set to obtain the second image data set comprises: grouping the image data of every M1 channels in the first image data set to obtain S1 groups of image data, where M1 ≤ C1; and interleaving and rearranging the image data of the M1 channels in each of the S1 groups of image data to obtain the second image data set.
- The method according to claim 2, wherein the method further comprises: acquiring a preset first weight data set, wherein weight data in the first weight data set are arranged according to a data format of N2×C2×H3×W3, where N2 represents the number of weight data subsets included in the first weight data set, C2 represents the number of channels in each weight data subset, H3 represents the data height of each weight data subset, and W3 represents the data width of each weight data subset; and interleaving and rearranging the data in the first weight data set to obtain the second weight data set, wherein the weight data in the second weight data set are arranged according to a data format of H4×W4, where H4 represents the data height of the weight data in the second weight data set, and W4 represents the data width of the weight data in the second weight data set.
- The method according to claim 6, wherein the interleaving and rearranging the data in the first weight data set to obtain the second weight data set comprises: grouping the weight data of every M2 weight data subsets in the first weight data set to obtain S2 groups of weight data, where M2 ≤ N2; and interleaving and rearranging the M2 pieces of weight data in each of the S2 groups of weight data to obtain the second weight data set.
- The method according to claim 6, wherein the value of N2 is the number of output channels of a convolution kernel, the value of C2 is the number of input channels of the convolution kernel, the convolution operation is a convolution operation performed using the convolution kernel, and each weight data subset includes the weight data on the C2 input channels.
- The method according to any one of claims 1 to 10, wherein the performing the convolution operation on the second image data set and the pre-acquired second weight data set to obtain the target output result comprises: performing the convolution operation on the second image data set and the second weight data set to obtain a fourth image data set, wherein the target output result includes the fourth image data set, the second image data set is an image data set obtained by interleaving and rearranging the image data of the M1 channels in each of S1 groups of image data, and the S1 groups of image data are obtained by grouping the image data of every M1 channels in the first image data set, where M1 ≤ C1.
- The method according to claim 11, wherein the performing the convolution operation on the second image data set and the second weight data set to obtain the fourth image data set comprises: acquiring C2 groups of image data from the second image data set, wherein each group of image data includes a plurality of image data located in a same channel of the first image data set, and each group of image data is image data obtained by offsetting the storage address of the previous group of image data in the C2 groups of image data by one address; and performing the convolution operation on the C2 groups of image data and N2×C2 groups of weight data in the second weight data set to obtain N2 groups of image data in the fourth image data set, wherein each group of weight data has the same data structure as each group of image data; and wherein the second weight data set is obtained by interleaving and rearranging data in a first weight data set, the weight data in the first weight data set are arranged according to a data format of N2×C2×H3×W3, where N2 represents the number of weight data subsets included in the first weight data set, C2 represents the number of channels in each weight data subset, H3 represents the data height of each weight data subset, and W3 represents the data width of each weight data subset.
- The method according to any one of claims 1 to 10, wherein the method further comprises: storing the first image data set and the second image data set in a first memory space; and storing the second weight data set in a second memory space, wherein the first memory space and the second memory space are mutually independent memory spaces.
- A computer-readable storage medium, comprising a stored program, wherein the program, when run, executes the method according to any one of claims 1 to 13.
- An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the method according to any one of claims 1 to 13 by means of the computer program.
- A computer program product, comprising computer instructions, wherein when the computer instructions are read and executed by a processor of a computer device, the computer device is caused to execute the method according to any one of claims 1 to 13.
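The weight rearrangement recited in the claims above (a first weight data set in N2×C2×H3×W3 format, grouped by every M2 subsets and interleaved into an H4×W4 layout) can be pictured with the following NumPy sketch. The concrete layout chosen here (groups stacked along the rows, the M2 kernels of a group interleaved along the columns), the divisibility assumption N2 % M2 == 0, and the function name rearrange_weights are illustrative assumptions, not the only arrangement the claims cover.

```python
import numpy as np

def rearrange_weights(first_weights: np.ndarray, m2: int) -> np.ndarray:
    """Sketch of the weight rearrangement described in the claims above.

    first_weights: weight data arranged as N2 x C2 x H3 x W3 (N2 output
        channels and C2 input channels of the convolution kernel).
    m2: number of weight data subsets per group (m2 <= N2); N2 is assumed
        to be divisible by m2 for simplicity.

    Returns a two-dimensional H4 x W4 array. In this sketch
    H4 = (N2 // m2) * H3 * W3 and W4 = C2 * m2, i.e. the M2 kernels of a
    group are interleaved along each row so that one row can be multiplied
    against a matching group of rearranged image data in a single pass.
    """
    n2, c2, h3, w3 = first_weights.shape
    s2 = n2 // m2                                        # number of kernel groups (S2)
    grouped = first_weights.reshape(s2, m2, c2, h3, w3)  # S2 groups of M2 kernels
    interleaved = grouped.transpose(0, 3, 4, 2, 1)       # S2 x H3 x W3 x C2 x M2
    return interleaved.reshape(s2 * h3 * w3, c2 * m2)    # H4 x W4

# Example: 8 output channels, 4 input channels, 1x1 kernels, grouped 4 at a time
w = np.random.rand(8, 4, 1, 1).astype(np.float32)
w2 = rearrange_weights(w, m2=4)
print(w2.shape)  # (2, 16) -> the H4 x W4 format of the second weight data set
```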
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22794576.3A EP4296891A4 (en) | 2021-04-26 | 2022-04-12 | IMAGE DATA PROCESSING METHOD AND DEVICE, STORAGE MEDIUM AND ELECTRONIC DEVICE |
JP2023524148A JP2023547831A (ja) | 2021-04-26 | 2022-04-12 | 画像データの処理方法及び装置並びに電子機器及びコンピュータプログラム |
US17/991,416 US20230083565A1 (en) | 2021-04-26 | 2022-11-21 | Image data processing method and apparatus, storage medium, and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110451609.0A CN112990370B (zh) | 2021-04-26 | 2021-04-26 | 图像数据的处理方法和装置、存储介质及电子设备 |
CN202110451609.0 | 2021-04-26 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/991,416 Continuation US20230083565A1 (en) | 2021-04-26 | 2022-11-21 | Image data processing method and apparatus, storage medium, and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022228105A1 true WO2022228105A1 (zh) | 2022-11-03 |
Family
ID=76340137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/086217 WO2022228105A1 (zh) | 2021-04-26 | 2022-04-12 | 图像数据的处理方法和装置、存储介质及电子设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230083565A1 (zh) |
EP (1) | EP4296891A4 (zh) |
JP (1) | JP2023547831A (zh) |
CN (1) | CN112990370B (zh) |
WO (1) | WO2022228105A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112990370B (zh) * | 2021-04-26 | 2021-09-10 | 腾讯科技(深圳)有限公司 | 图像数据的处理方法和装置、存储介质及电子设备 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110309837A (zh) * | 2019-07-05 | 2019-10-08 | 北京迈格威科技有限公司 | 基于卷积神经网络特征图的数据处理方法及图像处理方法 |
CN112215754A (zh) * | 2020-10-26 | 2021-01-12 | 北京达佳互联信息技术有限公司 | 图像放大方法、装置、电子设备和存储介质 |
CN112990370A (zh) * | 2021-04-26 | 2021-06-18 | 腾讯科技(深圳)有限公司 | 图像数据的处理方法和装置、存储介质及电子设备 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10489680B2 (en) * | 2016-10-04 | 2019-11-26 | Magic Leap, Inc. | Efficient data layouts for convolutional neural networks |
CN106779057B (zh) * | 2016-11-11 | 2020-04-17 | 北京旷视科技有限公司 | 基于gpu的计算二值神经网络卷积的方法及装置 |
CN109426858B (zh) * | 2017-08-29 | 2021-04-06 | 京东方科技集团股份有限公司 | 神经网络、训练方法、图像处理方法及图像处理装置 |
CN111860815A (zh) * | 2017-08-31 | 2020-10-30 | 中科寒武纪科技股份有限公司 | 一种卷积运算方法及装置 |
CN108875904A (zh) * | 2018-04-04 | 2018-11-23 | 北京迈格威科技有限公司 | 图像处理方法、图像处理装置和计算机可读存储介质 |
CN110557579B (zh) * | 2018-05-31 | 2021-11-02 | 杭州海康威视数字技术股份有限公司 | 一种图像处理方法、装置及设备、可读介质 |
CN110163790B (zh) * | 2018-06-11 | 2024-08-16 | 腾讯科技(深圳)有限公司 | 图像处理方法、装置、系统、存储介质和计算机设备 |
WO2020069449A1 (en) * | 2018-09-27 | 2020-04-02 | Deepmind Technologies Limited | Image generation using subscaling and depth up-scaling |
CN111695682B (zh) * | 2019-03-15 | 2022-11-01 | 上海寒武纪信息科技有限公司 | 数据处理方法及装置 |
US11645512B2 (en) * | 2019-04-30 | 2023-05-09 | Baidu Usa Llc | Memory layouts and conversion to improve neural network inference performance |
CN111310115B (zh) * | 2020-01-22 | 2024-05-24 | 深圳市商汤科技有限公司 | 数据处理方法、装置及芯片、电子设备、存储介质 |
- 2021
  - 2021-04-26 CN CN202110451609.0A patent/CN112990370B/zh active Active
- 2022
  - 2022-04-12 JP JP2023524148A patent/JP2023547831A/ja active Pending
  - 2022-04-12 WO PCT/CN2022/086217 patent/WO2022228105A1/zh active Application Filing
  - 2022-04-12 EP EP22794576.3A patent/EP4296891A4/en active Pending
  - 2022-11-21 US US17/991,416 patent/US20230083565A1/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110309837A (zh) * | 2019-07-05 | 2019-10-08 | 北京迈格威科技有限公司 | 基于卷积神经网络特征图的数据处理方法及图像处理方法 |
CN112215754A (zh) * | 2020-10-26 | 2021-01-12 | 北京达佳互联信息技术有限公司 | 图像放大方法、装置、电子设备和存储介质 |
CN112990370A (zh) * | 2021-04-26 | 2021-06-18 | 腾讯科技(深圳)有限公司 | 图像数据的处理方法和装置、存储介质及电子设备 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4296891A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP4296891A1 (en) | 2023-12-27 |
EP4296891A4 (en) | 2024-09-11 |
US20230083565A1 (en) | 2023-03-16 |
CN112990370A (zh) | 2021-06-18 |
CN112990370B (zh) | 2021-09-10 |
JP2023547831A (ja) | 2023-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11960566B1 (en) | Reducing computations for data including padding | |
WO2021109699A1 (zh) | 人工智能加速器、设备、芯片及数据处理方法 | |
US20230244749A1 (en) | Gpu communication method and device, and medium | |
US20200134435A1 (en) | Computation apparatus, circuit and relevant method for neural network | |
EP3979589A1 (en) | Image acquisition method, device, server and storage medium | |
WO2022033241A1 (zh) | 对象的处理方法及装置、存储介质和电子设备 | |
US8832158B2 (en) | Fast predicate table scans using single instruction, multiple data architecture | |
WO2022228105A1 (zh) | 图像数据的处理方法和装置、存储介质及电子设备 | |
WO2022012119A1 (zh) | 数据处理方法、装置、电子设备及存储介质 | |
WO2021147276A1 (zh) | 数据处理方法、装置及芯片、电子设备、存储介质 | |
WO2020014893A1 (zh) | 反卷积实现方法及相关产品 | |
CN111182332B (zh) | 视频处理方法、装置、服务器及存储介质 | |
US10311557B2 (en) | Automated tonal balancing | |
CN106649377A (zh) | 一种图像处理系统及图像处理的方法 | |
TWI798591B (zh) | 卷積神經網路運算方法及裝置 | |
CN111767246B (zh) | 数据处理方法、相关设备及计算机可读介质 | |
CN112261023A (zh) | 一种卷积神经网络的数据传输方法和装置 | |
JP6003032B2 (ja) | 画像圧縮方法、画像圧縮装置およびシステム | |
US20220292344A1 (en) | Processing data in pixel-to-pixel neural networks | |
WO2024198986A1 (zh) | 一种数据处理的方法及相应装置 | |
US20230376562A1 (en) | Integrated circuit apparatus for matrix multiplication operation, computing device, system, and method | |
US12112265B2 (en) | Architecture for running convolutional networks on memory and mips constrained embedded devices | |
US20240004719A1 (en) | Just-In-Time Re-Partitioning of Feature Maps for Efficient Balancing of Compute Core Workloads | |
WO2023116312A1 (zh) | 数据处理方法、装置、计算机设备及存储介质 | |
CN116650942A (zh) | 业务处理方法、装置、计算机设备和业务处理系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22794576; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2023524148; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 2022794576; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2022794576; Country of ref document: EP; Effective date: 20230918 |
| NENP | Non-entry into the national phase | Ref country code: DE |