WO2021023000A1 - Information processing method and apparatus, electronic device, and storage medium - Google Patents


Publication number
WO2021023000A1
Authority
WO
WIPO (PCT)
Prior art keywords: data, processed, storage space, information, buffer
Application number: PCT/CN2020/103047
Other languages: English (en), French (fr)
Inventors: 陈凯亮, 许志耿
Original Assignee: 上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Priority to KR1020217019945A (KR20210094629A)
Priority to JP2021535674A (JP2022514382A)
Publication of WO2021023000A1

Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; relocation
    • G06F 12/08: Addressing or allocation; relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0875: With dedicated cache, e.g. instruction or stack
    • G06N: Computing arrangements based on specific computational models
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063: Physical realisation using electronic means

Definitions

  • the present disclosure relates to the field of computer technology, in particular to information processing methods, devices, electronic equipment and storage media.
  • CPU: central processing unit
  • DSP: digital signal processor
  • the embodiments of the present disclosure provide information processing methods, devices, electronic equipment, and storage media.
  • the first aspect provides an information processing method, including: a CPU obtains data to be processed; allocates virtual storage space for the data to be processed; stores the data to be processed in the virtual storage space; and sends to a DSP a data processing instruction carrying information of the virtual storage space, where the data processing instruction is used by the DSP to obtain the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
  • the virtual storage space corresponds to a first buffer space in the buffer of the electronic device; storing the data to be processed in the virtual storage space includes: the CPU stores the data to be processed in the first buffer space; and the DSP obtaining the data to be processed from the virtual storage space corresponding to the information includes: the DSP obtains the data to be processed from the first buffer space corresponding to the information.
  • allocating virtual storage space for the data to be processed includes: applying to the buffer of the electronic device for buffer space based on the storage space required by the data to be processed; and determining the physical storage location of the virtual storage space based on the location indication information of the first buffer space returned by the buffer of the electronic device.
  • the data to be processed includes at least one array, each array including data of the same type, and the method further includes: determining the storage space required by each array in the at least one array; and determining the storage space required by the data to be processed based on the storage space required by each array in the at least one array.
  • the method further includes: determining the offset of each array in the at least one array within the virtual storage space according to the size of the storage space required by each array in the at least one array.
  • the method further includes: determining the size of the storage space required by the data to be processed based on the data volume of the data to be processed and the result data volume corresponding to the data to be processed.
  • applying for buffer space from the buffer of the electronic device includes: sending to the buffer of the electronic device a request for buffer space, where the request carries information about the size of the storage space required by the data to be processed; and receiving location indication information from the buffer, where the location indication information is used to indicate the base address of the first buffer space.
  • the data to be processed are network parameters and input data of a network layer in a neural network.
  • the second aspect provides another information processing method, including: a DSP receives a data processing instruction from a CPU, where the data processing instruction carries information of a virtual storage space; obtains data to be processed from the virtual storage space corresponding to the information; and processes the data to be processed.
  • the virtual storage space corresponding to the information corresponds to a first buffer space in the buffer of the electronic device; obtaining the data to be processed from the virtual storage space corresponding to the information includes: obtaining the data to be processed from the first buffer space corresponding to the information.
  • the method further includes: storing the processing result of the to-be-processed data in a first buffer space corresponding to the information.
  • the data to be processed are network parameters and input data of a network layer in a neural network.
  • a third aspect provides an information processing device, including: an acquisition unit for acquiring data to be processed; an allocation unit for allocating virtual storage space for the data to be processed; a storage unit for storing the data to be processed in the virtual storage space; and a sending unit, configured to send to the DSP a data processing instruction carrying information of the virtual storage space, where the data processing instruction is used by the DSP to obtain the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
  • a fourth aspect provides another information processing device, including: a receiving unit, configured to receive a data processing instruction from a CPU, where the data processing instruction carries information of a virtual storage space; an acquiring unit, configured to obtain the data to be processed from the virtual storage space corresponding to the information; and a processing unit, configured to process the data to be processed.
  • a fifth aspect provides an information processing device, including a processor and a memory, where the memory is used to store computer-readable instructions, and the processor is used to call the computer-readable instructions stored in the memory to execute the information processing method provided in the first aspect or any possible implementation of the first aspect.
  • a sixth aspect provides an information processing device, including a processor and a memory, where the memory is configured to store computer-readable instructions, and the processor is configured to invoke the computer-readable instructions stored in the memory to execute the information processing method provided in the second aspect or any possible implementation of the second aspect.
  • a seventh aspect provides an electronic device, including the information processing device provided in the fifth aspect and the information processing device provided in the sixth aspect, or the information processing device provided in the third aspect and the information processing device provided in the fourth aspect.
  • an eighth aspect provides a readable storage medium storing a computer program, where the computer program includes program code that, when executed by a processor, causes the processor to execute the information processing method provided in the first aspect or any possible implementation of the first aspect.
  • a ninth aspect provides a computer program product, which is used at runtime to execute the information processing method provided in the first aspect or any possible implementation of the first aspect, or the information processing method provided in the second aspect or any possible implementation of the second aspect.
  • in the embodiments of the present disclosure, the CPU obtains the data to be processed, allocates virtual storage space for the data to be processed, stores the data to be processed in the virtual storage space, and sends to the DSP a data processing instruction carrying information of the virtual storage space. The data processing instruction is used by the DSP to obtain the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed. In this way, the CPU can send data corresponding to multiple operations to the DSP at one time through the virtual storage space, thereby reducing DSP scheduling overhead and improving information processing efficiency.
  • Fig. 1 is a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • Fig. 2 is a schematic flowchart of an information processing method provided by an embodiment of the present disclosure.
  • Fig. 3 is a schematic flowchart of another information processing method provided by an embodiment of the present disclosure.
  • Fig. 4 is a schematic flowchart of another information processing method provided by an embodiment of the present disclosure.
  • Fig. 5 is a schematic flowchart of another information processing method provided by an embodiment of the present disclosure.
  • Fig. 6 is a schematic structural diagram of an information processing device provided by an embodiment of the present disclosure.
  • Fig. 7 is a schematic structural diagram of another information processing device provided by an embodiment of the present disclosure.
  • Fig. 8 is a schematic structural diagram of another information processing device provided by an embodiment of the present disclosure.
  • the embodiments of the present disclosure provide information processing methods, devices, electronic equipment, and storage media for improving data processing efficiency. Detailed descriptions are given below.
  • the DSP may be, for example, a Hexagon DSP. However, the use of a DSP raises several issues that require additional consideration.
  • the DSP cannot directly access the memory space on the CPU.
  • the CPU cannot directly access the space opened up by the DSP.
  • the address space needs to be allocated through the ION buffer.
  • data is transferred between the CPU and the DSP through fast remote procedure call (Fast Remote Procedure Call, FastRPC).
  • a FastRPC call maps to the DSP only the data that the CPU transfers for that particular call, out of the address space allocated by the ION buffer.
  • the number of address spaces that can be mapped by a single DSP call is limited, and if a block of address space is not mapped in the current call, the DSP cannot correctly access the data in that address space, even if it was mapped in an earlier call.
  • when both the DSP and the CPU are applied to data processing of a neural network model, the DSP needs to be called once for each operation in the model. Therefore, when the neural network model is deep, the CPU cannot transfer all the data required for data processing to the DSP in a single call, but has to call the DSP multiple times. Calling the DSP repeatedly brings a very large overhead.
  • in addition, the address space mapped on the ION Buffer may differ each time the DSP is called, so the DSP may not correctly access the data in the address space that was mapped for a previous call.
  • a virtual storage space can be maintained by the CPU.
  • the virtual storage space stores the information (for example, weight parameters and input data) required by the DSP to execute the data processing of the neural network model, and the virtual storage space can be shared with the DSP.
  • the CPU can transfer all the data required by the DSP for data processing of the entire neural network model to the DSP through only one or a limited number of calls, thereby minimizing the overhead caused by the DSP call.
  • FIG. 1 is a schematic diagram of an electronic device applied in an embodiment of the present disclosure.
  • the electronic device may include a CPU 101, a DSP 102, and a buffer 103.
  • the CPU 101 is used to receive a running instruction carrying data, and schedule the DSP 102 based on the received running instruction.
  • the DSP 102 is used to process data in response to the scheduling of the CPU 101.
  • the buffer 103 is used to buffer data.
  • the buffer 103 may be an ION buffer, or other buffers or storage modules that can be accessed by the CPU and DSP.
  • Fig. 2 is a schematic flowchart of an information processing method provided by an embodiment of the present disclosure. This method can be applied to the electronic device shown in Fig. 1. Here, the information processing method is described from the perspective of the CPU. As shown in Fig. 2, the information processing method may include the following steps.
  • the CPU can obtain the running instruction carrying the data to be processed, and can schedule the DSP based on the running instruction.
  • the running instruction may be input by the user, generated by the electronic device to which the CPU belongs, or sent by other electronic devices or servers.
  • the data to be processed can be the network parameters and input data of the network layers in a neural network, where the neural network has more than one network layer.
  • the data to be processed can also be other data processed by the DSP alone or data that need to be processed jointly by the CPU and the DSP.
  • after the CPU obtains the data to be processed, it allocates virtual storage space for the data to be processed.
  • specifically, the CPU may apply to the cache of the electronic device for a cache space based on the storage space required by the data to be processed, and then determine the physical storage location corresponding to the virtual storage space based on the location indication information of the first cache space returned by the cache.
  • the electronic device cache refers to the cache in the electronic device to which the CPU belongs, and the cache can be accessed by the CPU and DSP.
  • the buffer can be an ION buffer or other buffers that can be accessed by the CPU and DSP.
  • the CPU may first send a request for applying for cache space (for example, an application instruction) to the cache in the electronic device, and the request may carry information about the size of the storage space required for the data to be processed.
  • after receiving the request, the buffer can select a first buffer space from its free buffer space, where the size of the first buffer space equals the storage space required by the data to be processed, allocate location indication information for the first buffer space, such as a pointer to the first buffer space, and then return the location indication information to the CPU.
  • after the CPU receives the location indication information of the first buffer space from the buffer, it can create a virtual storage space based on the size of the storage space required by the data to be processed, allocate a base address for the virtual storage space, and establish a correspondence between the base address of the virtual storage space and the location indication information. Alternatively, the virtual storage space may be created, and its base address allocated, before the request for buffer space is sent to the buffer in the electronic device.
  • if the virtual storage space has already been created and its base address allocated before the request for buffer space is sent, the request can also carry the base address of the virtual storage space.
  • in that case, the buffer can establish the correspondence between the location indication information and the base address of the virtual storage space, and when the CPU receives the location indication information indicating the first cache space from the cache, it does not need to establish the correspondence itself.
  • the location indication information of the first cache space is used to indicate the base address of the first cache space.
  • in this way, the data to be processed, which includes multiple operations, can correspond to a single piece of location indication information in the buffer, such as a pointer.
  • therefore, data including multiple operations can be shared with the DSP at one time through the virtual storage space, thereby improving information processing efficiency.
  • since the first cache space is applied for based on the storage space required by the data to be processed, a suitably sized cache space can be obtained: cache space is not wasted by requesting too much, and the data to be processed is not left uncached because too little was requested.
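The request-and-bind flow above can be sketched in C. This is a minimal illustrative sketch, not the patent's implementation: `virtual_space_t`, `alloc_virtual_space`, and `resolve` are hypothetical names, and `malloc` stands in for the ION buffer that would actually return the first buffer space.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical CPU-side record tying a virtual storage space to the
 * first buffer space returned by the shared buffer. */
typedef struct {
    uint64_t virtual_base;  /* base address allocated for the virtual space */
    uint8_t *buffer_base;   /* location indication info: pointer to the first buffer space */
    size_t   size;          /* exactly the storage required by the data to be processed */
} virtual_space_t;

/* "Apply for buffer space": the request carries only the required size,
 * so the returned space is neither too large nor too small. */
int alloc_virtual_space(virtual_space_t *vs, uint64_t virtual_base, size_t required) {
    uint8_t *p = malloc(required);  /* stand-in for the ION buffer's reply */
    if (p == NULL) return -1;
    vs->virtual_base = virtual_base;
    vs->buffer_base = p;            /* correspondence: virtual base <-> pointer */
    vs->size = required;
    return 0;
}

/* Translate a virtual address into its physical storage location. */
uint8_t *resolve(const virtual_space_t *vs, uint64_t vaddr) {
    uint64_t off = vaddr - vs->virtual_base;
    return (off < vs->size) ? vs->buffer_base + off : NULL;
}
```

The correspondence between the virtual base address and the location indication information is just the pair stored in the struct; every later access goes through that pair.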
  • the data to be processed may be stored in the virtual storage space, that is, the data to be processed is stored in the first cache space corresponding to the virtual storage space.
  • after the CPU stores the data to be processed in the virtual storage space, it can send a data processing instruction carrying the information of the virtual storage space to the DSP, so that the DSP can obtain the data to be processed from the virtual storage space corresponding to the information and process the data to be processed.
  • the CPU can call the preset function library to send data processing instructions carrying the information of the virtual storage space to the DSP.
  • the preset function library is a function library specially used to call DSP, such as FastRPC.
  • the information of the virtual storage space may include the base address of the virtual storage space.
  • it can be seen that the CPU can share the data of multiple operations with the DSP at one time through the virtual storage space. In this process, only one DSP scheduling is required, thereby reducing DSP scheduling overhead and improving information processing efficiency.
  • Fig. 3 is a schematic flowchart of another information processing method provided by an embodiment of the present disclosure. Here, the information processing method is described from the perspective of the CPU. As shown in Fig. 3, the information processing method may include the following steps.
  • step 301 is the same as step 201; for a detailed description, refer to step 201, which will not be repeated here.
  • in step 302, the size of the storage space required by the data to be processed can be determined.
  • specifically, the storage space required by each array in the at least one array included in the data to be processed may be determined first, and then the storage space required by the data to be processed is determined based on the storage space required by each array; that is, the sum of the storage space required by all arrays in the at least one array can be determined as the storage space required by the data to be processed.
  • each array includes data of the same type.
  • the size of the storage space required by the data to be processed can also be determined based on the data volume of the data to be processed and the result data volume corresponding to the data to be processed; that is, the sum of the data volume of the data to be processed and the corresponding result data volume can be determined as the storage space required by the data to be processed.
  • the result data amount can be predetermined according to the processing parameters involved in the data to be processed.
  • alternatively, the data volume of each array in the at least one array and the result data volume corresponding to each array may be determined first, and the storage space required by each array is then determined from these two quantities. That is, the storage space required by the data included in each array and the storage space required by the corresponding result are calculated first, and their sum is the storage space required by that array.
  • step 303 is the same as step 202.
  • the CPU may also determine the offset of each array in the at least one array in the virtual storage space according to the size of the storage space required by each array in the at least one array. Then, data can be written in the virtual storage space according to the base address and offset.
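Steps 302 and 303 amount to a simple running-sum layout computation. The following is an illustrative sketch under the assumption that each array's requirement already includes its result data; `plan_layout` is a hypothetical name, not from the patent.

```c
#include <stddef.h>

/* Given the storage required by each array (data plus its corresponding
 * result data), compute the total space to request from the buffer and
 * each array's offset relative to the base address of the virtual
 * storage space. */
size_t plan_layout(const size_t required[], size_t n, size_t offsets[]) {
    size_t total = 0;
    for (size_t i = 0; i < n; i++) {
        offsets[i] = total;    /* each array starts where the previous one ends */
        total += required[i];  /* total = sum of the per-array requirements */
    }
    return total;              /* storage space required by the data to be processed */
}
```

The CPU can then write each array at base address plus its offset, exactly as described above.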
  • step 304 is the same as step 203; for a detailed description, refer to step 203, which will not be repeated here.
  • step 305 is the same as step 204; for a detailed description, refer to step 204, which will not be repeated here.
  • the CPU can send data including multiple operations to the DSP at one time through the virtual storage space, thereby reducing DSP scheduling overhead and improving information processing efficiency.
  • Fig. 4 is a schematic flowchart of yet another information processing method provided by an embodiment of the present disclosure. Here, the information processing method is described from the perspective of the DSP. As shown in Fig. 4, the information processing method may include the following steps.
  • after the CPU sends the data processing instruction carrying the information of the virtual storage space, the DSP receives the data processing instruction from the CPU. Both the CPU and the DSP have permission to access the virtual storage space corresponding to the information of the virtual storage space.
  • after receiving the data processing instruction from the CPU, the DSP obtains the data to be processed from the virtual storage space corresponding to the information of the virtual storage space.
  • the virtual storage space corresponding to the information of the virtual storage space corresponds to the first cache space in the cache of the electronic device, and both the CPU and the DSP have the authority to access the cache of the electronic device. Therefore, the DSP obtains the data to be processed from the first buffer space corresponding to the information in the virtual storage space.
  • specifically, the DSP may first obtain the location indication information corresponding to the information of the virtual storage space, and then obtain the data to be processed from the first buffer space corresponding to the location indication information.
  • alternatively, the DSP can directly send a data acquisition request carrying the information of the virtual storage space to the buffer; after receiving the request, the buffer obtains the corresponding location indication information according to the correspondence between the location indication information and the information of the virtual storage space, acquires the data to be processed from the first buffer space, and then returns the data to be processed to the DSP.
  • after the DSP obtains the data to be processed from the virtual storage space corresponding to the information, it processes the data to be processed.
  • the processing of the data to be processed may include convolution processing, and may also include other processing such as full connection processing.
  • the processing result of the data to be processed may be stored in the first buffer space corresponding to the information in the virtual storage space.
  • specifically, the DSP can send a storage instruction to the buffer, where the storage instruction carries the processing result and the information of the virtual storage space.
  • after the buffer receives the storage instruction from the DSP, it stores the processing result in the first buffer space.
  • after the DSP stores the processing result of the data to be processed in the first buffer space corresponding to the information of the virtual storage space, it can send a processing completion response message to the CPU so that the CPU can obtain the processing result from the buffer, allowing the buffer space to be freed in time.
  • the DSP can obtain and process data from the virtual storage space at one time through the information in the virtual storage space, thereby improving the efficiency of data processing.
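The DSP side of steps 401 to 404 can be sketched as follows. All names are illustrative assumptions, and the "processing" is a stand-in summation where the patent's DSP would run real operations such as convolution or fully connected layers; the key point shown is that the instruction carries only locations within the shared buffer space, and the result is written back into that same space for the CPU to read.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical decoded form of the data processing instruction: it refers
 * to the shared first buffer space only through offsets (in elements). */
typedef struct {
    int32_t *buffer;      /* first buffer space shared by CPU and DSP */
    size_t   in_offset;   /* where the data to be processed lives */
    size_t   in_count;    /* how many input elements there are */
    size_t   out_offset;  /* where the processing result is to be stored */
} dsp_instruction_t;

/* Stand-in processing: sum the inputs and store the result back into the
 * shared space, so the CPU can fetch it after the completion response. */
void dsp_process(const dsp_instruction_t *ins) {
    int32_t acc = 0;
    for (size_t i = 0; i < ins->in_count; i++)
        acc += ins->buffer[ins->in_offset + i];
    ins->buffer[ins->out_offset] = acc;  /* result stays in the first buffer space */
}
```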
  • Fig. 5 is a schematic flowchart of yet another information processing method provided by an embodiment of the present disclosure. Here, the information processing method is described from the perspective of the CPU and the DSP. As shown in Fig. 5, the information processing method may include the following steps.
  • the CPU obtains data to be processed.
  • step 501 is the same as step 201; for a detailed description, refer to step 201, which will not be repeated here.
  • the CPU determines the size of storage space required for the data to be processed.
  • step 502 is the same as step 302; for a detailed description, refer to step 302, which will not be repeated here.
  • the CPU sends a request for applying for cache space to the cache.
  • after the CPU determines the size of the storage space required by the data to be processed, it can send a request for cache space to the buffer, where the request carries information about the size of the storage space required by the data to be processed.
  • the buffer sends location indication information of the first buffer space to the CPU.
  • after receiving the request from the CPU, the buffer selects the first buffer space from the free buffer space according to the size of the storage space indicated by the information, and then sends the location indication information of the first buffer space to the CPU, for example, a pointer indicating the base address of the first cache space.
  • the CPU determines a physical storage address corresponding to the virtual storage space based on the location indication information.
  • the CPU can allocate virtual storage space for the data to be processed, such as the offset of the data in the virtual storage space, according to the size of the storage space required by the data to be processed.
  • the offset may be the offset of the data of each array in the virtual storage space, or may be the offset of the data of each array and its corresponding result data.
  • the offset is determined relative to the base address of the virtual storage space.
  • the physical storage address corresponding to the virtual storage space can be determined based on the location indication information, that is, the base address of the virtual storage space is determined, so as to determine the actual storage location of the data to be processed.
  • the CPU stores the to-be-processed data in the virtual storage space, that is, in the first cache space indicated by the location indication information.
  • step 506 is similar to step 203; for a detailed description, refer to step 203, which will not be repeated here.
  • the CPU sends a data processing instruction carrying information of the virtual storage space to the DSP.
  • the DSP receives data processing instructions from the CPU.
  • step 507 is the same as step 204; for a detailed description, refer to step 204, which will not be repeated here.
  • the data processing instruction received by the DSP from the CPU is the same as in step 401; for a detailed description, refer to step 401, which will not be repeated here.
  • the DSP obtains the data to be processed from the first buffer space corresponding to the information in the virtual storage space.
  • step 508 is similar to step 402, for detailed description, please refer to step 402, which will not be repeated here.
  • the DSP processes the data to be processed.
  • step 509 is the same as step 403; for a detailed description, refer to step 403, which will not be repeated here.
  • the DSP stores the processing result of the data to be processed in the first buffer space corresponding to the information in the virtual storage space.
  • step 510 is the same as step 404; for a detailed description, refer to step 404, which will not be repeated here.
  • the CPU is responsible for analyzing the neural network model and calculating the space required by each array related to the data processing of the neural network model.
  • the CPU applies for space (i.e., virtual storage space) from a virtual heap according to the calculated space size.
  • the result returned by applying for space is not a usual pointer, but an offset relative to the base address of the heap.
  • the CPU counts the total amount of space required, and allocates space corresponding to the total size on the ION Buffer to obtain the actual base address. Then, the CPU writes the relevant parameters and data required to run the neural network model into the virtual storage space that is applied for, that is, actually writes the space corresponding to the virtual space allocated on the ION Buffer.
  • the specific write address can be calculated from the base address and the offset. Then, the CPU initiates a FastRPC call and passes this virtual storage space to the DSP through FastRPC. In this way, both the CPU and the DSP can share the virtual storage space. The DSP can then parse the data in the ION Buffer according to the address information of the virtual storage space, start computation, store the computation result in the corresponding location of the space, and return it to the CPU.
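The virtual-heap scheme described above can be sketched in a few lines of C. This is a hedged sketch of the idea only: `virtual_heap_t`, `vheap_alloc`, `vheap_bind`, and `vheap_addr` are invented names, and `malloc` stands in for the single ION Buffer allocation of the counted total.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical virtual heap: applying for space returns an offset
 * relative to the heap's base address rather than a usual pointer. */
typedef struct {
    size_t   used;  /* total space requested so far */
    uint8_t *base;  /* actual base address, unknown until bound */
} virtual_heap_t;

/* "Apply for space": advance the running total and return the offset. */
size_t vheap_alloc(virtual_heap_t *h, size_t size) {
    size_t off = h->used;
    h->used += size;
    return off;
}

/* Once every array has been registered, make one real allocation of the
 * counted total (the ION Buffer allocation in the text) to get the base. */
int vheap_bind(virtual_heap_t *h) {
    h->base = malloc(h->used);
    return h->base ? 0 : -1;
}

/* A concrete write address is simply base + offset. */
uint8_t *vheap_addr(const virtual_heap_t *h, size_t off) {
    return h->base + off;
}
```

Because every offset stays valid regardless of where the real buffer lands, the whole heap can be handed to the DSP in a single FastRPC call, which is the overhead reduction the passage describes.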
  • Fig. 6 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present disclosure.
  • the information processing apparatus may include: an acquiring unit 601, configured to acquire data to be processed; an allocation unit 602, configured to allocate virtual storage space for the data to be processed; a storage unit 603, configured to store the data to be processed in the virtual storage space; and a sending unit 604, configured to send to the DSP a data processing instruction carrying information of the virtual storage space, where the data processing instruction is used by the DSP to obtain the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
  • the virtual storage space corresponds to the first buffer space in the buffer of the electronic device; the storage unit 603 is specifically configured to store the data to be processed in the first buffer space; the DSP acquiring the data to be processed from the virtual storage space corresponding to the information includes: the DSP acquires the data to be processed from the first buffer space corresponding to the information of the virtual storage space.
  • the allocating unit 602 is specifically configured to: apply for buffer space from the buffer of the electronic device based on the size of the storage space required by the data to be processed; and determine the physical storage location of the virtual storage space based on the location indication information of the first buffer space returned by the buffer of the electronic device.
  • the data to be processed includes at least one array, and each array includes data of the same type.
  • the information processing apparatus may further include: a determining unit 605, configured to determine the size of the storage space required by each array in the at least one array, and to determine the storage space required by the data to be processed based on the storage space required by each array in the at least one array.
  • the determining unit 605 is further configured to determine the offset of each array in the at least one array in the virtual storage space according to the size of the storage space required by each array in the at least one array.
  • the determining unit 605 is configured to determine the size of the storage space required for the data to be processed based on the data amount of the data to be processed and the result data amount corresponding to the data to be processed.
  • the allocating unit 602 applying for cache space from the cache of the electronic device includes: sending a request for applying for cache space to the cache of the electronic device, the request carrying information about the size of the storage space required by the data to be processed; and receiving location indication information from the buffer, where the location indication information is used to indicate the base address of the first buffer space.
  • the data to be processed are network parameters and input data of the network layer in the neural network.
  • This embodiment may correspond to the description of the method embodiments of the present application, and the above and other operations and/or functions of each unit are used to implement the corresponding processes in the methods in FIG. 2 and FIG. 3, respectively; for brevity, details are not repeated here.
  • FIG. 7 is a schematic structural diagram of another information processing apparatus provided by an embodiment of the present disclosure.
  • the information processing apparatus may include: a receiving unit 701, configured to receive a data processing instruction from the CPU, the data processing instruction carrying information of a virtual storage space; an acquiring unit 702, configured to acquire the data to be processed from the virtual storage space corresponding to the information; and a processing unit 703, configured to process the data to be processed.
  • the virtual storage space corresponding to the information of the virtual storage space corresponds to the first cache space in the cache of the electronic device; the obtaining unit 702 is specifically configured to obtain the data to be processed from the first cache space corresponding to the information of the virtual storage space.
  • the information processing apparatus may further include: a storage unit 704, configured to store the processing result of the to-be-processed data in the first cache space corresponding to the information in the virtual storage space.
  • the data to be processed are network parameters and input data of the network layer in the neural network.
  • This embodiment may correspond to the description of the method embodiment in the embodiment of the present application, and the above and other operations and/or functions of each unit are used to implement the corresponding flow in each method in FIG. 4, and are not repeated here for brevity.
  • FIG. 8 is a schematic structural diagram of another information processing device provided by an embodiment of the present disclosure.
  • the information processing apparatus can realize various functions of the CPU in the electronic device shown in FIG. 1.
  • the information processing apparatus may include: at least one processor 801, such as a CPU, a transceiver 802, and at least one bus 803.
  • the bus 803 is used to implement connection and communication between these components.
  • the processor 801 is configured to perform the following operations: obtain the data to be processed; allocate virtual storage space for the data to be processed; store the data to be processed in the virtual storage space; the transceiver 802 is configured to send, to the DSP, a data processing instruction carrying the information of the virtual storage space, the data processing instruction being used by the DSP to obtain the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
  • the virtual storage space corresponds to the first cache space in the cache of the electronic device; the processor 801 storing the to-be-processed data in the virtual storage space includes: storing the to-be-processed data in the first cache space; and the DSP obtaining the data to be processed from the virtual storage space corresponding to the information includes: the DSP obtains the data to be processed from the first buffer space corresponding to the information of the virtual storage space.
  • the processor 801 allocating virtual storage space for the data to be processed includes: applying for buffer space from the buffer of the electronic device based on the storage space required by the data to be processed; and determining the physical storage location of the virtual storage space based on the location indication information of the first cache space returned by the buffer.
  • the data to be processed includes at least one array, each array including data of the same type, and the processor 801 is further configured to perform the following operations: determine the size of the storage space required by each array in the at least one array; and determine the storage space required by the data to be processed based on the storage space required by each array in the at least one array.
  • the processor 801 is further configured to perform the following operation: determine the offset of each array in the at least one array within the virtual storage space according to the storage space required by each array in the at least one array.
  • the processor 801 is further configured to perform the following operations: based on the data volume of the data to be processed and the result data volume corresponding to the data to be processed, determine the size of the storage space required for the data to be processed.
  • the processor 801 requesting cache space from the cache of the electronic device based on the size of the storage space required by the data to be processed includes: sending a request for applying for cache space to the cache of the electronic device, the request carrying information about the size of the storage space required by the data to be processed; and receiving location indication information from the buffer, where the location indication information is used to indicate the base address of the first buffer space.
  • the data to be processed are network parameters and input data of the network layer in the neural network.
  • step 201-step 203, step 301-step 304, and step 501-step 504 can be executed by the processor 801 in the CPU, and step 204, step 305, and step 505 can be executed by the transceiver 802 in the CPU.
  • the acquisition unit 601, the allocation unit 602, the storage unit 603, and the determination unit 605 may be implemented by the processor 801 in the CPU, and the sending unit 604 may be implemented by the transceiver 802 in the CPU.
  • the foregoing information processing apparatus may also be used to execute various methods performed in the foregoing method embodiments, and details are not described herein again.
  • the information processing apparatus can implement various functions of the DSP in the electronic device shown in FIG. 1.
  • the transceiver 802 is used to receive data processing instructions from the CPU, the data processing instructions carrying information of the virtual storage space; the processor 801 is used to perform the following operations: obtain the data to be processed from the virtual storage space corresponding to the information; process the data to be processed.
  • the virtual storage space corresponding to the information of the virtual storage space corresponds to the first cache space in the cache of the electronic device; the processor 801 obtaining the data to be processed from the virtual storage space corresponding to the information of the virtual storage space includes: obtaining the data to be processed from the first cache space corresponding to the information of the virtual storage space.
  • the processor 801 is further configured to perform the following operation: store the processing result of the to-be-processed data in the first cache space corresponding to the information of the virtual storage space.
  • the data to be processed are network parameters and input data of the network layer in the neural network.
  • steps 402 to 404 and steps 506 to 508 can be executed by the processor 801, and the steps of receiving data processing instructions in step 204, step 305, and step 505 and step 401 can be executed by the transceiver 802.
  • the acquiring unit 702, the processing unit 703, and the storage unit 704 may be implemented by the processor 801, and the receiving unit 701 may be implemented by the transceiver 802.
  • the foregoing information processing apparatus may also be used to execute various methods performed in the foregoing method embodiments, and details are not described herein again.
  • a storage medium is provided, the storage medium is used to store an application program, and the application program is used to execute the information processing method in FIGS. 2 to 4 at runtime.
  • an application program is provided, and the application program is used to execute the information processing method of FIGS. 2 to 4 at runtime.
  • the program can be stored in a computer-readable memory.
  • the memory can include: flash disk, ROM, RAM, magnetic disk or CD, etc.

Abstract

Embodiments of the present disclosure provide an information processing method and apparatus, an electronic device, and a storage medium, including: a CPU acquires data to be processed; allocates virtual storage space for the data to be processed; stores the data to be processed in the virtual storage space; and sends, to a DSP, a data processing instruction carrying information of the virtual storage space, the data processing instruction being used by the DSP to acquire the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.

Description

Information processing method and apparatus, electronic device, and storage medium. Technical Field
The present disclosure relates to the field of computer technology, and in particular to an information processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of computer technology, more and more data needs to be processed. Data processing generally requires the cooperation of a central processing unit (CPU) and a digital signal processor (DSP). However, the DSP cannot directly access the memory space on the CPU, and likewise the CPU cannot directly access the space opened up by the DSP. At present, when data is processed through a neural network model, the CPU invokes the DSP once for each operation, which incurs a large DSP scheduling overhead.
Summary
Embodiments of the present disclosure provide an information processing method and apparatus, an electronic device, and a storage medium.
A first aspect provides an information processing method, including: a CPU acquires data to be processed; allocates virtual storage space for the data to be processed; stores the data to be processed in the virtual storage space; and sends, to a DSP, a data processing instruction carrying information of the virtual storage space, where the data processing instruction is used by the DSP to acquire the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
In a possible implementation, the virtual storage space corresponds to a first buffer space in a buffer of the electronic device; storing the data to be processed in the virtual storage space includes: the CPU stores the data to be processed in the first buffer space; and the DSP acquiring the data to be processed from the virtual storage space corresponding to the information includes: the DSP acquires the data to be processed from the first buffer space corresponding to the information.
In a possible implementation, allocating virtual storage space for the data to be processed includes: applying to the buffer of the electronic device for buffer space based on the size of the storage space required by the data to be processed; and determining the physical storage location of the virtual storage space based on the location indication information of the first buffer space returned by the buffer of the electronic device.
In a possible implementation, the data to be processed includes at least one array, each array including data of the same type, and the method further includes: determining the size of the storage space required by each array in the at least one array; and determining, based on the size of the storage space required by each array in the at least one array, the size of the storage space required by the data to be processed.
In a possible implementation, the method further includes: determining, according to the size of the storage space required by each array in the at least one array, the offset of each array in the at least one array within the virtual storage space.
In a possible implementation, the method further includes: determining the size of the storage space required by the data to be processed based on the data amount of the data to be processed and the amount of result data corresponding to the data to be processed.
In a possible implementation, applying to the buffer of the electronic device for buffer space based on the size of the storage space required by the data to be processed includes: sending, to the buffer of the electronic device, a request for applying for buffer space, the request carrying information about the size of the storage space required by the data to be processed; and receiving location indication information from the buffer, the location indication information indicating the base address of the first buffer space.
In a possible implementation, the data to be processed is the network parameters and input data of network layers in a neural network.
A second aspect provides another information processing method, including: a DSP receives a data processing instruction from a CPU, the data processing instruction carrying information of a virtual storage space; acquires data to be processed from the virtual storage space corresponding to the information; and processes the data to be processed.
In a possible implementation, the virtual storage space corresponding to the information corresponds to a first buffer space in a buffer of the electronic device; acquiring the data to be processed from the virtual storage space corresponding to the information includes: acquiring the data to be processed from the first buffer space corresponding to the information.
In a possible implementation, the method further includes: storing the processing result of the data to be processed in the first buffer space corresponding to the information.
In a possible implementation, the data to be processed is the network parameters and input data of network layers in a neural network.
A third aspect provides an information processing apparatus, including: an acquiring unit configured to acquire data to be processed; an allocating unit configured to allocate virtual storage space for the data to be processed; a storage unit configured to store the data to be processed in the virtual storage space; and a sending unit configured to send, to a DSP, a data processing instruction carrying information of the virtual storage space, where the data processing instruction is used by the DSP to acquire the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
A fourth aspect provides another information processing apparatus, including: a receiving unit configured to receive a data processing instruction from a CPU, the data processing instruction carrying information of a virtual storage space; an acquiring unit configured to acquire data to be processed from the virtual storage space corresponding to the information; and a processing unit configured to process the data to be processed.
A fifth aspect provides an information processing apparatus, including a processor and a memory, where the memory is configured to store computer-readable instructions, and the processor is configured to invoke the computer-readable instructions stored in the memory to execute the information processing method provided by the first aspect or any possible implementation of the first aspect.
A sixth aspect provides an information processing apparatus, including a processor and a memory, where the memory is configured to store computer-readable instructions, and the processor is configured to invoke the computer-readable instructions stored in the memory to execute the information processing method provided by the second aspect or any possible implementation of the second aspect.
A seventh aspect provides an electronic device, including the information processing apparatus provided by the fifth aspect and the information processing apparatus provided by the sixth aspect, or including the information processing apparatus provided by the third aspect and the information processing apparatus provided by the fourth aspect.
An eighth aspect provides a readable storage medium storing a computer program, the computer program including program code that, when executed by a processor, causes the processor to execute the information processing method provided by the first aspect or any possible implementation of the first aspect, or the information processing method provided by the second aspect or any possible implementation of the second aspect.
A ninth aspect provides a computer program product configured to execute, at runtime, the information processing method provided by the first aspect or any possible implementation of the first aspect, or the information processing method provided by the second aspect or any possible implementation of the second aspect.
In the embodiments of the present disclosure, the CPU acquires data to be processed, allocates virtual storage space for it, stores it in the virtual storage space, and sends to the DSP a data processing instruction carrying information of the virtual storage space; the data processing instruction is used by the DSP to acquire the data to be processed from the virtual storage space corresponding to the information and to process it. In this way, the CPU can pass the data corresponding to multiple operations to the DSP at once through the virtual storage space, thereby reducing DSP scheduling overhead and improving information processing efficiency.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
FIG. 2 is a schematic flowchart of an information processing method provided by an embodiment of the present disclosure.
FIG. 3 is a schematic flowchart of another information processing method provided by an embodiment of the present disclosure.
FIG. 4 is a schematic flowchart of yet another information processing method provided by an embodiment of the present disclosure.
FIG. 5 is a schematic flowchart of yet another information processing method provided by an embodiment of the present disclosure.
FIG. 6 is a schematic structural diagram of an information processing apparatus provided by an embodiment of the present disclosure.
FIG. 7 is a schematic structural diagram of another information processing apparatus provided by an embodiment of the present disclosure.
FIG. 8 is a schematic structural diagram of yet another information processing apparatus provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure provide an information processing method and apparatus, an electronic device, and a storage medium for improving data processing efficiency. Detailed descriptions are given below.
At present, DSPs (for example, Hexagon) are widely used in the data processing of neural network models, offering high computing performance and low power consumption. However, the use of a DSP raises a number of issues that require extra consideration.
For example, the DSP cannot directly access the memory space on the CPU; likewise, the CPU cannot directly access the space opened up by the DSP. When the DSP and the CPU need to exchange data, address space needs to be allocated through an ION buffer (ION Buffer). For example, when the CPU needs to invoke the DSP, only the data that the CPU is to pass to the DSP for this particular call is mapped, via the Fast Remote Procedure Call (FastRPC) mechanism, to the address space allocated by the ION Buffer. However, the number of address spaces that a single DSP call can map is limited, and if a block of address space is not mapped in the current call, the DSP cannot correctly access the data in that address space even if it was mapped in a previous call.
When both the DSP and the CPU are applied to the data processing of a neural network model, the DSP needs to be invoked once for each operation in the neural network model. Therefore, when the neural network model is deep, the CPU cannot pass all the data required for data processing through a single DSP call, and has to invoke the DSP multiple times. However, invoking the DSP repeatedly brings very large overhead. In addition, the address space mapped on the ION Buffer may differ from call to call, so the DSP cannot correctly access the data in the address space that should have been mapped for the previous call.
In view of this, the present disclosure proposes an information processing method applicable to neural network models. According to the method, the CPU can maintain a virtual storage space that stores the information (for example, weight parameters and input data) required by the DSP to perform the data processing of the neural network model, and this virtual storage space can be shared with the DSP. In this way, the CPU can pass to the DSP, through a single call or a limited number of calls, all the data the DSP needs for the data processing of the entire neural network model, minimizing the overhead of DSP calls.
Referring to FIG. 1, FIG. 1 is a schematic diagram of an electronic device to which the embodiments of the present disclosure are applied. As shown in FIG. 1, the electronic device may include a CPU 101, a DSP 102, and a buffer 103. The CPU 101 is configured to receive a run instruction carrying data and to schedule the DSP 102 based on the received run instruction. The DSP 102 is configured to process the data in response to the scheduling by the CPU 101. The buffer 103 is configured to buffer data. Optionally, the buffer 103 may be an ION buffer, or another buffer or storage module that can be accessed by both the CPU and the DSP.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of an information processing method provided by an embodiment of the present disclosure. The method is applicable to the electronic device shown in FIG. 1 and is described from the perspective of the CPU. As shown in FIG. 2, the information processing method may include the following steps.
201. Acquire data to be processed.
The CPU may acquire a run instruction carrying the data to be processed, and may schedule the DSP based on the run instruction. The run instruction may be input by a user, generated by the electronic device to which the CPU belongs, or sent by another electronic device or server. The data to be processed may be the network parameters and input data of network layers in a neural network, the number of network layers being greater than 1. The data to be processed may also be other data processed by the DSP alone, or data that requires joint processing by the CPU and the DSP.
202. Allocate virtual storage space for the data to be processed.
After acquiring the data to be processed, the CPU allocates virtual storage space for it. The CPU may apply to the cache of the electronic device for buffer space based on the size of the storage space required by the data to be processed, and then determine the physical storage location corresponding to the virtual storage space based on the location indication information of the first buffer space returned by the cache. Here, the cache of the electronic device is the buffer in the electronic device to which the CPU belongs, and it can be accessed by both the CPU and the DSP. The buffer may be an ION buffer, or another buffer accessible to both the CPU and the DSP.
The CPU may first send, to the buffer in the electronic device, a request for applying for buffer space (for example, an application instruction), and the request may carry information about the size of the storage space required by the data to be processed. After receiving the request from the CPU, the buffer may select a first buffer space from its free buffer space, the size of the first buffer space being equal to the size of the storage space required by the data to be processed, allocate location indication information for the first buffer space, such as a pointer to the first buffer space, and then return the location indication information to the CPU. After receiving the location indication information of the first buffer space from the buffer, the CPU may create the virtual storage space based on the size of the storage space required by the data to be processed, allocate a base address for the virtual storage space, and establish a correspondence between the base address of the virtual storage space and the location indication information. Alternatively, creating the virtual storage space based on the required storage space size and allocating its base address may take place before the request for buffer space is sent to the buffer in the electronic device. In the case where the virtual storage space has already been created and its base address allocated before the request is sent, the request may also carry the base address of the virtual storage space; the buffer may then establish the correspondence between the location indication information and the base address of the virtual storage space, and the CPU need not establish that correspondence after receiving the location indication information of the first buffer space from the buffer. The location indication information of the first buffer space indicates the base address of the first buffer space. It can be seen that, through the virtual storage space, the data to be processed, which covers multiple operations, can correspond to a single piece of location indication information in the buffer, such as a pointer. In this way, the data for multiple operations can be shared with the DSP at once through the virtual storage space, which improves information processing efficiency. In addition, since the first buffer space is applied for based on the size of the storage space required by the data to be processed, a suitable buffer space can be obtained: no buffer space is wasted by applying for too much, and the data to be processed will not fail to be buffered because too little was applied for.
203. Store the data to be processed in the virtual storage space.
After allocating the virtual storage space for the data to be processed, the CPU may store the data to be processed in the virtual storage space, that is, in the first buffer space corresponding to the virtual storage space.
204. Send, to the DSP, a data processing instruction carrying information of the virtual storage space.
After storing the data to be processed in the virtual storage space, the CPU may send to the DSP a data processing instruction carrying information of the virtual storage space, so that the DSP acquires the data to be processed from the virtual storage space corresponding to the information and processes it. The CPU may invoke a preset function library to send the data processing instruction carrying the information of the virtual storage space to the DSP; the preset function library is a function library dedicated to invoking the DSP, such as FastRPC. The information of the virtual storage space may include the base address of the virtual storage space.
In the information processing method described in FIG. 2, the CPU can share the data of multiple operations with the DSP at once through the virtual storage space, and only one DSP scheduling is performed in this process, thereby reducing DSP scheduling overhead and improving information processing efficiency.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of another information processing method provided by an embodiment of the present disclosure, described from the perspective of the CPU. As shown in FIG. 3, the information processing method may include the following steps.
301. Acquire data to be processed.
Step 301 is the same as step 201; for a detailed description, refer to step 201, which is not repeated here.
302. Determine the size of the storage space required by the data to be processed.
After acquiring the data to be processed, the CPU may determine the size of the storage space it requires. In some embodiments, the size of the storage space required by each array in at least one array included in the data to be processed may be determined first, and then the size of the storage space required by the data to be processed is determined based on the per-array sizes; that is, the sum of the storage space sizes required by the arrays may be taken as the size of the storage space required by the data to be processed. Each array includes data of the same type.
In some embodiments, the size of the storage space required by the data to be processed may also be determined based on the data amount of the data to be processed and the amount of the corresponding result data (that is, the data amount of the result data corresponding to the data to be processed); that is, the sum of the two may be taken as the required storage space size. The amount of result data may be determined in advance according to the processing parameters involved in the data to be processed.
In some embodiments, the data amount of each array in the at least one array and the amount of result data corresponding to each array may be determined first, and the size of the storage space required by each array is then determined accordingly. That is, the storage space required by the data of each array and the storage space required by the corresponding result are computed first, and the size of the storage space required by each array is obtained as the sum of the two.
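A minimal sketch of this size bookkeeping (illustrative only; the element counts and the float32 item size are assumed for the example, not taken from the disclosure):

```python
def required_size(num_elements, num_result_elements, itemsize=4):
    """Bytes needed for one array: its own data plus its result data.

    Assumes 4-byte (float32) elements for both inputs and results.
    """
    return (num_elements + num_result_elements) * itemsize


# Hypothetical network layer: a 16x16 weight matrix (which produces no
# result of its own) and a 16-element input whose output is also 16 elements.
arrays = [
    (16 * 16, 0),   # weights
    (16, 16),       # input data plus the layer's output
]
per_array = [required_size(n, r) for n, r in arrays]
total = sum(per_array)  # size of the buffer space to request
```

The total is then the size carried in the buffer-space application request described in step 302 and step 303.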
303. Allocate virtual storage space for the data to be processed according to the required storage space size.
Step 303 is the same as step 202; for a detailed description, refer to step 202, which is not repeated here. In addition, the CPU may determine, according to the size of the storage space required by each array in the at least one array, the offset of each array within the virtual storage space. Data can then be written into the virtual storage space according to the base address and the offsets.
304. Store the data to be processed in the virtual storage space.
Step 304 is the same as step 203; for a detailed description, refer to step 203, which is not repeated here.
305. Send, to the DSP, a data processing instruction carrying information of the virtual storage space.
Step 305 is the same as step 204; for a detailed description, refer to step 204, which is not repeated here.
In the information processing method described in FIG. 3, the CPU can send the data comprising multiple operations to the DSP at once through the virtual storage space, thereby reducing DSP scheduling overhead and improving information processing efficiency.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of yet another information processing method provided by an embodiment of the present disclosure, described from the perspective of the DSP. As shown in FIG. 4, the information processing method may include the following steps.
401. Receive a data processing instruction from the CPU.
After the CPU sends the data processing instruction carrying the information of the virtual storage space to the DSP, the DSP receives the data processing instruction from the CPU. Both the CPU and the DSP have permission to access the virtual storage space corresponding to the information of the virtual storage space.
402. Acquire the data to be processed from the virtual storage space corresponding to the information of the virtual storage space.
After receiving the data processing instruction from the CPU, the DSP acquires the data to be processed from the virtual storage space corresponding to the information of the virtual storage space. That virtual storage space corresponds to the first buffer space in the cache of the electronic device, and both the CPU and the DSP have permission to access the cache. Therefore, the DSP acquires the data to be processed from the first buffer space corresponding to the information of the virtual storage space. In the case where the correspondence between the location indication information of the first buffer space and the base address of the virtual storage space was established by the CPU, the DSP may first obtain the location indication information corresponding to the information of the virtual storage space, and then acquire the data to be processed from the first buffer space corresponding to that location indication information. In the case where the correspondence was established by the buffer, the DSP may directly send to the buffer a data acquisition request carrying the information of the virtual storage space; after receiving the request from the DSP, the buffer obtains the location indication information corresponding to that information according to the correspondence between location indication information and information of the virtual storage space, acquires the data to be processed from the first buffer space corresponding to the location indication information, and returns the data to be processed to the DSP.
403. Process the data to be processed.
After acquiring the data to be processed from the virtual storage space corresponding to the information of the virtual storage space, the DSP processes it. The processing of the data to be processed may include convolution processing, and may also include fully-connected processing and other processing.
404. Store the processing result of the data to be processed in the first buffer space corresponding to the information of the virtual storage space.
After processing the data to be processed, the DSP may store the processing result in the first buffer space corresponding to the information of the virtual storage space. It may send a storage instruction to the buffer, the storage instruction carrying the processing result and the information of the virtual storage space; after receiving the storage instruction from the DSP, the buffer stores the processing result in the first buffer space.
After storing the processing result in the first buffer space corresponding to the information of the virtual storage space, the DSP may send a processing-complete response message to the CPU, so that the CPU can retrieve the processing result from the buffer and the buffer space can be cleaned up in a timely manner.
In the information processing method described in FIG. 4, the DSP can acquire and process data from the virtual storage space at once through the information of the virtual storage space, which improves data processing efficiency.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of yet another information processing method provided by an embodiment of the present disclosure, described from the perspectives of both the CPU and the DSP. As shown in FIG. 5, the information processing method may include the following steps.
501. The CPU acquires data to be processed.
Step 501 is the same as step 201; for a detailed description, refer to step 201, which is not repeated here.
502. The CPU determines the size of the storage space required by the data to be processed.
Step 502 is the same as step 302; for a detailed description, refer to step 302, which is not repeated here.
503. The CPU sends, to the buffer, a request for applying for buffer space.
After determining the size of the storage space required by the data to be processed, the CPU may send to the buffer a request for applying for buffer space, the request carrying information about the size of the storage space required by the data to be processed.
504. The buffer sends the location indication information of the first buffer space to the CPU.
After receiving the request from the CPU, the buffer selects the first buffer space from the free buffer space according to the storage space size corresponding to the information, and then sends the location indication information of the first buffer space to the CPU, for example a pointer indicating the base address of the first buffer space.
505. The CPU determines, based on the location indication information, the physical storage address corresponding to the virtual storage space.
After determining the size of the storage space required by the data to be processed, the CPU may allocate virtual storage space for the data accordingly, for example determining the offsets of the data within the virtual storage space. The offset may be the offset of the data of each array within the virtual storage space, or the offset of the data of each array together with its corresponding result data. The offset is determined relative to the base address of the virtual storage space. For other related descriptions, refer to the foregoing embodiments.
After receiving the location indication information from the buffer, the CPU may determine the physical storage address corresponding to the virtual storage space based on the location indication information, that is, determine the base address of the virtual storage space, and thereby the actual storage location of the data to be processed.
506. The CPU stores the data to be processed in the virtual storage space, that is, in the first buffer space indicated by the location indication information.
Step 506 is similar to step 203; for a detailed description, refer to step 203, which is not repeated here.
507. The CPU sends, to the DSP, a data processing instruction carrying information of the virtual storage space.
Correspondingly, the DSP receives the data processing instruction from the CPU.
Step 507 is the same as step 204; for a detailed description, refer to step 204, which is not repeated here.
The DSP receiving the data processing instruction from the CPU is the same as step 401; for a detailed description, refer to step 401, which is not repeated here.
508. The DSP acquires the data to be processed from the first buffer space corresponding to the information of the virtual storage space.
Step 508 is similar to step 402; for a detailed description, refer to step 402, which is not repeated here.
509. The DSP processes the data to be processed.
Step 509 is the same as step 403; for a detailed description, refer to step 403, which is not repeated here.
510. The DSP stores the processing result of the data to be processed in the first buffer space corresponding to the information of the virtual storage space.
Step 510 is the same as step 404; for a detailed description, refer to step 404, which is not repeated here.
In some embodiments of the present disclosure, the CPU is responsible for parsing the neural network model and computing the space size required by each array involved in the data processing of the neural network model. According to the computed space sizes, the CPU applies for space (that is, virtual storage space) from a virtual heap. The result returned by this allocation is not an ordinary pointer, but an offset relative to the base address of the heap. After the allocations are complete, the CPU totals the required space and allocates a region of that total size on the ION Buffer, obtaining the actual base address. Then, the CPU writes the parameters and data required to run the neural network model into the applied-for virtual storage space, that is, actually into the region allocated on the ION Buffer corresponding to that virtual space; the specific write address can be computed from the base address and the offset. The CPU then initiates a FastRPC call and passes this virtual storage space to the DSP through FastRPC. In this way, the CPU and the DSP can both share the virtual storage space. The DSP can then locate the data in the ION Buffer according to the address information of the virtual storage space, start computation, store the computation results at the corresponding locations in that space, and return them to the CPU.
In this way, the whole data processing process performs only one FastRPC call, and this call passes only one array associated with the virtual storage space, thereby minimizing extra overhead while satisfying the requirements of the FastRPC call.
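The flow above can be mimicked in miniature, with a bytearray standing in for the region allocated on the ION Buffer and a plain function call standing in for the FastRPC invocation (a sketch under those assumptions; the offsets and the doubling kernel are invented, and none of this is Hexagon SDK API):

```python
import struct

# A bytearray stands in for the region allocated on the ION Buffer.
ion_buffer = bytearray(32)

# Offsets previously handed out by the virtual heap (hypothetical values).
input_off, result_off = 0, 16

# CPU side: write the input data at base + offset.
for i, value in enumerate([1.0, 2.0, 3.0, 4.0]):
    struct.pack_into("f", ion_buffer, input_off + 4 * i, value)


def dsp_kernel(buffer, in_off, out_off, n):
    """DSP side: parse inputs from the shared buffer, write results back."""
    for i in range(n):
        (x,) = struct.unpack_from("f", buffer, in_off + 4 * i)
        struct.pack_into("f", buffer, out_off + 4 * i, x * 2.0)  # toy compute


dsp_kernel(ion_buffer, input_off, result_off, 4)  # the single "RPC" call
results = [struct.unpack_from("f", ion_buffer, result_off + 4 * i)[0]
           for i in range(4)]
```

Because both sides address the same buffer through base + offset, one call suffices to hand over all inputs and to collect all outputs, which is the overhead reduction the paragraph above describes.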
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of an information processing apparatus provided by an embodiment of the present disclosure. As shown in FIG. 6, the information processing apparatus may include: an acquiring unit 601 configured to acquire data to be processed; an allocating unit 602 configured to allocate virtual storage space for the data to be processed; a storage unit 603 configured to store the data to be processed in the virtual storage space; and a sending unit 604 configured to send, to the DSP, a data processing instruction carrying information of the virtual storage space, the data processing instruction being used by the DSP to acquire the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
In some embodiments, the virtual storage space corresponds to the first buffer space in the buffer of the electronic device; the storage unit 603 is specifically configured to store the data to be processed in the first buffer space; and the DSP acquiring the data to be processed from the virtual storage space corresponding to the information includes: the DSP acquires the data to be processed from the first buffer space corresponding to the information.
In some embodiments, the allocating unit 602 is specifically configured to: apply to the buffer of the electronic device for buffer space based on the size of the storage space required by the data to be processed; and determine the physical storage location of the virtual storage space based on the location indication information of the first buffer space returned by the buffer of the electronic device.
In some embodiments, the data to be processed includes at least one array, each array including data of the same type, and the information processing apparatus may further include: a determining unit 605 configured to determine the size of the storage space required by each array in the at least one array, and to determine the size of the storage space required by the data to be processed based on the per-array sizes.
In some embodiments, the determining unit 605 is further configured to determine, according to the size of the storage space required by each array in the at least one array, the offset of each array within the virtual storage space.
In some embodiments, the determining unit 605 is configured to determine the size of the storage space required by the data to be processed based on the data amount of the data to be processed and the amount of the corresponding result data.
In some embodiments, the allocating unit 602 applying to the buffer of the electronic device for buffer space based on the size of the storage space required by the data to be processed includes: sending, to the buffer of the electronic device, a request for applying for buffer space, the request carrying information about the required storage space size; and receiving, from the buffer, location indication information used to indicate the base address of the first buffer space.
In some embodiments, the data to be processed is the network parameters and input data of network layers in a neural network.
This embodiment may correspond to the descriptions of the method embodiments of the present application, and the above and other operations and/or functions of the units implement the corresponding flows of the methods in FIG. 2 and FIG. 3, respectively; for brevity, details are not repeated here.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of another information processing apparatus provided by an embodiment of the present disclosure. As shown in FIG. 7, the information processing apparatus may include: a receiving unit 701 configured to receive a data processing instruction from the CPU, the data processing instruction carrying information of a virtual storage space; an acquiring unit 702 configured to acquire data to be processed from the virtual storage space corresponding to the information; and a processing unit 703 configured to process the data to be processed.
In some embodiments, the virtual storage space corresponding to the information corresponds to the first buffer space in the buffer of the electronic device; the acquiring unit 702 is specifically configured to acquire the data to be processed from the first buffer space corresponding to the information.
In some embodiments, the information processing apparatus may further include: a storage unit 704 configured to store the processing result of the data to be processed in the first buffer space corresponding to the information.
In some embodiments, the data to be processed is the network parameters and input data of network layers in a neural network.
This embodiment may correspond to the descriptions of the method embodiments of the present application, and the above and other operations and/or functions of the units implement the corresponding flows of the methods in FIG. 4; for brevity, details are not repeated here.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of yet another information processing apparatus provided by an embodiment of the present disclosure. The information processing apparatus can implement the functions of the CPU in the electronic device shown in FIG. 1. As shown in FIG. 8, the information processing apparatus may include: at least one processor 801, such as a CPU, a transceiver 802, and at least one bus 803. The bus 803 is used to implement connection and communication among these components.
In some embodiments, the processor 801 is configured to perform the following operations: acquire data to be processed; allocate virtual storage space for the data to be processed; store the data to be processed in the virtual storage space; and the transceiver 802 is configured to send, to the DSP, a data processing instruction carrying information of the virtual storage space, the data processing instruction being used by the DSP to acquire the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
In some embodiments, the virtual storage space corresponds to the first buffer space in the cache of the electronic device; the processor 801 storing the data to be processed in the virtual storage space includes: storing the data to be processed in the first buffer space; and the DSP acquiring the data to be processed from the virtual storage space corresponding to the information includes: the DSP acquires the data to be processed from the first buffer space corresponding to the information.
In some embodiments, the processor 801 allocating virtual storage space for the data to be processed includes: applying to the buffer of the electronic device for buffer space based on the size of the storage space required by the data to be processed; and determining the physical storage location of the virtual storage space based on the location indication information of the first buffer space returned by the buffer.
In some embodiments, the data to be processed includes at least one array, each array including data of the same type, and the processor 801 is further configured to perform the following operations: determine the size of the storage space required by each array in the at least one array; and determine the size of the storage space required by the data to be processed based on the per-array sizes.
In some embodiments, the processor 801 is further configured to determine, according to the size of the storage space required by each array in the at least one array, the offset of each array within the virtual storage space.
In some embodiments, the processor 801 is further configured to determine the size of the storage space required by the data to be processed based on the data amount of the data to be processed and the amount of the corresponding result data.
In some embodiments, the processor 801 applying to the buffer of the electronic device for buffer space based on the required storage space size includes: sending, to the buffer in the electronic device, a request for applying for buffer space, the request carrying information about the required storage space size; and receiving, from the buffer, location indication information used to indicate the base address of the first buffer space.
In some embodiments, the data to be processed is the network parameters and input data of network layers in a neural network.
Steps 201 to 203, steps 301 to 304, and steps 501 to 504 may be executed by the processor 801 in the CPU, and step 204, step 305, and step 505 may be executed by the transceiver 802 in the CPU.
The acquiring unit 601, the allocating unit 602, the storage unit 603, and the determining unit 605 may be implemented by the processor 801 in the CPU, and the sending unit 604 may be implemented by the transceiver 802 in the CPU.
The foregoing information processing apparatus may also be used to execute the methods executed in the foregoing method embodiments, and details are not repeated here.
In other embodiments, the information processing apparatus can implement the functions of the DSP in the electronic device shown in FIG. 1, where: the transceiver 802 is configured to receive a data processing instruction from the CPU, the data processing instruction carrying information of a virtual storage space; and the processor 801 is configured to perform the following operations: acquire data to be processed from the virtual storage space corresponding to the information; and process the data to be processed.
In some embodiments, the virtual storage space corresponding to the information corresponds to the first buffer space in the buffer of the electronic device; the processor 801 acquiring the data to be processed from the virtual storage space corresponding to the information includes: acquiring the data to be processed from the first buffer space corresponding to the information.
In some embodiments, the processor 801 is further configured to perform the following operation: store the processing result of the data to be processed in the first buffer space corresponding to the information.
In some embodiments, the data to be processed is the network parameters and input data of network layers in a neural network.
Steps 402 to 404 and steps 506 to 508 may be executed by the processor 801, and the steps of receiving the data processing instruction in step 204, step 305, and step 505, as well as step 401, may be executed by the transceiver 802.
The acquiring unit 702, the processing unit 703, and the storage unit 704 may be implemented by the processor 801, and the receiving unit 701 may be implemented by the transceiver 802.
The foregoing information processing apparatus may also be used to execute the methods executed in the foregoing method embodiments, and details are not repeated here.
Some embodiments provide a storage medium for storing an application program, the application program being used to execute, at runtime, the information processing methods of FIG. 2 to FIG. 4.
Some embodiments provide an application program used to execute, at runtime, the information processing methods of FIG. 2 to FIG. 4.
Those of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by hardware related to program instructions; the program can be stored in a computer-readable memory, and the memory may include a flash disk, ROM, RAM, a magnetic disk, an optical disc, or the like.

Claims (27)

  1. An information processing method, characterized by comprising:
    a central processing unit (CPU) acquiring data to be processed;
    allocating virtual storage space for the data to be processed;
    storing the data to be processed in the virtual storage space;
    sending, to a digital signal processor (DSP), a data processing instruction carrying information of the virtual storage space, wherein the data processing instruction is used by the DSP to acquire the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
  2. The method according to claim 1, characterized in that the virtual storage space corresponds to a first buffer space in a buffer of the electronic device to which the CPU belongs;
    the storing the data to be processed in the virtual storage space comprises:
    the CPU storing the data to be processed in the first buffer space;
    the DSP acquiring the data to be processed from the virtual storage space corresponding to the information comprises:
    the DSP acquiring the data to be processed from the first buffer space corresponding to the information.
  3. The method according to claim 2, characterized in that the allocating virtual storage space for the data to be processed comprises:
    applying to the buffer of the electronic device for buffer space based on a size of storage space required by the data to be processed;
    determining a physical storage location of the virtual storage space based on location indication information of the first buffer space returned by the buffer of the electronic device.
  4. The method according to claim 3, characterized in that the applying to the buffer of the electronic device for buffer space based on the size of storage space required by the data to be processed comprises:
    sending, to the buffer of the electronic device, a request for applying for buffer space, the request carrying information about the size of storage space required by the data to be processed;
    receiving location indication information from the buffer, the location indication information being used to indicate a base address of the first buffer space.
  5. The method according to any one of claims 1 to 3, characterized in that the data to be processed comprises at least one array, each array comprising data of the same type, and the method further comprises:
    determining a size of storage space required by each array in the at least one array;
    determining the size of storage space required by the data to be processed based on the size of storage space required by each array in the at least one array.
  6. The method according to claim 5, characterized in that the method further comprises:
    determining, according to the size of storage space required by each array in the at least one array, an offset of each array in the at least one array within the virtual storage space.
  7. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    determining the size of storage space required by the data to be processed based on a data amount of the data to be processed and an amount of result data corresponding to the data to be processed.
  8. The method according to any one of claims 1 to 7, characterized in that the data to be processed comprises network parameters and input data of a target network layer in a neural network model.
  9. An information processing method, characterized by comprising:
    a digital signal processor (DSP) receiving a data processing instruction from a central processing unit (CPU), the data processing instruction carrying information of a virtual storage space;
    acquiring data to be processed from the virtual storage space corresponding to the information;
    performing, on the data to be processed, the processing operation indicated by the data processing instruction.
  10. The method according to claim 9, characterized in that the virtual storage space corresponding to the information corresponds to a first buffer space in a buffer of the electronic device to which the CPU belongs;
    the acquiring data to be processed from the virtual storage space corresponding to the information comprises:
    acquiring the data to be processed from the first buffer space corresponding to the information.
  11. The method according to claim 9 or 10, characterized in that the method further comprises:
    storing a processing result of the data to be processed in the first buffer space corresponding to the information.
  12. The method according to any one of claims 9 to 11, characterized in that the data to be processed is network parameters and input data of a target network layer in a neural network model.
  13. An information processing apparatus, characterized by comprising:
    an acquiring unit configured to acquire data to be processed;
    an allocating unit configured to allocate virtual storage space for the data to be processed;
    a storage unit configured to store the data to be processed in the virtual storage space;
    a sending unit configured to send, to a digital signal processor (DSP), a data processing instruction carrying information of the virtual storage space, wherein the data processing instruction is used by the DSP to acquire the data to be processed from the virtual storage space corresponding to the information and to process the data to be processed.
  14. The apparatus according to claim 13, characterized in that the virtual storage space corresponds to a first buffer space in a buffer of an electronic device;
    the storage unit is specifically configured to store the data to be processed in the first buffer space;
    the DSP acquiring the data to be processed from the virtual storage space corresponding to the information comprises:
    the DSP acquiring the data to be processed from the first buffer space corresponding to the information.
  15. The apparatus according to claim 14, characterized in that the allocating unit is specifically configured to:
    apply to the buffer of the electronic device for buffer space based on a size of storage space required by the data to be processed;
    determine a physical storage location of the virtual storage space based on location indication information of the first buffer space returned by the buffer of the electronic device.
  16. The apparatus according to claim 15, characterized in that the allocating unit applying to the buffer of the electronic device for buffer space based on the size of storage space required by the data to be processed comprises:
    sending, to the buffer of the electronic device, a request for applying for buffer space, the request carrying information about the size of storage space required by the data to be processed;
    receiving location indication information from the buffer, the location indication information being used to indicate a base address of the first buffer space.
  17. The apparatus according to any one of claims 13 to 15, characterized in that the data to be processed comprises at least one array, each array comprising data of the same type, and the apparatus further comprises:
    a first determining unit configured to determine a size of storage space required by each array in the at least one array;
    the first determining unit being further configured to determine the size of storage space required by the data to be processed based on the size of storage space required by each array in the at least one array.
  18. The apparatus according to claim 16, characterized in that the first determining unit is further configured to determine, according to the size of storage space required by each array in the at least one array, an offset of each array in the at least one array within the virtual storage space.
  19. The apparatus according to any one of claims 13 to 15, characterized in that the apparatus further comprises:
    a second determining unit configured to determine the size of storage space required by the data to be processed based on a data amount of the data to be processed and an amount of result data corresponding to the data to be processed.
  20. The apparatus according to any one of claims 13 to 19, characterized in that the data to be processed comprises network parameters and input data of a target network layer in a neural network model.
  21. An information processing apparatus, characterized by comprising:
    a receiving unit configured to receive a data processing instruction from a central processing unit (CPU), the data processing instruction carrying information of a virtual storage space;
    an acquiring unit configured to acquire data to be processed from the virtual storage space corresponding to the information;
    a processing unit configured to process the data to be processed.
  22. The apparatus according to claim 21, characterized in that the virtual storage space corresponding to the information corresponds to a first buffer space in a buffer of an electronic device;
    the acquiring unit being specifically configured to acquire the data to be processed from the first buffer space corresponding to the information.
  23. The apparatus according to claim 21 or 22, characterized in that the apparatus further comprises:
    a storage unit configured to store a processing result of the data to be processed in the first buffer space corresponding to the information.
  24. The apparatus according to any one of claims 21 to 23, characterized in that the data to be processed is network parameters and input data of a target network layer in a neural network model.
  25. An information processing apparatus, characterized by comprising a processor and a memory, wherein the memory is configured to store computer instructions, and the processor is configured to invoke the computer instructions stored in the memory to execute the information processing method according to any one of claims 1 to 12.
  26. An electronic device, characterized by comprising the information processing apparatus according to any one of claims 13 to 20 and the information processing apparatus according to any one of claims 21 to 24.
  27. A readable storage medium, characterized in that the readable storage medium stores a computer program, and when the computer program is executed by a processor, the information processing method according to any one of claims 1 to 12 is implemented.
PCT/CN2020/103047 2019-08-06 2020-07-20 Information processing method and apparatus, electronic device, and storage medium WO2021023000A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020217019945A KR20210094629A (ko) 2019-08-06 2020-07-20 Information processing method, apparatus, electronic device, and recording medium
JP2021535674A JP2022514382A (ja) 2019-08-06 2020-07-20 Information processing method, apparatus, electronic device, and recording medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910733625.1A 2019-08-06 Information processing method and apparatus, electronic device, and storage medium
CN201910733625.1 2019-08-06

Publications (1)

Publication Number Publication Date
WO2021023000A1 true WO2021023000A1 (zh) 2021-02-11

Family

ID=68549633

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103047 WO2021023000A1 (zh) 2019-08-06 2020-07-20 信息处理方法、装置、电子设备及存储介质

Country Status (5)

Country Link
JP (1) JP2022514382A (zh)
KR (1) KR20210094629A (zh)
CN (1) CN110489356B (zh)
TW (1) TWI782304B (zh)
WO (1) WO2021023000A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110489356B (zh) * 2019-08-06 2022-02-22 上海商汤智能科技有限公司 Information processing method and apparatus, electronic device, and storage medium
CN113342553A (zh) * 2021-07-06 2021-09-03 阳光保险集团股份有限公司 Data acquisition method and apparatus, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040034748A1 (en) * 2002-08-13 2004-02-19 Renesas Technology Corp. Memory device containing arbiter performing arbitration for bus access right
CN104601711A (zh) * 2015-01-27 2015-05-06 曙光云计算技术有限公司 FPGA-based data storage method and system for cloud servers
CN105589829A (zh) * 2014-09-15 2016-05-18 华为技术有限公司 Data processing method, apparatus and system based on multi-core processor chip
CN106339258A (zh) * 2016-08-10 2017-01-18 西安诺瓦电子科技有限公司 Management method and apparatus for memory shared by a programmable logic device and a microprocessor
CN108920413A (zh) * 2018-06-28 2018-11-30 中国人民解放军国防科技大学 GPDSP-oriented multi-core parallel computing method for convolutional neural networks
CN110489356A (zh) * 2019-08-06 2019-11-22 上海商汤智能科技有限公司 Information processing method and apparatus, electronic device, and storage medium

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7873810B2 (en) * 2004-10-01 2011-01-18 Mips Technologies, Inc. Microprocessor instruction using address index values to enable access of a virtual buffer in circular fashion
US20060179273A1 (en) * 2005-02-09 2006-08-10 Advanced Micro Devices, Inc. Data processor adapted for efficient digital signal processing and method therefor
CN101000596A (zh) * 2007-01-22 2007-07-18 北京中星微电子有限公司 Chip capable of realizing communication among multiple cores within a chip, and communication method
US8359453B2 (en) * 2010-09-13 2013-01-22 International Business Machines Corporation Real address accessing in a coprocessor executing on behalf of an unprivileged process
US9164804B2 (en) * 2012-06-20 2015-10-20 Memory Technologies Llc Virtual memory module
US9218289B2 (en) * 2012-08-06 2015-12-22 Qualcomm Incorporated Multi-core compute cache coherency with a release consistency memory ordering model
CN104317768B (zh) * 2014-10-15 2017-02-15 中国人民解放军国防科学技术大学 Matrix multiplication acceleration method for CPU+DSP heterogeneous systems
US10049327B2 (en) * 2014-12-12 2018-08-14 Qualcomm Incorporated Application characterization for machine learning on heterogeneous core devices
CN105045763B (zh) * 2015-07-14 2018-07-13 北京航空航天大学 PD radar signal processing system based on FPGA plus multi-core DSP and parallel implementation method thereof
US9626295B2 (en) * 2015-07-23 2017-04-18 Qualcomm Incorporated Systems and methods for scheduling tasks in a heterogeneous processor cluster architecture using cache demand monitoring
US20190004878A1 (en) * 2017-07-01 2019-01-03 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with security, power reduction, and performace features
CN107463510B (zh) * 2017-08-21 2020-05-08 北京工业大学 High-performance-oriented buffer management method for a heterogeneous multi-core shared cache
CN109034382A (zh) * 2017-10-30 2018-12-18 上海寒武纪信息科技有限公司 Scene or object recognition method and related products
CN108959103A (zh) * 2018-07-31 2018-12-07 西安电子科技大学 Software testing method based on BWDSP library functions
CN109947680A (zh) * 2019-01-16 2019-06-28 佛山市顺德区中山大学研究院 DSP-based software running speed optimization method

Also Published As

Publication number Publication date
CN110489356A (zh) 2019-11-22
TW202107288A (zh) 2021-02-16
JP2022514382A (ja) 2022-02-10
KR20210094629A (ko) 2021-07-29
TWI782304B (zh) 2022-11-01
CN110489356B (zh) 2022-02-22

Similar Documents

Publication Publication Date Title
US7818503B2 (en) Method and apparatus for memory utilization
US8266289B2 (en) Concurrent data processing in a distributed system
US9639459B2 (en) I/O latency and IOPs performance in thin provisioned volumes
WO2021023000A1 (zh) 信息处理方法、装置、电子设备及存储介质
US10116746B2 (en) Data storage method and network interface card
US10235047B2 (en) Memory management method, apparatus, and system
CN106161110A (zh) 一种网络设备中的数据处理方法及系统
US20210117333A1 (en) Providing direct data access between accelerators and storage in a computing environment, wherein the direct data access is independent of host cpu and the host cpu transfers object map identifying object of the data
CN109240617A (zh) Write request processing method, apparatus, device, and storage medium for a distributed storage system
CN104951239B (zh) Cache drive, host bus adapter, and methods of using the same
CN113760560A (zh) Inter-process communication method and inter-process communication apparatus
US7660964B2 (en) Windowing external block translations
WO2022062833A1 (zh) Memory allocation method and related device
WO2018119709A1 (zh) Memory access method, apparatus, and electronic device for multiple operating systems
US20140149528A1 (en) Mpi communication of gpu buffers
WO2018103022A1 (zh) Frame buffer implementation method, apparatus, electronic device, and computer program product
EP1557755A1 (en) Method for transferring data in a multiprocessor system, multiprocessor system and processor carrying out this method.
CN113296691B (zh) Data processing system, method, apparatus, and electronic device
CN109558250A (zh) FPGA-based communication method, device, host, and heterogeneous acceleration system
CN107209738A (zh) Storage memory direct access
US11609869B2 (en) Systems, methods, and devices for time synchronized storage delivery
EP1839148A2 (en) Transferring data between system and storage in a shared buffer
TWI530787B (zh) 電子裝置以及資料寫入方法
US20140143457A1 (en) Determining a mapping mode for a dma data transfer
CN109992217A (zh) Quality of service control method, apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20850823

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021535674

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20217019945

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20850823

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.09.2022)
