CN115964335A - Data processing method and device and electronic equipment - Google Patents


Info

Publication number
CN115964335A
CN115964335A (application CN202211616573.8A)
Authority
CN
China
Prior art keywords
data
neural network
dimensional data
width
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211616573.8A
Other languages
Chinese (zh)
Inventor
张韵东 (Zhang Yundong)
刘学丰 (Liu Xuefeng)
刘小涛 (Liu Xiaotao)
Current Assignee
Zhongxing Micro Technology Co ltd
Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co ltd
Vimicro Corp
Original Assignee
Zhongxing Micro Technology Co ltd
Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co ltd
Vimicro Corp
Priority date
Filing date
Publication date
Application filed by Zhongxing Micro Technology Co ltd, Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co ltd, Vimicro Corp filed Critical Zhongxing Micro Technology Co ltd
Priority to CN202211616573.8A
Publication of CN115964335A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

A data processing method and device and electronic equipment are provided. The data processing method comprises the following steps: acquiring sampled data of an electromagnetic signal, wherein the sampled data is one-dimensional data; segmenting the one-dimensional data into a plurality of data segments; converting the plurality of data segments into two-dimensional data such that the width of the two-dimensional data is less than the width of the neural network model supported by the neural network processor; and processing the two-dimensional data based on the neural network processor. The embodiments of the application help solve the problem that electromagnetic-signal data whose width is too large cannot be deployed in a neural network model, and also help improve processing efficiency and reduce latency by processing the losslessly converted two-dimensional data in parallel.

Description

Data processing method and device and electronic equipment
Technical Field
The embodiments of the present application relate to the technical field of data processing, and more particularly to a data processing method, a data processing apparatus, and an electronic device.
Background
With the development of automation and intelligence in electronic devices, terminal devices generally process massive multimedia data such as videos and images based on a neural network processor. However, the neural network processor is limited by hardware resources: the width that its neural network model can support is limited, generally no more than 4096 bits. For sampled electromagnetic-signal data of larger width (for example, greater than 10000 bits), the neural network model can hardly provide support and cannot meet the usage requirements.
Disclosure of Invention
The embodiment of the application provides a data processing method and device and electronic equipment. Various aspects of embodiments of the present application are described below.
In a first aspect, a method for data processing is provided, including: acquiring sampled data of an electromagnetic signal, wherein the sampled data is one-dimensional data; segmenting the one-dimensional data into a plurality of data segments; converting the plurality of data segments into two-dimensional data such that a width of the two-dimensional data is less than a width of a neural network model supported by a neural network processor; and processing the two-dimensional data based on the neural network processor.
In a second aspect, an apparatus for data processing is provided, including: the general processor is used for acquiring sampling data of the electromagnetic signals, and the sampling data are one-dimensional data; dividing the one-dimensional data into a plurality of data segments, and converting the plurality of data segments into two-dimensional data; a neural network processor for processing the two-dimensional data; wherein a width of the two-dimensional data is less than a width of a neural network model supported by the neural network processor.
In a third aspect, an electronic device is provided, which comprises the data processing apparatus according to the second aspect, and a memory.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program for executing the method according to the first aspect.
The embodiments of the present application can losslessly convert the one-dimensional data of a wide electromagnetic signal into two-dimensional data, so that the width of the two-dimensional data is smaller than the width of the neural network model supported by the neural network processor, and the two-dimensional data is processed in parallel by the neural network model. This helps solve the problem that electromagnetic-signal data whose width is too large cannot be deployed in the neural network model, and also helps improve processing efficiency and reduce latency through parallel processing of the losslessly converted two-dimensional data.
Drawings
Fig. 1 is a schematic flowchart of a data processing method according to an embodiment of the present application.
FIG. 2 is a diagram illustrating a calculation sequence of a convolution operation.
Fig. 3 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
With the development of automation and intelligence in electronic devices, many terminal devices have a special-purpose processor, such as a neural network processor, in addition to a general-purpose processor. Neural network processors are also known as Neural-network Processing Units (NPUs). An NPU generally adopts a "data-driven parallel computing" architecture and is particularly good at processing massive multimedia data such as video and images, as well as pattern recognition.
It should be understood that a terminal device is also referred to as an end-side device and may also be referred to as a client, a terminal, a portable terminal, a mobile terminal, a communication terminal, a portable mobile terminal, an edge-side device, a touch-screen device, etc. For example, the terminal device may be, but is not limited to, a smartphone, digital camera, video camera, notebook computer, tablet computer, portable phone, game machine, television, display unit, Personal Media Player (PMP), Personal Digital Assistant (PDA), computer-controlled robot, in-vehicle terminal, security device (such as a surveillance camera, smoke alarm, or fire-extinguishing device), smart speaker, and the like. The terminal device may also be a pocket-sized portable communication terminal having a wireless communication function.
The NPU generally processes multimedia data such as video and image using a neural network model. Neural Networks (NNs), or so-called artificial neural networks, are mathematical or computational models that contain a set of neurons connected to one another in order to model the relationships between inputs and outputs or to discover patterns in data. The neural network may be classified into a one-dimensional neural network, a two-dimensional neural network, a three-dimensional neural network, and the like according to the dimension of processing data. The model of the neural network may include a deep neural network, a Convolutional Neural Network (CNN), and the like. In recent years, a Visual Neural Network (VNN) represented by a convolutional neural network has been rapidly developed. VNN has become a common tool for most recognition tasks, and is widely used in many fields such as image classification, target detection, face detection and recognition, semantic segmentation, and visual question answering.
The visual neural network is also a two-dimensional neural network, whose main parameters may include (N, H, W, C). Where N is the number of batches (batch), e.g., the number of image frames; h is height (height), such as the number of pixels in the height direction of the feature map; w is width (width), such as the number of pixels in the width direction of the feature map; c is the number of channels (channels), for example, the number of channels of a color RGB image is 3.
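As a concrete illustration of the (N, H, W, C) parameters (the numbers below are illustrative, not taken from the patent):

```python
# (N, H, W, C) layout: batch, height, width, channels.
# Two RGB frames at 1920 x 1080 (illustrative values only):
n, h, w, c = 2, 1080, 1920, 3

# Total number of scalar values held by such a tensor:
total_values = n * h * w * c
assert total_values == 12441600
```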
In a neural network, the height and width parameters generally represent the resolution of the image, and the height value is generally smaller than the width value. The neural network can process image data within its parameter range but cannot directly process image data beyond that range. For example, a neural network supporting a resolution of 4K (4096 × 2160) can process image data with a resolution of 1920 × 1080, but cannot losslessly process image data with a resolution of 8K (7680 × 4320). A visual neural network model may also process one-dimensional data whose width is less than the width supported by the model.
As the resolution of a sensor increases, i.e., as the number of its sampling points grows, the electromagnetic-signal data output by the sensor generally has a larger width. The width of the data indicates the size of the space the data occupies; the basic units of data width are bytes and bits, where a bit is a binary digit. For example, the width of the electromagnetic-signal data output by a radar sensor collecting weather information is usually large, such as 16384 bits or 20480 bits. However, due to terminal-device cost and chip hardware resources, the signal width that the neural network model in an NPU can currently support does not exceed 4096 bits. For electromagnetic-signal data of large width, the neural network model can hardly support input of such scale without loss, and therefore cannot meet the requirements of the application scenarios.
Therefore, how to develop a solution for neural network processor deployment that supports electromagnetic signal data of a large width is a problem to be solved.
Based on this, the embodiment of the present application provides a data processing method. The embodiment of the application can convert the one-dimensional data of the ultra-wide electromagnetic signals into the two-dimensional data in a lossless manner, so that the neural network can process the two-dimensional data in parallel.
Fig. 1 is a schematic flowchart of a data processing method according to an embodiment of the present application. The method for processing data according to the embodiment of the present application is described in detail below with reference to fig. 1. As shown in fig. 1, the method may mainly include steps S110 to S130, which are described in detail below.
It should be noted that, the sequence numbers of the steps in the embodiments of the present application do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In step S110, sampling data of the electromagnetic signal is acquired, and the sampling data is one-dimensional data.
The electromagnetic signal is the output signal of a sensor that acquires specific information. The number of sampling points of the electromagnetic signal is generally large (in other words, the resolution is high), i.e., the width of the sampled data of the electromagnetic signal is large. The width of the sampled data may be expressed in binary bits, and sampled data formed of a plurality of binary bits is one-dimensional data. One-dimensional data may also be understood as a data sequence formed by a plurality of binary bits. For example, the width of the sampled data of the electromagnetic signal output by a radar collecting weather information may be 16384 bits, i.e., the width of the one-dimensional data of the electromagnetic signal is 16384 bits.
The neural network model in an NPU generally supports a limited signal width (e.g., no more than 4096 bits). One-dimensional electromagnetic-signal data of larger width exceeds the width supported by the neural network model, so the model can hardly process such ultra-wide one-dimensional data losslessly.
In step S120, the one-dimensional data is divided into a plurality of data segments, and the plurality of data segments are converted into two-dimensional data such that the width of the two-dimensional data is smaller than the width of the neural network model supported by the neural network processor.
The one-dimensional data of the wide electromagnetic signal is divided into a plurality of data segments, the width of each data segment being smaller than the width of the neural network model supported by the neural network processor. The plurality of data segments is losslessly converted into two-dimensional data, which may take the form of an array or a matrix. The two dimensions can be used to indicate, for example, the number of pixels in the image height direction and the number of pixels in the width direction. Thus, the width of the two-dimensional data is less than the width of the neural network model supported by the neural network processor. Typically the width value of the neural network model is greater than its height parameter.
In some embodiments, the one-dimensional data of the wide electromagnetic signal may be divided equally into a plurality of data segments of the same width, each data segment having a width less than the width of the neural network model supported by the neural network processor. In other embodiments, the one-dimensional data may instead be divided into a plurality of data segments of different widths, with the width of each data segment still smaller than the width of the neural network model supported by the neural network processor. For example, one-dimensional data of an electromagnetic signal having a width of 20480 bits may be divided equally into 32 data segments, each having a width of 640 bits; it may also be divided equally into 16 data segments, each having a width of 1280 bits.
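The equal-width segmentation described above can be sketched as follows; the function name and the list-of-lists representation are illustrative assumptions, not part of the patent:

```python
def segment_to_2d(bits, segment_width):
    """Split a 1-D bit sequence into equal-width rows, forming 2-D data.

    `bits` is any sequence (here a Python list standing in for a bit
    sequence); `segment_width` must divide its length exactly so the
    conversion is lossless.
    """
    if len(bits) % segment_width != 0:
        raise ValueError("segment_width must divide the data width exactly")
    return [list(bits[i:i + segment_width])
            for i in range(0, len(bits), segment_width)]

# The example from the text: a 20480-bit signal split into 32 rows of 640 bits.
signal = [0] * 20480
rows = segment_to_2d(signal, 640)
assert len(rows) == 32 and all(len(r) == 640 for r in rows)
```

The same call with `segment_width=1280` yields the 16-segment variant from the text.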
In some embodiments, the segmented data segments have no overlapping data. In other embodiments, there is overlapping data between the segmented data segments, to accommodate the needs of subsequent processing.
Different neural network models have different structural features, such as different neuron (node) arrangements and different network depths. For example, the sizes and numbers of convolution kernels differ between layers of a convolutional neural network model, and the padding amount and stride set for the convolution computation affect the mapping between input data and output data. Therefore, if the two-dimensional data is not adjusted according to factors such as the structural features of the applied neural network model and the kernel size or padding amount of each layer, valid data may be lost during processing by some layer of the network.
In some implementations, padding data is added at the head and/or tail of the first data segment according to one or more of the structural features, convolution kernels, and padding amount of the neural network model, so that the result after network processing retains the valid data. Here the first data segment is any one of the plurality of data segments. In this way, the one-dimensional data is losslessly converted into two-dimensional data and processed by the neural network, with the calculation result guaranteed to be unchanged.
In some embodiments, the same padding data is added at the head and tail of multiple data segments. In some embodiments, different padding data may be added to the head and tail of the multiple data segments to adapt to the structural characteristics of the neural network model and the size of each layer of convolution kernel.
In some implementations, for a plurality of data segments with data overlap, the size of the data overlap between the segments needs to be calculated. Padding data is then added at the head and/or tail of the first data segment according to one or more of the structural features, convolution kernels, and padding amount of the neural network model, together with the data-overlap size between the segments, so that the result after network processing retains the valid data.
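One way to realize the overlapping segmentation is to overlap adjacent segments by kernel_size − 1 samples, so that a stride-1 "valid" convolution applied row by row loses no outputs that straddle a segment boundary. The sketch below rests on that assumption; the actual overlap and padding in the patent depend on the model's layer structure:

```python
def segment_with_overlap(data, segment_width, kernel_size):
    """Segment 1-D data with (kernel_size - 1) samples of overlap between
    adjacent segments, zero-padding the tail of the last segment if needed.

    With a stride-1 'valid' convolution, this overlap guarantees that every
    output the full-width convolution would produce is produced by some row.
    """
    overlap = kernel_size - 1
    step = segment_width - overlap
    segments = []
    for start in range(0, len(data) - overlap, step):
        seg = list(data[start:start + segment_width])
        if len(seg) < segment_width:            # pad tail so all rows match
            seg += [0] * (segment_width - len(seg))
        segments.append(seg)
    return segments

segs = segment_with_overlap(list(range(10)), segment_width=4, kernel_size=3)
# Adjacent rows share kernel_size - 1 = 2 samples:
assert segs == [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```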
In step S130, the two-dimensional data is processed based on the neural network processor.
A neural network processor usually adopts a neural network model to process multimedia data such as videos and images, and the neural network model can also process one-dimensional data whose width is smaller than the width supported by the model. The following description takes a common visual neural network model as an example. In a visual neural network, the height and width parameters correspond to the height and width resolution of the images that can be processed, and the network can process image data within that parameter range. For example, a neural network with height and width parameters of 4096 × 2160 can process image data with a resolution of 1920 × 1080. Because the width of the converted two-dimensional data is smaller than that of the neural network model, the neural network model can process the two-dimensional data.
A neural network generally includes multiple layers; each layer has input data (or a feature vector), a convolution kernel, and multiple channels, and produces output data (or a feature vector) after computation. The operation between the input feature vector and the convolution kernel in each layer usually involves a large number of multiply-accumulate operations. The following description takes a convolution operation as an example.
FIG. 2 is a diagram illustrating the calculation sequence of a convolution operation. As shown in fig. 2, the size of the input data is 4 × 4, the convolution kernel size is 3 × 3, the stride S is 1, the padding P is 0, and the size of the output data is 2 × 2. The output size is calculated as 2 = (4 − 3 + 2 × 0) / 1 + 1.
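The output-size formula used above generalizes to any input size, kernel size, padding, and stride; this helper simply restates it:

```python
def conv_output_size(input_size, kernel_size, padding=0, stride=1):
    """Output length of a convolution along one axis:
    out = (in - kernel + 2 * padding) // stride + 1."""
    return (input_size - kernel_size + 2 * padding) // stride + 1

# The Fig. 2 example: 4x4 input, 3x3 kernel, P = 0, S = 1 -> 2x2 output.
assert conv_output_size(4, 3, padding=0, stride=1) == 2
```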
For the input data, the convolution operation is applied by sliding the filter window, shown as the gray 3 × 3 regions in fig. 2, across the input at a fixed interval. As shown in fig. 2, the elements of the convolution kernel (or filter) at each position are multiplied by the corresponding elements of the input and then summed; this calculation is called a multiply-accumulate operation. The result is then saved to the corresponding location of the output. Performing this process at all positions yields the convolution output, as in steps (one), (two), (three), and (four) of fig. 2. It can be seen that a large number of multiply-accumulate operations are involved in a neural network, and some of them can be processed in parallel. For example, the operations of steps (one), (two), (three), and (four) in fig. 2 may be processed in parallel.
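A direct rendering of the sliding-window multiply-accumulate described above (a minimal reference sketch, not the NPU implementation):

```python
def convolve2d(inp, kernel):
    """'Valid' 2-D convolution by sliding the kernel over the input and
    performing one multiply-accumulate per output position.

    Each output position depends only on its own window, so the positions
    (steps (one)-(four) in Fig. 2) could in principle run in parallel.
    """
    ih, iw = len(inp), len(inp[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            acc = 0                              # accumulator for this window
            for i in range(kh):
                for j in range(kw):
                    acc += inp[r + i][c + j] * kernel[i][j]
            out[r][c] = acc
    return out

# A 4x4 input of ones with a 3x3 kernel of ones: every output element is 9.
inp = [[1] * 4 for _ in range(4)]
kernel = [[1] * 3 for _ in range(3)]
assert convolve2d(inp, kernel) == [[9, 9], [9, 9]]
```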
To improve processing efficiency, in some implementations the neural network processor may include multiple channels, such as 32 channels, which may be used for parallel computation. The neural network processor can thus perform the operations of the neural network model on the two-dimensional data in parallel across the channels, which helps reduce system latency and improve the timeliness of processing.
The neural network processor may also include a plurality of multiplier-adder calculation units. A multiply-accumulate (MAC) unit is an arithmetic unit in which the multiply and add operations are performed within the same instruction cycle; equivalently, one multiply-add result can be output per instruction cycle, reducing the execution latency of the whole multiply-add operation. Preferably, the neural network processor performs the operations of the neural network model on the two-dimensional data in parallel, based on the plurality of channels and the plurality of multiplier-adders, which helps reduce system latency.
Multiply-accumulate computation usually dominates in a neural network; the energy efficiency of the MAC computation affects the processing efficiency, and a low-area arrangement of the MAC calculation units also affects how much data must be moved for processing.
In some implementations, the multiplier-adders can be combined into a MAC array calculation unit, so that the MAC array matches the structural and operational characteristics of the neural network model. For example, to adapt the hardware to convolution calculations, the convolution operation can be transformed into a mathematically equivalent matrix calculation. The MAC array calculation unit in the neural network processor can then perform multi-channel, highly efficient parallel processing of the losslessly converted two-dimensional electromagnetic-signal data, improving processing efficiency and timeliness.
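The transformation of a convolution into an equivalent matrix calculation mentioned above is commonly done with an im2col-style unrolling; the sketch below (names and layout are illustrative assumptions, not the patent's design) unrolls each kernel-sized window into one row, so the whole convolution becomes a single matrix-vector product that a MAC array can evaluate in parallel:

```python
def im2col(inp, kh, kw):
    """Unroll every kh x kw window of `inp` into one row of a matrix."""
    ih, iw = len(inp), len(inp[0])
    return [[inp[r + i][c + j] for i in range(kh) for j in range(kw)]
            for r in range(ih - kh + 1)
            for c in range(iw - kw + 1)]

def conv_as_matmul(inp, kernel):
    """Convolution expressed as (unrolled input) x (flattened kernel),
    mathematically equivalent to the sliding-window form."""
    kh, kw = len(kernel), len(kernel[0])
    flat_k = [kernel[i][j] for i in range(kh) for j in range(kw)]
    return [sum(a * b for a, b in zip(row, flat_k))
            for row in im2col(inp, kh, kw)]

inp = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, 0], [0, 1]]
assert conv_as_matmul(inp, kernel) == [6, 8, 12, 14]
```

Each dot product in `conv_as_matmul` is one MAC-array row operation; the rows are independent, which is what makes the parallel mapping possible.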
The neural network may generally comprise a plurality of layers of networks, and the input data and convolution kernels of each layer of network are different, and the involved calculations are also different. The deep learning algorithm is composed of individual computing units, which may be called operators (operators). In the neural network model, an operator corresponds to computational logic in one layer. For example, the computation of a convolutional layer is an operator, and the summation of weights in a fully-connected layer is an operator.
When a sequence of neural network operators is executed, the output data of one operator is the input data of the next, which involves moving and processing data. If the hardware layout is not properly designed, the amount of data movement and processing increases, and the required memory bandwidth grows. Memory bandwidth refers to the amount of information the memory can access per unit time, i.e., the number of bits or bytes read from or written to the memory per unit time.
In some implementations, through a reasonable layout design, the input data of the current layer of the neural network model and the output data of the layer above it can be stored in the same address space. This removes the need to move and reprocess the data, helps reduce memory bandwidth, and improves execution efficiency.
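The effect of that layout can be illustrated in miniature: if one layer writes its output into the same buffer from which the next layer reads, no inter-layer copy is needed. A toy sketch with elementwise layers (illustrative only; real convolution layers require more careful buffer planning):

```python
def apply_layer_in_place(buffer, f):
    """Apply an elementwise 'layer' f, writing each output where the
    corresponding input was read, so the next layer reuses the same buffer."""
    for i, v in enumerate(buffer):
        buffer[i] = f(v)
    return buffer                      # same object: no data was copied

shared = [1, 2, 3, 4]                  # one address space for both layers
out = apply_layer_in_place(apply_layer_in_place(shared, lambda x: 2 * x),
                           lambda x: x + 1)
assert out is shared and shared == [3, 5, 7, 9]
```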
Preferably, the NPU can independently execute the instruction sequence to complete the computation of the neural network model, without instruction intervention from a general-purpose Central Processing Unit (CPU), which helps improve the computational efficiency of the electromagnetic-signal data processing network.
The embodiments of the present application can losslessly convert the one-dimensional data of an ultra-wide electromagnetic signal into two-dimensional data, so that the width of the two-dimensional data is smaller than the width of the neural network model supported by the neural network processor, and the two-dimensional data is processed in parallel by the neural network model. This solves the problem that electromagnetic-signal data whose width is too large cannot be deployed in the neural network model, and also helps improve processing efficiency and reduce latency through parallel processing of the losslessly converted two-dimensional data.
Method embodiments of the present application are described in detail above in conjunction with fig. 1-2, and apparatus embodiments and device embodiments of the present application are described in detail below in conjunction with fig. 3-4. It is to be understood that the description of the apparatus embodiments corresponds to the description of the method embodiments, and therefore reference may be made to the preceding method embodiments for parts which are not described in detail.
Fig. 3 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 3, the data processing apparatus 300 may include a general processor 310 and a neural network processor 320.
The general purpose processor 310 is configured to acquire sampled data of the electromagnetic signal, the sampled data being one-dimensional data. The general purpose processor 310 is configured to segment one-dimensional data of the electromagnetic signal into a plurality of data segments and convert the plurality of data segments into two-dimensional data.
The electromagnetic signal is an output signal of the sensor for acquiring specific information. The number of sampling points of the electromagnetic signal is generally large, that is, the width of the sampling data of the electromagnetic signal is large. The width of the sample data may be represented by binary bits, and the sample data formed by a plurality of binary bits is one-dimensional data. For example, the width of the sampled data of the electromagnetic signal may be 16384 bits, i.e., the width of the one-dimensional data of the electromagnetic signal is 16384 bits.
The general-purpose processor 310 is the computing and control core, the final execution unit for information processing and program execution. The general-purpose processor 310 may be a Central Processing Unit (CPU), a microcontroller unit, a network processor, or another conventional processor.
The neural network processor 320 is used to process the two-dimensional data. Wherein the width of the two-dimensional data is less than the width of the neural network model supported by the neural network processor.
Because the width of the converted two-dimensional data is smaller than that of the neural network model, the neural network model can directly process the two-dimensional data.
Optionally, the general purpose processor 310 is configured to add padding data at a head and/or a tail of a first data segment, the first data segment being any one of the plurality of data segments, based on one or more of structural characteristics of the neural network model, a convolution kernel, and a padding amount.
Optionally, neural network processor 320 includes multiple processing channels and multiple multiplier-adders. The neural network processor 320 processes the two-dimensional data in parallel based on the plurality of channels and the plurality of multiplier-adders.
Alternatively, the input data of the current layer network of the neural network model and the output data of the network of the previous layer of the current layer network are stored in the same address space.
The embodiments of the present application can losslessly convert the one-dimensional data of an ultra-wide electromagnetic signal into two-dimensional data, so that the width of the two-dimensional data is smaller than the width of the neural network model supported by the neural network processor, and the two-dimensional data is processed in parallel by the neural network model. This solves the problem that electromagnetic-signal data whose width is too large cannot be deployed in the neural network model, and also helps improve processing efficiency and reduce latency.
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 4, the electronic device 400 may comprise a data processing apparatus 410 as described in any of the previous paragraphs, and a memory 420.
The memory 420 is used to store instructions executed by the data processing apparatus 410. Illustratively, the memory 420 may be a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double-data-rate SDRAM (DDR), DDR2, a flash memory, or the like. The embodiments of the present application do not particularly limit this.
It should be noted that the electronic device mentioned in the embodiments of the present application is a device composed of microelectronic components, such as integrated circuits, transistors, and electron tubes, that functions by applying electronic technology (including software). The electronic device may be a device with a sensor-information collection function, or a device providing voice and/or data connectivity to a user, used to connect people, things, and machines, such as a handheld device with a wireless connection function or a vehicle-mounted device. The electronic device may be any of various devices and may be referred to as a terminal device. It may also be a pocket-sized portable communication terminal having a wireless communication function.
An embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, the computer program being configured to execute the method for data processing as described in any one of the foregoing.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the present application are produced in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a machine-readable storage medium or transmitted from one machine-readable storage medium to another, e.g., from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The machine-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that includes one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid-state disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It should be understood that, in the various embodiments of the present application, terms such as "first" and "second" are used to distinguish different objects and do not describe a specific order. The numbering of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In the several embodiments provided in this application, it should be understood that when a portion is referred to as being "connected" or "coupled" to another portion, it may be not only directly connected but also electrically connected with another element interposed between them. In addition, the term "connected" covers both a physical connection and a wireless connection between the parts. Furthermore, when a portion is said to "comprise" an element, this means that the portion may include other elements as well, rather than excluding them, unless otherwise stated.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method of data processing, comprising:
acquiring sampling data of an electromagnetic signal, wherein the sampling data is one-dimensional data;
segmenting the one-dimensional data into a plurality of data segments;
converting the plurality of data segments into two-dimensional data such that a width of the two-dimensional data is less than a width of a neural network model supported by a neural network processor;
processing the two-dimensional data based on the neural network processor.
2. The method of claim 1, wherein prior to converting the plurality of data segments into two-dimensional data, the method comprises:
adding padding data at the head and/or tail of a first data segment according to one or more of a structural characteristic of the neural network model, a convolution kernel, and a padding amount, wherein the first data segment is any one of the plurality of data segments.
3. The method of claim 1, wherein the processing the two-dimensional data based on the neural network processor comprises:
parallel processing the two-dimensional data based on the neural network processor having a plurality of channels and a plurality of multipliers and adders.
4. The method of claim 1, wherein input data for a network in a current level of the neural network model is stored in the same address space as output data for a network in a layer above the current level.
5. An apparatus for data processing, comprising:
the general processor is used for acquiring sampling data of the electromagnetic signals, and the sampling data are one-dimensional data; dividing the one-dimensional data into a plurality of data segments, and converting the plurality of data segments into two-dimensional data;
the neural network processor is used for processing the two-dimensional data;
wherein a width of the two-dimensional data is less than a width of a neural network model supported by the neural network processor.
6. The apparatus of claim 5, wherein the general purpose processor is configured to add padding data at a head and/or a tail of a first data segment, the first data segment being any one of the plurality of data segments, based on one or more of structural characteristics of the neural network model, a convolution kernel, and a padding amount.
7. The apparatus of claim 5, wherein the neural network processor comprises a plurality of processing channels and a plurality of multiplier adders, and wherein the neural network processor processes the two-dimensional data in parallel based on the plurality of channels and the plurality of multiplier adders.
8. The apparatus of claim 5, wherein input data for a current-level network of the neural network model is stored in the same address space as output data for a network of a layer above the current-level network.
9. An electronic device, characterized in that it comprises a data processing apparatus according to any one of claims 5-8, and a memory.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon for performing the method according to any one of claims 1-4.
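The conversion recited in claims 1–4 can be illustrated with a small sketch. This is not the patented implementation: the segment width, model width, and zero-valued padding below are assumed values chosen for illustration, and the helper name `to_two_dimensional` is hypothetical. The sketch only shows how one-dimensional sampled data might be segmented, tail-padded (cf. claim 2), and stacked into two-dimensional data whose width is less than the width of the neural network model (cf. claims 1 and 5):

```python
def to_two_dimensional(samples, segment_width, model_width, pad_value=0):
    """Split 1-D `samples` into rows of `segment_width` and pad the
    final short row, yielding a 2-D list of lists whose width stays
    below `model_width` (a stand-in for the width the neural network
    processor supports)."""
    if segment_width >= model_width:
        raise ValueError("segment width must be less than the model width")
    rows = []
    for start in range(0, len(samples), segment_width):
        segment = samples[start:start + segment_width]
        # Pad the tail of a short final segment so every row has
        # the same width (cf. the padding data of claim 2).
        segment = segment + [pad_value] * (segment_width - len(segment))
        rows.append(segment)
    return rows

# Ten stand-in electromagnetic samples, segmented into rows of width 4
# for a model assumed to support a width of 8:
one_dim = list(range(10))
two_dim = to_two_dimensional(one_dim, segment_width=4, model_width=8)
# two_dim -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 0, 0]]
```

Because the padding is appended rather than interleaved, the original one-dimensional samples are recoverable from the rows, which is consistent with the lossless conversion described in the abstract.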
CN202211616573.8A 2022-12-15 2022-12-15 Data processing method and device and electronic equipment Pending CN115964335A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211616573.8A CN115964335A (en) 2022-12-15 2022-12-15 Data processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN115964335A true CN115964335A (en) 2023-04-14

Family

ID=87352124




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination