EP4630968A1 - Folding cumulative summation operations using matrix multiplications - Google Patents
Folding cumulative summation operations using matrix multiplications
- Publication number
- EP4630968A1 (application EP23738278.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- program code
- target operation
- matrix multiplication
- reformulated
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0495—Quantised networks; Sparse networks; Compressed networks
Definitions
- Artificial neural networks may comprise interconnected groups of artificial neurons (e.g., neuron models).
- the artificial neural network may be a computational device or be represented as a method to be performed by a computational device.
- Convolutional neural networks are a type of feed-forward artificial neural network.
- Convolutional neural networks may include collections of neurons that each have a receptive field and that collectively tile an input space.
- Convolutional neural networks (CNNs), such as deep convolutional neural networks (DCNs), are used in various technologies, such as image recognition, speech recognition, acoustic scene classification, keyword spotting, autonomous driving, and other classification tasks.
- Many neural networks and image processing techniques involve performing a cumulative summation operation on an image/tensor.
- After a cumulative summation (cumsum) operation, the output image/tensor may be further used to detect objects/features in the image.
- The cumsum operation can be a bottleneck when the sizes of the tensors are large. Because of this bottleneck, achieving parallelism for this operation using single instruction, multiple data (SIMD) instructions involves significant data rearrangement that increases latency and costs (e.g., reducing inferences per second).
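- The reformulation summarized below rests on a simple algebraic identity: a cumulative sum over a vector equals multiplication by a constant lower-triangular matrix of ones, which a matrix multiplication unit can execute directly. A minimal numpy sketch of this identity (illustrative only, not the claimed method itself):

```python
import numpy as np

x = np.arange(1.0, 6.0)        # [1, 2, 3, 4, 5]
L = np.tril(np.ones((5, 5)))   # constant lower-triangular matrix of ones

# Row i of L @ x is x[0] + ... + x[i], i.e., the cumulative sum.
assert np.allclose(L @ x, np.cumsum(x))   # [1, 3, 6, 10, 15]
```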
- a processor-implemented method includes receiving a set of program code. The method further includes identifying a target operation in the set of program code, the target operation including at least a cumulative summation operation. The method still further includes reformulating the target operation in the set of program code using a first matrix multiplication operation to produce a reformulated target operation. The method also includes outputting an updated set of program code including the reformulated target operation.
- Another aspect of the present disclosure is directed to an apparatus including means for receiving a set of program code.
- the apparatus further includes means for identifying a target operation in the set of program code, the target operation including at least a cumulative summation operation.
- the apparatus still further includes means for reformulating the target operation in the set of program code using a first matrix multiplication operation to produce a reformulated target operation.
- the apparatus also includes means for outputting an updated set of program code including the reformulated target operation.
- A non-transitory computer-readable medium with non-transitory program code recorded thereon is disclosed.
- the program code is executed by a processor and includes program code to receive a set of program code.
- the program code further includes program code to identify a target operation in the set of program code, the target operation including at least a cumulative summation operation.
- the program code still further includes program code to reformulate the target operation in the set of program code using a first matrix multiplication operation to produce a reformulated target operation.
- the program code also includes program code to output an updated set of program code including the reformulated target operation.
- Another aspect of the present disclosure is directed to an apparatus having a memory and one or more processors coupled to the memory.
- the processor(s) is configured to receive a set of program code.
- the processor(s) is further configured to identify a target operation in the set of program code, the target operation including at least a cumulative summation operation.
- the processor(s) is still further configured to reformulate the target operation in the set of program code using a first matrix multiplication operation to produce a reformulated target operation.
- the processor(s) is also configured to output an updated set of program code including the reformulated target operation.
- FIGURE 1 illustrates an example implementation of a neural network using a system-on-a-chip (SOC), including a general-purpose processor in accordance with certain aspects of the present disclosure.
- FIGURES 2A, 2B, and 2C are diagrams illustrating a neural network in accordance with aspects of the present disclosure.
- FIGURE 2D is a diagram illustrating an exemplary deep convolutional network (DCN) in accordance with aspects of the present disclosure.
- FIGURE 3 is a block diagram illustrating an exemplary deep convolutional network (DCN) in accordance with aspects of the present disclosure.
- FIGURE 4 is a block diagram illustrating an exemplary software architecture that may modularize artificial intelligence (AI) functions, in accordance with aspects of the present disclosure.
- FIGURE 5 is a diagram illustrating an example implementation of reformulating operations of an example portion of an artificial neural network (ANN) model to form a simplified portion of an ANN model, in accordance with aspects of the present disclosure.
- FIGURE 6 is a flow chart illustrating a processor-implemented method for reformulating target operations in an artificial neural network, in accordance with aspects of the present disclosure.
- an apparatus may be implemented or a method may be practiced using any number of the aspects set forth.
- The scope of the disclosure is intended to cover such an apparatus or method practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth.
- any aspect of the disclosure disclosed may be embodied by one or more elements of a claim.
- the word “exemplary” is used to mean “serving as an example, instance, or illustration.” Any aspect described as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects.
- The orthographic feature transform maps two-dimensional (2D) image-based feature maps at each scale to an orthographic bird's-eye-view representation.
- Voxel-based features g(x, y, z) may be generated by accumulating image-based features f(u, v) over the projected voxel area.
- The orthographic feature transform, as well as many other image processing approaches, may perform back-to-back (e.g., consecutive) cumulative summation (cumsum) operations for a region of interest (RoI).
- aspects of the present disclosure are directed to reformulating the cumulative summation operations using matrix multiplication operations.
- aspects of the present disclosure are broadly applicable to use-cases involving neural networks for object detection and feature extraction from camera images such as navigation, entertainment, safety, and security applications for instance.
- The techniques and approaches described may be employed, for example, in mobile phones, automobiles, robotics, and other systems, for detecting objects or performing other tasks.
- Particular aspects of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages.
- the described techniques may increase processing speed (e.g., frame rate) and model accuracy.
- The described techniques may enable networks employing cumsum on large tensors to reduce data overflow instances. Moreover, the described techniques may enable offloading of cumsum operations to a matrix co-processor, rather than a vector co-processor that has lower throughput. The described techniques may also enable folding cumsum operations with preceding or succeeding convolution or matrix multiplication operations. Furthermore, the described techniques may enable certain operations to be pre-computed (e.g., at graph creation time), thereby reducing latency. Accordingly, aspects of the present disclosure may beneficially find broad application in neural networks for object detection and feature extraction from camera images, for example. Additionally, aspects of the present disclosure may be employed in mobile devices, autonomous vehicles, as well as robotic devices for navigation, entertainment, safety, security, and other applications.
- FIGURE 1 illustrates an example implementation of a system-on-a-chip (SOC) 100, which may include a central processing unit (CPU) 102 or a multi-core CPU configured for reformulating cumulative summation operations in artificial neural networks.
- Variables (e.g., neural signals and synaptic weights), system parameters associated with a computational device (e.g., a neural network with weights), delays, frequency bin information, and task information may be stored in a memory block associated with a neural processing unit (NPU) 108, in a memory block associated with the CPU 102, in a memory block associated with a graphics processing unit (GPU) 104, in a memory block associated with a digital signal processor (DSP) 106, in a memory block 118, or may be distributed across multiple blocks.
- Instructions executed at the CPU 102 may be loaded from a program memory associated with the CPU 102 or may be loaded from a memory block 118.
- the SOC 100 may also include additional processing blocks tailored to specific functions, such as a GPU 104, a DSP 106, a connectivity block 110, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures.
- the NPU 108 is implemented in the CPU 102, DSP 106, and/or GPU 104.
- the SOC 100 may also include a sensor processor 114, image signal processors (ISPs) 116, and/or navigation module 120, which may include a global positioning system.
- the SOC 100 may be based on an ARM instruction set.
- the instructions loaded into the general-purpose processor 102 may include code to receive a set of program code.
- The general-purpose processor 102 may also include code to identify a target operation in the set of program code.
- the target operation includes a cumulative summation operation.
- The general-purpose processor 102 may include code to reformulate the target operation in the set of program code using a first matrix multiplication operation to produce a reformulated target operation.
- the general-purpose processor 102 may further include code to output an updated set of program code including the reformulated target operation.
- Deep learning architectures may perform an object recognition task by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building up a useful feature representation of the input data.
- a shallow classifier may be a two-class linear classifier, for example, in which a weighted sum of the feature vector components may be compared with a threshold to predict to which class the input belongs.
- Human engineered features may be templates or kernels tailored to a specific problem domain by engineers with domain expertise. Deep learning architectures, in contrast, may learn to represent features that are similar to what a human engineer might design, but through training.
- a deep network may learn to represent and recognize new types of features that a human might not have considered.
- a deep learning architecture may learn a hierarchy of features. If presented with visual data, for example, the first layer may learn to recognize relatively simple features, such as edges, in the input stream. In another example, if presented with auditory data, the first layer may learn to recognize spectral power in specific frequencies. The second layer, taking the output of the first layer as input, may learn to recognize combinations of features, such as simple shapes for visual data or combinations of sounds for auditory data. For instance, higher layers may learn to represent complex shapes in visual data or words in auditory data. Still higher layers may learn to recognize common visual objects or spoken phrases.
- Deep learning architectures may perform especially well when applied to problems that have a natural hierarchical structure. For example, the classification of motorized vehicles may benefit from first learning to recognize wheels, windshields, and other features. These features may be combined at higher layers in different ways to recognize cars, trucks, and airplanes.
- Neural networks may be designed with a variety of connectivity patterns. In feed-forward networks, information is passed from lower to higher layers, with each neuron in a given layer communicating to neurons in higher layers. A hierarchical representation may be built up in successive layers of a feed-forward network, as described above. Neural networks may also have recurrent or feedback (also called top-down) connections.
- In a recurrent connection, the output from a neuron in a given layer may be communicated to another neuron in the same layer.
- A recurrent architecture may be helpful in recognizing patterns that span more than one of the input data chunks that are delivered to the neural network in a sequence.
- a connection from a neuron in a given layer to a neuron in a lower layer is called a feedback (or top-down) connection.
- a network with many feedback connections may be helpful when the recognition of a high-level concept may aid in discriminating the particular low-level features of an input.
- the connections between layers of a neural network may be fully connected or locally connected.
- FIGURE 2A illustrates an example of a fully connected neural network 202.
- a neuron in a first layer may communicate its output to every neuron in a second layer, so that each neuron in the second layer will receive input from every neuron in the first layer.
- FIGURE 2B illustrates an example of a locally connected neural network 204.
- a neuron in a first layer may be connected to a limited number of neurons in the second layer.
- A locally connected layer of the locally connected neural network 204 may be configured so that each neuron in a layer will have the same or a similar connectivity pattern, but with connection strengths that may have different values (e.g., 210, 212, 214, and 216).
- the locally connected connectivity pattern may give rise to spatially distinct receptive fields in a higher layer because the higher layer neurons in a given region may receive inputs that are tuned through training to the properties of a restricted portion of the total input to the network.
- a locally connected neural network is a convolutional neural network.
- FIGURE 2C illustrates an example of a convolutional neural network 206.
- the convolutional neural network 206 may be configured such that the connection strengths associated with the inputs for each neuron in the second layer are shared (e.g., 208). Convolutional neural networks may be well suited to problems in which the spatial location of inputs is meaningful.
- FIGURE 2D illustrates a detailed example of a DCN 200 designed to recognize visual features from an image 226 input from an image capturing device 230, such as a car-mounted camera.
- the DCN 200 of the current example may be trained to identify traffic signs and a number provided on the traffic sign.
- the DCN 200 may be trained for other tasks, such as identifying lane markings or identifying traffic lights.
- the DCN 200 may be trained with supervised learning.
- the DCN 200 may be presented with an image, such as the image 226 of a speed limit sign, and a forward pass may then be computed to produce an output 222.
- the DCN 200 may include a feature extraction section and a classification section.
- a convolutional layer 232 may apply convolutional kernels (not shown) to the image 226 to generate a first set of feature maps 218.
- the convolutional kernel for the convolutional layer 232 may be a 5x5 kernel that generates 28x28 feature maps. In the present example, because four different feature maps are generated in the first set of feature maps 218, four different convolutional kernels were applied to the image 226 at the convolutional layer 232.
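- For the shape bookkeeping above: a "valid" convolution with a k x k kernel on an n x n input produces an output of side n − k + 1, so the 28x28 feature maps are consistent with, for example, a 32x32 input and the 5x5 kernel: 28 = 32 − 5 + 1 (the 32x32 input resolution is an assumption for illustration; it is not stated here).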
- the convolutional kernels may also be referred to as filters or convolutional filters.
- the first set of feature maps 218 may be subsampled by a max pooling layer (not shown) to generate a second set of feature maps 220.
- the max pooling layer reduces the size of the first set of feature maps 218. That is, a size of the second set of feature maps 220, such as 14x14, is less than the size of the first set of feature maps 218, such as 28x28.
- the reduced size provides similar information to a subsequent layer while reducing memory consumption.
- the second set of feature maps 220 may be further convolved via one or more subsequent convolutional layers (not shown) to generate one or more subsequent sets of feature maps (not shown).
- the second set of feature maps 220 is convolved to generate a first feature vector 224. Furthermore, the first feature vector 224 is further convolved to generate a second feature vector 228. Each feature of the second feature vector 228 may include a number that corresponds to a possible feature of the image 226, such as “sign,” “60,” and “100.” A softmax function (not shown) may convert the numbers in the second feature vector 228 to a probability. As such, an output 222 of the DCN 200 may be a probability of the image 226 including one or more features.
- the probabilities in the output 222 for “sign” and “60” are higher than the probabilities of the others of the output 222, such as “30,” “40,” “50,” “70,” “80,” “90,” and “100”.
- the output 222 produced by the DCN 200 may likely be incorrect.
- an error may be calculated between the output 222 and a target output.
- The target output is the ground truth of the image 226 (e.g., “sign” and “60”).
- the weights of the DCN 200 may then be adjusted so the output 222 of the DCN 200 is more closely aligned with the target output.
- a learning algorithm may compute a gradient vector for the weights.
- the gradient may indicate an amount that an error would increase or decrease if the weight were adjusted.
- the gradient may correspond directly to the value of a weight connecting an activated neuron in the penultimate layer and a neuron in the output layer.
- the gradient may depend on the value of the weights and on the computed error gradients of the higher layers.
- the weights may then be adjusted to reduce the error. This manner of adjusting the weights may be referred to as “back propagation” as it involves a “backward pass” through the neural network.
- the error gradient of weights may be calculated over a small number of examples, so that the calculated gradient approximates the true error gradient.
- This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level.
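- As a rough illustration of the training loop described above (a minimal sketch with hypothetical names; a linear model stands in for the DCN):

```python
import numpy as np

def sgd_step(w, x_batch, y_batch, lr=0.01):
    """One stochastic gradient descent step for a linear model.

    The gradient is averaged over a small batch of examples, so it only
    approximates the true error gradient, as described above.
    """
    err = x_batch @ w - y_batch               # forward pass and error vs. target
    grad = x_batch.T @ err / len(y_batch)     # batch-averaged gradient
    return w - lr * grad                      # adjust weights to reduce error

rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(100):                          # repeat until the error stops decreasing
    x, y = rng.normal(size=(16, 3)), rng.normal(size=16)
    w = sgd_step(w, x, y)
```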
- the DCN 200 may be presented with new images and a forward pass through the DCN 200 may yield an output 222 that may be considered an inference or a prediction of the DCN 200.
- Deep belief networks (DBNs) are probabilistic models comprising multiple layers of hidden nodes. DBNs may be used to extract a hierarchical representation of training data sets.
- a DBN may be obtained by stacking up layers of Restricted Boltzmann Machines (RBMs).
- An RBM is a type of artificial neural network that can learn a probability distribution over a set of inputs. Because RBMs can learn a probability distribution in the absence of information about the class to which each input should be categorized, RBMs are often used in unsupervised learning.
- The bottom RBMs of a DBN may be trained in an unsupervised manner and may serve as feature extractors.
- The top RBM may be trained in a supervised manner (on a joint distribution of inputs from the previous layer and target classes) and may serve as a classifier.
- Deep convolutional networks (DCNs) are networks of convolutional networks, configured with additional pooling and normalization layers. DCNs have achieved state-of-the-art performance on many tasks. DCNs can be trained using supervised learning in which both the input and output targets are known for many exemplars and are used to modify the weights of the network by use of gradient descent methods. DCNs may be feed-forward networks. In addition, as described above, the connections from a neuron in a first layer of a DCN to a group of neurons in the next higher layer are shared across the neurons in the first layer.
- the feed-forward and shared connections of DCNs may be exploited for fast processing.
- the computational burden of a DCN may be much less, for example, than that of a similarly sized neural network that comprises recurrent or feedback connections.
- the processing of each layer of a convolutional network may be considered a spatially invariant template or basis projection. If the input is first decomposed into multiple channels, such as the red, green, and blue channels of a color image, then the convolutional network trained on that input may be considered three-dimensional, with two spatial dimensions along the axes of the image and a third dimension capturing color information.
- the outputs of the convolutional connections may be considered to form a feature map in the subsequent layer, with each element of the feature map (e.g., 220) receiving input from a range of neurons in the previous layer (e.g., feature maps 218) and from each of the multiple channels.
- the values in the feature map may be further processed with a non-linearity, such as a rectification, max(0, x). Values from adjacent neurons may be further pooled, which corresponds to down sampling, and may provide additional local invariance and dimensionality reduction. Normalization, which corresponds to whitening, may also be applied through lateral inhibition between neurons in the feature map.
- the performance of deep learning architectures may increase as more labeled data points become available or as computational power increases.
- FIGURE 3 is a block diagram illustrating a deep convolutional network (DCN) 350.
- the DCN 350 may include multiple different types of layers based on connectivity and weight sharing.
- the DCN 350 includes the convolution blocks 354A, 354B.
- Each of the convolution blocks 354A, 354B may be configured with a convolution layer (CONV) 356, a normalization layer (LNorm) 358, and a max pooling layer (MAX POOL) 360.
- the convolution layers 356 may include one or more convolutional filters, which may be applied to the input data to generate a feature map.
- the normalization layer 358 may normalize the output of the convolution filters.
- the normalization layer 358 may provide whitening or lateral inhibition.
- the max pooling layer 360 may provide down sampling aggregation over space for local invariance and dimensionality reduction.
- The parallel filter banks, for example, of a deep convolutional network may be loaded on a CPU 102 or GPU 104 of an SOC 100 (e.g., FIGURE 1) to achieve high performance and low power consumption.
- the parallel filter banks may be loaded on the DSP 106 or an ISP 116 of an SOC 100.
- the DCN 350 may access other processing blocks that may be present on the SOC 100, such as sensor processor 114 and navigation module 120, dedicated, respectively, to sensors and navigation.
- the DCN 350 may also include one or more fully connected layers 362 (FC1 and FC2).
- the DCN 350 may further include a logistic regression (LR) layer 364. Between each layer 356, 358, 360, 362, 364 of the DCN 350 are weights (not shown) that are to be updated.
- The output of each of the layers (e.g., 356, 358, 360, 362, 364) may serve as an input of a succeeding one of the layers (e.g., 356, 358, 360, 362, 364) in the DCN 350 to learn hierarchical feature representations from input data 352.
- FIGURE 4 is a block diagram illustrating an exemplary software architecture 400 that may modularize artificial intelligence (AI) functions.
- applications may be designed that may cause various processing blocks of an SOC 420 (for example a CPU 422, a DSP 424, a GPU 426 and/or an NPU 428) (which may be similar to SoC 100 of FIGURE 1) to support reformulation of cumsum operations for an AI application 402, according to aspects of the present disclosure.
- the AI application 402 may be performed at a device such as a smartphone, for instance.
- the AI application 402 may be configured to call functions defined in a user space 404 that may, for example, provide for the detection and recognition of a scene indicative of the location in which the device currently operates.
- the AI application 402 may, for example, configure a microphone and a camera differently depending on whether the recognized scene is an office, a lecture hall, a restaurant, or an outdoor setting such as a lake.
- the AI application 402 may make a request to compiled program code associated with a library defined in an AI function application programming interface (API) 406. This request may ultimately rely on the output of a deep neural network configured to provide an inference response based on video and positioning data, for example.
- a run-time engine 408, which may be compiled code of a runtime framework, may be further accessible to the AI application 402.
- the AI application 402 may cause the run-time engine 408, for example, to request an inference at a particular time interval or triggered by an event detected by the user interface of the application 402.
- the run-time engine 408 may in turn send a signal to an operating system in an operating system (OS) space 410, such as a kernel 412, running on the SOC 420.
- the kernel 412 may be a Linux kernel.
- the operating system in turn, may cause a continuous relaxation of quantization to be performed on the CPU 422, the DSP 424, the GPU 426, the NPU 428, or some combination thereof.
- The CPU 422 may be accessed directly by the operating system, and other processing blocks may be accessed through a driver.
- the application 402 (e.g., an AI application) may be configured to call functions defined in a user space 404 that may, for example, provide for the detection and recognition of a scene indicative of the location in which the device currently operates.
- the application 402 may, for example, configure a microphone and a camera differently depending on whether the recognized scene is an office, a lecture hall, a restaurant, or an outdoor setting such as a lake.
- the application 402 may make a request to compiled program code associated with a library defined in a SceneDetect application programming interface (API) 406 to provide an estimate of the current scene. This request may ultimately rely on the output of a differential neural network configured to provide scene estimates based on video and positioning data, for example.
- a run-time engine 408, which may be compiled code of a Runtime Framework, may be further accessible to the application 402.
- the application 402 may cause the run-time engine, for example, to request a scene estimate at a particular time interval or triggered by an event detected by the user interface of the application.
- the run-time engine may in turn send a signal to the operating system 410, such as the kernel 412, running on the SOC 420.
- the operating system 410 may cause a computation to be performed on the CPU 422, the DSP 424, the GPU 426, the NPU 428, or some combination thereof.
- the CPU 422 may be accessed directly by the operating system, and other processing blocks may be accessed through a driver, such as the driver 414-418 for the DSP 424, for the GPU 426, or for the NPU 428.
- the differential neural network may be configured to run on a combination of processing blocks, such as the CPU 422 and the GPU 426, or may be run on the NPU 428.
- many neural networks and image processing techniques involve performing a cumulative summation operation on an image/tensor.
- After a cumulative summation (cumsum) operation, the output image/tensor may be further used to detect objects/features in the image.
- The cumsum operation can be a bottleneck and may result in overflows during summations when the sizes of the tensors are large enough to produce intermediate values that exceed the upper bound of values representable at the operating bit width. For instance, a fixed bit width of eight may represent values in the range of 0-255. The summations of a cumsum operation over a region of interest may produce intermediate values that exceed 255, resulting in an overflow in which the running sum wraps around. Because of this bottleneck, achieving parallelism for this operation using single instruction, multiple data (SIMD) instructions involves significant data rearrangement, which reduces the performance of this operation.
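- The wrap-around described above can be reproduced by forcing an eight-bit accumulator (a minimal numpy illustration):

```python
import numpy as np

a = np.array([200, 100, 50], dtype=np.uint8)

# With an 8-bit accumulator, 200 + 100 = 300 exceeds 255 and wraps to
# 300 - 256 = 44, silently corrupting every later partial sum.
print(np.cumsum(a, dtype=np.uint8))   # [200  44  94]

# numpy's default widens the accumulator to the platform integer,
# which is one way to avoid the overflow (at a memory cost).
print(np.cumsum(a))                   # [200 300 350]
```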
- SIMD instructions refer to a computing method that enables processing of multiple data elements with a single instruction rather than using one instruction to process each individual data element.
- Aspects of the present disclosure are directed to folding or reformulating cumsum operations.
- the cumsum operations may be reformulated, for instance, using matrix multiplication operations. That is, aspects of the present disclosure involve implementing the cumsum operation as a matrix multiplication leveraging a hardware matrix multiplication unit.
- the cumsum operation may thus be invoked as a matrix multiplication using a matrix multiplication unit on the processor, for instance.
- This implementation may be significantly faster than the traditional SIMD implementation because of the higher tera operations per second (TOPS) of matrix multiplication units.
- this implementation of cumsum may permit such operations to be fused with the preceding or succeeding operations.
- Equation 3 would be computed for every new input.
- the output in Equation 4 would be computed for every new matrix A (e.g., meaning for every new input).
- the reformulated operation in Equation 5 may be pre-computed (e.g., at graph creation time or at compile time by a compiler) independent of the input data.
- The operation in Equation 6 may only be computed if there is a new input. Accordingly, this implementation may improve the frame rate (e.g., frames per second (fps)) of networks that use cumsum operations by fundamentally reducing the number of operations performed by the technique and by the networks.
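- Equations 3-6 are not reproduced in this extract, but the pre-computation they describe can be sketched as follows (a minimal numpy illustration with hypothetical shapes): when a cumsum is followed by a constant linear operation W, the product of W and the triangular cumsum matrix can be folded once, independent of the input data, leaving a single matrix multiplication per new input.

```python
import numpy as np

n = 8
L = np.tril(np.ones((n, n)))   # cumsum expressed as a constant matrix
W = np.random.randn(4, n)      # constant weights of a succeeding linear op

# Folded once, e.g., at graph creation or compile time:
WL = W @ L

# Run time: only one matmul per new input.
x = np.random.randn(n)
assert np.allclose(WL @ x, W @ np.cumsum(x))
```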
- neural networks may perform two consecutive (e.g., back-to- back) cumsum operations along different axes (e.g., X-axis and Y-axis) to compute a sum of elements present in a certain region of interest (RoI) of the input that may correspond to features in the image.
- the RoI may be a bounding box of a particular object in the image, which is defined by the four corners of the rectangle.
- The operation may be interpreted as: Input[RoI] = SY[RoI] * Input * SX[RoI], where SY[RoI] and SX[RoI] are identity matrices appropriately padded with zeros to extract Input[RoI] from the Input.
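- A minimal numpy sketch of this interpretation (illustrative shapes; the four-corner identity at the end is the standard integral-image formula, stated here under the assumption that the RoI does not touch the first row or column):

```python
import numpy as np

H, W = 6, 6
Input = np.arange(H * W, dtype=float).reshape(H, W)
y0, y1, x0, x1 = 1, 4, 2, 5            # RoI rows [y0, y1), cols [x0, x1)

# Identity matrices appropriately padded with zeros, as described above.
SY = np.zeros((y1 - y0, H)); SY[np.arange(y1 - y0), np.arange(y0, y1)] = 1
SX = np.zeros((W, x1 - x0)); SX[np.arange(x0, x1), np.arange(x1 - x0)] = 1

roi = SY @ Input @ SX                  # Input[RoI] extracted by matmuls
assert np.allclose(roi, Input[y0:y1, x0:x1])

# Back-to-back cumsums along both axes turn the RoI sum into a
# four-corner lookup of the doubly cumulative-summed input.
C = np.cumsum(np.cumsum(Input, axis=0), axis=1)
roi_sum = C[y1-1, x1-1] - C[y0-1, x1-1] - C[y1-1, x0-1] + C[y0-1, x0-1]
assert np.isclose(roi_sum, roi.sum())
```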
- A feature may then be computed from the cumulative-sum outputs over the region of interest.
- The GridSampler(x, y) function may sample the four nearest-neighboring values and interpolate using bilinear interpolation to obtain the result at a non-integral location (x, y).
- the GridSampler function may also be written as a linear operation that may also be fused with the cumsum.
- Equation 11 may be re-written in matrix form, and Equation 12 may be re-written in flattened matrix form, where (1 − α)(1 − β) is placed at the linear index of (i, j), α(1 − β) at the linear index of (i, j+1), (1 − α)β at the linear index of (i+1, j), and αβ at the linear index of (i+1, j+1).
- multiple row matrices may be produced for multiple sampling points. Each row of sampling points may be stacked to form a single matrix, which may be used for a single matrix multiplication operation for sampling at multiple points.
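- A minimal sketch of the stacked sampling rows described above (hypothetical helper names; α and β denote the fractional offsets so the four weights land at the linear indices listed for Equations 11-12):

```python
import numpy as np

def bilinear_row(i, j, alpha, beta, h, w):
    """Row r such that r @ img.ravel() bilinearly samples img between
    integer locations (i, j) and (i + 1, j + 1)."""
    r = np.zeros(h * w)
    r[i * w + j]             = (1 - alpha) * (1 - beta)
    r[i * w + (j + 1)]       = alpha * (1 - beta)
    r[(i + 1) * w + j]       = (1 - alpha) * beta
    r[(i + 1) * w + (j + 1)] = alpha * beta
    return r

h, w = 5, 7
img = np.random.randn(h, w)

# Rows for multiple sampling points stacked into one matrix, so sampling
# at all points becomes a single matrix multiplication.
points = [(1, 2, 0.25, 0.5), (3, 4, 0.75, 0.1)]
M = np.stack([bilinear_row(*p, h, w) for p in points])
samples = M @ img.ravel()
```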
- the diagonal elements of the output matrix may be generated as elements of a single vector by rearranging the elements of the input (Inp), CX*FX (e.g., considered to be X), and FY*CY (e.g., considered to be Y) as follows:
- Features = [Inp00 Inp01 Inp02 … Inpm,n−1 Inpm,n] [X00·Y00 … XF0·Y0F X00·Y11 … XF0·Y1F … X0m·Yn0 … XFm·YnF]
- the left-hand-side input matrix may be obtained by a flattening operation and the right-hand-side constant matrix can be pre-computed (e.g., computed at the graph creation), thus making the operation a single matmul at run time with no redundant computations.
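- A minimal numpy sketch of this flatten-then-single-matmul idea (the Kronecker construction here is one illustrative way to build the pre-computed constant for a 2D double cumsum, not necessarily the exact matrix of the disclosure; the constant is typically sparse, as the next bullet notes):

```python
import numpy as np

H, W = 4, 5
Ly = np.tril(np.ones((H, H)))   # cumsum along the Y (row) axis
Lx = np.tril(np.ones((W, W)))   # cumsum along the X (column) axis

# Constant matrix pre-computed once, e.g., at graph creation time.
K = np.kron(Ly, Lx)

# Run time: flatten the input, then perform a single matmul.
X = np.random.randn(H, W)
out = (K @ X.ravel()).reshape(H, W)
assert np.allclose(out, np.cumsum(np.cumsum(X, axis=0), axis=1))
```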
- the matrix on the right-hand-side may be sparse depending upon the features.
- Aspects of the present disclosure may improve the processing speed (e.g., frame rate, measured, for instance, in frames per second (fps)) of neural networks.
- FIGURE 5 is a diagram illustrating an example implementation of reformulating operations of an example portion 500 of an artificial neural network (ANN) model to form a simplified portion 520 of an ANN model, in accordance with aspects of the present disclosure.
- the example portion 500 of the ANN model may be configured for monocular 3D object detection, for example.
- the example portion 500 of the ANN model may include multiple cumsum operations 504a, 504b performed relative to a region of interest of an input image 502, which may be defined according to indices of an input image.
- While the input in the example of FIGURE 5 is an image, the present disclosure is not so limiting. Rather, other types of inputs are contemplated, and the techniques described may be applied to any other types of data. Additionally, while various dimensions are provided in the example implementation shown in FIGURE 5, it is noted that the dimensions are merely an example for ease of understanding and not limiting.
- the example portion 500 also includes GatherNd operations 506a-d that may gather data regarding boundaries of the region of interest in the input image 502.
- The GatherNd operations 506a-d may slice parameters into a tensor with a shape specified by indices, where the indices represent k-dimensional integer tensors of indices into parameters.
- the cumsum operations 504a, 504b may be reformulated as a matrix multiplication operation 524 of the simplified portion 520 of the ANN model.
- the GatherNd operations 506a-d may be reformulated as matrices and flattened in accordance with Equations 11-13.
- the matrix multiplication operation 524 for calculating features of the simplified portion 520 of the ANN model may be pre-computed. Additionally, the tensor for input image 502 may be reshaped by a reshape operation 522 to produce a reshaped input. The reshape operation 522 may change the shape of an array. The reshaped input may enable direct calculation of the features of the RoI of the input image 502, which may be output as output matrix 526. Thus, features of the ROI may be directly calculated by performing the matrix multiplication operation 524 of the pre-computed matrix and the reshaped input.
- FIGURE 6 is a flow chart illustrating a processor-implemented method 600 for reformulating target operations in an artificial neural network, in accordance with aspects of the present disclosure.
- the processor-implemented method 600 may be performed by a processor such as the CPU 102 or the NPU 108, for example.
- the processor receives a set of program code.
- the set of program code may be code for the artificial neural network.
- the artificial neural network may, for instance, be configured to detect an object in an image.
- the processor identifies a target operation in the set of program code.
- the target operation includes at least a cumulative summation operation.
- the target operation may include a cumsum operation fused with a feature extraction or other operations in the artificial neural network, for instance.
- The target operation may include a cumsum operation and a feature extraction, and the processor may reformulate the target operation using a matrix multiplication operation at a model framework (e.g., open neural network exchange (ONNX) model framework) level.
- the cumsum operation may be preceded or succeeded by a linear operation or a second matrix multiplication operation.
- the linear operation may be a convolution operation, a sampling operation, an interpolation operation, or a resizing operation that may be interpreted as matrix multiplication operations.
- the processor reformulates the target operation in the set of program code using a first matrix multiplication operation to produce a reformulated target operation.
- the processor outputs an updated set of program code including the reformulated target operation.
- While the equations described in aspects of the present disclosure may be specific to one-dimensional (1D) or two-dimensional (2D) example cases, the present disclosure is not so limiting.
- 1. A processor-implemented method, comprising: receiving a set of program code; identifying a target operation in the set of program code, the target operation including at least a cumulative summation operation; reformulating the target operation in the set of program code using a first matrix multiplication operation to produce a reformulated target operation; and outputting an updated set of program code including the reformulated target operation.
- 3. The processor-implemented method of clause 1 or 2, in which the cumulative summation operation is preceded or succeeded by a linear operation or a second matrix multiplication operation, further comprising: fusing the linear operation with the reformulated target operation to produce a fused operation; and outputting the updated set of program code including the fused operation.
- 4. The processor-implemented method of any of clauses 1-3, in which the linear operation comprises a convolution operation and the second matrix multiplication operation comprises a general matrix multiplication operation.
- 5. The processor-implemented method of any of clauses 1-4, in which a portion of the fused operation is computed a priori.
- 6. The processor-implemented method of any of clauses 1-5, in which the target operation further includes a feature extraction and the reformulated target operation uses the first matrix multiplication operation at a model framework level.
- the set of program code comprises code for an artificial neural network configured for detecting an object
- the target operation is applied to a region of interest of an input image and an output of the reformulated target operation comprises features extracted from the region of interest.
- the processor-implemented method of any of clauses 1-7 further comprising: receiving an input tensor at the artificial neural network; flattening the input tensor; and directly calculating the output by multiplying the flattened input tensor with a single matrix.
- 9. An apparatus comprising: a memory; and at least one processor coupled to the memory, the at least one processor configured: to receive a set of program code; to identify a target operation in the set of program code, the target operation including at least a cumulative summation operation; to reformulate the target operation in the set of program code using a first matrix multiplication operation to produce a reformulated target operation; and to output an updated set of program code including the reformulated target operation.
- the first matrix multiplication operation utilizes one or more triangular matrices.
- the apparatus of clause 9 or 10 in which the cumulative summation operation is preceded or succeeded by a linear operation or a second matrix multiplication operation and the at least one processor is further configured: to fuse the linear operation with the reformulated target operation to produce a fused operation; and to output the updated set of program code including the fused operation.
- the linear operation comprises a convolution operation and the second matrix multiplication operation comprises a general matrix multiplication operation.
- the target operation further includes a feature extraction and the reformulated target operation uses the first matrix multiplication operation at a model framework level.
- the set of program code comprises code for an artificial neural network configured for detecting an object
- the target operation is applied to a region of interest of an input image and an output of the reformulated target operation comprises features extracted from the region of interest.
- The at least one processor is further configured: to receive an input tensor at the artificial neural network; to flatten the input tensor; and to directly calculate the output by multiplying the flattened input tensor with a single matrix.
- 22. The non-transitory computer-readable medium of any of clauses 17-21, in which the target operation further includes a feature extraction and the reformulated target operation uses the first matrix multiplication operation at a model framework level.
- 23. The program code further comprises: program code to receive an input tensor at the artificial neural network; program code to flatten the input tensor; and program code to directly calculate the output by multiplying the flattened input tensor with a single matrix.
- An apparatus comprising: means for receiving a set of program code; means for identifying a target operation in the set of program code, the target operation including at least a cumulative summation operation; means for reformulating the target operation in the set of program code using a first matrix multiplication operation to produce a reformulated target operation; and means for outputting an updated set of program code including the reformulated target operation.
- the set of program code comprises code for an artificial neural network configured for detecting an object
- the target operation is applied to a region of interest of an input image and an output of the reformulated target operation comprises features extracted from the region of interest.
- 30. The apparatus of any of clauses 25-29, further comprising: means for receiving an input tensor at the artificial neural network; means for flattening the input tensor; and means for directly calculating the output by multiplying the flattened input tensor with a single matrix.
- The receiving means, identifying means, reformulating means, and/or outputting means may be the CPU 102, program memory associated with the CPU 102, the dedicated memory block 118, the fully connected layers 362, the NPU 428, and/or the routing connection processing unit 216 configured to perform the functions recited.
- the aforementioned means may be any module or any apparatus configured to perform the functions recited by the aforementioned means.
- the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions.
- the means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or processor.
- “Determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Additionally, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like.
- determining may include resolving, selecting, choosing, establishing, and the like.
- a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members.
- “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.
- The various illustrative logical blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described.
- a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- the steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
- a software module may reside in any form of storage medium that is known in the art.
- Examples of storage media that may be used include random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, and so forth.
- a software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
- A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
- the methods disclosed comprise one or more steps or actions for achieving the described method.
- the method steps and/or actions may be interchanged with one another without departing from the scope of the claims.
- the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
- the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a device.
- the processing system may be implemented with a bus architecture.
- the bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints.
- the bus may link together various circuits including a processor, machine-readable media, and a bus interface.
- the bus interface may be used to connect a network adapter, among other things, to the processing system via the bus.
- the network adapter may be used to implement signal processing functions.
- A user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus.
- the bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further.
- the processor may be responsible for managing the bus and general processing, including the execution of software stored on the machine-readable media.
- the processor may be implemented with one or more general-purpose and/or special- purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software.
- Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
- Machine-readable media may include, by way of example, random access memory (RAM), flash memory, read only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof.
- the machine-readable media may be embodied in a computer-program product.
- the computer-program product may comprise packaging materials.
- the machine-readable media may be part of the processing system separate from the processor. However, as those skilled in the art will readily appreciate, the machine-readable media, or any portion thereof, may be external to the processing system.
- The machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the device, all of which may be accessed by the processor through the bus interface.
- the machine-readable media may be integrated into the processor, such as the case may be with cache and/or general register files.
- the processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable media, all linked together with other supporting circuitry through an external bus architecture.
- the processing system may comprise one or more neuromorphic processors for implementing the neuron models and models of neural systems described.
- the processing system may be implemented with an application specific integrated circuit (ASIC) with the processor, the bus interface, the user interface, supporting circuitry, and at least a portion of the machine-readable media integrated into a single chip, or with one or more field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure.
- the machine-readable media may comprise a number of software modules.
- The software modules include instructions that, when executed by the processor, cause the processing system to perform various functions.
- the software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor.
- the functions may be stored or transmitted over as one or more instructions or code on a computer-readable medium.
- Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
- a storage medium may be any available medium that can be accessed by a computer.
- such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- any connection is properly termed a computer-readable medium.
- If the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies are included in the definition of medium.
- Disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
- computer-readable media may comprise non-transitory computer- readable media (e.g., tangible media).
- In addition, computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
- certain aspects may comprise a computer program product for performing the operations presented.
- such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described.
- the computer program product may include packaging material.
- various methods described can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device.
- any other suitable technique for providing the methods and techniques described to a device can be utilized.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202241070368 | 2022-12-06 | ||
| PCT/US2023/024475 WO2024123391A1 (en) | 2022-12-06 | 2023-06-05 | Folding cumulative summation operations using matrix multiplications |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4630968A1 (de) | 2025-10-15 |
Family
ID=87136464
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP23738278.3A (EP4630968A1, pending) | Folding cumulative summation operations using matrix multiplications | 2022-12-06 | 2023-06-05 |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4630968A1 (de) |
| CN (1) | CN120266130A (de) |
| WO (1) | WO2024123391A1 (de) |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12518133B2 (en) * | 2021-04-22 | 2026-01-06 | Nvidia Corporation | Kernel generation for neural networks |
| WO2022251317A1 (en) * | 2021-05-27 | 2022-12-01 | Rutgers, The State University Of New Jersey | Systems of neural networks compression and methods thereof |
-
2023
- 2023-06-05 WO PCT/US2023/024475 patent/WO2024123391A1/en not_active Ceased
- 2023-06-05 CN CN202380082110.4A patent/CN120266130A/zh active Pending
- 2023-06-05 EP EP23738278.3A patent/EP4630968A1/de active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN120266130A (zh) | 2025-07-04 |
| WO2024123391A1 (en) | 2024-06-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| TWI853083B (zh) | | Apparatus, method, and computer-readable medium for performing XNOR equivalent operations by adjusting column thresholds of a compute-in-memory array |
| EP3869415A1 (de) | | Transfer learning in neural networks |
| US20170061328A1 (en) | | Enforced sparsity for classification |
| CN107430703A (zh) | | Sequential image sampling and storage of fine-tuned features |
| US12206869B2 (en) | | Skip convolutions for efficient video processing |
| US12412373B2 (en) | | Saliency-based input resampling for efficient object detection |
| US20220156528A1 (en) | | Distance-based boundary aware semantic segmentation |
| WO2025090188A1 (en) | | Hardware-aware efficient architectures for text-to-image diffusion models |
| EP4584722A1 (de) | | Meta pre-training with augmentations to generalize neural network processing for domain adaptation |
| EP4630968A1 (de) | | Folding cumulative summation operations using matrix multiplications |
| US20210334516A1 (en) | | Compact encoded heat maps for keypoint detection networks |
| EP4434008B1 (de) | | Saliency-based input resampling for efficient object detection |
| US20260093771A1 (en) | | Efficient one-dimensional (1D) convolution support on two-dimensional (2D) convolution engines |
| WO2024130688A1 (en) | | Image set anomaly detection with transformer encoder |
| US20260051148A1 (en) | | Graph cuts for explainability |
| US20250054168A1 (en) | | Attention-based refinement for depth completion |
| US20220122594A1 (en) | | Sub-spectral normalization for neural audio data processing |
| WO2024102526A1 (en) | | Realistic distraction and pseudo-labeling regularization for optical flow estimation |
| CN118215934A (zh) | | Saliency-based input resampling for efficient object detection |
| WO2024205619A1 (en) | | Predictive model with soft, per-example invariances through probabilistic modeling |
| EP4315169A1 (de) | | Equivariant steerable convolutional neural networks |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20250414 |
| | AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |