CN112529016A - Method and device for extracting feature points in image - Google Patents
- Publication number
- CN112529016A (application CN202011516597.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- extraction
- filtering
- pixel
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
Abstract
The invention discloses a method and a device for extracting feature points in an image. A filtering module can output a processing result as soon as the data of one pixel has been processed, without waiting for the data of the other pixels; likewise, when the extraction module receives the data of one pixel, it can perform extraction and then output an extraction result without needing the processing results of the other pixels. Therefore, while the filtering module filters the data of the i-th pixel, the extraction module can simultaneously extract the feature point of the (i-1)-th pixel, so that the modules form a "service pipeline" working mode. This greatly reduces the time required for feature point extraction, increases extraction efficiency, reduces the power consumption of the extraction device, and improves its energy efficiency.
Description
Technical Field
The present invention relates to the field of image processing technology, and in particular to a method and an apparatus for extracting feature points from an image.
Background
When extracting feature points from an image with, for example, the Harris/Shi-Tomasi algorithm, the processing may include: Sobel filtering, covariance matrix construction, box filtering, response value calculation, threshold judgment, dilation filtering, and non-maximum suppression. In these steps, only after the data of all pixels in the image has been processed are the results transmitted together to the module corresponding to the next step; that module in turn passes its results on only after it has processed the results for all pixels. In this way, the time required to extract the image feature points is the time for all steps to execute to completion in sequence, and the extraction efficiency of the feature points is low.
Therefore, how to improve the extraction efficiency of the image feature points is a technical problem to be solved urgently by those skilled in the art.
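The Harris/Shi-Tomasi steps listed in the Background can be sketched in plain Python. This is an illustrative, pure-software rendering only: the 3x3 kernels, the 3x3 box window, and the omission of dilation filtering and non-maximum suppression are simplifications, not the patent's hardware design.

```python
# Sketch of the Shi-Tomasi pipeline: Sobel filtering -> covariance
# construction -> box filtering -> response value calculation.
# Kernel and window sizes are illustrative choices.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3(img, k):
    """3x3 convolution; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(k[a][b] * img[i - 1 + a][j - 1 + b]
                            for a in range(3) for b in range(3))
    return out

def box3(img):
    """3x3 box filter (sum over the window)."""
    return conv3(img, [[1] * 3 for _ in range(3)])

def shi_tomasi_response(img):
    gx, gy = conv3(img, SOBEL_X), conv3(img, SOBEL_Y)
    h, w = len(img), len(img[0])
    # Covariance terms Gx^2, Gx*Gy, Gy^2, box-filtered to Sxx, Sxy, Syy.
    sxx = box3([[gx[i][j] ** 2 for j in range(w)] for i in range(h)])
    sxy = box3([[gx[i][j] * gy[i][j] for j in range(w)] for i in range(h)])
    syy = box3([[gy[i][j] ** 2 for j in range(w)] for i in range(h)])
    resp = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Response: smaller eigenvalue of [[Sxx, Sxy], [Sxy, Syy]].
            tr = sxx[i][j] + syy[i][j]
            det = sxx[i][j] * syy[i][j] - sxy[i][j] ** 2
            disc = max((tr / 2) ** 2 - det, 0.0)  # clamp float error
            resp[i][j] = tr / 2 - disc ** 0.5
    return resp
```

On a flat region the response is zero; at the corner of a bright square it is strictly positive, which is what the subsequent threshold judgment exploits.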
Disclosure of Invention
The embodiment of the invention provides a method and a device for extracting feature points in an image, which are used for improving the extraction efficiency of the feature points of the image.
In a first aspect, an embodiment of the present invention provides an apparatus for extracting feature points in an image, including: a filtering module and an extraction module;
the filtering module is used for: sequentially filtering the data of each pixel in the image data to be processed and, whenever the filtering of the data of any pixel is completed, transmitting the filtering result of that pixel to the extraction module;
the extraction module is configured to: when receiving the filtering result of a pixel, extracting the feature points from the filtering result of that pixel and outputting the extraction result of that pixel.
Optionally, in an embodiment of the present invention, the filtering module includes: a plurality of filtering units;
each of the filtering units is configured to: filter the data of a pixel in parallel with the other filtering units, and transmit the filtering result of that pixel to the extraction module.
Optionally, in the embodiment of the present invention, the structures of the filtering units are the same.
Optionally, in an embodiment of the present invention, the filtering unit or the filtering module includes: the device comprises a data reading subunit, a line buffer, an array extracting subunit and a calculating subunit;
the data reading subunit is configured to: sequentially reading the data of each pixel from the image data to be processed, and sequentially storing the read data into the line buffer according to a preset storage rule;
the array extraction subunit is to: extracting the data array from the line buffer according to a preset extraction rule and a preset extraction window;
the calculation subunit is configured to: calculate the data array according to a preset algorithm and output a calculation result, wherein the calculation result corresponds to the data at the central position of the data array.
Optionally, in an embodiment of the present invention, the storage rule includes:
when the line buffer comprises k1 rows of storage space, each row of storage space is assigned a row sequence number according to a preset row-sequence-number setting rule, the image data to be processed comprises k2 rows of pixels, k1 is smaller than k2, k1 and k2 are positive integers, and the row sequence numbers are integers from zero to k1-1, then for the pixels of rows 1 to k1-1 of the image to be processed: the data of the pixels of row F1 is stored into the storage space with row sequence number F1, where F1 is any integer from 1 to k1-1;
for the pixels of rows k1 to k2 of the image to be processed: the data of the pixels of row F2 is stored into the storage space with row sequence number k1-1, replacing the original data in that storage space, where F2 is any integer from k1 to k2;
wherein each row of storage space comprises a head space, a middle space, and a tail space; the storage space whose row sequence number is first set to zero stores only a preset first boundary value, while in the storage spaces corresponding to the remaining row sequence numbers, the head space and the tail space each store a preset second boundary value and the middle space stores the data of the corresponding pixels.
Optionally, in this embodiment of the present invention, the rule for setting the row sequence number includes:
when the line buffer is not filled, the row sequence numbers zero to k1-1 are assigned to the k1 rows of storage space in order of their arrangement;
when the line buffer is filled, or any row of storage space is refilled after the line buffer has been filled, the original row sequence number zero is updated to k1-1, and the original row sequence number F3 is updated to F3-1, where F3 is any integer from 1 to k1-1.
Optionally, in this embodiment of the present invention, the storage capacity of each row of the storage space is the same;
the storage capacity of the middle space is equal to the number of columns of the pixels in the image data to be processed;
the head space and the tail space have the same storage capacity, and when the data array extracted according to the extraction window comprises y rows and y columns, the storage capacity of the head space is (y-1)/2; wherein y is a positive integer.
Optionally, in an embodiment of the present invention, the extraction rule includes:
when the data reading subunit stores the read data of the pixel in row i, column j of the image data to be processed into the line buffer, the data at the center position of the extracted data array is the data of the pixel in row i-1, column j-1 of the image data to be processed, where i and j are integers greater than 1.
Optionally, in an embodiment of the present invention, the array extraction subunits and the calculation subunits are both provided in plurality, and the number of array extraction subunits is the same as the number of calculation subunits;
each of the array extraction subunits is specifically configured to: extract a data array from the line buffer according to the extraction rule and the extraction window, simultaneously with the other array extraction subunits, wherein the data arrays extracted by different array extraction subunits comprise at least partially different data;
each of the calculation subunits is specifically configured to: calculate its data array according to the preset algorithm, simultaneously with the other calculation subunits, and output a calculation result.
In a second aspect, an embodiment of the present invention provides a method for extracting feature points in an image, including:
the filtering module sequentially filters the data of each pixel in the image data to be processed and, whenever the filtering of the data of any pixel is completed, outputs the filtering result of that pixel to the extraction module;
the extraction module extracts the characteristic points in the filtering processing result of the pixel and outputs the extraction result of the pixel.
The invention has the following beneficial effects:
according to the method and device for extracting the feature points in the image, provided by the embodiment of the invention, when the extraction device comprises the filtering module and the extraction module, the filtering module can transmit the filtering result of any pixel to the extraction module when the filtering processing of the data of the pixel is finished, namely the processing result is output after the data of one pixel is processed, and the processing of the data of the other pixels is not required to be waited; and when receiving the filtering processing result of the pixel, the extracting module extracts the feature point in the filtering processing result of the pixel, that is, when receiving the data of one pixel, the extracting module can perform extracting processing, and then can output the extracting result without processing results of other pixels. Therefore, when the filtering module carries out filtering processing on the data of the ith pixel, the extraction module can simultaneously execute extraction work on the middle characteristic point of the (i-1) th pixel, and a 'service pipeline type' working mode is formed among the modules, so that the time required by the extraction of the characteristic points is greatly reduced, the extraction efficiency of the characteristic points is increased, the power consumption of the extraction device can be reduced, and the energy consumption ratio is improved.
Drawings
Fig. 1 is a schematic structural diagram of an extraction device provided in an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a filtering module according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a row number and a row storage space according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating another row number and row storage space provided in an embodiment of the present invention;
FIG. 5 is a diagram illustrating a structure of a row of memory spaces provided in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another filtering module provided in the embodiment of the present invention;
FIG. 7 is a schematic diagram of the Harris/Shi-Tomasi algorithm provided in an embodiment of the present invention;
FIG. 8 is a schematic diagram of a service pipeline mode of operation provided in an embodiment of the present invention;
fig. 9 is a flowchart of an extraction method provided in the embodiment of the present invention.
Detailed Description
The following describes in detail a specific implementation of a method and an apparatus for extracting feature points from an image according to an embodiment of the present invention with reference to the drawings. It should be noted that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides an apparatus for extracting feature points in an image, as shown in fig. 1, which may include: a filtering module 10 and an extraction module 20;
the filtering module 10 is configured to: sequentially filtering the data of each pixel in the image data to be processed, and transmitting the filtering result of any pixel to the extraction module 20 when the filtering of the data of the pixel is completed;
the extraction module 20 is configured to: and when the filtering processing result of the pixel is received, extracting the characteristic point in the filtering processing result of the pixel, and outputting the extraction result of the pixel.
For example, after the filtering module performs filtering processing on the data of the 1 st pixel in the image data to be processed, the result (denoted as L1) is output to the extraction module, and when the extraction module receives the result L1, the extraction module extracts the feature points and outputs the extraction result (denoted as T1);
when the filtering module performs filtering processing on the data of the 2 nd pixel, the extraction module may simultaneously perform feature point extraction on the result L1, and output the extraction result after extraction is completed;
similarly, when the filtering module performs filtering processing on the data of the 3 rd pixel, the extraction module may simultaneously perform feature point extraction on the result L2 (i.e., the result of filtering processing on the data of the 2 nd pixel by the filtering module), and output the extraction result after extraction;
until the filtering module performs filtering processing on the data of the last pixel, the extraction module may simultaneously perform feature point extraction on the result Ln-1 (i.e., the filtering processing result of the filtering module on the data of the second last pixel); then, when receiving the result Ln (i.e., the result of the filtering process performed on the data of the last pixel by the filtering module), the extracting module extracts the feature points, and outputs the extraction result (denoted as Tn).
That is, the filtering module can output the processing result after processing the data of one pixel, without waiting for the processing of the data of the rest pixels; moreover, the extraction module can perform extraction processing when receiving data of one pixel, and then can output an extraction result without processing results of other pixels.
Therefore, while the filtering module filters the data of the n-th pixel, the extraction module can simultaneously extract the feature point of the (n-1)-th pixel, forming a "service pipeline" working mode between the modules, which greatly reduces the time required for feature point extraction, increases extraction efficiency, reduces the power consumption of the extraction device, and improves its energy efficiency.
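The per-pixel hand-off described above can be illustrated with Python generators, where each stage yields its result for one pixel as soon as that pixel is done, so the next stage starts without waiting for the whole image. The stage bodies here are placeholders, not the patent's actual filtering or extraction logic.

```python
# Toy illustration of the "service pipeline" working mode.
log = []

def filter_stage(pixels):
    for i, p in enumerate(pixels, 1):
        log.append(f"filter {i}")
        yield p * 2          # stand-in for per-pixel filtering

def extract_stage(filtered):
    for i, f in enumerate(filtered, 1):
        log.append(f"extract {i}")
        yield f + 1          # stand-in for per-pixel feature extraction

results = list(extract_stage(filter_stage([1, 2, 3])))
```

Because generators are lazy, the log interleaves pixel by pixel ("filter 1", "extract 1", "filter 2", ...) instead of running the two stages back to back over the whole image.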
Optionally, in an embodiment of the present invention, the filtering module includes: a plurality of filtering units;
each filtering unit is used for: and simultaneously, filtering the data of any pixel, and simultaneously transmitting the filtering result of the pixel to the extraction module.
In this way, when the filtering module comprises a plurality of filtering units, the filtering units can work in parallel rather than one unit waiting for another to finish before starting; avoiding serial operation among the filtering units improves the working efficiency of the filtering module and reduces the time required for the filtering processing.
Specifically, in the embodiment of the present invention, the data processed by different filtering units may be different, or the processing manners of the same data by different filtering units may be different.
Taking the Harris/Shi-Tomasi algorithm as an example, for Sobel filtering:
The module performing the Sobel filtering may be referred to as the Sobel filtering module, which may include a horizontal Sobel filtering unit and a vertical Sobel filtering unit: the horizontal unit filters the pixel data in the horizontal direction and the vertical unit filters it in the vertical direction. Thus the horizontal and vertical Sobel filtering units apply different processing to the same data.
For box filtering:
the module for performing the cassette filtering may be referred to as a cassette filtering module, and the cassette filtering module may include: a first box type filter unit, a second box type filter unit and a third box type filter unit, wherein the first box type filter unit can output first data (such as Gx) for constructing covariance matrix module (i.e. module for constructing covariance matrix)2) The second box filter unit may perform filtering processing on second data (e.g., GxGy) output by the covariance matrix building block, and the third box filter unit may perform filtering processing on second data (e.g., Gy) output by the covariance matrix building block2) Carrying out filtering treatment; therefore, the first, second, and third cassette filter units perform processing on different data.
Specifically, in the embodiment of the present invention, the structures of the filtering units are the same.
Therefore, the structure of the filtering module can be simplified, the structure of the extracting device is further simplified, and the complexity of the structure of the extracting device is reduced.
Optionally, in an embodiment of the present invention, as shown in fig. 2, the filtering unit or the filtering module includes: a data reading subunit 21, a line buffer 22, an array extraction subunit 23, and a calculation subunit 24;
the data reading subunit 21 is configured to: sequentially reading the data of each pixel from the image data to be processed, and sequentially storing the read data into the line buffer 22 according to a preset storage rule;
the array extraction subunit 23 is configured to: extracting the data array from the line buffer 22 according to a preset extraction rule and a preset extraction window;
the calculation subunit 24 is configured to: calculate the data array according to a preset algorithm and output a calculation result, wherein the calculation result corresponds to the data at the central position of the data array.
Therefore, the functions of the filtering unit or the filtering module can be realized through the cooperation of the data reading subunit, the array extraction subunit and the calculation subunit, and the filtering unit or the filtering module can output the data of any pixel after the data of the pixel is processed, so that the extraction efficiency of the feature points is improved, and the power consumption of the extraction device is reduced.
Specifically, in the embodiment of the present invention, the storing the rule may include:
when the line buffer comprises k1 rows of storage space, each row of storage space is assigned a row sequence number according to a preset row-sequence-number setting rule, the image data to be processed comprises k2 rows of pixels, k1 is smaller than k2, k1 and k2 are positive integers, and the row sequence numbers are integers from zero to k1-1, then for the pixels of rows 1 to k1-1 of the image to be processed: the data of the pixels of row F1 is stored into the storage space with row sequence number F1, where F1 is any integer from 1 to k1-1;
for the pixels of rows k1 to k2 of the image to be processed: the data of the pixels of row F2 is stored into the storage space with row sequence number k1-1, replacing the original data in that storage space, where F2 is any integer from k1 to k2;
wherein each row of storage space comprises a head space, a middle space, and a tail space; the storage space whose row sequence number is first set to zero stores only a preset first boundary value, while in the storage spaces corresponding to the remaining row sequence numbers, the head space and the tail space each store a preset second boundary value and the middle space stores the data of the corresponding pixels.
For example, as shown in fig. 2, the line buffer 22 includes three rows of storage space (i.e., k1 = 3); in practice, however, the number of rows of storage space in the line buffer 22 is not limited to three, and the storage size of the line buffer 22 may be set according to actual needs, which is not limited herein.
Taking an image with 100 rows of pixels in the image data to be processed (i.e., k2 = 100) as an example, when k1 = 3 the row sequence number (as shown in dashed box 1) may be any one of 0, 1, and 2; thus:
before any pixel data has been read, only the first boundary value is stored in the storage space with row sequence number 0 in the line buffer 22, as shown in fig. 2;
for the read data of the pixels in the 1 st row to the 2 nd row in the image to be processed: the data of the pixels in the 1 st row is stored into the storage space with the row sequence number of 1, and the data of the pixels in the 2 nd row is stored into the storage space with the row sequence number of 2, as shown in fig. 2;
for the pixels of the 3 rd row to the 100 th row in the image to be processed: the data of the 3 rd row of pixels is stored into the storage space with the row sequence number 2, and replaces the original data (i.e. the first boundary value) in the storage space with the row sequence number 2, as shown in fig. 3; the data of the 4 th row of pixels is stored into the storage space with the row sequence number 2, and replaces the original data (i.e. the data of the 1 st row of pixels) in the storage space with the row sequence number 2, as shown in fig. 4; the data of the 5 th row of pixels is stored into the storage space with the row sequence number 2, and replaces the original data in the storage space with the row sequence number 2, which is not shown in the figure; the data of the pixels up to the 100 th row are stored in the storage space with the row sequence number 2, and the original data in the storage space with the row sequence number 2 is replaced, which is not shown in the figure.
Further, referring to the schematic diagram of one row of storage space shown in fig. 5, take as an example a row that stores the data of the pixels of row 1, where row 1 contains 4 pixels, so the row of storage space can hold 6 data items. In this case:
the first position of the row of storage space is defined as the head space, where the second boundary value (e.g., b) is stored;
the second to fifth positions are defined as the middle space, where the data of the 4 pixels of row 1 (e.g., P11, P12, P13, and P14) are stored;
the last position is defined as the tail space, where the second boundary value (e.g., b) is also stored.
In addition, the first boundary value and the second boundary value may be set according to actual needs and are not limited herein.
Therefore, through the setting of the storage rule, the read data of the pixels can be stored to the corresponding position in the line buffer, so that the subsequent array extraction subunit extracts the data array, and the function of the filtering module or the filtering unit is realized.
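Under the Fig. 5 example, the storage rule can be sketched as follows; the function names and the use of Python lists are illustrative assumptions, not the patent's hardware layout. The values a and b stand for the first and second boundary values.

```python
# Sketch of the storage rule: each row of the line buffer holds a head
# space, a middle space for the pixel data, and a tail space.

def make_line_buffer(cols, k1, a):
    """k1 rows of storage space; the row whose sequence number is first
    set to zero holds only the first boundary value a."""
    width = cols + 2                       # head + pixels + tail
    buf = [[a] * width]                    # row with sequence number 0
    buf += [[None] * width for _ in range(k1 - 1)]
    return buf

def store_row(buf, seq, pixels, b):
    """Store one image row into the storage space with sequence number
    seq, framed by the second boundary value b at head and tail."""
    buf[seq] = [b] + list(pixels) + [b]
```

For a 4-pixel image row, each storage row holds 6 items, matching the Fig. 5 layout b, P11, P12, P13, P14, b.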
Specifically, in the embodiment of the present invention, the rule for setting the line sequence number includes:
when the line buffer is not filled, for the k1 rows of storage space, the row sequence numbers zero to k1-1 are assigned in order of the rows' arrangement;
when the line buffer is filled, or any row of storage space is refilled after the line buffer has been filled, the original row sequence number zero is updated to k1-1, and the original row sequence number F3 is updated to F3-1, where F3 is any integer from 1 to k1-1.
For example, as shown in fig. 2, when there is still remaining space in the line buffer 22, it indicates that the line buffer 22 is not filled, and at this time, the line buffer 22 includes three lines of storage spaces, and the line sequence numbers are 0, 1, and 2 from top to bottom;
when the line buffer 22 is filled up, that is, when the read data of the pixels in the 2 nd line is all stored in the storage space with the line sequence number of 2, based on the current situation, the read data of the pixels in the 3 rd line has no remaining space to be stored, and at this time, the line sequence number of the storage space in each line needs to be adjusted, that is: as shown in the dotted line frame 1 in fig. 3, the original line serial number 0 is updated to the line serial number 2, the original line serial number 1 is updated to the line serial number 0, and the original line serial number 2 is updated to the line serial number 1; then, storing the read 3 rd row pixel data into the storage space with the current row serial number of 2, and replacing the first boundary value stored in the storage space with the current row serial number of 2;
similarly, when all the read data of the pixels in the 3 rd row are stored in the storage space with the row sequence number of 2, based on the current situation, the read data of the pixels in the 4 th row still have no remaining space to be stored, and at this time, the row sequence numbers of the storage spaces in the rows need to be continuously adjusted, that is: as shown in the dotted line frame 1 in fig. 4, the original line serial number 0 is updated to the line serial number 2, the original line serial number 1 is updated to the line serial number 0, and the original line serial number 2 is updated to the line serial number 1; then, the read data of the 4 th row of pixels is stored into the storage space with the current row number of 2, and the data of the 1 st row of pixels stored in the storage space with the current row number of 2 is replaced.
Therefore, when data is read and stored, only the row serial number needs to be adjusted, and the data does not need to be moved, so that the increase of the calculation amount of the extraction device due to the movement of a large amount of data can be avoided, and the extraction efficiency of the feature points is improved.
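The relabelling rule above can be sketched as a one-line update; `rotate_seq` is a hypothetical helper name, and the list holds the sequence number currently assigned to each physical row.

```python
# Row-sequence-number update: 0 becomes k1-1 and every other number F3
# becomes F3-1, so the oldest row is relabelled for overwriting without
# any data being moved.

def rotate_seq(seq_numbers, k1):
    return [k1 - 1 if s == 0 else s - 1 for s in seq_numbers]
```

Starting from sequence numbers [0, 1, 2] (Fig. 2), one update yields [2, 0, 1] (Fig. 3) and a second yields [1, 2, 0] (Fig. 4), matching the dashed box 1 in those figures.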
Specifically, in the embodiment of the present invention, the storage capacity of each row of storage space is the same;
the storage capacity of the middle space is equal to the number of columns of pixels in the image data to be processed;
the head space and the tail space have the same storage capacity; when the data array extracted according to the extraction window comprises y rows and y columns, the storage capacity of the head space is (y-1)/2, where y is a positive integer.
For example, as shown in fig. 5, the storage capacity of the head space and the tail space are both 1, and the corresponding extracted data array is 3 rows and 3 columns.
Alternatively, where the data array is 5 rows and 5 columns, y is 5, and the head space and tail space may have a storage capacity of 2.
In this way, when the data array is extracted, the data can be completely extracted, so that the subsequent calculation subunit can calculate the data array conveniently.
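The capacity (y-1)/2 is simply the padding needed to center an odd-sized y-by-y window on a border pixel; a minimal check (the helper name is illustrative, and the odd-size assumption is mine, inferred from the 3x3 and 5x5 examples):

```python
# Head/tail capacity for a centered y-by-y extraction window.

def pad_for_window(y):
    assert y % 2 == 1, "extraction windows are assumed odd-sized"
    return (y - 1) // 2
```

This reproduces the two cases given in the text: a 3x3 window needs capacity 1 and a 5x5 window needs capacity 2.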
Specifically, in the embodiment of the present invention, the extraction rule includes:
when the data reading subunit stores the read data of the pixel in row i, column j of the image data to be processed into the line buffer, the data at the center position of the extracted data array is the data of the pixel in row i-1, column j-1 of the image data to be processed, where i and j are integers greater than 1.
For example, Table 1 shows a line buffer comprising three rows of storage space, where the leftmost column indicates the current row sequence number, and Table 2 shows the data array extracted by the array extraction subunit when the data reading subunit stores the read data of P22 (i.e., the pixel in row 2, column 2) into the line buffer.
TABLE 1
0 | a | a | a | a | a | a | a | a | a |
1 | b | P11 | P12 | P13 | P14 | P15 | P16 | P17 | P18 |
2 | b | P21 | P22 |
TABLE 2
a | a | a |
b | P11 | P12 |
b | P21 | P22 |
In tables 1 and 2, a represents the first boundary value, and b represents the second boundary value, which may be set as needed, but is not limited thereto.
Specifically, based on Tables 1 and 2, when the data reading subunit reads the data of P22 and stores it in the 3rd position of the row with sequence number 2, the array extraction subunit can extract the data array from the line buffer shown in Table 1; the 2nd position of the row with sequence number 1 lies at the center of the data array, i.e., the data of P11 (the pixel in row 1, column 1) is at the center of the extracted array.
Specifically, in the data array, the positional relationship of the data of each pixel in the data array needs to be the same as that in the original image.
For example, when the stored data in the line buffer is as shown in table 3, the data reading subunit has stored the read data of P33 (i.e., the pixel in the 3rd row and 3rd column) at the 4th position of the row with sequence number 2. At this point the array extraction subunit may extract the data array centered on P22 from the line buffer, and the arrangement order of the data in the extracted data array, shown in table 4, is the same as in the original image to be processed.
TABLE 3
2 | b | P31 | P32 | P33 | a | a | a | a | a |
0 | b | P11 | P12 | P13 | P14 | P15 | P16 | P17 | P18 |
1 | b | P21 | P22 | P23 | P24 | P25 | P26 | P27 | P28 |
TABLE 4
P11 | P12 | P13 |
P21 | P22 | P23 |
P31 | P32 | P33 |
In this way, the calculation subunit computes the result for the data array correctly, avoiding calculation errors and improving the accuracy of feature point extraction.
Specifically, in the embodiment of the present invention, a plurality of array extraction subunits and a plurality of calculation subunits are provided, with the number of array extraction subunits equal to the number of calculation subunits;
each array extraction subunit is specifically configured to: extract the data array from the line buffer according to the extraction rule and the extraction window, simultaneously with the other array extraction subunits; wherein the data arrays extracted by different array extraction subunits comprise at least partially different data;
each calculation subunit is specifically configured to: and according to a preset algorithm, simultaneously calculating the data array, and outputting the calculated result.
For example, as shown in fig. 6, M array extraction subunits 23 and M computation subunits 24 are shown, wherein the M array extraction subunits 23 can simultaneously extract data arrays, and then the M computation subunits 24 can simultaneously perform computation processing on the data arrays.
Therefore, the parallelism of the filtering processing can be further increased, the time consumed during the filtering processing is reduced, and the efficiency of extracting the feature points is further improved.
Of course, M should not be set too large; its value may be chosen according to factors such as the resources the extraction device can provide, the required processing speed and the required efficiency of feature point extraction, and is not limited here.
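A minimal software analogue of the M parallel extraction subunits, using a thread pool; the function and parameter names are illustrative, and `extract` stands in for the window-extraction routine of the embodiment:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_in_parallel(img, centres, M, extract):
    """Sketch of M array-extraction subunits working at the same time:
    each worker extracts a window around a different centre, so the
    extracted data arrays contain at least partially different data."""
    with ThreadPoolExecutor(max_workers=M) as pool:
        return list(pool.map(lambda rc: extract(img, rc[0], rc[1]), centres))
```

The results come back in the order of `centres`, so downstream calculation subunits can be paired with extraction subunits one-to-one.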
The following describes the operation of the above-mentioned extraction device provided in the embodiment of the present invention with reference to a specific embodiment.
Taking the Harris/Shi-Tomasi algorithm as an example, as shown in fig. 7, the algorithm may include the following steps: Sobel filtering, covariance matrix construction, box filtering, response value calculation, threshold judgment, dilation filtering and non-maximum suppression; through these steps, the feature points in the original grayscale image can be extracted to finally obtain a feature point list.
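The step chain of fig. 7 can be sketched in plain (non-pipelined) NumPy. The kernel sizes, the constant `k` and the threshold below are illustrative choices, not values from the patent, and the dilation filtering step is folded into the local-maximum test:

```python
import numpy as np

def conv2(img, ker):
    """Naive 'same'-size 2-D correlation with zero padding (illustrative)."""
    kh, kw = ker.shape
    p = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(p[r:r + kh, c:c + kw] * ker)
    return out

def harris_corners(gray, k=0.04, thresh=1e6):
    """Sobel filtering -> covariance construction -> box filtering ->
    response calculation -> threshold judgment -> non-maximum suppression."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx, gy = conv2(gray, sx), conv2(gray, sx.T)   # horizontal / vertical Sobel
    box = np.ones((3, 3)) / 9.0                   # 3x3 box filter
    a = conv2(gx * gx, box)                       # covariance-matrix terms,
    b = conv2(gy * gy, box)                       # each box-filtered
    c = conv2(gx * gy, box)
    r = (a * b - c * c) - k * (a + b) ** 2        # Harris response
    r[r < thresh] = 0                             # threshold judgment
    # non-maximum suppression over a 3x3 neighbourhood
    p = np.pad(r, 1, constant_values=-np.inf)
    peak = np.ones(r.shape, bool)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                peak &= r >= p[1 + dr:1 + dr + r.shape[0],
                               1 + dc:1 + dc + r.shape[1]]
    return np.argwhere(peak & (r > 0))            # feature-point list
```

The Shi-Tomasi variant would replace the Harris response with the smaller eigenvalue of the covariance matrix; the rest of the chain is identical, which is why the embodiment treats the two algorithms together.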
When the Sobel filtering is performed, the input data needs to be filtered in the horizontal direction and in the vertical direction separately, and the two filtering processes are performed simultaneously; when the box filtering is performed, the three input data streams need to be filtered separately, and the three filtering processes are performed simultaneously.
In addition, the module used to perform the Sobel filtering may be defined as a Sobel filtering module, in which the unit performing the filtering in the horizontal direction may be defined as a horizontal filtering unit and the unit performing the filtering in the vertical direction may be defined as a vertical filtering unit; the module performing the construction of the covariance matrix may be defined as a construction module; the module performing the box filtering may be defined as a box filtering module, in which the unit filtering the data Gx² may be defined as a first filtering unit, the unit filtering the data Gy² may be defined as a second filtering unit, and the unit filtering the data GxGy may be defined as a third filtering unit; the module performing the calculation of the response value may be defined as a calculation module; the module performing the threshold judgment may be defined as a judgment module; the module performing the dilation filtering may be defined as a dilation filtering module; and the module performing the non-maximum suppression may be defined as a suppression module.
The Sobel filtering module, the box filtering module and the dilation filtering module differ only in the calculation that the calculation subunit performs on the data array; the rest of their processing is similar.
With reference to fig. 8, taking an example that the image data to be processed includes 6 pixels, the specific execution process of each module may include:
stage 0:
The horizontal filtering unit and the vertical filtering unit in the Sobel filtering module simultaneously filter the data of the 1st pixel and then transmit the processing result (denoted Ls1) to the construction module; because the construction module, the box filtering module, the calculation module, the judgment module, the dilation filtering module and the suppression module receive no input data while the Sobel filtering module is working, they perform no effective work at this point.
Stage 1:
when the construction module receives the processing result Ls1 of the 1st pixel output by the Sobel filtering module, it may process Ls1 to obtain a processing result (denoted G1), which includes Gx²₁, Gy²₁ and GxGy₁;
meanwhile, the horizontal filtering unit and the vertical filtering unit in the Sobel filtering module simultaneously filter the data of the 2nd pixel and then transmit the processing result (denoted Ls2) to the construction module.
Stage 2:
when the first filtering unit in the box filtering module receives the processing result Gx²₁ of the 1st pixel, it processes Gx²₁ to obtain a processing result (denoted Hx1); when the second filtering unit receives Gy²₁, it processes Gy²₁ to obtain a processing result (denoted Hy1); and when the third filtering unit receives GxGy₁, it processes GxGy₁ to obtain a processing result (denoted Hxy1); the first, second and third filtering units operate simultaneously;
when the construction module receives the processing result Ls2 of the 2nd pixel output by the Sobel filtering module, it may process Ls2 to obtain a processing result (denoted G2), which includes Gx²₂, Gy²₂ and GxGy₂;
meanwhile, the horizontal filtering unit and the vertical filtering unit in the Sobel filtering module simultaneously filter the data of the 3rd pixel and then transmit the processing result (denoted Ls3) to the construction module.
The 3rd to 11th stages proceed similarly and are not described in detail here.
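The stage-by-stage behaviour above can be sketched as a simple schedule: at stage t, the m-th module in the chain (0-based) works on pixel t - m. The module names and the function itself are illustrative:

```python
def pipeline_schedule(n_pixels, modules):
    """At stage t the m-th module (0-based) processes pixel t - m, if
    that pixel exists; the pipe drains after n_pixels + n_mod - 1 stages."""
    n_mod = len(modules)
    schedule = []
    for stage in range(n_pixels + n_mod - 1):
        work = {name: stage - m + 1            # 1-based pixel index
                for m, name in enumerate(modules)
                if 0 <= stage - m < n_pixels}
        schedule.append(work)
    return schedule
```

With 6 pixels and the 7 modules of fig. 7 this gives stages 0 through 11, matching the walkthrough above: stage 0 has only the Sobel filtering module working on the 1st pixel, while in stage 4 the Sobel filtering module is on the 5th pixel and the judgment module on the 1st.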
Thus, through the above process, the time consumed in extracting the feature points from the image to be processed is the sum of the execution times of the modules filled with black dots in fig. 8, realizing a pipelined working mode; compared with the prior art, in which each module passes the data of all pixels to the next module only after processing all of it, the time consumed by feature point extraction is greatly shortened and the efficiency of feature point extraction greatly improved.
It should be noted that, optionally, the modules may take different amounts of time to process the results they receive; in that case it is necessary to ensure that:
the modules that must process simultaneously all start processing at the same time;
among the modules processing simultaneously, a module that finishes first waits until the module that finishes last is done before starting to process the next data.
For example, referring to fig. 8 and taking the 4th stage as an example, the modules working in this stage include the Sobel filtering module, the construction module, the box filtering module, the calculation module and the judgment module: the Sobel filtering module filters the data of the 5th pixel, the construction module processes the filtering result of the 4th pixel, the box filtering module processes the covariance matrix result corresponding to the 3rd pixel, the calculation module processes the box filtering result of the 2nd pixel, and the judgment module processes the calculation result of the 1st pixel;
in fig. 8, there is a difference in length of the square corresponding to each module in the direction of M1, which is used to indicate that there is a difference in processing time of each module; if the 'service pipeline type' working module requires that each module at each stage needs to start working at the same time, each module needs to be controlled and coordinated through a service protocol;
therefore, in the 4 th stage shown in fig. 8, the lengths of the modules in the M1 direction are, in order from small to large: judging module, calculating module, building module, box type filtering module and Sobel filtering module, then:
after the judging module finishes the work of the stage, the judging module does not immediately enter the next stage of processing, but is in a waiting stage, and the judging module waits for the Sobel filtering module with the longest working time in the stage to finish the work and then enters the next stage;
similarly, after the calculation module, the construction module and the box type filter module work at the stage, the next stage of processing is not immediately started, and the Sobel filter module with the longest working time in the stage is in a waiting stage and then enters the next stage after finishing working.
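Under this waiting rule, each stage lasts as long as its slowest active module, so the total time can be sketched as follows (`module_times` is an illustrative list of per-stage processing times, ordered as the modules appear in the chain):

```python
def pipelined_time(n_pixels, module_times):
    """Total time when every stage waits for its slowest active module
    (at stage t, module m is active if 0 <= t - m < n_pixels)."""
    n_mod = len(module_times)
    total = 0
    for stage in range(n_pixels + n_mod - 1):
        active = [module_times[m] for m in range(n_mod)
                  if 0 <= stage - m < n_pixels]
        total += max(active)
    return total
```

For instance, with illustrative times Sobel = 5, construction = 3, box = 4, calculation = 2, judgment = 1 over 6 pixels, the pipelined total is 41 time units, versus (5+3+4+2+1) x 6 = 90 if each module processed the whole image before handing it on.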
In this way, the modules in the same stage start working at the same time, which avoids disorder in their operation and thereby reduces errors during feature point extraction.
Based on the same inventive concept, embodiments of the present invention provide a method for extracting feature points in an image; its implementation principle is similar to that of the aforementioned extraction device, so specific embodiments of the method may refer to the specific embodiments of the extraction device, and repeated details are not repeated.
Specifically, the method for extracting feature points in an image according to the embodiment of the present invention, as shown in fig. 9, may include:
s901, the filtering module sequentially carries out filtering processing on the data of each pixel in the image data to be processed, and outputs the filtering processing result of each pixel to the extraction module when the filtering processing on the data of any pixel is finished;
and S902, the extraction module extracts the characteristic points in the filtering processing result of the pixel and outputs the extraction result of the pixel.
Optionally, in an embodiment of the present invention, the filtering module includes a plurality of filtering units, and outputting the filtering processing result of the pixel to the extraction module when the filtering processing of the data of any pixel is finished specifically includes:
each filtering unit simultaneously carries out filtering processing on the data of any pixel and simultaneously transmits the filtering processing result of the pixel to the extraction module.
Optionally, in the embodiment of the present invention, the filtering processing on the data of any pixel specifically includes:
the data reading subunit reads the data of each pixel in sequence from the image data to be processed and stores the read data in the line buffer in sequence according to a preset storage rule;
the array extraction subunit extracts the data array from the line buffer according to a preset extraction rule and a preset extraction window;
the calculation subunit calculates the data array according to a preset algorithm and outputs a calculation result; wherein the calculation result is the calculation result corresponding to the data at the central position of the data array.
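The two method steps can be sketched as a streaming loop in which each per-pixel filtering result is handed to the extraction step as soon as it is ready; `filt` and `is_feature` are placeholders for the preset algorithm and the feature test, which the embodiment does not fix:

```python
def extract_feature_points(pixel_data, filt, is_feature):
    """S901/S902 sketch: filter pixel data one at a time and extract
    from each result as soon as it is produced, instead of buffering
    the filtering results of the whole image first."""
    for idx, data in enumerate(pixel_data):
        result = filt(data)          # S901: per-pixel filtering
        if is_feature(result):       # S902: extraction on this result
            yield idx, result
```

Because the generator yields as it goes, the downstream consumer starts receiving feature points before the last pixel has even been filtered, which is the essence of the pipelined mode described above.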
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. An apparatus for extracting feature points in an image, comprising: a filtering module and an extraction module;
the filtering module is used for: sequentially filtering the data of each pixel in the image data to be processed, and transmitting the filtering result of the pixel to the extraction module when the filtering processing of the data of any pixel is finished;
the extraction module is configured to: and when the filtering processing result of the pixel is received, extracting the characteristic point in the filtering processing result of the pixel, and outputting the extraction result of the pixel.
2. The extraction device of claim 1, wherein the filtering module comprises: a plurality of filtering units;
each of the filtering units is configured to: and simultaneously, filtering the data of any pixel, and simultaneously transmitting the filtering result of the pixel to the extraction module.
3. The extraction apparatus according to claim 2, wherein the structures of the respective filter units are the same.
4. The extraction apparatus according to any one of claims 1 to 3, wherein the filtering unit or the filtering module comprises: the device comprises a data reading subunit, a line buffer, an array extracting subunit and a calculating subunit;
the data reading subunit is configured to: sequentially reading the data of each pixel from the image data to be processed, and sequentially storing the read data into the line buffer according to a preset storage rule;
the array extraction subunit is to: extracting the data array from the line buffer according to a preset extraction rule and a preset extraction window;
the computing subunit is configured to: calculate the data array according to a preset algorithm, and output a calculation result; wherein the calculation result is the calculation result corresponding to the data at the central position of the data array.
5. The extraction device of claim 4, wherein the storage rule comprises:
when the line buffer comprises k1 lines of storage space, each line of storage space is assigned a line sequence number according to a preset line sequence number setting rule, the image data to be processed comprises k2 lines of pixels, k1 is smaller than k2, k1 and k2 are positive integers, and each line sequence number is an integer from zero to k1-1; for pixels in the 1st line to the (k1-1)th line of the image to be processed: the data of the pixels of the F1th line is stored into the storage space with line sequence number F1, where F1 is any integer from 1 to k1-1;
for pixels in the k1th line to the k2th line of the image to be processed: the data of the pixels of the F2th line is stored into the storage space with line sequence number k1-1, replacing the data originally in that storage space, where F2 is any integer from k1 to k2;
wherein each row of the storage space comprises: the storage space with the row sequence number set to zero for the first time only stores a preset first boundary value, and except the storage space with the row sequence number set to zero for the first time, the storage spaces corresponding to the rest row sequence numbers are: the head space and the tail space are both stored with preset second boundary values, and the middle space is stored with data of corresponding pixels.
6. The extraction apparatus according to claim 5, wherein the row number setting rule includes:
when the line buffer is not filled, sequentially setting zero to k1-1 as the line sequence numbers of the storage spaces of the k1 lines according to the arrangement sequence of the storage spaces of the lines;
when the line buffer is filled or any line memory space is filled after the line buffer is filled, the original line sequence number zero is updated to the line sequence number k1-1, and the original line sequence number F3 is updated to the line sequence number F3-1, wherein F3 is any integer from 1 to k 1-1.
7. The extraction apparatus according to claim 5, wherein the storage capacity of the storage space is the same for each row;
the storage capacity of the middle space is equal to the number of columns of the pixels in the image data to be processed;
the head space and the tail space have the same storage capacity, and when the data array extracted according to the extraction window comprises y rows and y columns, the storage capacity of the head space is (y-1)/2; wherein y is a positive integer.
8. The extraction device of claim 4, wherein the extraction rule comprises:
when the data reading subunit stores the read data of the pixel in the ith row and jth column of the image data to be processed into the line buffer, the data at the central position of the extracted data array is the data of the pixel in the (i-1)th row and (j-1)th column of the image data to be processed, where i and j are integers greater than 1.
9. The extraction apparatus according to claim 4, wherein a plurality of array extraction subunits and a plurality of calculation subunits are provided, the number of array extraction subunits being the same as the number of calculation subunits;
each of the array extraction subunits is specifically configured to: according to the extraction rule, simultaneously extracting the data array from the line buffer according to the extraction window; wherein the data arrays extracted by different array extracting subunits comprise at least partially different data;
each of the calculation subunits is specifically configured to: calculate the data array according to the preset algorithm, simultaneously with the other calculation subunits, and output a calculation result.
10. A method for extracting feature points in an image is characterized by comprising the following steps:
the filtering module is used for sequentially filtering the data of each pixel in the image data to be processed and outputting the filtering result of the pixel to the extraction module when the filtering processing of the data of any pixel is finished;
the extraction module extracts the characteristic points in the filtering processing result of the pixel and outputs the extraction result of the pixel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011516597.7A CN112529016A (en) | 2020-12-21 | 2020-12-21 | Method and device for extracting feature points in image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112529016A true CN112529016A (en) | 2021-03-19 |
Family
ID=75002007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011516597.7A Withdrawn CN112529016A (en) | 2020-12-21 | 2020-12-21 | Method and device for extracting feature points in image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112529016A (en) |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN2872480Y (en) * | 2005-12-09 | 2007-02-21 | 中国科学院沈阳自动化研究所 | Image corner fast extraction device |
CN101156776A (en) * | 2007-09-17 | 2008-04-09 | 中国人民解放军第四军医大学 | Electrical impedance scanning detection system and method of real-time multi-information extraction |
CN101540046A (en) * | 2009-04-10 | 2009-09-23 | 凌阳电通科技股份有限公司 | Panoramagram montage method and device based on image characteristics |
CN102222317A (en) * | 2011-06-22 | 2011-10-19 | 王洪剑 | Image scaling method and system |
JP2014038606A (en) * | 2012-07-20 | 2014-02-27 | Jfe Steel Corp | Feature point extraction method for photographed image, and feature point extraction device |
US20140198995A1 (en) * | 2013-01-15 | 2014-07-17 | Stmicroelectronics S.R.I. | Method and apparatus for computing image pyramids and related computer program product |
CN104021549A (en) * | 2014-05-19 | 2014-09-03 | 清华大学深圳研究生院 | Total affine invariant SURF feature point detection method and device thereof |
CN104978728A (en) * | 2014-04-08 | 2015-10-14 | 南京理工大学 | Image matching system of optical flow method |
CN105335943A (en) * | 2015-09-24 | 2016-02-17 | 上海斐讯数据通信技术有限公司 | Image median filtering method and system |
CN106204660A (en) * | 2016-07-26 | 2016-12-07 | 华中科技大学 | A kind of Ground Target Tracking device of feature based coupling |
CN106503743A (en) * | 2016-10-31 | 2017-03-15 | 天津大学 | A kind of quantity is more and the point self-adapted clustering method of the high image local feature of dimension |
CN108681984A (en) * | 2018-07-26 | 2018-10-19 | 珠海市微半导体有限公司 | A kind of accelerating circuit of 3*3 convolution algorithms |
CN108694735A (en) * | 2018-05-11 | 2018-10-23 | 歌尔科技有限公司 | Wearable device and analog dial pointer picture compression storage redraw method, equipment |
CN108876711A (en) * | 2018-06-20 | 2018-11-23 | 山东师范大学 | A kind of sketch generation method, server and system based on image characteristic point |
CN109671042A (en) * | 2018-12-19 | 2019-04-23 | 西安电子科技大学 | Gray-scale image processing system and method based on FPGA morphological operator |
CN110246165A (en) * | 2019-05-30 | 2019-09-17 | 中国科学院长春光学精密机械与物理研究所 | It improves visible images and SAR image matches the method and system of Quasi velosity |
US20190355169A1 (en) * | 2018-05-18 | 2019-11-21 | Samsung Electronics Co., Ltd. | Semantic mapping for low-power augmented reality using dynamic vision sensor |
CN110782477A (en) * | 2019-10-10 | 2020-02-11 | 重庆第二师范学院 | Moving target rapid detection method based on sequence image and computer vision system |
CN110991291A (en) * | 2019-11-26 | 2020-04-10 | 清华大学 | Image feature extraction method based on parallel computing |
CN111583093A (en) * | 2020-04-27 | 2020-08-25 | 西安交通大学 | Hardware implementation method for ORB feature point extraction with good real-time performance |
CN111861883A (en) * | 2020-06-23 | 2020-10-30 | 燕山大学 | Multi-channel video splicing method based on synchronous integral SURF algorithm |
US20210142767A1 (en) * | 2017-12-29 | 2021-05-13 | Zhejiang Uniview Technologies Co., Ltd. | Image data reading method and apparatus, electronic device, and readable storage medium |
Non-Patent Citations (6)
Title |
---|
LI, H等: "Feature Point Extraction and Tracking Based on a Local Adaptive Threshold", IEEE ACCESS, vol. 8, pages 44325 - 44334, XP011777649, DOI: 10.1109/ACCESS.2020.2977841 * |
OZGUNALP, U等: "Robust lane-detection algorithm based on improved symmetrical local threshold for feature extraction and inverse perspective mapping", IET IMAGE PROCESSING, vol. 13, no. 6, pages 975 - 982, XP006081443, DOI: 10.1049/iet-ipr.2018.5154 * |
ZHAO, CY等: "Optical nanoscale positioning measurement with a feature-based method", OPTICS AND LASERS IN ENGINEERING, vol. 134, pages 106225 * |
JIANG, Xiaoming; LIU, Qiang: "Low-complexity fast SIFT feature extraction based on FPGA", Journal of Beijing University of Aeronautics and Astronautics, no. 04, pages 167 - 173 *
LUO, Jun; XIAO, Fang; MAO, Xueying; HUANG, Qijun; CHANG, Sheng: "Implementation of an orientation-filtering fingerprint image enhancement algorithm based on FPGA", Application of Electronic Technique, no. 06, pages 13 - 16 *
XUE, Shunrui; GAO, Yuan; TANG, Xiangcheng; LIU, Yi; HUANG, Zili: "Feature point detection of the SIFT algorithm with FPGA-based parallel processing", Video Engineering, no. 23, pages 194 - 198 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20210319 |