CN111010492A - Image processing circuit and related image processing method - Google Patents
Image processing circuit and related image processing method
- Publication number
- CN111010492A (Application CN201811169586.9A)
- Authority
- CN
- China
- Prior art keywords
- convolution filter
- convolution
- image processing
- image data
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image processing circuit and a related image processing method. In operation of the image processing circuit, a receiving circuit receives image data, and a feature acquisition module performs feature acquisition on the image data using one or more convolution filters of various topologies to generate a plurality of image features, such as smoothness features and edge features, determined by the topology and weights of the convolution filters. The convolution filters used by the feature acquisition module are not limited to the conventional square convolution filter; non-square convolution filters with various topologies are provided, so that the feature acquisition module can acquire richer image features for identifying the content of the image data.
Description
Technical Field
The present invention relates to image processing, and more particularly, to an image processing circuit with image recognition function and a related image processing method.
Background
At present, image processing circuits that use deep learning typically involve a convolutional neural network. This type of neural network can be trained on a sufficiently large set of images to obtain optimized parameters for the network model; those parameters are used to extract image features for a determining circuit to identify and make decisions, and the convolution operation is the key step in obtaining the image features. The convolution filters used in current neural networks are square matrices, such as 3 × 3 or 5 × 5 convolution filters. Although a square convolution filter is computationally convenient and intuitive, it is not consistent with the distance (L1 norm) of each pixel from the center pixel. For example, eight pixels lie at an L1 distance of 2 from the center pixel, but a 3 × 3 convolution filter covers only four of them, so some pixels at the same distance are included in the calculation while others are not. This causes an imbalance in the weights applied to the pixels of the input image and in turn makes it more difficult to obtain the image features.
Disclosure of Invention
It is therefore an object of the present invention to provide an image processing circuit that uses a non-square convolution filter to perform operations to solve the problems of the prior art.
In an embodiment of the present invention, an image processing circuit is disclosed, which includes a receiving circuit, a feature acquisition module and a determining circuit. In operation of the image processing circuit, the receiving circuit receives image data, the feature acquisition module uses at least a first convolution filter to operate on the image data or on feature data generated from the image data to generate a feature map, wherein the first convolution filter is a non-square convolution filter, and the determining circuit identifies the content of the image data according to the feature map.
In one embodiment of the present invention, an image processing method is disclosed, which comprises the following steps: receiving image data; using at least a first convolution filter to operate on the image data or on feature data generated from the image data to generate a feature map, wherein the first convolution filter is a non-square convolution filter; and identifying the content of the image data according to the feature map.
Drawings
FIG. 1 is a diagram of a diamond convolution filter according to an embodiment of the present invention.
FIG. 2 is a diagram of a hole diamond (dilated-rhombus) convolution filter according to an embodiment of the present invention.
Fig. 3 is a diagram of a cross convolution filter and a hole cross (dilated-cross) convolution filter according to an embodiment of the present invention.
FIG. 4 is a diagram of an X-shaped convolution filter and a hole X-shaped (dilated-X-shaped) convolution filter according to an embodiment of the present invention.
Fig. 5 is a diagram of an asterisk-shaped (米-shaped) convolution filter and a hole asterisk-shaped convolution filter according to an embodiment of the invention.
Fig. 6 is a schematic diagram of a convolution operation performed on image data using the convolution filter of the present embodiment.
FIG. 7 is a diagram of an image processing circuit according to an embodiment of the invention.
FIG. 8 is a diagram illustrating feature maps generated by a plurality of feature acquisition circuits.
FIG. 9 is a flowchart illustrating an image processing method according to an embodiment of the invention.
Detailed Description
Fig. 1 is a schematic diagram of a diamond convolution filter according to an embodiment of the present invention, where fig. 1 illustrates a 3 × 3 diamond convolution filter and a 5 × 5 diamond convolution filter. The 3 × 3 diamond convolution filter contains a center parameter C0 and four parameters C11, C12, C13, C14 located one pixel away from the center parameter C0, wherein the center parameter C0 and the four parameters C11, C12, C13, C14 can be any suitable real numbers, and the remaining peripheral positions are not set to any value or are set to the parameter "0". The 5 × 5 diamond convolution filter contains a center parameter C0, four parameters C11, C12, C13, C14 located one pixel away from the center parameter C0, and eight parameters C21, C22, C23, C24, C25, C26, C27, C28 located two pixels away from the center parameter C0, wherein the above parameters can be any suitable real numbers, and the remaining peripheral positions are not set to any value or are set to the parameter "0".
In the embodiment shown in fig. 1, the diamond convolution filter includes all of the parameters C11, C12, C13, and C14 located one pixel away from the center parameter C0, and/or all of the parameters C21, C22, C23, C24, C25, C26, C27, and C28 located two pixels away from the center parameter C0. The diamond convolution filter therefore has a balanced design with respect to pixel distance, and may achieve a better effect on image feature acquisition and improve the efficiency of image feature acquisition.
It should be noted that the diamond convolution filters shown in fig. 1 are only examples. In other embodiments of the present invention, any related design variation falls within the scope of the present invention as long as the diamond convolution filter includes all of the parameters located N pixels away from the center parameter, where N is any positive integer greater than zero. In other words, if the diamond convolution filter includes a parameter that is M pixels away from the center parameter, then it includes all of the parameters that are M pixels away from the center parameter, where M is any positive integer.
Fig. 2 is a schematic diagram of a hole diamond (dilated-rhombus) convolution filter according to an embodiment of the present invention, in which three 5 × 5 hole diamond convolution filters are shown for illustration. Compared with the 5 × 5 diamond convolution filter shown in fig. 1, the hole diamond convolution filter on the left side of fig. 2 includes only the eight parameters C21, C22, C23, C24, C25, C26, C27, and C28 located two pixels away from the center parameter C0, but does not include the center parameter C0 or the four parameters C11, C12, C13, and C14 located one pixel away from the center parameter C0; that is, the positions of the center parameter C0, the four parameters C11, C12, C13, C14 and the remaining periphery are not set to any value or are set to the parameter "0". The hole diamond convolution filter in the middle of fig. 2 includes only the four parameters C11, C12, C13, C14 located one pixel away from the center parameter C0 and the eight parameters C21, C22, C23, C24, C25, C26, C27, C28 located two pixels away from the center parameter C0; that is, the center parameter C0 and the remaining periphery are not set to any value or are set to the parameter "0". The hole diamond convolution filter on the right side of fig. 2 includes only the center parameter C0 and the eight parameters C21, C22, C23, C24, C25, C26, C27, C28 located two pixels away from the center parameter C0; that is, the parameters C11, C12, C13, C14 and the remaining periphery are not set to any value or are set to the parameter "0".
Fig. 3 is a schematic diagram of a cross convolution filter and a hole cross (dilated-cross) convolution filter according to an embodiment of the present invention. Referring to fig. 3, only the pixel positions marked with diagonal hatching hold real parameters, and the remaining blank positions have no value or are set to the parameter "0". In this embodiment, if the image features are more pronounced in the vertical or horizontal direction, the cross convolution filter and the hole cross convolution filter of this embodiment can improve the efficiency of image feature acquisition.
Fig. 4 is a schematic diagram of an X-shaped convolution filter and a hole X-shaped (dilated-X-shaped) convolution filter according to an embodiment of the present invention. Referring to fig. 4, only the pixel positions marked with diagonal hatching hold real parameters, and the remaining blank positions have no value or are set to the parameter "0". In this embodiment, if the image features are more pronounced along the 45-degree diagonal directions, the X-shaped convolution filter and the hole X-shaped convolution filter of this embodiment can improve the efficiency of image feature acquisition.
Fig. 5 is a schematic diagram of an asterisk-shaped (米-shaped) convolution filter and a hole asterisk-shaped convolution filter according to an embodiment of the present invention. Referring to fig. 5, only the pixel positions marked with diagonal hatching hold real parameters, and the remaining blank positions have no value or are set to the parameter "0". In this embodiment, if the image features are more pronounced in the vertical or horizontal direction or along the 45-degree diagonal directions, the asterisk-shaped convolution filter and the hole asterisk-shaped convolution filter of this embodiment can improve the efficiency of image feature acquisition.
It should be noted that the embodiments shown in fig. 2-5 are only exemplary; persons of ordinary skill in the art should understand that the diamond convolution filters, hole diamond convolution filters, cross convolution filters, hole cross convolution filters, X-shaped convolution filters, hole X-shaped convolution filters, asterisk-shaped convolution filters, and hole asterisk-shaped convolution filters of the above embodiments can have other sizes, such as 7 × 7, 9 × 9, and so on.
FIG. 6 is a diagram illustrating a convolution operation performed on image data (an image frame) 610 using the convolution filter of the present embodiment. As shown in fig. 6, assuming that the image data 610 includes a plurality of pixels, the convolution operation aligns each pixel with the center parameter of the convolution filter and then performs a weighted addition of the pixel values within the range of the convolution filter and the corresponding convolution filter parameters to obtain a processed pixel value for that pixel. For example, taking the 5 × 5 diamond convolution filter shown in fig. 1 and the pixel P11 as an example, the processed pixel value P11' can be calculated using the following formula:
P11'=P11*C0+P12*C14+P13*C27+P21*C13+P22*C26+P31*C25
Since P11 is located at the boundary of the image, zero padding is performed around the image data 610 in the convolution operation, and some of the parameters of the 5 × 5 diamond convolution filter (such as C11, C12, C21, C22, and C23) do not appear in the above formula because the corresponding padded pixel values are zero. In the same manner, the weighted addition is performed with the convolution filter for each pixel in the image data 610 to obtain the corresponding processed pixel values P12', P13', …, thereby obtaining a feature map 620.
It is noted that the image data 610 shown in FIG. 6 may also be a feature map obtained from image data; that is, the convolution operation may further acquire features from a feature map to generate another feature map.
Fig. 7 is a diagram of an image processing circuit 700 according to an embodiment of the invention. As shown in fig. 7, the image processing circuit 700 includes a receiving circuit 710, a feature acquisition module 720 and a determining circuit 730, wherein the feature acquisition module 720 includes a plurality of feature acquisition circuits 722_1 to 722_K. In the present embodiment, the image processing circuit 700 is applied to a monitoring system: the image processing circuit 700 receives image data Din from a monitor and performs an image recognition operation to determine the content of the image data Din, for example, to determine whether a person appears in the image data Din.
In operation of the image processing circuit 700, the receiving circuit 710 first receives the image data Din, performs some front-end processing on the image data Din, and then sends the processed image data to the feature acquisition module 720. Next, the feature acquisition circuit 722_1 in the feature acquisition module 720 performs a feature acquisition operation on the image data Din using one or more convolution filters to generate at least one feature map, and the feature acquisition circuits 722_2 to 722_K each sequentially perform another feature acquisition operation, using one or more convolution filters, on the feature maps generated by the previous stage to generate further feature maps. Taking fig. 8 as an example, assuming that the image data Din has 46 × 46 pixels and each pixel has three pixel values (red, green, and blue), the feature acquisition circuit 722_1 may perform 32 convolution operations (feature acquisition operations) on the image data Din using a plurality of convolution filters; without zero padding around the image data Din, this generates 32 feature maps of size 40 × 40. In this embodiment, the convolution filters used by the feature acquisition circuit 722_1 may be 7 × 7 convolution filters generated according to any of the non-square convolution filters of the embodiments of fig. 1-5 and/or a conventional square convolution filter. For example, the feature acquisition circuit 722_1 may use a 7 × 7 diamond convolution filter to perform a feature acquisition operation on the image data Din to generate a first feature map, then use a conventional 7 × 7 square convolution filter on the image data Din to generate a second feature map, then use a 7 × 7 hole diamond convolution filter on the image data Din to generate a third feature map, and so on, until the feature acquisition circuit 722_1 generates the 32nd feature map. The 32 feature maps are output as the first layer of the feature acquisition module 720.
In the above embodiment, the feature acquisition circuit 722_1 uses a plurality of convolution filters of different topologies to generate a plurality of feature maps, and then combines these feature maps as the output of one layer. The feature acquisition circuit 722_1 thus acts as a convolution module with multiple topological forms (a multi-topology convolution filter module), which better matches the diversity within a convolutional neural network and divides the work of extracting image features more efficiently.
Next, the feature acquisition circuit 722_2 may down-sample each 40 × 40 feature map output by the feature acquisition circuit 722_1 so that each feature map becomes 20 × 20, and then perform 64 convolution operations on the down-sampled feature maps using a plurality of 5 × 5 convolution filters; without zero padding around the down-sampled feature maps, this generates 64 feature maps of size 16 × 16, which serve as the second-layer output of the feature acquisition module 720. The convolution filters used by the feature acquisition circuit 722_2 may include any of the non-square convolution filters of the embodiments of fig. 1-5 and/or a conventional square convolution filter. Next, the feature acquisition circuit 722_3 may down-sample each 16 × 16 feature map output by the feature acquisition circuit 722_2 so that each feature map becomes 8 × 8, and then perform 128 convolution operations on the down-sampled feature maps using a plurality of 3 × 3 convolution filters; without zero padding, this generates 128 feature maps of size 6 × 6, which serve as the third-layer output of the feature acquisition module 720. The convolution filters used by the feature acquisition circuit 722_3 may likewise include any of the non-square convolution filters of the embodiments of fig. 1-5 and/or a conventional square convolution filter. Finally, the feature acquisition circuit 722_4 performs 128 convolution operations on the feature maps output by the feature acquisition circuit 722_3 using 3 × 3 convolution filters; without zero padding, this generates 128 feature maps of size 4 × 4, which are output as the fourth layer of the feature acquisition module 720. The convolution filters used by the feature acquisition circuit 722_4 may include any of the non-square convolution filters of the embodiments of fig. 1-5 and/or a conventional square convolution filter.
It should be noted that the content shown in fig. 8 is only an exemplary illustration and is not a limitation of the present invention. In other embodiments of the present invention, the size of the image data Din and the size of each feature map may be varied according to the designer's requirements, and the number of feature acquisition circuits 722_1 to 722_K and the number of output layers of the feature acquisition module 720 may likewise be varied; as long as at least one of the feature acquisition circuits 722_1 to 722_K performs a feature acquisition operation with a non-square convolution filter of the present invention, the related design variation falls within the scope of the present invention.
Finally, the determining circuit 730 can perform related operations to identify the content of the image data Din according to the last-layer output of the feature acquisition module 720.
FIG. 9 is a flowchart illustrating an image processing method according to an embodiment of the invention. With reference to the disclosure of the above embodiments, the flow of the image processing method is as follows.
Step 900: the process begins.
Step 902: Receive image data.
Step 904: Use a non-square convolution filter to operate on the image data or on feature data generated from the image data to generate a feature map.
Step 906: Identify the content of the image data according to the feature map.
Briefly summarized, in the image processing circuit and the image processing method of the present invention, feature acquisition is performed using a non-square convolution filter, which can increase the efficiency of image feature acquisition in some cases. In addition, in an image identification system based on deep learning, the non-square convolution filter of the invention can be combined with convolution filters of other forms so that the feature acquisition circuit acts as a convolution module with multiple topological forms, which better matches the diversity within a convolutional neural network and divides the work of extracting image features more efficiently.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.
Description of the symbols
610. Din image data
620 characteristic diagram
700 image processing circuit
710 receiving circuit
720 feature acquisition module
722_1 to 722_K feature acquisition circuits
730 judging circuit
900 to 906 steps
C0 center parameter
C11, C12, C13, C14, C21, C22, C23, C24, C25, C26, C27 and C28 parameters
P11, P12, P13, P21, P22, P23, P31, P32 and P33 pixels
P11', P12', P13', P21', P22', P23', P31', P32' and P33' processed pixel values
Claims (10)
1. An image processing circuit, comprising:
a receiving circuit for receiving an image data;
a feature acquisition module, coupled to the receiving circuit, for using at least a first convolution filter to operate on the image data or on feature data generated from the image data to generate a feature map, wherein the first convolution filter is a non-square convolution filter; and
a determining circuit, coupled to the feature acquisition module, for identifying the content of the image data according to the feature map.
2. The image processing circuit of claim 1 wherein the first convolution filter includes all parameters that are N pixels away from a center parameter of the first convolution filter, and N is a positive integer greater than zero.
3. The image processing circuit of claim 2 wherein if the first convolution filter includes a parameter that is M pixels away from the center parameter of the first convolution filter, then the first convolution filter includes all parameters that are M pixels away from the center parameter of the first convolution filter, and M is any positive integer.
4. The image processing circuit of claim 2 wherein the first convolution filter is a diamond convolution filter.
5. The image processing circuit of claim 2 wherein the first convolution filter is a hole diamond convolution filter.
6. The image processing circuit of claim 1 wherein the first convolution filter is a cross convolution filter or a hole cross convolution filter.
7. The image processing circuit of claim 1 wherein the first convolution filter is an X-convolution filter or a hole X-convolution filter.
8. The image processing circuit of claim 1 wherein the first convolution filter is an asterisk-shaped convolution filter or a hole asterisk-shaped convolution filter.
9. The image processing circuit of claim 1 wherein the feature capture module further uses a second convolution filter to operate on the image data or the feature data generated from the image data to generate another feature map, wherein the second convolution filter is a square convolution filter.
10. An image processing method, comprising:
receiving image data;
using at least a first convolution filter to operate on the image data or on feature data generated from the image data to generate a feature map, wherein the first convolution filter is a non-square convolution filter; and
identifying the content of the image data according to the feature map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811169586.9A CN111010492B (en) | 2018-10-08 | 2018-10-08 | Image processing circuit and related image processing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811169586.9A CN111010492B (en) | 2018-10-08 | 2018-10-08 | Image processing circuit and related image processing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111010492A true CN111010492A (en) | 2020-04-14 |
CN111010492B CN111010492B (en) | 2022-05-13 |
Family
ID=70110633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811169586.9A Active CN111010492B (en) | 2018-10-08 | 2018-10-08 | Image processing circuit and related image processing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111010492B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005073071A (en) * | 2003-08-26 | 2005-03-17 | Kyocera Mita Corp | Image processing apparatus |
US20170200078A1 (en) * | 2014-08-28 | 2017-07-13 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Convolutional neural network |
US20160253788A1 (en) * | 2015-02-27 | 2016-09-01 | Siliconfile Technologies Inc. | Device for removing noise on image using cross-kernel type median filter and method therefor |
KR101733228B1 (en) * | 2016-04-28 | 2017-05-08 | 주식회사 메디트 | Apparatus for three dimensional scanning with structured light |
US20180101743A1 (en) * | 2016-10-10 | 2018-04-12 | Gyrfalcon Technology, Inc. | Digital Integrated Circuit For Extracting Features Out Of An Input Image Based On Cellular Neural Networks |
CN107665491A (en) * | 2017-10-10 | 2018-02-06 | 清华大学 | The recognition methods of pathological image and system |
CN108182455A (en) * | 2018-01-18 | 2018-06-19 | 齐鲁工业大学 | A kind of method, apparatus and intelligent garbage bin of the classification of rubbish image intelligent |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117313818A (en) * | 2023-09-28 | 2023-12-29 | 四川大学 | Method for training lightweight convolutional neural network and terminal equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111010492B (en) | 2022-05-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111784603B (en) | RAW domain image denoising method, computer device and computer readable storage medium | |
CN109376681A (en) | A kind of more people's Attitude estimation method and system | |
CN111861961A (en) | Multi-scale residual error fusion model for single image super-resolution and restoration method thereof | |
Kim et al. | Multiple level feature-based universal blind image quality assessment model | |
CN109313805A (en) | Image processing apparatus, image processing system, image processing method and program | |
CN105590319A (en) | Method for detecting image saliency region for deep learning | |
CN114004754A (en) | Scene depth completion system and method based on deep learning | |
CN111353956B (en) | Image restoration method and device, computer equipment and storage medium | |
CN112562255A (en) | Intelligent image detection method for cable channel smoke and fire condition in low-light-level environment | |
CN114419349B (en) | Image matching method and device | |
CN115588190A (en) | Mature fruit identification and picking point positioning method and device | |
CN109933639B (en) | Layer-superposition-oriented multispectral image and full-color image self-adaptive fusion method | |
CN113139906B (en) | Training method and device for generator and storage medium | |
CN116052218B (en) | Pedestrian re-identification method | |
CN111010492B (en) | Image processing circuit and related image processing method | |
KR20080079443A (en) | Method and apparatus for extracting object from image | |
CN115641632A (en) | Face counterfeiting detection method based on separation three-dimensional convolution neural network | |
CN111126185A (en) | Deep learning vehicle target identification method for road intersection scene | |
TWI677230B (en) | Image processing circuit and associated image processing method | |
CN112085164B (en) | Regional recommendation network extraction method based on anchor-free frame network | |
CN116071625B (en) | Training method of deep learning model, target detection method and device | |
CN107945119A (en) | Correlated noise method of estimation in image based on bayer-pattern | |
CN112016487A (en) | Intelligent identification method and equipment | |
CN115190226B (en) | Parameter adjustment method, neural network model training method and related devices | |
CN113793358B (en) | Target tracking and positioning method and device and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |