CN109784372B - Target classification method based on convolutional neural network


Info

Publication number
CN109784372B
CN109784372B
Authority
CN
China
Prior art keywords
image
row
column
size
matrix
Prior art date
Legal status
Active
Application number
CN201811544116.6A
Other languages
Chinese (zh)
Other versions
CN109784372A (en
Inventor
陈禾
魏鑫
贾明飞
刘文超
陈亮
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Priority to CN201811544116.6A
Publication of CN109784372A
Application granted
Publication of CN109784372B

Abstract

The invention provides a target classification method based on a convolutional neural network. Instead of traversing the target image directly with a convolution kernel, the method traverses the whole target image row by row and column by column with a sliding window of the same size as the output feature image, extracting the covered pixels of the target image as sub-images. Each feature parameter of the convolution kernel is then multiplied with its corresponding sub-image to obtain an intermediate image, and the sum of the intermediate images is finally taken as the output feature image.

Description

Target classification method based on convolutional neural network
Technical Field
The invention belongs to the technical field of image classification, and particularly relates to a target classification method based on a convolutional neural network.
Background
Image classification is an important branch of image processing, and how to classify images quickly and accurately is an active research topic in the field. Over the past five years, convolutional neural networks have achieved good results in image feature extraction, classification, and recognition.
When a convolutional neural network performs convolution, the convolution kernel must traverse the input feature map, and an address jump occurs at every traversal step. Here, an address jump means that the convolution must jump between non-contiguous addresses of the sequentially stored input feature map data. On a hardware platform such a jump costs far more logic control than on a software platform, so as the number of traversal steps grows, the address jumps multiply when the ARM (microprocessor) NEON (vector coprocessing unit) reads data, which greatly reduces hardware processing efficiency.
Disclosure of Invention
To solve these problems, the invention provides a target classification method based on a convolutional neural network that minimizes the number of address jumps when the microprocessor reads data during convolution, and thereby greatly improves hardware processing efficiency.
A target classification method based on a convolutional neural network comprises the following steps:
S1: acquiring a target image of size M × M;
S2: traversing the whole target image row by row and column by column, starting from its upper-left corner, with a sliding window of size (M-N+1) × (M-N+1), where at each traversal position the sliding window extracts the pixels it covers from the target image as a sub-image, so that N × N sub-images are obtained in total; here N is the size of the convolution kernel used in the convolutional neural network, and the convolution kernel of size N × N contains N × N preset feature parameters;
S3: multiplying the first preset feature parameter of the convolution kernel by every pixel of the first sub-image to obtain a first intermediate image, and, by analogy, multiplying the remaining preset feature parameters of the convolution kernel one by one with the pixels of the remaining sub-images to obtain N × N intermediate images, each of size (M-N+1) × (M-N+1);
S4: summing the corresponding pixels of the N × N intermediate images to obtain a feature output image of size (M-N+1) × (M-N+1);
S5: performing a pooling operation on the feature output image to obtain a sampled image of size ((M-N+1)/2) × ((M-N+1)/2);
S6: classifying the sampled image with the configured fully connected layer.
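Steps S2-S5 can be made concrete with a short sketch. The following NumPy illustration shows the sliding-window convolution and pooling described above; it is a minimal sketch, not the patented NEON implementation, and the function names and the choice of average pooling (the patent does not fix the pooling type) are assumptions made for illustration:

import numpy as np

def sliding_window_convolution(target, kernel):
    # Steps S2-S4: keep the kernel fixed and slide an output-sized window.
    M = target.shape[0]           # target image is M x M
    N = kernel.shape[0]           # convolution kernel is N x N
    out = M - N + 1               # feature output image is out x out
    feature = np.zeros((out, out), dtype=float)
    for r in range(N):            # N x N window positions in total
        for c in range(N):
            sub_image = target[r:r + out, c:c + out]   # S2: extract a sub-image
            intermediate = kernel[r, c] * sub_image    # S3: one intermediate image
            feature += intermediate                    # S4: sum the intermediate images
    return feature

def pool_2x2(feature):
    # S5: 2x2 pooling; average pooling is assumed here.
    h, w = feature.shape
    return feature.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))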
Further, classifying the sampled image with the configured fully connected layer specifically comprises the following steps:
the configured fully connected layer is a weight matrix with [(M-N+1)/2]^2 rows and X columns, where X is the number of image types; each row of the weight matrix corresponds to a different preset identification feature, each column corresponds to a different image type, and the element values in each column represent the weights of the preset identification features for that column's image type;
multiplying the first pixel of the sampled image by the first row of the weight matrix to obtain a cache matrix with 1 row and X columns;
by analogy, traversing all pixels of the sampled image row by row and column by column to obtain [(M-N+1)/2]^2 cache matrices with 1 row and X columns;
adding all the cache matrices correspondingly to obtain an output matrix with 1 row and X columns;
and taking the image type corresponding to the column holding the maximum value of the output matrix as the type of the sampled image.
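As a minimal sketch of this row-by-row full connection (the names and the use of NumPy are illustrative assumptions, not the patented implementation):

import numpy as np

def classify(sampled, weights):
    # sampled: pooled image with P = ((M-N+1)/2)^2 pixels; weights: P x X matrix.
    pixels = sampled.reshape(-1)           # traverse row by row, column by column
    P, X = weights.shape
    output = np.zeros(X)                   # 1-row, X-column output matrix
    for p in range(P):
        cache = pixels[p] * weights[p, :]  # one 1-row, X-column cache matrix
        output += cache                    # add all cache matrices correspondingly
    return int(np.argmax(output))          # column of the maximum value = image type

This is numerically identical to the single matrix product pixels @ weights; the row-wise order only changes the memory access pattern.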
Beneficial effects:
the invention provides a target classification method based on a convolutional neural network, which does not adopt a convolutional kernel to directly traverse a target image, but adopts a sliding window with the same size as an output characteristic image to traverse the whole target image in rows and columns, thereby extracting corresponding pixel points of the target image as sub-images, multiplying each characteristic parameter of the convolutional kernel with each sub-image correspondingly to obtain an intermediate image, and finally taking the sum of the intermediate images as the output characteristic image.
Drawings
FIG. 1 is a flowchart of the target classification method based on a convolutional neural network according to the present invention;
FIG. 2 is a schematic diagram comparing the prior-art convolution implementation with the convolution implementation of the present invention;
FIG. 3 is a schematic diagram comparing the prior-art full-connection implementation with the full-connection implementation of the present invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Example one
Referring to fig. 1, the figure is a flowchart of the target classification method based on a convolutional neural network according to this embodiment. The method comprises the following steps:
S1: A target image of size M × M is acquired.
S2: traversing the whole target image row by row and column by column from the upper left corner of the target image by adopting a sliding window with the size of (M-N +1) x (M-N +1), wherein during each traversal, the sliding window extracts pixel points in the sliding window from the target image as subimages, and then the number of the subimages is N x N; wherein, N is the size of the convolution kernel adopted in the convolution neural network, and the convolution kernel with the size of N × N includes N × N preset characteristic parameters.
S3: multiplying the first preset characteristic parameter of the convolution kernel by each pixel point of the first sub-image to obtain a first intermediate image, and repeating the steps to multiply the residual preset characteristic parameters of the convolution kernel one by each pixel point of the residual sub-images to obtain N multiplied by N intermediate images, wherein the size of the intermediate images is (M-N +1) multiplied by (M-N + 1).
That is, the first characteristic parameter is multiplied by each pixel point of the subimage extracted by the first sliding window correspondingly to obtain a first intermediate image; the second characteristic parameter is correspondingly multiplied by each pixel point of the subimage extracted by the second sliding window to obtain a second intermediate image; and by analogy, obtaining N multiplied by N intermediate images.
S4: and summing corresponding pixel points of the N multiplied by N intermediate images to obtain a characteristic output image with the size of (M-N +1) multiplied by (M-N + 1).
S5: and performing pooling operation on the characteristic output image to obtain a sampling image with the size of (M-N +1)/2 x (M-N + 1)/2.
S6: and classifying the sampling images by adopting a set full-connection layer.
Referring to fig. 2, the figure compares the principle of the prior-art convolution implementation with that of the convolution implementation provided by this embodiment. The left half of fig. 2 illustrates the prior-art convolution implementation, and the right half illustrates the convolution implementation provided by this embodiment.
Assume the target image has size M × M and the convolution kernel has size N × N. In the prior-art convolution implementation, the N × N convolution kernel directly traverses the whole target image, and each pixel of the feature output image is the result of multiplying the feature parameters of the convolution kernel one-to-one with the target-image pixels extracted at one traversal position and then summing. For example, with M = 32 and N = 5, the value of the first pixel of the feature output image is obtained by multiplying the 25 pixels covered by the first 5 rows and first 5 columns of the target image, extracted at the kernel's first position, one-to-one with the 25 feature parameters of the convolution kernel to obtain 25 products, and then summing the 25 products.
However, the ARM NEON stores the target image in memory in row order; that is, the addresses of the pixels are arranged row by row from left to right. During each extraction by the convolution kernel, the addresses of pixels within one row are contiguous, while crossing to a different row causes an address jump; extracting one N × N block of the target image therefore causes N-1 address jumps. For a target image of size M × M traversed by an N × N convolution kernel, the number of address jumps per image is (N-1) × (M-N+1) × (M-N+1). For example, with M = 32 and N = 5, each 5 × 5 extraction incurs 4 address jumps between its rows, and the kernel occupies 28 positions across and 28 positions down the target image, so the number of address jumps is 4 × 28 × 28 = 3136.
In the convolution implementation provided by this embodiment, the convolution kernel stays fixed, and a sliding window of the same size as the output feature image, (M-N+1) × (M-N+1), traverses the whole target image; the number of address jumps is then (M-N) × N × N. Again with M = 32 and N = 5, the sliding window has size 28 × 28, reading the window at one position incurs 27 address jumps between its rows, and the window occupies 5 positions across and 5 positions down the target image, so the number of address jumps is 27 × 5 × 5 = 675, far fewer than in the prior art.
It can be seen that the larger the sliding window, the fewer address jumps occur, and the largest possible sliding window is exactly the size of the output image. The convolution implementation provided by this embodiment therefore makes the sliding window the same size as the output feature image, reducing the number of address jumps as far as possible compared with the prior-art convolution.
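These counts follow directly from the two formulas above; the sketch below (the function names are invented for illustration) evaluates them for a few sizes:

def prior_art_jumps(M, N):
    # N-1 row-to-row jumps per N x N extraction, at (M-N+1)^2 kernel positions
    return (N - 1) * (M - N + 1) ** 2

def sliding_window_jumps(M, N):
    # M-N row-to-row jumps per window read, at N x N window positions
    return (M - N) * N * N

for M, N in [(32, 5), (64, 5), (32, 3)]:
    print(M, N, prior_art_jumps(M, N), sliding_window_jumps(M, N))
# (32, 5) -> 3136 vs 675; the gap widens as the image grows or the kernel shrinks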
It should be noted that, as described above, in the prior-art convolution implementation each pixel of the feature output image is the result of multiplying the feature parameters of the convolution kernel one-to-one with the target-image pixels extracted at one traversal position and summing. The convolution implementation of this embodiment, which keeps the convolution kernel fixed and traverses the target image with a sliding window, obtains exactly the same convolution result while reducing the number of address jumps and improving hardware processing efficiency.
As shown in the right half of fig. 2, in the first step each feature parameter of the convolution kernel is multiplied by the element values of its corresponding region in the target image, and each product is stored as an intermediate image; after every point of the convolution kernel has been multiplied with its corresponding elements of the target image and stored, all buffered intermediate images are added element-wise to obtain the output feature image.
For example, with M = 32 and N = 5, the first pixel of the prior-art feature output image is the result of multiplying the 25 pixels covered by the first 5 rows and first 5 columns of the target image one-to-one with the 25 feature parameters of the convolution kernel and summing. The first pixel of the feature output image in this embodiment is the accumulation of the first pixels of the 25 intermediate images: the first pixel of the first intermediate image is the product of the first feature parameter of the convolution kernel and the pixel in row 1, column 1 of the first sliding window, in other words the pixel in row 1, column 1 of the target image; the first pixel of the second intermediate image is the product of the second feature parameter and the pixel in row 1, column 1 of the second sliding window, in other words the pixel in row 1, column 2 of the target image; and so on, until the first pixel of the twenty-fifth intermediate image, which is the product of the twenty-fifth feature parameter and the pixel in row 5, column 5 of the target image. The first pixel of the feature output image in this embodiment is therefore exactly the 25 pixels covered by the first 5 rows and first 5 columns of the target image, multiplied one-to-one with the 25 feature parameters of the convolution kernel and summed.
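This equivalence is easy to verify numerically; the sketch below (reusing the sliding_window_convolution helper sketched earlier, with random data standing in for a real target image) checks it against the prior-art direct traversal:

import numpy as np

def direct_convolution(target, kernel):
    # Prior-art scheme: slide the N x N kernel over the whole target image.
    M, N = target.shape[0], kernel.shape[0]
    out = M - N + 1
    feature = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            feature[i, j] = np.sum(target[i:i + N, j:j + N] * kernel)
    return feature

rng = np.random.default_rng(0)
target = rng.random((32, 32))
kernel = rng.random((5, 5))
assert np.allclose(direct_convolution(target, kernel),
                   sliding_window_convolution(target, kernel))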
Therefore, this embodiment does not traverse the target image directly with the convolution kernel; instead, a sliding window of the same size as the output feature image traverses the whole target image row by row and column by column, the covered pixels of the target image are extracted as sub-images, each feature parameter of the convolution kernel is multiplied with its corresponding sub-image to obtain an intermediate image, and the sum of the intermediate images is finally taken as the output feature image.
It should be noted that this embodiment uses a Zedboard development board from Xilinx as the implementation platform; the platform resources used by the invention include the ARM processor, NEON, DDR (double data rate synchronous dynamic random access memory), and an SD card (memory card). During hardware implementation, this embodiment stores the convolution-kernel data in NEON registers and fills the remaining registers with as much target-image data as possible, thereby fully exploiting the SIMD parallelism of the NEON registers for floating-point operations and avoiding frequent address jumps.
Example two
Based on the above embodiment, this embodiment further describes how to classify the sampled image with the configured fully connected layer. Specifically, the classification comprises the following steps:
the set full connection layer is [ (M-N +1)/2]2The image recognition method comprises the following steps of (1) row and X column weight matrixes, wherein X is the number of image types, each row of the weight matrixes corresponds to a different preset recognition feature, each column of the weight matrixes corresponds to a different image type, and element values in each column represent the weight of the preset recognition feature of the image type corresponding to the column;
multiplying the first pixel point of the sampling image by the first row of the weight matrix to obtain a 1-row and X-column cache matrix;
by analogy, traversing all pixel points of the sampling image row by row and column by column to obtain [ (M-N +1)/2]2A cache matrix with 1 row and X columns;
correspondingly adding all the cache matrixes to obtain an output matrix with 1 row and X columns;
and taking the image type corresponding to the column where the maximum value in the output matrix is located as the belonging type of the sampling image.
It should be noted that, since each row of the weight matrix corresponds to a different preset identification feature, each column corresponds to a different image type, and the element values in each column represent the weights of the preset identification features for that image type, each value of the output matrix obtained by multiplying the sampled image by the weight matrix can be regarded as a score. The score measures how well the sampled image conforms to an image type under that type's evaluation criteria, namely its preset identification features and their weights: the higher the score, the better the conformity. In other words, the image type corresponding to the column holding the maximum value of the output matrix is the category to which the sampled image belongs.
Referring to fig. 3, the figure compares the principle of the prior-art full-connection implementation with that of the full-connection implementation provided by this embodiment. The left half of fig. 3 illustrates the prior-art full-connection implementation, and the right half illustrates the full-connection implementation provided by this embodiment.
In the prior art, classifying the sampled image with the configured fully connected layer is equivalent to multiplying a sampled-image matrix with 1 row and [(M-N+1)/2]^2 columns by a weight matrix with [(M-N+1)/2]^2 rows and X columns to obtain an output matrix with 1 row and X columns; that is, each element of the output matrix is the result of multiplying the pixels of the sampled image one-to-one with one column of the weight matrix and summing. However, as mentioned above, address jumps occur between different rows, so the existing full-connection scheme incurs ((M-N+1)/2)^2 - 1 address jumps each time the pixels of the sampled image are multiplied by one column of the weight matrix and summed; over the whole full-connection process, obtaining the output matrix incurs (((M-N+1)/2)^2 - 1) × (X - 1) address jumps, which greatly reduces the computing efficiency of the ARM NEON processor.
However, as shown in the right half of fig. 3, the full-connection scheme provided by this embodiment multiplies each pixel of the sampled image by one row of the weight matrix: in effect, each of the [(M-N+1)/2]^2 rows of the weight matrix with [(M-N+1)/2]^2 rows and X columns is read out in turn as a sub-weight matrix, each pixel of the sampled image is multiplied by its sub-weight matrix to obtain [(M-N+1)/2]^2 cache matrices with 1 row and X columns, and all cache matrices are added correspondingly to obtain the output matrix with 1 row and X columns.
the elements of each row of the weight matrix are stored in sequence, so that the ARM can not generate address jump when reading the elements of a certain row of the weight matrix; therefore, the full-connection mode provided by the embodiment greatly reduces the number of address jump times of ARM read data in the full-connection process under the precondition of acquiring the full-connection result which is the same as the existing full-connection implementation mode, and further greatly improves the hardware processing efficiency.
It should be noted that the derivation showing that the full-connection scheme of this embodiment obtains the same result as the existing full-connection implementation is similar to the derivation, given in the previous embodiment, showing that the two convolution implementations obtain the same result, and is not repeated here.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it will be understood by those skilled in the art that various changes and modifications may be made herein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (1)

1. A target classification method based on a convolutional neural network is characterized by comprising the following steps:
S1: acquiring a target image of size M × M;
S2: traversing the whole target image row by row and column by column, starting from its upper-left corner, with a sliding window of size (M-N+1) × (M-N+1), where at each traversal position the sliding window extracts the pixels it covers from the target image as a sub-image, so that N × N sub-images are obtained in total; here N is the size of the convolution kernel used in the convolutional neural network, and the convolution kernel of size N × N contains N × N preset feature parameters;
S3: multiplying the first preset feature parameter of the convolution kernel by every pixel of the first sub-image to obtain a first intermediate image, and, by analogy, multiplying the remaining preset feature parameters of the convolution kernel one by one with the pixels of the remaining sub-images to obtain N × N intermediate images, each of size (M-N+1) × (M-N+1);
S4: summing the corresponding pixels of the N × N intermediate images to obtain a feature output image of size (M-N+1) × (M-N+1);
S5: performing a pooling operation on the feature output image to obtain a sampled image of size ((M-N+1)/2) × ((M-N+1)/2);
S6: classifying the sampled image with the configured fully connected layer, which specifically comprises the following steps:
the configured fully connected layer is a weight matrix with [(M-N+1)/2]^2 rows and X columns, where X is the number of image types; each row of the weight matrix corresponds to a different preset identification feature, each column corresponds to a different image type, and the element values in each column represent the weights of the preset identification features for that column's image type;
multiplying the first pixel of the sampled image by the first row of the weight matrix to obtain a cache matrix with 1 row and X columns;
by analogy, traversing all pixels of the sampled image row by row and column by column to obtain [(M-N+1)/2]^2 cache matrices with 1 row and X columns;
adding all the cache matrices correspondingly to obtain an output matrix with 1 row and X columns;
and taking the image type corresponding to the column holding the maximum value of the output matrix as the type of the sampled image.
CN201811544116.6A 2018-12-17 2018-12-17 Target classification method based on convolutional neural network Active CN109784372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811544116.6A CN109784372B (en) 2018-12-17 2018-12-17 Target classification method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811544116.6A CN109784372B (en) 2018-12-17 2018-12-17 Target classification method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN109784372A CN109784372A (en) 2019-05-21
CN109784372B true CN109784372B (en) 2020-11-13

Family

ID=66498029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811544116.6A Active CN109784372B (en) 2018-12-17 2018-12-17 Target classification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN109784372B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017100A (en) * 2019-05-31 2020-12-01 Oppo广东移动通信有限公司 Convolution operation method and related product
CN110503189B (en) * 2019-08-02 2021-10-08 腾讯科技(深圳)有限公司 Data processing method and device
CN110796245B (en) * 2019-10-25 2022-03-22 浪潮电子信息产业股份有限公司 Method and device for calculating convolutional neural network model
CN111222465B (en) * 2019-11-07 2023-06-13 深圳云天励飞技术股份有限公司 Convolutional neural network-based image analysis method and related equipment
CN111738904B (en) * 2020-06-17 2022-05-17 武汉工程大学 Method and device for calculating geometric moment of target object in image
CN112596965B (en) * 2020-12-14 2024-04-09 上海集成电路研发中心有限公司 Digital image bad cluster statistical method and integrated circuit automatic tester
CN112674998B (en) * 2020-12-23 2022-04-22 北京工业大学 Blind person traffic intersection assisting method based on rapid deep neural network and mobile intelligent device
CN114692843A (en) * 2020-12-25 2022-07-01 中科寒武纪科技股份有限公司 Device, board card and method for calculating neural network and readable storage medium
CN113887542B (en) * 2021-12-06 2022-04-05 孙晖 Target detection method, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104658011A (en) * 2015-01-31 2015-05-27 北京理工大学 Intelligent transportation moving object detection tracking method
CN107563433A (en) * 2017-08-29 2018-01-09 电子科技大学 A kind of infrared small target detection method based on convolutional neural networks
CN107704921A (en) * 2017-10-19 2018-02-16 北京智芯原动科技有限公司 The algorithm optimization method and device of convolutional neural networks based on Neon instructions
CN107798381A (en) * 2017-11-13 2018-03-13 河海大学 A kind of image-recognizing method based on convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160358069A1 (en) * 2015-06-03 2016-12-08 Samsung Electronics Co., Ltd. Neural network suppression


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Efficient implementation of two-dimensional image convolution on ARM NEON; Liu Xiang et al.; Proceedings of the 12th National Conference on Signal and Intelligent Information Processing and Applications; 2018-10-19; pp. 498-503 *

Also Published As

Publication number Publication date
CN109784372A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN109784372B (en) Target classification method based on convolutional neural network
CN108664981B (en) Salient image extraction method and device
CN108876792B (en) Semantic segmentation method, device and system and storage medium
CN110991311B (en) Target detection method based on dense connection deep network
EP3098762A1 (en) Data-optimized neural network traversal
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
US11538244B2 (en) Extraction of spatial-temporal feature representation
CN112613581B (en) Image recognition method, system, computer equipment and storage medium
CN108345827B (en) Method, system and neural network for identifying document direction
CN109949224B (en) Deep learning-based cascade super-resolution reconstruction method and device
CN109325589A (en) Convolutional calculation method and device
CN111738344A (en) Rapid target detection method based on multi-scale fusion
CN112365514A (en) Semantic segmentation method based on improved PSPNet
EP3093757A2 (en) Multi-dimensional sliding window operation for a vector processor
CN113920516B (en) Calligraphy character skeleton matching method and system based on twin neural network
CN107705270A (en) The treating method and apparatus of medium filtering, electronic equipment, computer-readable storage medium
CN112800955A (en) Remote sensing image rotating target detection method and system based on weighted bidirectional feature pyramid
CN110599455A (en) Display screen defect detection network model, method and device, electronic equipment and storage medium
CN110782430A (en) Small target detection method and device, electronic equipment and storage medium
CN112749576B (en) Image recognition method and device, computing equipment and computer storage medium
CN113221855B (en) Small target detection method and system based on scale sensitive loss and feature fusion
CN108921017B (en) Face detection method and system
CN111738069A (en) Face detection method and device, electronic equipment and storage medium
CN106845550B (en) Image identification method based on multiple templates
CN113256528B (en) Low-illumination video enhancement method based on multi-scale cascade depth residual error network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant