CN106845550B - Image identification method based on multiple templates - Google Patents


Info

Publication number
CN106845550B
CN106845550B (application CN201710056968.XA)
Authority
CN
China
Prior art keywords
image
templates
axis direction
template
image area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710056968.XA
Other languages
Chinese (zh)
Other versions
CN106845550A (en)
Inventor
肖东晋
张立群
刘顺宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aeva (beijing) Technology Co Ltd
Original Assignee
Aeva (beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aeva (beijing) Technology Co Ltd filed Critical Aeva (beijing) Technology Co Ltd
Priority to CN201710056968.XA priority Critical patent/CN106845550B/en
Publication of CN106845550A publication Critical patent/CN106845550A/en
Application granted granted Critical
Publication of CN106845550B publication Critical patent/CN106845550B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image identification method comprising the following steps: receiving an image to be identified; performing convolution calculations on the image with two templates simultaneously to obtain, for each region of the image, category scores corresponding to the two templates; and determining, based on the category scores, whether the identified object is contained in the image region.

Description

Image identification method based on multiple templates
Technical Field
The invention relates to the field of image processing, in particular to an image identification method based on multiple templates.
Background
A convolutional neural network (CNN) is a feedforward neural network. Compared with a traditional BP neural network, it offers higher recognition efficiency and better invariance to rotation and scaling, and it has been widely applied in fields such as digital image processing and face recognition.
The traditional convolutional neural network model is applied as follows. First, a convolutional neural network template framework is designed according to the attributes of the image to be input. The designed framework has a multilayer structure: one input layer, followed by several convolutional layers and several downsampling layers arranged in various orders, and finally an output layer. The input layer receives the original image. Each convolutional layer contains several feature maps of the same size, and each pixel of a feature map corresponds to a set of pixels at the corresponding window positions of several feature maps specified in the previous layer. Each downsampling layer likewise contains several feature maps of the same size; each feature map of a downsampling layer corresponds to one feature map of a convolutional layer in the previous layer, and the pixels of a downsampling-layer feature map correspond to sampling areas of the corresponding feature map in the previous layer. The nodes of a given layer are connected to the nodes of the previous layer and of the next layer by edges.
After the convolutional neural network template with the specific network architecture has been built, it must be trained before a given picture can be identified. The training process is as follows: the parameters of the template, including the edge weights and the values of the convolution kernels, are initialized to random values; training samples are then fed into the template, repeatedly stimulating it, and the edge weights, convolution kernel values, and so on are continuously adjusted until training yields a template capable of identifying the picture. In subsequent application, classification and intelligent identification are achieved simply by feeding the picture or other samples to be analyzed into the trained template.
In order to separate and identify each object in a complex scene, a large number of templates must perform traversal convolution calculations on the image; the amount of calculation is large, the calculation time is long, and real-time object identification is therefore difficult to achieve.
Disclosure of Invention
Aiming at the prior-art problems that separating and identifying each object in a complex scene requires a large number of templates for traversal convolution calculations on the image and that the calculation time is too long, the invention provides an image identification method comprising the following steps: receiving an image to be identified; performing convolution calculations on the image with two templates simultaneously to obtain, for each region of the image, category scores corresponding to the two templates; and determining, based on the category scores, whether the identified object is contained in the image region.
Further, the method further comprises judging whether the two templates are the same size before performing the convolution calculation on the image with the two templates simultaneously, and terminating the method if the two templates differ in size.
Further, the two templates are symmetrical templates.
Further, the two templates are trained through a specific procedure and a large number of data sets.
Further, performing a convolution calculation on the image using two templates simultaneously includes: according to a certain rule, the two templates are traversed through the whole image pixel by pixel.
Further, traversing the two templates pixel-by-pixel through the entire image according to a certain rule includes:
A) taking an image area with the same size as the template along the x-axis direction and the y-axis direction by taking the initial position of the image as a starting point;
B) performing convolution calculation on the image area and the two templates respectively to obtain category scores of the image area corresponding to the two templates respectively;
C) adding 1 to the starting point coordinate along the x-axis direction, and taking an image area with the same size as the template along the x-axis direction and the y-axis direction based on the starting point coordinate;
D) judging whether the taken image area exceeds the image range along the x-axis direction, if the taken image area does not exceed the image range along the x-axis direction, returning to the step B), repeating the steps B) to D) until the taken image area exceeds the image range along the x-axis direction, and advancing to the step E);
E) setting the x value of the starting point coordinate as a starting position coordinate value, increasing the y value by 1, and taking an image area with the same size as the template along the x-axis direction and the y-axis direction based on the starting point coordinate;
F) judging whether the taken image area exceeds the range of the image along the y-axis direction, if the taken image area does not exceed the range of the image along the y-axis direction, returning to the step B), and repeating the steps B) to E) until the taken image area exceeds the range of the image along the y-axis direction.
Further, the central processing unit reads a pixel value matrix of an image area to be subjected to convolution calculation from the image, and the convolution calculation of the image area and the two templates is completed by the image processing unit under the control of the central processing unit.
Further, the convolution calculation of the image region and the template includes multiplying the value of each point of the template by the value corresponding to the image region, summing the obtained values, and taking the resulting sum as the category score of the image region.
Further, when the category score of a certain image area is greater than a specific threshold value, judging that the image area contains the identified object; when the category score of a certain image region is less than or equal to a specific threshold, it is determined that the recognized object is not included in the image region.
Further, the score at the 95% position, after the target scores of all positive samples in the training picture set are sorted in descending order, is taken as the specific threshold of the template.
According to the scheme provided by the invention, two templates are used for convolution at the same time, so that the reading times of data are reduced, a large amount of data reading time can be saved for the calculation of the whole image, and the identification speed is obviously improved.
Drawings
To further clarify the above and other advantages and features of embodiments of the present invention, a more particular description of embodiments of the invention will be rendered by reference to the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. In the drawings, the same or corresponding parts will be denoted by the same or similar reference numerals for clarity.
Fig. 1 shows a schematic view of an image to be recognized and a template.
FIG. 2 illustrates a flow chart for traversing a template through an entire image.
FIG. 3 illustrates a schematic diagram of identifying a person template according to one embodiment of the invention.
FIG. 4 illustrates a flow diagram 400 for performing a simultaneous convolution traversal of an image using two templates, according to an embodiment of the present invention.
Detailed Description
In the following description, the invention is described with reference to various embodiments. One skilled in the relevant art will recognize, however, that the embodiments may be practiced without one or more of the specific details, or with other alternative and/or additional methods, materials, or components. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of embodiments of the invention. Similarly, for purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the embodiments of the invention. However, the invention may be practiced without specific details. Further, it should be understood that the embodiments shown in the figures are illustrative representations and are not necessarily drawn to scale.
Reference in the specification to "one embodiment" or "the embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
First, the related concepts used in processing an image using a template are introduced:
Template: a matrix block whose mathematical meaning is a convolution calculation.
Convolution calculation: can be seen as a weighted summation process, in which each pixel in the image region is multiplied by the corresponding element of the convolution kernel (i.e., the weight matrix), and the sum of all the products is taken as the new value of the region's center pixel.
Convolution kernel: the matrix of weights used in the convolution; it has the same size as the image region used, with an odd number of rows and columns.
Convolution calculation example:
Convolve a 3 × 3 pixel region R with a convolution kernel G. Writing the elements of R as R1, ..., R9 and the elements of G as G1, ..., G9 (the two matrices are shown in a figure in the original filing), the result is:
convolution sum = R1G1 + R2G2 + R3G3 + R4G4 + R5G5 + R6G6 + R7G7 + R8G8 + R9G9
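The element-wise product-and-sum above can be sketched in a few lines of NumPy. The concrete values of R and G below are illustrative assumptions (the patent's figure with the actual matrices is not reproduced here):

```python
import numpy as np

# Hypothetical 3x3 pixel region R and convolution kernel G
# (illustrative values; not taken from the patent's figure).
R = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
G = np.array([[0,  1, 0],
              [1, -4, 1],
              [0,  1, 0]])

# convolution sum = R1*G1 + R2*G2 + ... + R9*G9
conv_sum = int(np.sum(R * G))
print(conv_sum)  # 2 + 4 - 20 + 6 + 8 = 0
```

This single number is what the method uses as the category score of the covered region.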
The invention proposes to calculate a category score of an image using a template, and to detect whether or not it is an identified object based on the category score. A specific process of calculating the category score of the image is described below with reference to fig. 1 and 2.
Fig. 1 shows a schematic view of an image to be recognized and a template. As shown in Fig. 1, the rectangular box 110 is an image composed of a number of pixels and having a specific width and height. The shaded box 120 is a template. The template 120 is convolved with the image of the covered area: the value of each point of the template is multiplied by the corresponding value of the covered image area, the products are summed, and the final sum is used as the category score of the image area. The category score represents the strength of the response of the region to the template; the stronger the response, the higher the score.
In the process of identifying the image, the template needs to traverse the whole image. FIG. 2 illustrates a flow chart for traversing a template through an entire image.
First, in step 210, an image to be processed is received. Next, the template is traversed starting from the start position of the image for the convolution calculation. For example, let the coordinates of the start position of the image be (0,0), and with the start position (0,0) as the starting point, an image region having the same size as the template is taken from the starting point in the x-axis direction and the y-axis direction.
In step 220, the region is convolved with the template, i.e., the pixel values of the region are multiplied by the corresponding template values and summed, to obtain the category score of the image region for the template.
In step 230, the start point coordinates are incremented by 1 in the x-axis direction, and an image area having the same size as the template is taken in the x-axis direction and the y-axis direction with the position (1,0) as the start point.
In step 240, it is determined whether the image area taken is outside the image in the x-axis direction. If the acquired image area does not exceed the range of the image, the process returns to step 220, and the acquired image area is convolved with the template to obtain the category score of the image area for the template.
Next, the process loops between steps 220 and 240 until it is determined that the image area taken is out of the range of the image, then the process goes to step 250, where the x value of the starting point coordinate is set as the starting position coordinate value and the y value is increased by 1. And taking an image area with the same size as the template along the x-axis direction and the y-axis direction from the new starting point position. In step 260, it is determined whether the image area taken is outside the image in the y-axis direction. If the acquired image area does not exceed the range of the image, the process returns to step 220, and the acquired image area is convolved with the template to obtain the category score of the image area for the template. Next, looping through steps 220 through 250 is performed until it is determined that the image area taken is out of range of the image along the y-axis, and the convolution of the entire image is complete.
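The traversal of steps 210-260 can be sketched as a sliding window that computes the category score at every position. This is a minimal illustration, not the patent's implementation; function and variable names are assumptions:

```python
import numpy as np

def traverse_scores(image, template):
    """Slide `template` over `image` pixel by pixel (steps 210-260)
    and return the category score (element-wise product sum) for
    every image area of the same size as the template."""
    h, w = image.shape
    th, tw = template.shape
    scores = np.empty((h - th + 1, w - tw + 1))
    for y in range(h - th + 1):          # step 250: advance along the y-axis
        for x in range(w - tw + 1):      # step 230: advance along the x-axis
            region = image[y:y + th, x:x + tw]
            scores[y, x] = np.sum(region * template)  # step 220
    return scores

img = np.arange(25, dtype=float).reshape(5, 5)
tpl = np.ones((3, 3))
print(traverse_scores(img, tpl).shape)  # (3, 3)
```

The loop exits along each axis exactly when the taken area would exceed the image range, matching the checks in steps 240 and 260.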
In an embodiment of the invention, the template used is trained through a specific procedure and a large number of data sets. The training process of the template is briefly described below with reference to specific examples.
In an embodiment of the invention, the problem of training deformable component templates from pictures containing a large number of target instances marked with rectangular boxes is reduced to a binary classification problem, and an SVM is then used to train and classify the targets. FIG. 3 illustrates a schematic diagram of a template for identifying a person according to one embodiment of the invention. A detection template comprises 1 root filter and 8 component filters; the template threshold is the score at the 95% position after the target scores of all positive samples in the training picture set are sorted in descending order.
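One plausible reading of the thresholding rule can be sketched as follows — sort the positive-sample scores in descending order and take the score at the 95% position, so that roughly 95% of positive samples score above the threshold. The function name, the exact index rounding, and the sample scores are assumptions:

```python
import numpy as np

def template_threshold(positive_scores, keep_fraction=0.95):
    """Sketch: sort all positive-sample target scores in descending
    order and return the score at the keep_fraction position, so most
    positive samples remain above the threshold."""
    ranked = np.sort(np.asarray(positive_scores))[::-1]      # descending
    idx = max(int(keep_fraction * len(ranked)) - 1, 0)       # 95% position
    return ranked[idx]

scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
print(template_threshold(scores))  # 0.1
```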
Template training
Initializing a root filter: for each target class, the size of the root filter is automatically selected based on statistics of the target rectangle box size in the training dataset.
Root filter updating: given the initial root filter from the previous training, for each rectangular box in the training set, a position with the highest filter score is found under the condition that the root filter and the rectangular box are significantly overlapped (overlapped by more than 50%).
Initialization of component filters: eight component filters are initialized from the above trained root filters using a simple heuristic. Area a is first chosen such that 8a equals 80% of the root filter area. And selecting a rectangular area with the area of a and the maximum sum of squares of the positive weights of all the cell units in the area from the root filter by using an exhaustive method, clearing all the weights of the area and continuing to select until eight rectangular areas are selected.
Updating the model: by constructing new training data triplets (<x1,z1,y1>,...,<xn,zn,yn>) Wherein xi is a sample, yi is a sample category, and zi is a label most suitable for xi in the model learned by the last iteration to update the model. For each positive sample rectangular box in the training data set, detecting with an existing detector at all possible positions and scales while ensuring at least 50% overlap, selecting the position with the highest score as the positive sample corresponding to this rectangular box, placing in a sample bufferIn a zone.
A position with a high detection score in a picture that does not contain the target object is selected as a negative sample, and negative samples are continually added to the sample buffer until the maximum file size limit is reached. New models are then trained on the positive and negative samples in the buffer, with all samples labeled with part positions.
In order to separate and identify each object in a complex scene, a number of templates are used to perform traversal convolution calculations on the image. To improve the speed of category identification, and considering the similarity of templates, trained templates sometimes appear in pairs during the template training process, i.e., as symmetrical templates. Therefore, each convolution calculation uses two templates at once, which reduces the number of data reads; over the calculation of the whole image, this saves a large amount of data-reading time and markedly improves recognition speed.
FIG. 4 illustrates a flow diagram 400 for performing a simultaneous convolution traversal of an image using two templates, according to an embodiment of the present invention.
In step 401, an image to be recognized is received.
At step 402, a determination is made as to whether the two templates used are the same size. If the two templates are different in size, the process terminates.
If the two templates are the same size, then at step 403, the image is convolved simultaneously with both templates from the starting position. Both templates traverse the entire image simultaneously. For example, let the coordinates of the start position of the image be (0,0), and with the start position (0,0) as the starting point, an image region having the same size as the template is taken from the starting point in the x-axis direction and the y-axis direction. And performing convolution calculation on the region and the two templates respectively, namely multiplying the pixel value of the region by the corresponding values of the two templates respectively and then summing to obtain the category scores of the image region corresponding to the two templates respectively.
In practical applications, a central processing unit (CPU) may read from the image the pixel value matrix R(0,0) of the area to be convolved and perform the convolution calculations with the two templates: S1(0,0) = R(0,0) ⊙ G1 and S2(0,0) = R(0,0) ⊙ G2. Because a graphics processing unit (GPU) has good characteristics for matrix operations, the convolution calculations may be completed by the GPU under the control of the CPU. The starting-point coordinate is then incremented by 1 along the x-axis direction, an image area of the same size as the templates is taken along the x-axis and y-axis directions with position (1,0) as the starting point, and it is determined whether the taken image area exceeds the range of the image along the x-axis direction. If it does not, that area is convolved with the two templates to obtain S1(1,0) = R(1,0) ⊙ G1 and S2(1,0) = R(1,0) ⊙ G2. The traversal then continues in the manner described with reference to Fig. 2 until the taken image area exceeds the range of the image along the y-axis direction and the convolution of the entire image is complete.
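The key saving — reading each region's pixel matrix once and scoring it against both templates — can be sketched as below. This is a CPU-only illustration of the data-reuse idea, not the patent's CPU/GPU implementation; names are assumptions:

```python
import numpy as np

def dual_template_scores(image, g1, g2):
    """For each image area R(x, y), read the pixel matrix once and
    compute both S1 = R (*) G1 and S2 = R (*) G2 from that single read,
    halving the number of data reads versus two separate passes."""
    assert g1.shape == g2.shape, "the two templates must be the same size"
    h, w = image.shape
    th, tw = g1.shape
    s1 = np.empty((h - th + 1, w - tw + 1))
    s2 = np.empty_like(s1)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            region = image[y:y + th, x:x + tw]   # single read of R(x, y)
            s1[y, x] = np.sum(region * g1)       # score for template G1
            s2[y, x] = np.sum(region * g2)       # score for template G2
    return s1, s2

img = np.arange(16, dtype=float).reshape(4, 4)
g1 = np.eye(3)
g2 = np.fliplr(g1)            # a mirrored, i.e. symmetrical, pair of templates
s1, s2 = dual_template_scores(img, g1, g2)
print(s1.shape, s2.shape)  # (2, 2) (2, 2)
```

Each score map can then be compared against its template's specific threshold, as described in step 404.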
At step 404, the category score of each image region is used to determine whether the image region contains the identified object. And when the category score of a certain image area is larger than a specific threshold value, judging that the identified object is contained in the image area. When the category score of a certain image region is less than or equal to a specific threshold, it is determined that the recognized object is not included in the image region.
The use of two same-sized templates, which may be symmetrical templates, to traverse the entire image simultaneously is described above in connection with FIG. 4. Those skilled in the art will appreciate that the manner of traversing the entire image simultaneously with two same-sized templates is not limited to that described in step 403. For example, the templates may first traverse the image in the y-axis direction, then advance one pixel in the x-axis direction, and traverse the image again in the y-axis direction, until the convolution of the entire image is complete.
In another embodiment of the invention, the starting point of the convolution traversal is not the start position or origin of coordinates of the image; instead, starting from a pixel point (xi, yi) in the central region of the image, an image area of the same size as the template is taken, and the area is convolved with the template to obtain the category score of that image area for the template. The entire image is then traversed by translating pixel by pixel toward the peripheral region of the image according to a certain rule. For example, image areas of the same size as the template may first be taken pixel by pixel in increasing and/or decreasing directions along the row of the pixel point (xi, yi) and convolved; the row number is then increased and/or decreased by one pixel, and image areas on that row are again taken pixel by pixel and convolved, until the convolution of the entire image is complete.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various combinations, modifications, and changes can be made thereto without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention disclosed herein should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (6)

1. An image recognition method, comprising:
receiving an image to be identified;
performing convolution calculation on the image by utilizing two templates simultaneously so as to obtain category scores corresponding to the two templates for each area of the image, wherein the two templates have the same size, the two templates are symmetrical templates obtained through a template training process, the symmetrical templates are templates which appear in pairs in the template training process, and the template training process comprises the following steps: initializing a root filter: for each target category, automatically selecting the size of a root filter according to the statistical value of the size of a target rectangular frame in the training data set; root filter updating: given the initial root filter obtained in the previous training step, finding for each rectangular frame in the training set the position with the highest filter score under the condition that the root filter and the rectangular frame overlap by more than 50%; initialization of component filters: initializing eight component filters according to the trained root filter, first selecting an area a such that 8a equals 80% of the area of the root filter, selecting from the root filter by exhaustive search a rectangular region with area a and the largest sum of squares of the positive weights of all units in the region, clearing all the weights of that region, and continuing to select until eight rectangular regions have been selected; updating the model: updating the model by constructing new training data triples (⟨x1, z1, y1⟩, ..., ⟨xn, zn, yn⟩), wherein xi is a sample, yi is the sample category, and zi is the label best suited to xi in the model learned in the previous iteration;
determining whether the identified object is contained in the image region based on the category score,
wherein performing a convolution calculation on the image using two templates simultaneously comprises:
A) taking an image area with the same size as the template along the x-axis direction and the y-axis direction by taking the initial position of the image as a starting point;
B) performing convolution calculation on the image area and the two templates respectively to obtain category scores of the image area corresponding to the two templates respectively;
C) adding 1 to the starting point coordinate along the x-axis direction, and taking an image area with the same size as the template along the x-axis direction and the y-axis direction based on the starting point coordinate;
D) judging whether the taken image area exceeds the image range along the x-axis direction, if the taken image area does not exceed the image range along the x-axis direction, returning to the step B), repeating the steps B) to D) until the taken image area exceeds the image range along the x-axis direction, and advancing to the step E);
E) setting the x value of the starting point coordinate as a starting position coordinate value, increasing the y value by 1, and taking an image area with the same size as the template along the x-axis direction and the y-axis direction based on the starting point coordinate;
F) judging whether the taken image area exceeds the range of the image along the y-axis direction, if the taken image area does not exceed the range of the image along the y-axis direction, returning to the step B), and repeating the steps B) to E) until the taken image area exceeds the range of the image along the y-axis direction.
2. The method of claim 1, wherein the two templates are trained using a specific program and a large number of data sets.
3. A method as claimed in claim 1, characterized in that a central processing unit reads from the image the matrix of pixel values of the image area to be subjected to the convolution calculation, and the convolution calculation of the image area with the two templates is done by the image processing unit under the control of said central processing unit.
4. A method as claimed in claim 3, wherein the convolution calculation of an image region with a template comprises multiplying the value of each point of the template by the value corresponding to the image region, summing the resulting values, and using the sum as the classification score for the image region.
5. The method of claim 1, wherein when the category score of an image region is greater than a specific threshold, it is determined that the image region contains the identified object; when the category score of a certain image region is less than or equal to a specific threshold, it is determined that the recognized object is not included in the image region.
6. The method of claim 5, wherein the score at the 95% position, after the target scores of all positive samples in the training picture set are sorted in descending order, is taken as the specific threshold of the template.
CN201710056968.XA 2017-01-22 2017-01-22 Image identification method based on multiple templates Active CN106845550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710056968.XA CN106845550B (en) 2017-01-22 2017-01-22 Image identification method based on multiple templates

Publications (2)

Publication Number Publication Date
CN106845550A CN106845550A (en) 2017-06-13
CN106845550B true CN106845550B (en) 2020-03-17

Family

ID=59122959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710056968.XA Active CN106845550B (en) 2017-01-22 2017-01-22 Image identification method based on multiple templates

Country Status (1)

Country Link
CN (1) CN106845550B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308476B (en) * 2018-09-06 2019-08-27 邬国锐 Billing information processing method, system and computer readable storage medium
CN113361553B (en) * 2020-03-06 2024-02-02 株式会社理光 Image processing method, image processing apparatus, storage medium, and system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102521826A (en) * 2011-11-22 2012-06-27 苏州科雷芯电子科技有限公司 Image registration device and method
KR20150120805A (en) * 2014-04-18 2015-10-28 한양대학교 산학협력단 Method and system for detecting human in range image
CN105069428A (en) * 2015-07-29 2015-11-18 天津市协力自动化工程有限公司 Multi-template iris identification method based on similarity principle and multi-template iris identification device based on similarity principle
CN105160330A (en) * 2015-09-17 2015-12-16 中国地质大学(武汉) Vehicle logo recognition method and vehicle logo recognition system
CN105260740A (en) * 2015-09-23 2016-01-20 广州视源电子科技股份有限公司 Element recognition method and apparatus
CN105320935A (en) * 2015-07-29 2016-02-10 江苏邦融微电子有限公司 Multiple-template fingerprint identification method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN104504381B (en) * 2015-01-09 2017-12-26 新智认知数据服务有限公司 Non-rigid object detection method and its system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant