CN110647844A - Shooting and identifying method for articles for children

Shooting and identifying method for articles for children

Info

Publication number
CN110647844A
CN110647844A · Application CN201910900593.XA
Authority
CN
China
Prior art keywords: feature, image, page, children, library
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910900593.XA
Other languages
Chinese (zh)
Inventor
江周平 (Jiang Zhouping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yikuai Interactive Network Technology Co Ltd
Original Assignee
Shenzhen Yikuai Interactive Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yikuai Interactive Network Technology Co Ltd filed Critical Shenzhen Yikuai Interactive Network Technology Co Ltd
Priority to CN201910900593.XA
Publication of CN110647844A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/45 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/507 Summing image-intensity values; Histogram projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a shooting and identifying method for children's articles, comprising the following steps: obtaining in advance a page feature library for printed matter, a classification model library for articles, and a corresponding multimedia content library; shooting the printed matter or article to obtain an image to be identified; extracting feature points from the image to be identified and performing page feature matching on them to obtain the original page information of the image in the page feature library; identifying and detecting the image with a neural network to obtain article classification information; and playing the corresponding multimedia content. By letting children learn about objects through shooting and recognition, the invention makes the learning process more interactive and engaging and improves children's learning ability.

Description

Shooting and identifying method for articles for children
Technical Field
The invention relates to the technical field of image recognition, in particular to a shooting and recognizing method for articles for children.
Background
Preschool education is an important stage in the development of children's intelligence. With the advance of modern science and technology and rising living standards, more and more parents pay close attention to this stage.
Children, especially young children, are naturally curious and eager to learn, and take an intense interest in their surroundings. Their understanding of everyday objects accumulates gradually through daily life. In the traditional approach, a parent points at an object and describes it to the child; because young children have short attention spans and lose patience easily, this single, monotonous approach does little for their physical and mental development.
Disclosure of Invention
The object of the invention is to provide a shooting and identifying method for children's articles that helps children recognize objects through shooting and recognition, making the recognition process more interactive and engaging.
To achieve this object, the invention adopts the following technical scheme:
a shooting and identifying method for articles for children comprises the following steps:
s1, pre-obtaining a page feature library of the printed matter and a classification model library of the article, and simultaneously obtaining a multimedia content library corresponding to the page feature library and the classification model library of the article;
s2, shooting the printed matter or the article to obtain an image to be identified;
s3, extracting feature points of the image to be recognized, retrieving the image to be recognized in a page feature library based on the extracted feature points, and matching page features to obtain original page information of the image to be recognized in the page feature library;
s4, identifying and detecting the image to be identified based on the neural network to obtain article classification information;
and S5, selecting corresponding multimedia content from the multimedia content library and playing the multimedia content based on the original page information obtained in the step S3 or the item classification information obtained in the step S4.
Preferably, the page feature library of step S1 is obtained by the following method: first an original image of the printed matter is acquired, and then feature points are extracted from the original image to obtain the page feature library.
Preferably, the feature point extraction in steps S1 and S3 is implemented with a feature extraction algorithm such as SIFT or SURF.
Preferably, the feature point extraction in the steps S1 and S3 is realized by the following method:
carrying out image graying processing;
extracting feature points by using a key point detection algorithm;
identifying the direction of the feature points based on histogram statistics;
and describing the feature points to obtain a feature descriptor.
Preferably, the page feature matching in step S3 is implemented with an algorithm such as the Euclidean distance between feature values, the cosine similarity of feature vectors, or the correlation coefficient.
Preferably, the page feature matching in step S3 is implemented by the following method:
performing dimension reduction, hash transformation, and sorting on the feature descriptors of the feature points extracted from the image to be identified, and then comparing the resulting hash values with the hash values of the feature points stored in the page feature library; if the distance is smaller than a preset first threshold, the pair of feature points is considered matched;
and counting the number of the matched feature points, and if the number of the matched feature points is greater than a preset second threshold value, determining that the image to be identified is matched with the corresponding original page image.
Preferably, the step S4 is specifically realized by the following sub-steps:
S41, scaling the image to be identified;
S42, extracting multi-scale feature maps of the image to be identified with a multi-scale convolutional neural network;
S43, converting the multi-scale feature maps into a pyramid feature hierarchy with a feature fusion network and fusing feature maps of corresponding sizes;
S44, converting the fused feature maps into feature vectors, inputting the feature vectors into a neural network consisting of two fully-connected layers, and predicting, from each feature vector, the category information and frame information of the object in the corresponding candidate region frame;
and S45, outputting the obtained object category information and frame information as the object detection result, thereby obtaining the article classification information.
Preferably, the step S44 is implemented by the following method:
firstly, predicting m frames for each grid point in the fused feature maps of the pyramid feature hierarchy, each frame comprising (s + 4) values, where s is the number of object categories and the remaining 4 are the centre-point coordinates, the length, and the width;
then, obtaining prediction candidate regions from the output data of each scale's feature map and sending them to a preset classification model, which computes the confidence for each of the N object types; the categories whose confidence exceeds a preset threshold are retained, yielding a series of candidate target frames, wherein the output data comprise category information, confidence scores, and target frame information;
the preset classification model is obtained as follows: a large number of pictures of the N types of objects are shot and collected, each picture is labeled with its category and with the coordinate information of the objects it contains, and the labeled data is fed into the network structure for training, yielding a classification model for the N types of objects.
Preferably, the step S45 is specifically realized by the following method:
comparing the confidence scores of the output candidate target frames with a preset threshold and retaining the frames scoring above the threshold; applying non-maximum suppression to the retained candidate frames; selecting the candidate frame with the highest occurrence probability for its target category; and outputting its target frame information, category information, and confidence score as the final result.
With the above technical scheme, the invention has the following advantages over the background art:
By letting children learn about objects through shooting and recognition, the invention makes the learning process more interactive and engaging, improves children's learning ability, and benefits their physical and mental development.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a schematic view of a page feature matching process according to the present invention;
FIG. 3 is a schematic flow chart of the article identification and detection according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Examples
With reference to fig. 1-3, the invention discloses a shooting and identifying method for articles for children, which comprises the following steps:
and S1, acquiring a page feature library of the printed matter and a classification model library of the article in advance, and acquiring a multimedia content library corresponding to the page feature library and the classification model library of the article. The specific obtaining method of the page feature library comprises the following steps: the method comprises the steps of firstly obtaining an original image of a printed matter, and then extracting feature points of the original image to obtain a page feature library. The feature point extraction method may employ any feature point extraction algorithm including, but not limited to, SIFT, SURF, and algorithm variants thereof, and the present invention is not particularly limited. In this embodiment, the feature point extraction may be implemented by:
a. and (5) carrying out image graying processing. Therefore, the collected image is a color image (for example, an RGB three-channel color image), and a graying process is required to be performed first, so as to facilitate the execution of the subsequent steps. In this embodiment, the formula for calculating graying is as follows:
Gray=(R*30+G*59+B*11+50)/100
wherein Gray is a Gray value.
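As an editorial illustration (not part of the original disclosure), the graying formula can be implemented as follows in Python; the (H, W, 3) uint8 RGB memory layout is an assumption:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Apply Gray = (R*30 + G*59 + B*11 + 50) / 100 per pixel.

    `rgb` is assumed to be an (H, W, 3) uint8 array in R, G, B order;
    the +50 term rounds to the nearest integer under integer division.
    """
    r, g, b = (rgb[..., i].astype(np.uint32) for i in range(3))
    return ((r * 30 + g * 59 + b * 11 + 50) // 100).astype(np.uint8)
```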
b. Extracting feature points with a keypoint detection algorithm. The original image is repeatedly downsampled to obtain a series of images of different sizes, and Gaussian filtering is applied at the different scales; subtracting two Gaussian-filtered versions of the same image at adjacent scales yields a difference-of-Gaussians image, on which extremum detection is performed; an extremum point satisfying the curvature condition is a feature point. The difference-of-Gaussians image D(x, y, σ) is computed as follows, where G(x, y, σ) is the Gaussian filter function, I(x, y) is the original image, and L(x, y, σ) is the Gaussian-filtered image at scale σ:
D(x, y, σ) = (G(x, y, σ(s+1)) - G(x, y, σ(s))) * I(x, y) = L(x, y, σ(s+1)) - L(x, y, σ(s))
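A minimal sketch of this difference-of-Gaussians construction, assuming OpenCV is available; the scale schedule σ(s) = σ0·k^s and its constants are illustrative choices, not values from the disclosure:

```python
import cv2
import numpy as np

def dog_stack(gray: np.ndarray, sigma0: float = 1.6, k: float = 2 ** 0.5,
              levels: int = 5) -> list[np.ndarray]:
    """Return D(x, y, s) = L(x, y, sigma(s+1)) - L(x, y, sigma(s)) for one octave."""
    img = gray.astype(np.float32)
    # L(x, y, sigma): Gaussian-filtered copies at increasing scales sigma0 * k**s
    blurred = [cv2.GaussianBlur(img, (0, 0), sigma0 * k ** s) for s in range(levels)]
    # Adjacent-scale differences; extrema of these maps are feature point candidates
    return [blurred[s + 1] - blurred[s] for s in range(levels - 1)]
```

Extrema would then be located by comparing each pixel of a D map with its neighbors and keeping those that satisfy the curvature condition.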
c. and identifying the direction of the feature points based on the histogram statistics. After the gradient calculation of the feature points is completed, the gradient and the direction of the pixels in the neighborhood are counted by using the histogram. The gradient histogram divides the direction range of 0-360 degrees into 18 bins, with 20 degrees per bin. The direction of the peak of the histogram represents the dominant direction of the feature point. L is a scale space value where the key point is located, and the gradient m and the direction theta of each pixel point are calculated according to the following formula:
Figure BDA0002211696170000051
θ(x,y)=tan-1((L(x,y+1)-L(x,y-1))/L(x+1,y)-L(x-1,y)))
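An illustrative sketch of the 18-bin orientation assignment, added editorially; the neighborhood radius and the assumption that the keypoint lies away from the image border are not specified by the disclosure:

```python
import numpy as np

def dominant_orientation(L: np.ndarray, x: int, y: int, radius: int = 8) -> float:
    """Histogram-based orientation: 18 bins of 20 degrees over 0-360 degrees.

    `L` is the Gaussian-filtered image at the keypoint's scale; gradients are
    the central differences m(x, y) and theta(x, y) given in the text above.
    Assumes the keypoint is at least `radius + 1` pixels from the border.
    """
    patch = L[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(np.float32)
    dx = patch[1:-1, 2:] - patch[1:-1, :-2]   # L(x+1, y) - L(x-1, y)
    dy = patch[2:, 1:-1] - patch[:-2, 1:-1]   # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.degrees(np.arctan2(dy, dx)) % 360.0
    hist, _ = np.histogram(theta, bins=18, range=(0.0, 360.0), weights=m)
    return (np.argmax(hist) + 0.5) * 20.0     # centre of the peak 20-degree bin
```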
and describing the feature points to obtain a feature descriptor. Determining a neighborhood with the size of 21 multiplied by 21 for the feature point, and rotating the neighborhood to the main direction; calculating the horizontal gradient and the vertical gradient of pixel points in the neighborhood, thus determining a characteristic descriptor with the size of 19 multiplied by 2 to 722 dimensions for each characteristic point; the description of the feature points includes coordinates, dimensions, and directions. It should be noted here that, since the obtained feature descriptor is high-dimensional (722 dimensions in this embodiment), for convenience of subsequent processing, dimension reduction and hash transformation are performed, in this embodiment, a principal component analysis dimension reduction method is used to perform dimension reduction processing, that is, PCA in fig. 2, and 20 dimensions are obtained after the dimension reduction processing, and after the locality sensitive hash transformation, that is, LSH in fig. 2, the 20-dimensional feature descriptor is mapped to 1 32-bit floating point value. The specific operation of the PCA is as follows:
firstly, a feature matrix X is constructed by using feature data of a large number of collected images, the feature values of the matrix X are obtained, the feature values are sorted according to the size, and the feature vectors corresponding to the feature values are obtained to form a transformation matrix W. In the case of the existing transformation matrix W, for any one of the feature data Y of the acquired image, Z is equal to YWTThe original feature matrix Y is projectedThe feature matrix Y of the high dimension is reduced to a new feature matrix Z of the low dimension, which is linearly independent, as seen from the matrix Z.
The specific operation of LSH is as follows:
(1) selecting a locality-sensitive hash function family satisfying (d1, d2, p1, p2)-sensitivity;
(2) determining the number L of hash tables, the number K of hash functions in each table, and the parameters of the sensitive hashes according to the required accuracy of the search results;
(3) hashing all data into the corresponding buckets through the locality-sensitive hash functions, forming one or more hash tables.
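To make steps (1)-(3) concrete, here is a minimal random-hyperplane LSH sketch, added editorially; the hyperplane family and the values of L and K are illustrative assumptions, since the disclosure only requires a (d1, d2, p1, p2)-sensitive family:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
L_TABLES, K_FUNCS, DIM = 4, 8, 20                    # illustrative L and K; 20-dim descriptors
planes = rng.normal(size=(L_TABLES, K_FUNCS, DIM))   # K hyperplanes per table

def bucket_keys(desc: np.ndarray) -> list[tuple[int, int]]:
    """Hash one 20-dim descriptor into one bucket per table (sign of projections)."""
    bits = (np.einsum('lkd,d->lk', planes, desc) > 0).astype(np.uint64)
    return [(l, int(bits[l] @ (1 << np.arange(K_FUNCS, dtype=np.uint64))))
            for l in range(L_TABLES)]

def build_tables(descs: np.ndarray) -> list[dict[int, list[int]]]:
    """Step (3): hash all library descriptors into the L hash tables."""
    tables = [defaultdict(list) for _ in range(L_TABLES)]
    for idx, d in enumerate(descs):
        for l, key in bucket_keys(d):
            tables[l][key].append(idx)
    return tables
```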
And S2, shooting the printed matter or article to obtain the image to be identified. The shooting in this step is performed by a handheld terminal device that integrates a camera and provides a key for triggering the shot; the terminal naturally also contains a processor and a memory for image acquisition. The terminal device may be shaped like a camera or like a magnifying glass.
And S3, extracting the feature points of the image to be identified, retrieving the image in the page feature library based on the extracted feature points, and performing page feature matching to obtain the original page information of the image in the page feature library. The matching algorithm includes, but is not limited to, the Euclidean distance between feature values, the cosine similarity of feature vectors, the correlation coefficient, and the like; the invention imposes no particular limitation. Referring to fig. 2, in this embodiment the page feature matching may be implemented by the following method:
s31, performing dimensionality reduction, hash transformation and sorting processing on the feature descriptors corresponding to the feature points extracted from the image to be recognized, comparing the hash values with the hash values of the feature points stored in the page feature library, and if the distance is smaller than a preset first threshold value, determining that the pair of feature points are matched. The matching distance calculation process includes calculating the distance between the hash value of the feature point and 2L data in the page feature library, defining the distance as, but not limited to, the absolute value of the difference between the two numbers, and determining that the feature point pair is matched if the distance is smaller than a set first threshold.
And S32, counting the number of matched feature points; if the number of matched feature points is greater than a preset second threshold, the image to be identified is considered to match the corresponding original page image.
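An editorial sketch of the two-threshold decision in steps S31-S32; the threshold values and the brute-force comparison are placeholder assumptions (an implementation would query the LSH buckets rather than scan the whole library):

```python
def page_matches(query_hashes: list[float], page_hashes: list[float],
                 first_thr: float = 0.5, second_thr: int = 30) -> bool:
    """A feature pair matches when |hq - hp| < first_thr; the page matches
    when the number of matched pairs exceeds second_thr. Both thresholds
    here are placeholders, not values taken from the disclosure."""
    matched = sum(1 for hq in query_hashes
                  if any(abs(hq - hp) < first_thr for hp in page_hashes))
    return matched > second_thr
```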
And S4, identifying and detecting the image to be identified based on a neural network to obtain the article classification information. Referring to fig. 3, this step is realized by the following sub-steps:
and S41, carrying out scaling processing on the image to be recognized, and enabling the image to meet the size capable of being detected by the subsequent neural network through the scaling processing. In the present embodiment, the image is scaled to a size of N × N by a method of downsampling interpolation. Of course, the method of nearest neighbor interpolation, linear interpolation or spline interpolation can be adopted, and the method adopted by the scaling processing is not particularly limited.
And S42, extracting the multi-scale feature maps of the image to be identified with a multi-scale convolutional neural network. In this embodiment, an image with a resolution of N × N is processed by five convolution layers, yielding five feature maps with resolutions of 19 × 19, 10 × 10, 5 × 5, 3 × 3, and 1 × 1.
The multi-scale convolutional neural network (i.e. the feature extraction network) comprises three input modules, first to fifth convolution modules, first to fifth pooling modules, and a normalization module, wherein:
the three input modules accept, respectively, the image to be detected, the object category information, and the frame information, with the image to be detected serving as the input of the first convolution module; the first convolution module, first pooling module, second convolution module, second pooling module, third convolution module, third pooling module, fourth convolution module, fourth pooling module, fifth convolution module, and fifth pooling module are cascaded in sequence; and the output of the fourth convolution module is fed, together with the output of the frame information input module, into the normalization module.
And S43, converting the multi-scale feature maps into a pyramid feature hierarchy with a feature fusion network and fusing feature maps of corresponding sizes. In this embodiment, the feature maps are downsampled by convolutions with stride 2 to generate the pyramid feature hierarchy, and the five interpolation-enlarged feature maps are fused with the feature maps of corresponding size in the pyramid hierarchy.
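A hedged PyTorch sketch of this fusion step; element-wise addition and matching channel counts are assumptions, since the disclosure does not state how the corresponding-size maps are combined:

```python
import torch
import torch.nn.functional as F

def fuse(pyramid: list[torch.Tensor], backbone: list[torch.Tensor]) -> list[torch.Tensor]:
    """Fuse interpolation-enlarged backbone maps with same-size pyramid levels.

    `pyramid` holds the stride-2-convolution hierarchy and `backbone` the five
    feature maps (19x19 ... 1x1), each of shape (N, C, H, W) with equal C.
    """
    fused = []
    for p, b in zip(pyramid, backbone):
        b_up = F.interpolate(b, size=p.shape[-2:], mode='bilinear', align_corners=False)
        fused.append(p + b_up)  # assumed combination: element-wise sum
    return fused
```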
And S44, converting the fused feature maps into feature vectors and inputting them into a neural network consisting of two fully-connected layers, which predicts, from each feature vector, the category information and frame information of the object in the corresponding candidate region. The specific steps are as follows:
Firstly, for each grid point in the fused 19 × 19, 10 × 10, 5 × 5, 3 × 3, and 1 × 1 feature maps of the pyramid feature hierarchy, m frames are predicted, each comprising (s + 4) values, where s is the number of object categories and the remaining 4 are the centre-point coordinates, the length, and the width.
Then, prediction candidate regions are obtained from the output data of each scale's feature map and sent to a preset classification model, which computes the confidence for each of the N object types; the categories whose confidence exceeds a preset threshold are retained, yielding a series of candidate target frames. The output data comprise category information, confidence scores, and target frame information.
The preset classification model is obtained as follows: a large number of pictures of the N types of objects are shot and collected, each picture is labeled with its category and with the coordinate information of the objects it contains, and the labeled data is fed into the network structure for training, yielding a classification model for the N types of objects.
And S45, outputting the obtained object category information and frame information as the object detection result, thereby obtaining the article classification information. The specific steps are as follows:
The confidence scores of the output candidate target frames are compared with a preset threshold, and the frames scoring above the threshold are retained; non-maximum suppression is applied to the retained candidate frames; the candidate frame with the highest occurrence probability for its target category is selected; and its target frame information, category information, and confidence score are output as the final result.
And S5, based on the original page information obtained in step S3 or the article classification information obtained in step S4, selecting the corresponding multimedia content from the multimedia content library and playing it. The multimedia file may be an audio, image, or video file; the invention imposes no particular limitation. Playback may use a display screen (an optional component) or a loudspeaker integrated in the handheld terminal device, or the handheld terminal may connect to an external smart terminal over WIFI or Bluetooth and play the content on that terminal's screen and loudspeaker; WIFI, Bluetooth, and the external smart terminal are likewise optional components.
It should be understood by those skilled in the art that step S3 performs printed-matter identification and step S4 performs article identification. In a specific implementation, steps S3 and S4 may be performed simultaneously, with whichever result is obtained first being used, or they may be performed in sequence; the invention imposes no limitation.
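As an editorial sketch of the simultaneous variant (first usable result wins); the two recognizer callables are hypothetical stand-ins for steps S3 and S4:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def recognize(image, match_page, detect_article):
    """Run page matching (S3) and article detection (S4) concurrently and
    return whichever non-None result arrives first; None triggers the
    voice prompt described below."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = {pool.submit(match_page, image), pool.submit(detect_article, image)}
        while pending:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for future in done:
                result = future.result()
                if result is not None:
                    return result
    return None
```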
In practical use, a child photographs objects with the handheld terminal device (for example, one shaped like a magnifying glass); the objects may be printed matter (cards, books, and the like) or articles (animals, plants, fruit, household appliances, furniture, daily necessities, and the like). The picture taken by the child is identified, and when a result is recognized the corresponding multimedia is played; if no result is recognized, a corresponding voice prompt is given.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A shooting and identifying method for articles for children is characterized by comprising the following steps:
s1, pre-obtaining a page feature library of the printed matter and a classification model library of the article, and simultaneously obtaining a multimedia content library corresponding to the page feature library and the classification model library of the article;
s2, shooting the printed matter or the article to obtain an image to be identified;
s3, extracting feature points of the image to be recognized, retrieving the image to be recognized in a page feature library based on the extracted feature points, and matching page features to obtain original page information of the image to be recognized in the page feature library;
s4, identifying and detecting the image to be identified based on the neural network to obtain article classification information;
and S5, selecting corresponding multimedia content from the multimedia content library and playing the multimedia content based on the original page information obtained in the step S3 or the item classification information obtained in the step S4.
2. The shooting identification method for children's articles according to claim 1, wherein the page feature library of step S1 is obtained by the following method: first acquiring an original image of the printed matter, and then extracting feature points from the original image to obtain the page feature library.
3. The shooting identification method for children's articles according to claim 1 or 2, characterized in that: the feature point extraction in steps S1 and S3 is implemented with a feature extraction algorithm such as SIFT or SURF.
4. The shooting identification method for children's articles according to claim 1 or 2, characterized in that: the feature point extraction in steps S1 and S3 is realized by the following method:
carrying out image graying processing;
extracting feature points by using a key point detection algorithm;
identifying the direction of the feature points based on histogram statistics;
and describing the feature points to obtain a feature descriptor.
5. The shooting identification method for children's articles according to claim 1, characterized in that: the page feature matching in step S3 is implemented with an algorithm such as the Euclidean distance between feature values, the cosine similarity of feature vectors, or the correlation coefficient.
6. The shooting identification method for children's articles according to claim 1, characterized in that: the page feature matching in step S3 is implemented by the following method:
performing dimension reduction, hash transformation, and sorting on the feature descriptors of the feature points extracted from the image to be identified, and then comparing the resulting hash values with the hash values of the feature points stored in the page feature library; if the distance is smaller than a preset first threshold, the pair of feature points is considered matched;
and counting the number of the matched feature points, and if the number of the matched feature points is greater than a preset second threshold value, determining that the image to be identified is matched with the corresponding original page image.
7. The shooting identification method for children's articles according to claim 1, characterized in that: the step S4 is specifically realized by the following sub-steps:
S41, scaling the image to be identified;
S42, extracting multi-scale feature maps of the image to be identified with a multi-scale convolutional neural network;
S43, converting the multi-scale feature maps into a pyramid feature hierarchy with a feature fusion network and fusing feature maps of corresponding sizes;
S44, converting the fused feature maps into feature vectors, inputting the feature vectors into a neural network consisting of two fully-connected layers, and predicting, from each feature vector, the category information and frame information of the object in the corresponding candidate region frame;
and S45, outputting the obtained object category information and frame information as the object detection result, thereby obtaining the article classification information.
8. The shooting identification method for children's articles according to claim 7, characterized in that: the step S44 is implemented by the following method:
firstly, predicting m frames for each grid point in the fused feature maps of the pyramid feature hierarchy, each frame comprising (s + 4) values, where s is the number of object categories and the remaining 4 are the centre-point coordinates, the length, and the width;
then, obtaining prediction candidate regions from the output data of each scale's feature map and sending them to a preset classification model, which computes the confidence for each of the N object types; the categories whose confidence exceeds a preset threshold are retained, yielding a series of candidate target frames, wherein the output data comprise category information, confidence scores, and target frame information;
the preset classification model is obtained as follows: a large number of pictures of the N types of objects are shot and collected, each picture is labeled with its category and with the coordinate information of the objects it contains, and the labeled data is fed into the network structure for training, yielding a classification model for the N types of objects.
9. The shooting identification method for children's articles according to claim 8, characterized in that: the step S45 is specifically realized by the following method:
comparing the confidence scores of the output candidate target frames with a preset threshold and retaining the frames scoring above the threshold; applying non-maximum suppression to the retained candidate frames; selecting the candidate frame with the highest occurrence probability for its target category; and outputting its target frame information, category information, and confidence score as the final result.
CN201910900593.XA, filed 2019-09-23 (priority date 2019-09-23): Shooting and identifying method for articles for children. Status: Pending. Published as CN110647844A.

Priority Applications (1)

Application Number: CN201910900593.XA · Priority Date: 2019-09-23 · Filing Date: 2019-09-23 · Title: Shooting and identifying method for articles for children

Applications Claiming Priority (1)

Application Number: CN201910900593.XA · Priority Date: 2019-09-23 · Filing Date: 2019-09-23 · Title: Shooting and identifying method for articles for children

Publications (1)

Publication Number Publication Date
CN110647844A (en) 2020-01-03

Family

ID=68992545

Family Applications (1)

Application Number: CN201910900593.XA (CN110647844A, pending) · Title: Shooting and identifying method for articles for children · Priority Date: 2019-09-23 · Filing Date: 2019-09-23

Country Status (1)

Country Link
CN (1) CN110647844A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126668A (en) * 2016-06-28 2016-11-16 北京小白世纪网络科技有限公司 A kind of image characteristic point matching method rebuild based on Hash
CN106649629A (en) * 2016-12-02 2017-05-10 华中师范大学 System connecting books with electronic resources
CN106777066A (en) * 2016-12-12 2017-05-31 北京奇虎科技有限公司 A kind of method and apparatus of image recognition matched media files
CN110188802A (en) * 2019-05-13 2019-08-30 南京邮电大学 SSD algorithm of target detection based on the fusion of multilayer feature figure
CN110209865A (en) * 2019-05-24 2019-09-06 广州市云家居云科技有限公司 A kind of object identification and matching process based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邓超 (Deng Chao) et al.: 《数字图像处理与模式识别研究》 (Research on Digital Image Processing and Pattern Recognition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464894A (en) * 2020-12-14 2021-03-09 深圳市优必选科技股份有限公司 Interaction method and device and computer equipment
CN112464894B (en) * 2020-12-14 2023-09-01 深圳市优必选科技股份有限公司 Interaction method and device and computer equipment

Similar Documents

Publication Publication Date Title
CN108334848B (en) Tiny face recognition method based on generation countermeasure network
Sun et al. Facial expression recognition in the wild based on multimodal texture features
CN106919920B (en) Scene recognition method based on convolution characteristics and space vision bag-of-words model
Zhao et al. Learning mid-level filters for person re-identification
US8750573B2 (en) Hand gesture detection
US8792722B2 (en) Hand gesture detection
Shahab et al. ICDAR 2011 robust reading competition challenge 2: Reading text in scene images
CN108875542B (en) Face recognition method, device and system and computer storage medium
CN114783003B (en) Pedestrian re-identification method and device based on local feature attention
AU2011207120B2 (en) Identifying matching images
Lyu et al. Small object recognition algorithm of grain pests based on SSD feature fusion
Chen et al. TriViews: A general framework to use 3D depth data effectively for action recognition
CN102385592A (en) Image concept detection method and device
CN107644105A (en) One kind searches topic method and device
US8724890B2 (en) Vision-based object detection by part-based feature synthesis
Farhangi et al. Improvement the bag of words image representation using spatial information
CN110647844A (en) Shooting and identifying method for articles for children
Zou et al. Supervised feature learning via L2-norm regularized logistic regression for 3D object recognition
Lian et al. Fast pedestrian detection using a modified WLD detector in salient region
Mir et al. Criminal action recognition using spatiotemporal human motion acceleration descriptor
CN114299610A (en) Method and system for recognizing actions in infrared video
Xia et al. Ranking the local invariant features for the robust visual saliencies
Chen et al. Action recognition using motion history image and static history image-based local binary patterns
Guyomard et al. Contextual detection of drawn symbols in old maps
CN110765997B (en) Interactive reading realization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200103