CN110555435A - Point-reading interaction realization method - Google Patents

Point-reading interaction realization method

Info

Publication number
CN110555435A
Authority
CN
China
Prior art keywords
image
page
feature
point
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910853992.5A
Other languages
Chinese (zh)
Other versions
CN110555435B (en)
Inventor
江周平
杨锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Anxin Zhitong Technology Co ltd
Original Assignee
Shenzhen Yikuai Interactive Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yikuai Interactive Network Technology Co Ltd
Priority to CN201910853992.5A
Publication of CN110555435A
Application granted
Publication of CN110555435B
Current legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/04 Electrically-operated educational appliances with audible presentation of the material to be studied

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a point-reading interaction realization method, which comprises the following steps: obtaining a cover page feature library and a content page feature library in advance; acquiring a local image of the cover page with the image acquisition assembly of a point-reading pen, extracting feature points of the local image with the processor, and matching the extracted feature points against the cover page feature library to obtain the printed matter information; acquiring a page image with the image acquisition assembly of the point-reading pen and performing OCR (optical character recognition) on the digits in the page image with the processor to obtain the page number information; touching a region of interest on a content page of the printed matter with the point-reading pen, acquiring a region image of the region of interest with the image acquisition assembly, extracting feature points of the region image with the processor, matching the extracted feature points against the content page feature library, and obtaining the point-reading position information from the matching result. The invention requires no codes to be pre-printed on books and frees the point-reading content from the limitations imposed by coding.

Description

Point-reading interaction realization method
Technical Field
The invention relates to the technical field of multimedia education, and in particular to a point-reading interaction realization method.
Background
Point reading is an intelligent reading and learning mode realized with optical image recognition technology and digital voice technology; it embodies the integration of electronic multimedia technology with the education industry and reflects the people-oriented concept of science and technology.
With existing point-reading devices, the book usually has to be pre-processed by printing or attaching a specific code to it; otherwise the contents of the book cannot be identified. In addition, the encoding rules limit the total number of available codes, so for books with extensive content this code-based point-reading approach shows obvious limitations.
Disclosure of Invention
The invention aims to provide a point-reading interaction implementation method that requires no codes to be placed on books in advance and frees the point-reading content from the limitations imposed by coding.
In order to achieve this aim, the invention adopts the following technical scheme:
a reading interaction realization method is realized based on a reading pen, the reading pen comprises a pen main body, a pressure sensing assembly and an image acquisition assembly, a processor and a memory are arranged in the pen main body, the pressure sensing assembly is arranged at the position of a pen point of the pen main body, the image acquisition assembly is arranged on the pen main body and is positioned above the pen point, the pressure sensing assembly, the image acquisition assembly and the memory are respectively connected with the processor, and the method comprises the following steps:
S1, extracting feature points of the cover page and the content pages of the printed matter in advance to obtain a cover page feature library and a content page feature library, and storing both libraries in the memory;
S2, touching the cover page of the printed matter with the point-reading pen, collecting a local image of the cover page with the image acquisition assembly, extracting feature points of the local image with the processor, and matching the extracted feature points against the cover page feature library to obtain the printed matter information;
S3, touching the page-number position on a content page of the printed matter with the point-reading pen, acquiring a page image with the image acquisition assembly, and performing OCR (optical character recognition) on the digits in the page image with the processor to obtain the page number information;
S4, touching a region of interest on a content page of the printed matter with the point-reading pen, acquiring a region image of the region of interest with the image acquisition assembly, extracting feature points of the region image with the processor, matching the extracted feature points against the content page feature library, and obtaining the point-reading position information from the matching result.
Preferably, the method further includes a step S5 of obtaining the corresponding audio file based on the printed matter information, the page number information and the point-reading position information, and playing it.
Further, the feature point extraction in steps S1, S2 and S4 is realized by the following steps:
carrying out image graying processing;
extracting feature points with a key point detection algorithm;
identifying the direction of the feature points based on histogram statistics; and
describing the feature points to obtain feature descriptors.
Preferably, extracting the feature points with the key point detection algorithm specifically comprises:
successively downsampling the original image to obtain a series of images of different sizes, applying Gaussian filtering to the images at the different scales, and subtracting two adjacent-scale Gaussian-filtered versions of the same image to obtain a difference-of-Gaussian image; extremum detection is then carried out, and the extremum points satisfying a curvature condition are the feature points.
Preferably, step S1 specifically comprises the following sub-steps:
S11, for the cover page of the printed matter, extracting feature points of the cover page image, then performing dimensionality reduction on the feature descriptors, applying hash transformation and sorting after the dimensionality reduction, and storing the result in the cover page feature library;
S12, for the content pages of the printed matter, first dividing each content page image into a group of image blocks (division methods include, but are not limited to, uniform division and selected-area division), then extracting feature points of the image blocks, and finally performing dimensionality reduction on the feature descriptors, applying hash transformation and sorting after the dimensionality reduction, and storing the result in the content page feature library.
Preferably, matching the extracted feature points against the cover page feature library in step S2 is specifically realized as follows:
performing dimensionality reduction, hash transformation and sorting on the feature descriptors corresponding to the feature points extracted from the local image, then comparing the resulting hash values with the hash values of the feature points stored in the cover page feature library; if the distance is smaller than a preset first threshold, the feature points are deemed matched;
counting the number of matched feature points; if that number is greater than a preset second threshold, the local image is deemed to match the corresponding cover page image.
Preferably, matching the extracted feature points against the content page feature library in step S4 is specifically realized as follows:
performing dimensionality reduction, hash transformation and sorting on the feature descriptors corresponding to the feature points extracted from the region image, then comparing the resulting hash values with the hash values of the feature points stored in the content page feature library; if the distance is smaller than a preset first threshold, the feature points are deemed matched;
counting the number of matched feature points; if that number is greater than a preset second threshold, the region image is deemed to match the corresponding image block.
Preferably, the hash transformation uses a locality-sensitive hash function to map the multidimensional features to a single value, such that point pairs far apart in the multidimensional space have a large value difference after mapping, while point pairs close together have a small value difference after mapping.
Preferably, the dimensionality reduction screens out, from the high-dimensional features, several feature dimensions with high discriminability, using a principal component analysis dimensionality reduction method.
After adopting the above technical scheme, the invention has the following advantages over the background art:
1. The invention identifies the point-reading content region by extracting and matching image feature points; no codes need to be pre-printed on books, and the point-reading content is freed from the limitations imposed by coding.
2. The invention identifies the cover page, the page number and the point-reading region of interest separately, realizing a book / page number / content position query scheme with a small data processing load and high processing efficiency during identification and matching.
3. After the feature point extraction, the invention performs dimensionality reduction, hash transformation and sorting, which reduces the data volume and improves the efficiency of the subsequent identification and matching steps.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a schematic flow chart of cover page identification according to the present invention;
FIG. 3 is a schematic flow chart of content page identification according to the present invention;
FIG. 4 is a schematic view of the method for calculating the installation height of the image acquisition assembly according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and are not intended to limit it.
Examples
The invention discloses a point-reading interaction realization method implemented on the basis of a point-reading pen. Before the method is described in detail, the structure of the point-reading pen is explained to facilitate a better understanding of the invention.
The point-reading pen of the invention comprises a pen main body, a pressure sensing assembly and an image acquisition assembly. A processor and a memory are arranged in the pen main body; the pressure sensing assembly is arranged at the pen point of the pen main body; the image acquisition assembly is arranged on the pen main body above the pen point; and the pressure sensing assembly, the image acquisition assembly and the memory are each connected with the processor. In use, the user points the pen at a printed matter, the pressure sensing assembly detects a pressure signal and transmits it to the processor, and the processor controls the image acquisition assembly to take a picture. In this embodiment, the pressure sensing assembly is a piezoelectric sensor and the image acquisition assembly is a camera. The mounting height of the image acquisition assembly directly affects the shooting result and the recognition result, and is determined as follows (see Fig. 4):
The height h of the camera on the point-reading pen is determined as follows: a common lens with viewing angle θ is selected. Image block matching can only be guaranteed if an area of radius r is captured; the size of r depends on the content segmentation, and r must be at least the size of a sub-image block plus a redundancy margin. The height then follows from h = r / tan(θ/2).
Taking A4 paper as an example, w = 21 cm (the A4 width) and l = 29 cm (the A4 length). Assume a single A4 page must be divided into 20 sub-image regions, each about 5 cm × 5 cm. Allowing for redundancy (parts of the region are inevitably occluded), the capture region must have a radius of about 5 cm when the camera shoots vertically downward, and the camera height h is then determined from this radius.
With a common lens with a 60° viewing angle, capturing an area of radius about 5 cm gives h = 5 / tan(30°) ≈ 8.7 cm; taking the pen-holding height and tilt into account, the camera can be set somewhat higher, so h may be set to 9 cm.
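As a numeric check of this relation, a minimal sketch (the function name and unit handling are illustrative, not from the patent):

```python
import math

def camera_height(radius_cm: float, view_angle_deg: float) -> float:
    """h = r / tan(theta / 2): height at which a lens with the given
    view angle captures a circle of the given radius, shooting
    straight down."""
    return radius_cm / math.tan(math.radians(view_angle_deg / 2.0))

# The embodiment's numbers: a 60-degree lens covering a 5 cm radius.
print(f"h = {camera_height(5.0, 60.0):.1f} cm")  # h = 8.7 cm, set to 9 cm
```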
With reference to Figs. 1-3, the point-reading interaction realization method of the invention comprises the following steps:
S1, extracting feature points of the cover page and the content pages of the printed matter in advance to obtain a cover page feature library and a content page feature library, and storing both libraries in the memory. This step comprises:
S11, for the cover page of the printed matter, extracting feature points of the cover page image, then performing dimensionality reduction on the feature descriptors, applying hash transformation and sorting after the dimensionality reduction, and storing the result in the cover page feature library.
S12, for the content pages of the printed matter, first dividing each content page image into a group of image blocks, then extracting feature points of the image blocks, and finally performing dimensionality reduction on the feature descriptors, applying hash transformation and sorting after the dimensionality reduction, and storing the result in the content page feature library.
S2, the cover page of the printed matter is touched with the point-reading pen, the image acquisition assembly acquires a local image of the cover page, the processor extracts feature points of the local image, and the extracted feature points are matched against the cover page feature library to obtain the printed matter information (i.e. which book it is). The matching is specifically realized as follows:
performing dimensionality reduction, hash transformation and sorting on the feature descriptors corresponding to the feature points extracted from the local image, then comparing the resulting hash values with the hash values of the feature points stored in the cover page feature library; if the distance is smaller than a preset first threshold, the feature points are deemed matched;
counting the number of matched feature points; if that number is greater than a preset second threshold, the local image is deemed to match the corresponding cover page image.
S3, the page-number position on a content page of the printed matter is touched with the point-reading pen, the image acquisition assembly acquires a page image, and the processor performs OCR on the digits in the page image to obtain the page number information.
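The patent does not name an OCR engine, so the following is an illustrative sketch only; it assumes the open-source Tesseract engine via pytesseract and restricts recognition to a single line of digits:

```python
import cv2
import numpy as np
import pytesseract

def read_page_number(page_image: np.ndarray) -> str:
    """Recognize the page number in a BGR crop of the page-number area."""
    gray = cv2.cvtColor(page_image, cv2.COLOR_BGR2GRAY)
    # Otsu binarization separates the printed digits from the paper.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # --psm 7: treat the crop as one text line; whitelist digits only.
    config = "--psm 7 -c tessedit_char_whitelist=0123456789"
    return pytesseract.image_to_string(binary, config=config).strip()
```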
S4, a region of interest on a content page of the printed matter is touched with the point-reading pen, the image acquisition assembly acquires a region image of the region of interest, the processor extracts feature points of the region image, the extracted feature points are matched against the content page feature library, and the point-reading position information is obtained from the matching result. The matching is specifically realized as follows:
performing dimensionality reduction, hash transformation and sorting on the feature descriptors corresponding to the feature points extracted from the region image, then comparing the resulting hash values with the hash values of the feature points stored in the content page feature library; if the distance is smaller than a preset first threshold, the feature points are deemed matched;
counting the number of matched feature points; if that number is greater than a preset second threshold, the region image is deemed to match the corresponding image block.
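A minimal sketch of this two-threshold vote, assuming each library entry stores a sorted array of scalar feature hashes (the names, data layout and use of NumPy are illustrative):

```python
from typing import Dict, Optional
import numpy as np

def match_against_library(query_hashes: np.ndarray,
                          library: Dict[str, np.ndarray],
                          first_threshold: float,
                          second_threshold: int) -> Optional[str]:
    """A feature matches when the absolute difference of hash values is
    below the first threshold; an image matches a library entry when
    the number of matched features exceeds the second threshold."""
    best_id, best_count = None, 0
    for entry_id, hashes in library.items():  # sorted at build time
        if len(hashes) < 2:
            continue
        idx = np.clip(np.searchsorted(hashes, query_hashes),
                      1, len(hashes) - 1)
        # Distance to the nearest stored hash on either side.
        nearest = np.minimum(np.abs(hashes[idx] - query_hashes),
                             np.abs(hashes[idx - 1] - query_hashes))
        count = int(np.sum(nearest < first_threshold))
        if count > best_count:
            best_id, best_count = entry_id, count
    return best_id if best_count > second_threshold else None
```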
In the present embodiment, the feature point extraction operations involved in steps S1, S2 and S4 are implemented by the following method:
a. Image graying processing. Since the acquired image is a color image (for example, an RGB three-channel color image), a graying process must be performed first to facilitate the subsequent steps. In this embodiment, the graying formula is as follows:
Gray=(R*30+G*59+B*11+50)/100
where Gray is the resulting gray value and R, G and B are the three color channel values.
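As an illustration, the same formula vectorized over a whole image (the use of NumPy is an assumption; the patent prescribes only the formula):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Gray = (R*30 + G*59 + B*11 + 50) / 100 on an (H, W, 3) image;
    with integer division, the +50 term rounds to the nearest value."""
    r, g, b = (rgb[..., i].astype(np.uint32) for i in range(3))
    return ((r * 30 + g * 59 + b * 11 + 50) // 100).astype(np.uint8)
```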
b. Extracting the feature points with a key point detection algorithm. The original image is successively downsampled to obtain a series of images of different sizes; Gaussian filtering is applied to the images at the different scales, and two adjacent-scale Gaussian-filtered versions of the same image are subtracted to obtain a difference-of-Gaussian image; extremum detection is then carried out, and the extremum points satisfying a curvature condition are the feature points. The difference-of-Gaussian image D(x, y, σ) is computed as follows, where G(x, y, σ) is the Gaussian filter function, I(x, y) is the original image, and L(x, y, σ) is the Gaussian-filtered image at scale σ:
D(x,y,σ) = (G(x,y,σ(s+1)) - G(x,y,σ(s))) * I(x,y)
         = L(x,y,σ(s+1)) - L(x,y,σ(s))
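A minimal sketch of one difference-of-Gaussian octave built from these definitions (the base sigma, the number of levels and the use of OpenCV are assumptions; the patent fixes only the construction D = L(σ(s+1)) - L(σ(s))):

```python
import cv2
import numpy as np

def dog_octave(gray: np.ndarray, sigma0: float = 1.6,
               levels: int = 6) -> list:
    """One octave of difference-of-Gaussian images; further octaves
    come from halving the image size and repeating."""
    img = gray.astype(np.float32)
    k = 2.0 ** (1.0 / (levels - 3))              # scale step per level
    L = [cv2.GaussianBlur(img, (0, 0), sigma0 * k ** s)
         for s in range(levels)]
    return [L[s + 1] - L[s] for s in range(levels - 1)]
```

Candidate feature points are then the pixels that are extrema among their 26 neighbours in the three adjacent DoG images and that pass the curvature test; this pipeline closely follows SIFT, which OpenCV also ships ready-made as cv2.SIFT_create().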
c. Identifying the direction of the feature points based on histogram statistics. After the gradient computation for a feature point is completed, the gradients and directions of the pixels in its neighborhood are accumulated with a histogram. The gradient histogram divides the 0-360° direction range into 18 bins of 20° each, and the direction of the histogram peak represents the dominant direction of the feature point. With L the scale-space image in which the key point lies, the gradient magnitude m and direction θ of each pixel are computed as:
m(x,y) = sqrt((L(x+1,y) - L(x-1,y))^2 + (L(x,y+1) - L(x,y-1))^2)
θ(x,y) = arctan((L(x,y+1) - L(x,y-1)) / (L(x+1,y) - L(x-1,y)))
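A sketch of this orientation assignment (the patch extraction and the use of NumPy are assumptions; the 18-bin, 20° layout is as stated above):

```python
import numpy as np

def dominant_direction(L_patch: np.ndarray) -> float:
    """Orientation for one feature point; L_patch is a window of the
    Gaussian-filtered image L centred on the point."""
    dx = L_patch[1:-1, 2:] - L_patch[1:-1, :-2]    # L(x+1,y) - L(x-1,y)
    dy = L_patch[2:, 1:-1] - L_patch[:-2, 1:-1]    # L(x,y+1) - L(x,y-1)
    m = np.hypot(dx, dy)                           # gradient magnitude
    theta = np.degrees(np.arctan2(dy, dx)) % 360.0 # gradient direction
    # 18 bins of 20 degrees; votes weighted by gradient magnitude.
    hist, _ = np.histogram(theta, bins=18, range=(0.0, 360.0), weights=m)
    return (np.argmax(hist) + 0.5) * 20.0          # centre of the peak bin
```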
d. Describing the feature points to obtain feature descriptors. A neighborhood of size 21 × 21 is determined around each feature point and rotated to the dominant direction; the horizontal and vertical gradients of the pixels in the neighborhood are then computed, which yields a feature descriptor of 19 × 19 × 2 = 722 dimensions per feature point; the description of a feature point also includes its coordinates, scale and direction. Note that because the obtained feature descriptor is high-dimensional (722 dimensions in this embodiment), dimensionality reduction and hash transformation are performed to ease subsequent processing. In this embodiment, a principal component analysis dimensionality reduction (PCA in Fig. 2) is used, yielding 20 dimensions, and the locality-sensitive hash transformation (LSH in Fig. 2) then maps the 20-dimensional feature descriptor to a single 32-bit floating-point value. The specific operation of the PCA is as follows:
First, a feature matrix X is constructed from the feature data of a large number of acquired images; the eigenvalues of the matrix X are obtained and sorted by magnitude, and the eigenvectors corresponding to those eigenvalues form the transform matrix W. With the transform matrix W in hand, any feature data Y of an acquired image is projected as Z = Y·W^T: the high-dimensional feature matrix Y is reduced to the low-dimensional new feature matrix Z, and the new features are linearly independent.
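A compact sketch of this PCA construction (taking the eigendecomposition over the sample covariance is an assumed concrete reading of "eigenvalues of the matrix X"; names are illustrative):

```python
import numpy as np

def fit_pca_transform(X: np.ndarray, out_dim: int = 20) -> np.ndarray:
    """Build W from a large sample of descriptors X
    (one 722-dimensional descriptor per row)."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1][:out_dim]  # largest eigenvalues first
    return eigvecs[:, order].T                   # W has shape (20, 722)

def project_descriptors(Y: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Z = Y W^T: project 722-dim descriptors down to 20 dimensions."""
    return Y @ W.T
```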
The specific operation of the LSH is as follows:
(1) select a locality-sensitive hash function family satisfying (d1, d2, p1, p2)-sensitivity;
(2) determine the number L of hash tables, the number K of hash functions in each table, and the parameters of the sensitive hashes according to the required accuracy of the search results;
(3) hash all data into the corresponding buckets through the locality-sensitive hash functions, forming one or more hash tables.
The distance computation during matching proceeds as follows:
the distance between the hash value of a query feature point and the 2L candidate data items in the database is computed, where the distance is defined as (but not limited to) the absolute value of the difference of the two values; if the distance is smaller than the preset first threshold, the feature point is judged to match.
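A minimal sketch of such an index using a random-projection hash family, one standard (d1, d2, p1, p2)-sensitive choice (the bucket width w and the seed are assumptions; the patent fixes only L and K):

```python
import numpy as np

class LSHIndex:
    """L hash tables, K random-projection hash functions per table.
    h(x) = floor((a . x + b) / w) sends nearby descriptors to the same
    bucket with high probability."""

    def __init__(self, L: int = 4, K: int = 8, dim: int = 20,
                 w: float = 4.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = w
        self.tables = [(rng.normal(size=(K, dim)),     # projections a
                        rng.uniform(0.0, w, size=K),   # offsets b
                        {})                            # buckets
                       for _ in range(L)]

    def _key(self, a, b, x):
        return tuple(np.floor((a @ x + b) / self.w).astype(int))

    def insert(self, x: np.ndarray, item_id) -> None:
        for a, b, buckets in self.tables:
            buckets.setdefault(self._key(a, b, x), []).append(item_id)

    def candidates(self, x: np.ndarray) -> set:
        """Union of the buckets the query falls into; the absolute-value
        distance test against the first threshold runs on these."""
        hits = set()
        for a, b, buckets in self.tables:
            hits.update(buckets.get(self._key(a, b, x), []))
        return hits
```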
S5, acquiring and playing the corresponding audio file based on the printed matter information, the page number information and the point-reading position information.
The above description covers only preferred embodiments of the present invention, but the protection scope of the invention is not limited thereto; any changes or substitutions that can readily be conceived by those skilled in the art within the technical scope disclosed by the invention fall within that scope. Therefore, the protection scope of the invention shall be subject to the protection scope of the claims.

Claims (8)

1. A point-reading interaction implementation method realized on the basis of a point-reading pen, the point-reading pen comprising a pen main body, a pressure sensing assembly and an image acquisition assembly, wherein a processor and a memory are arranged in the pen main body, the pressure sensing assembly is arranged at the pen point of the pen main body, the image acquisition assembly is arranged on the pen main body above the pen point, and the pressure sensing assembly, the image acquisition assembly and the memory are each connected with the processor, the method comprising the following steps:
S1, extracting feature points of the cover page and the content pages of the printed matter in advance to obtain a cover page feature library and a content page feature library, and storing both libraries in the memory;
S2, touching the cover page of the printed matter with the point-reading pen, collecting a local image of the cover page with the image acquisition assembly, extracting feature points of the local image with the processor, and matching the extracted feature points against the cover page feature library to obtain the printed matter information;
S3, touching the page-number position on a content page of the printed matter with the point-reading pen, acquiring a page image with the image acquisition assembly, and performing OCR (optical character recognition) on the digits in the page image with the processor to obtain the page number information;
S4, touching a region of interest on a content page of the printed matter with the point-reading pen, acquiring a region image of the region of interest with the image acquisition assembly, extracting feature points of the region image with the processor, matching the extracted feature points against the content page feature library, and obtaining the point-reading position information from the matching result.
2. The point-reading interaction implementation method of claim 1, wherein the method further comprises a step S5 of obtaining a corresponding audio file based on the printed matter information, the page number information and the point-reading position information, and playing the audio file.
3. The point-reading interaction implementation method of claim 1, wherein the feature point extraction in steps S1, S2 and S4 is realized by the following steps:
carrying out image graying processing;
extracting feature points with a key point detection algorithm;
identifying the direction of the feature points based on histogram statistics; and
describing the feature points to obtain feature descriptors.
4. The point-reading interaction implementation method of claim 3, wherein extracting the feature points with the key point detection algorithm specifically comprises:
successively downsampling the original image to obtain a series of images of different sizes, applying Gaussian filtering to the images at the different scales, and subtracting two adjacent-scale Gaussian-filtered versions of the same image to obtain a difference-of-Gaussian image; extremum detection is then carried out, and the extremum points satisfying a curvature condition are the feature points.
5. The point-reading interaction implementation method of claim 3, wherein step S1 specifically comprises the following sub-steps:
S11, for the cover page of the printed matter, extracting feature points of the cover page image, then performing dimensionality reduction on the feature descriptors, applying hash transformation and sorting after the dimensionality reduction, and storing the result in the cover page feature library;
S12, for the content pages of the printed matter, first dividing each content page image into a group of image blocks (division methods include, but are not limited to, uniform division and selected-area division), then extracting feature points of the image blocks, and finally performing dimensionality reduction on the feature descriptors, applying hash transformation and sorting after the dimensionality reduction, and storing the result in the content page feature library.
6. The point-reading interaction implementation method of claim 5, wherein matching the extracted feature points against the cover page feature library in step S2 is realized as follows:
performing dimensionality reduction, hash transformation and sorting on the feature descriptors corresponding to the feature points extracted from the local image, then comparing the resulting hash values with the hash values of the feature points stored in the cover page feature library; if the distance is smaller than a preset first threshold, the feature points are deemed matched;
counting the number of matched feature points; if that number is greater than a preset second threshold, the local image is deemed to match the corresponding cover page image.
7. The point-reading interaction implementation method of claim 5, wherein matching the extracted feature points against the content page feature library in step S4 is realized as follows:
performing dimensionality reduction, hash transformation and sorting on the feature descriptors corresponding to the feature points extracted from the region image, then comparing the resulting hash values with the hash values of the feature points stored in the content page feature library; if the distance is smaller than a preset first threshold, the feature points are deemed matched;
counting the number of matched feature points; if that number is greater than a preset second threshold, the region image is deemed to match the corresponding image block.
8. The point-reading interaction implementation method of any one of claims 5 to 7, wherein the dimensionality reduction adopts a principal component analysis dimensionality reduction method.
CN201910853992.5A 2019-09-10 2019-09-10 Point-reading interaction realization method Active CN110555435B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910853992.5A CN110555435B (en) 2019-09-10 2019-09-10 Point-reading interaction realization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910853992.5A CN110555435B (en) 2019-09-10 2019-09-10 Point-reading interaction realization method

Publications (2)

Publication Number Publication Date
CN110555435A (en) 2019-12-10
CN110555435B (en) 2022-06-07

Family

ID=68739604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910853992.5A Active CN110555435B (en) 2019-09-10 2019-09-10 Point-reading interaction realization method

Country Status (1)

Country Link
CN (1) CN110555435B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191059A (en) * 2019-12-31 2020-05-22 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer storage medium and electronic equipment
CN112199522A (en) * 2020-08-27 2021-01-08 深圳一块互动网络技术有限公司 Interaction implementation method, terminal, server, computer equipment and storage medium
CN113223007A (en) * 2021-06-28 2021-08-06 浙江华睿科技股份有限公司 Visual odometer implementation method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447499A (en) * 2015-10-23 2016-03-30 北京爱乐宝机器人科技有限公司 Book interaction method, apparatus, and equipment
CN106126668A (en) * 2016-06-28 2016-11-16 北京小白世纪网络科技有限公司 A kind of image characteristic point matching method rebuild based on Hash
CN107705641A (en) * 2017-09-26 2018-02-16 青岛罗博数码科技有限公司 It is a kind of to put the device and method for reading common printed reading matter
CN108710877A (en) * 2018-04-28 2018-10-26 北京奇禄管理咨询有限公司 A kind of image-pickup method
CN110058705A (en) * 2019-04-28 2019-07-26 视辰信息科技(上海)有限公司 It draws this aid reading method, calculate equipment, point reading side apparatus and electronic equipment
CN110059218A (en) * 2019-04-26 2019-07-26 兰州理工大学 A kind of speech retrieval method and system based on inverse fast Fourier transform

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447499A (en) * 2015-10-23 2016-03-30 北京爱乐宝机器人科技有限公司 Book interaction method, apparatus, and equipment
CN106126668A (en) * 2016-06-28 2016-11-16 北京小白世纪网络科技有限公司 A kind of image characteristic point matching method rebuild based on Hash
CN107705641A (en) * 2017-09-26 2018-02-16 青岛罗博数码科技有限公司 It is a kind of to put the device and method for reading common printed reading matter
CN108710877A (en) * 2018-04-28 2018-10-26 北京奇禄管理咨询有限公司 A kind of image-pickup method
CN110059218A (en) * 2019-04-26 2019-07-26 兰州理工大学 A kind of speech retrieval method and system based on inverse fast Fourier transform
CN110058705A (en) * 2019-04-28 2019-07-26 视辰信息科技(上海)有限公司 It draws this aid reading method, calculate equipment, point reading side apparatus and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王佩军 et al.: "Photogrammetry (《摄影测量学》)", 30 September 2005 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191059A (en) * 2019-12-31 2020-05-22 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer storage medium and electronic equipment
CN111191059B (en) * 2019-12-31 2023-05-05 腾讯科技(深圳)有限公司 Image processing method, device, computer storage medium and electronic equipment
CN112199522A (en) * 2020-08-27 2021-01-08 深圳一块互动网络技术有限公司 Interaction implementation method, terminal, server, computer equipment and storage medium
CN112199522B (en) * 2020-08-27 2023-07-25 深圳一块互动网络技术有限公司 Interactive implementation method, terminal, server, computer equipment and storage medium
CN113223007A (en) * 2021-06-28 2021-08-06 浙江华睿科技股份有限公司 Visual odometer implementation method and device and electronic equipment

Also Published As

Publication number Publication date
CN110555435B (en) 2022-06-07

Similar Documents

Publication Publication Date Title
Zhou et al. Principal visual word discovery for automatic license plate detection
KR101959831B1 (en) Apparatus and method for image recognition processing
CN110555435B (en) Point-reading interaction realization method
JP6143111B2 (en) Object identification device, object identification method, and program
US20190180094A1 (en) Document image marking generation for a training set
JP6278276B2 (en) Object identification device, object identification method, and program
CN110569818A (en) intelligent reading learning method
Garz et al. Layout analysis for historical manuscripts using sift features
WO2011044058A2 (en) Detecting near duplicate images
CN109947273B (en) Point reading positioning method and device
US9542756B2 (en) Note recognition and management using multi-color channel non-marker detection
AU2017201281A1 (en) Identifying matching images
Su et al. Robust video fingerprinting based on visual attention regions
Saïdani et al. Pyramid histogram of oriented gradient for machine-printed/handwritten and Arabic/Latin word discrimination
Liu et al. Text segmentation based on stroke filter
CN110991371A (en) Intelligent reading learning method based on coordinate recognition
CN110796119A (en) Interactive reading implementation method
Groeneweg et al. A fast offline building recognition application on a mobile telephone
CN110765997B (en) Interactive reading realization method
WO2019071476A1 (en) Express information input method and system based on intelligent terminal
Henderson et al. Robust feature matching in long-running poor-quality videos
Yao et al. Locating text based on connected component and SVM
CN110647844A (en) Shooting and identifying method for articles for children
Liu Digits Recognition on Medical Device
Huang et al. A Method for Content-Based Image Retrieval with a Two-Stage Feature Matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230705

Address after: 1320-2, Floor 13, Building 1, Yard 59, Gao Liangqiao Xiejie Street, Haidian District, Beijing 100082

Patentee after: Beijing Anxin Zhitong Technology Co.,Ltd.

Address before: Room 403, C4, building 2, software industry base, No. 87, 89, 91, South 10th Road, Gaoxin, Binhai community, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: Shenzhen yikuai Interactive Network Technology Co.,Ltd.
