CN105069754B - System and method based on unmarked augmented reality on the image - Google Patents
System and method based on unmarked augmented reality on the image
- Publication number
- CN105069754B CN105069754B CN201510471459.4A CN201510471459A CN105069754B CN 105069754 B CN105069754 B CN 105069754B CN 201510471459 A CN201510471459 A CN 201510471459A CN 105069754 B CN105069754 B CN 105069754B
- Authority
- CN
- China
- Prior art keywords
- image
- database
- retrieved
- characteristic point
- retrieval
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Processing Or Creating Images (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a system and method for markerless augmented reality on images. The system comprises an image retrieval system, which scans a scene with a mobile phone camera, retrieves the image appearing in the scene from a database, and returns the matching database image as the retrieval result; and an augmented reality system, which renders an introductory video associated with the retrieved image onto the position of that image in the scene, achieving the augmentation. The invention performs markerless augmented reality on images with high accuracy, works over a wide range of distances, and supports multiple image sources.
Description
Technical field
The present invention relates to the field of digital application technology, in particular to image recognition technology, and more particularly to a system and method for markerless augmented reality on images.
Background technology
Image recognition is a widely used general-purpose technology: by identifying the features of an image, it determines the information associated with that image. Barcode technology is one example: each barcode has its specific character set; each character occupies a fixed width; the code carries a checksum; rows of information can be identified automatically; and rotation of the pattern can be handled. QR code technology is another example: a QR code records data symbol information with specific geometric patterns distributed according to certain rules in a plane (in two dimensions). Its coding ingeniously uses the concept of the "0" and "1" bit streams that underlie computer logic, representing textual and numerical information with geometric shapes corresponding to binary values, which are read automatically by an image input device or photoelectric scanning device to achieve automatic information processing. Although barcode and QR code technologies are widely applied in daily life and bring great convenience to production and living, they have inherent defects: first, they spoil the appearance of the picture; second, they can be imitated.
Augmented reality (AR) is, simply put, the application of virtual information to the real world through computer technology, so that the real environment and virtual objects are superimposed in real time onto the same picture or space and coexist. Augmented reality generally provides information beyond what humans can normally perceive: it presents real-world information and displays virtual information at the same time, the two kinds of information complementing and superimposing on each other. Many definitions of augmented reality exist; a widely accepted one is that virtual objects are added to the real world, giving the user a richer experience and more information, with computer-generated graphics overlaid onto the real world. AR is a new human-computer interaction technology that applies virtual information to the real world through intelligent terminals and visualization technology, so that virtual information and the real world are superimposed onto the same picture or space and presented to the user simultaneously. With the popularization of intelligent terminals, AR applications have become widespread and can be experienced by installing an AR application on a smart device. Specifically, the workflow of an AR application is as follows: the terminal captures image frames through its camera; the image frames are recognized to determine the AR target object; the AR target object is tracked across frames to determine its position; the AR virtual information associated with the target object is obtained, and the image frame is rendered with the AR virtual information superimposed on the AR target object, so that the AR target object and the AR virtual content are displayed simultaneously on the terminal screen for the user to interact with. Existing AR application technology suffers from heavy computation, unstable video augmentation, and limited detection accuracy.
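The workflow just described can be sketched as a toy step function. Every name here (recognize, render, the string "frames") is a placeholder for illustration, not part of the patent:

```python
def recognize(frame, database):
    """Stub recognition: report the first database target whose pattern
    occurs in the frame (a real system would match image features)."""
    return next((t for t in database if t["pattern"] in frame), None)

def render(frame, overlay):
    """Stub rendering: superimpose the virtual content onto the frame."""
    return frame + " + " + overlay

def ar_step(frame, database):
    """One iteration of the generic AR workflow described above:
    recognize the target, then render its associated virtual content."""
    target = recognize(frame, database)
    if target is None:
        return frame                        # nothing recognised: pass through
    return render(frame, target["video"])   # superimpose associated content

# Toy data: a 'frame' is a string standing in for camera pixels.
database = [{"pattern": "poster", "video": "intro-video-frame"}]
```

In a real pipeline the recognize and render stubs are replaced by the feature-based retrieval and OpenGL rendering detailed below.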
Summary of the invention
The object of the present invention is to overcome the shortcomings of existing markerless augmented reality on images by providing a system and method for markerless augmented reality on images.
To achieve this goal, the present invention is realized through the following technical solution: a system for markerless augmented reality on images, characterized in that it comprises an image retrieval system, which scans a scene with a mobile phone camera, retrieves the image appearing in the scene from a database, and returns the matching database image as the retrieval result; and an augmented reality system, which renders the introductory video associated with the retrieved image onto the position of that image in the scene, achieving the augmentation.
The image retrieval uses the invariant ORB descriptor to detect and describe feature points in the images. The retrieval flow is as follows:
Step 1: apply the invariant ORB feature point detector to each image in the database and to the image to be retrieved. (ORB stands for Oriented FAST and Rotated BRIEF; it combines the FAST feature point detection method with the BRIEF feature descriptor and improves and optimizes both. FAST stands for Features from Accelerated Segment Test, and BRIEF stands for Binary Robust Independent Elementary Features.) Each feature point is then described with the invariant ORB descriptor, yielding a 256-bit binary feature.
Step 2: for a given image in the database, find, for each feature point of the image to be retrieved, its nearest-neighbor feature point in that database image by comparing feature point descriptors.
Step 3: perform an initial screening of the resulting feature point matches to remove mismatched feature points. The principles are: (1) a feature point match whose Euclidean distance exceeds a certain threshold is removed (all distances here are Euclidean distances); (2) a match is removed if the Euclidean distance ratio between the nearest-neighbor and second-nearest-neighbor matches is below a certain threshold. If fewer matches than a certain threshold remain after screening, the database image is declared inconsistent with the image to be retrieved.
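Steps 2 and 3 together amount to nearest-neighbour matching followed by a distance threshold and a nearest/second-nearest ratio test. A minimal numpy sketch, with illustrative threshold values (the patent does not specify them) and the standard orientation of the ratio test:

```python
import numpy as np

def screen_matches(query_des, db_des, max_dist=64.0, ratio=0.8):
    """For each query descriptor, find its nearest and second-nearest
    neighbour in db_des (Euclidean distance, as in the text) and keep only
    matches passing both screening rules of step 3. db_des needs >= 2 rows."""
    matches = []
    for qi, q in enumerate(query_des):
        d = np.linalg.norm(db_des - q, axis=1)   # distances to all db points
        order = np.argsort(d)
        d1, d2 = d[order[0]], d[order[1]]
        if d1 > max_dist:                        # rule (1): match too distant
            continue
        if d1 / d2 > ratio:                      # rule (2): ambiguous match
            continue
        matches.append((qi, int(order[0])))
    return matches
```

If too few pairs survive this screening, the candidate database image is rejected outright, which is what keeps the per-image cost of the later RANSAC stage low.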
Step 4: on the matched point pairs that survive screening, run the RANSAC algorithm to compute the affine matrix and to search for inliers, i.e. a secondary screening of the matched point pairs. (An inlier pair is a matched pair retained after this RANSAC-based secondary screening; an inlier is the projection, on the retrieval image, of a feature point of the database image after the affine transformation. RANSAC stands for RANdom SAmple Consensus; it is an algorithm that, given a sample data set containing abnormal data, computes the parameters of a mathematical model of the data and extracts the valid sample data.)
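Step 4 can be sketched as a small self-contained RANSAC loop over minimal three-point samples; the iteration count and inlier tolerance below are illustrative assumptions:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine matrix mapping src points onto dst points."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src      # x-equations use the first affine row
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src      # y-equations use the second affine row
    A[1::2, 5] = 1.0
    x, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return x.reshape(2, 3)

def ransac_affine(src, dst, n_iters=200, inlier_tol=3.0, seed=0):
    """Estimate an affine matrix from matched point pairs with RANSAC and
    return it with the inlier mask (the 'secondary screening' of step 4)."""
    rng = np.random.default_rng(seed)
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    best_inliers = np.zeros(len(src), dtype=bool)
    best_M = None
    for _ in range(n_iters):
        sample = rng.choice(len(src), size=3, replace=False)  # minimal sample
        M = fit_affine(src[sample], dst[sample])
        err = np.linalg.norm(src_h @ M.T - dst, axis=1)       # residuals
        inliers = err < inlier_tol
        if inliers.sum() > best_inliers.sum() and inliers.sum() >= 3:
            best_inliers = inliers
            best_M = fit_affine(src[inliers], dst[inliers])   # refit on inliers
    return best_M, best_inliers
```

OpenCV's `estimateAffine2D` provides an equivalent RANSAC-based estimate; the explicit loop is shown here only to make the sampling and inlier counting of step 4 concrete.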
Step 5: count the inliers; if fewer matched pairs than a certain threshold remain after the secondary screening, the database image is declared inconsistent with the image to be retrieved.
Step 6: using the affine matrix obtained in step 4, apply the affine transformation to five reference points of the database image, namely its four corners and its center, obtaining five transformed points. If the five transformed points do not satisfy the following relations, the database image is declared inconsistent with the image to be retrieved. The principles are: (1) after the transformation, the center point still lies at the center of the four corners; (2) the number of feature points of the image to be retrieved lying inside the transformed four corner points is greater than a certain threshold; (3) the area must lie within a certain allowed range.
Step 7: verify the error using the differences between opposite sides. The database images are all rectangles; the affine transformation of step 6 yields the projection of the database image on the retrieval image. This projection is a quadrilateral whose four edges, in order, are edge1, edge2, edge3, edge4, where edge1 and edge3 are opposite sides, as are edge2 and edge4. With Error denoting the geometric verification error:
Error = max(abs(edge1 - edge3) / (edge1 + edge3), abs(edge2 - edge4) / (edge2 + edge4));
where max denotes the maximum and abs the absolute value.
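The step-7 check can be sketched as follows; the image size and matrix values in the demonstration are illustrative:

```python
import numpy as np

def opposite_side_error(quad):
    """Step-7 error for a quadrilateral given as four points in order;
    edge i joins point i to point i+1 (wrapping around)."""
    e1, e2, e3, e4 = [np.linalg.norm(quad[(i + 1) % 4] - quad[i])
                      for i in range(4)]
    return max(abs(e1 - e3) / (e1 + e3), abs(e2 - e4) / (e2 + e4))

def projected_quad(M, width, height):
    """Project the four corners of a width x height rectangular database
    image with the estimated 2x3 affine matrix M."""
    corners = np.array([[0, 0], [width, 0], [width, height], [0, height]],
                       dtype=float)
    return np.hstack([corners, np.ones((4, 1))]) @ M.T
```

Note that an exact affine map sends a rectangle to a parallelogram, whose opposite sides are equal, so Error vanishes for a perfectly affine projection; the check is informative when the projected corners come from noisy estimates or a more general model.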
Step 8: repeat steps 2 to 7 for every image in the database, and take the image with the smallest geometric verification error as the final retrieval result.
Image retrieval yields the estimated affine matrix A; to obtain a more stable affine matrix, it must be refined. The refinement steps are as follows:
Step 1: apply the affine matrix A to the image to be retrieved (i.e. warp the image to be retrieved with A), obtaining the transformed image.
Step 2: extract feature points and descriptors again from the transformed image obtained in step 1.
Step 3: estimate the affine matrix B between the transformed image and the database image.
Step 4: multiply matrix A with matrix B to obtain the final refined affine matrix C.
Step 5: read each frame of the video associated with the retrieval result, apply the affine matrix C to each frame, and render it through OpenGL onto the position of the image in the scene, achieving the augmentation. (OpenGL, in full Open Graphics Library, defines a cross-language, cross-platform programming interface specification, a professional graphics program interface. It is a powerful and convenient low-level graphics library for three-dimensional and two-dimensional images.)
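Step 4 of the refinement composes the two estimates. In homogeneous coordinates this is a 3x3 matrix product; the multiplication order below assumes B is applied after A, which the text leaves implicit:

```python
import numpy as np

def to_h(M):
    """Lift a 2x3 affine matrix to its 3x3 homogeneous form."""
    return np.vstack([M, [0.0, 0.0, 1.0]])

def compose_affine(A, B):
    """Refined matrix C that applies A first, then B (order assumed)."""
    return (to_h(B) @ to_h(A))[:2, :]

# Applying C to a point is the same as applying A, then B:
A = np.array([[2.0, 0.0, 1.0], [0.0, 2.0, -1.0]])   # scale by 2 + translate
B = np.array([[1.0, 0.0, 3.0], [0.0, 1.0, 4.0]])    # pure translation
C = compose_affine(A, B)
p = np.array([1.0, 1.0, 1.0])                        # homogeneous point (1, 1)
```

Composing once and applying the single matrix C per video frame is what makes the per-frame rendering cost independent of the two-stage estimation.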
In the present invention, the feature point matches are first screened by a simple method to remove mismatched points, which greatly reduces the computation of the subsequent affine matrix estimation. The mismatched feature point pairs comprise: pairs whose distance exceeds a certain threshold, and pairs for which the distance ratio between the nearest-neighbor match and the second-nearest-neighbor match falls below a certain threshold.
The estimated affine matrix is verified by a simple geometric verification strategy; taking the image with the smallest geometric verification error as the final retrieval result yields a highly effective retrieval.
By solving for the affine transformation of the transformed image, the refined affine matrix is obtained, giving a more stable video augmentation effect.
Compared with the prior art, the system and method for markerless augmented reality on images of the present invention have the following advantages:
1. Augmented reality "seamlessly" integrates real-world information with virtual-world information, going beyond the sensory experience of reality. Visual information that would be difficult to experience within a certain time and space of the real world is simulated by computers and other technology, superimposed, and applied to the real world, where it is perceived by the human senses, thus achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed in real time onto the same picture or space and coexist.
2. The system's carefully designed algorithm based on local features achieves an accuracy above 98%.
3. Real-time rendering is obtained.
4. The system works over a wide range of distances: the target image is detected effectively from 10 cm to 100 cm from the camera (as the target image appears in the camera, its size ranges from 64x64 pixels up to slightly larger than the camera resolution).
5. The system supports full 360-degree in-plane rotation of the image.
6. The system supports multiple image sources, including printed originals, color prints, and images displayed on screens (mobile phones, tablets).
Description of the drawings
Fig. 1 is the retrieval flow chart of the present invention.
Specific embodiment
The specific embodiments of the present invention are further described below with reference to the accompanying drawing for a better understanding of the present invention.
Referring to Fig. 1, a system for markerless augmented reality on images comprises an image retrieval system, which scans a scene with a mobile phone camera, retrieves the image appearing in the scene from a database, and returns the matching database image as the retrieval result; and an augmented reality system, which renders the introductory video associated with the retrieved image onto the position of that image in the scene, achieving the augmentation.
A method for markerless augmented reality on images: the image retrieval uses the invariant ORB descriptor to detect and describe feature points in the images. The retrieval flow is as follows:
Step 1: apply the invariant ORB feature point detector to each image in the database and to the image to be retrieved, and describe each feature point with the invariant ORB descriptor, obtaining a 256-bit binary feature.
Step 2: for a given image in the database, find, for each feature point of the image to be retrieved, its nearest-neighbor feature point in that database image by comparing feature point descriptors.
Step 3: perform an initial screening of the resulting feature point matches to remove mismatched feature points. The principles are: (1) a feature point match whose Euclidean distance exceeds a certain threshold is removed; (2) a match is removed if the Euclidean distance ratio between the nearest-neighbor and second-nearest-neighbor matches is below a certain threshold. If fewer matches than a certain threshold remain after screening, the database image is declared inconsistent with the image to be retrieved.
Step 4: on the matched point pairs that survive screening, run the RANSAC algorithm to compute the affine matrix and to search for inliers, i.e. a secondary screening of the matched point pairs.
Step 5: count the inliers; if fewer matched pairs than a certain threshold remain after the secondary screening, the database image is declared inconsistent with the image to be retrieved.
Step 6: using the affine matrix obtained in step 4, apply the affine transformation to five reference points of the database image, namely its four corners and its center, obtaining five transformed points. If the five transformed points do not satisfy the following relations, the database image is declared inconsistent with the image to be retrieved. The principles are: (1) after the transformation, the center point still lies at the center of the four corners; (2) the number of feature points of the image to be retrieved lying inside the transformed four corner points is greater than a certain threshold; (3) the area must lie within a certain allowed range.
Step 7: verify the error using the differences between opposite sides. The database images are all rectangles; the affine transformation of step 6 yields the projection of the database image on the retrieval image, a quadrilateral whose four edges, in order, are edge1, edge2, edge3, edge4, where edge1 and edge3 are opposite sides, as are edge2 and edge4. With Error denoting the geometric verification error:
Error = max(abs(edge1 - edge3) / (edge1 + edge3), abs(edge2 - edge4) / (edge2 + edge4));
where max denotes the maximum and abs the absolute value.
Step 8: repeat steps 2 to 7 for every image in the database, and take the image with the smallest geometric verification error as the final retrieval result.
The image retrieval above yields the estimated affine matrix A; to obtain a more stable affine matrix, it must be refined. The refinement steps are as follows:
Step 1: apply the affine matrix A to the image to be retrieved, obtaining the transformed image.
Step 2: extract feature points and descriptors again from the transformed image obtained in step 1.
Step 3: estimate the affine matrix B between the transformed image and the database image.
Step 4: multiply matrix A with matrix B to obtain the final refined affine matrix C.
Step 5: read each frame of the video associated with the retrieval result, apply the affine matrix C to each frame, and render it through OpenGL onto the position of the image in the scene, achieving the augmentation.
The embodiments are given only to facilitate understanding of the technical solution of the present invention and do not limit the scope of the present invention. Any simple modification, equivalent variation, or adaptation of the above solution that does not depart from the technical spirit of the present invention still falls within the scope of the present invention.
Claims (2)
1. A system for markerless augmented reality on images, characterized in that it comprises an image retrieval system, which scans a scene with a mobile phone camera, retrieves the image appearing in the scene from a database, and returns the matching database image as the retrieval result; and an augmented reality system, which renders the introductory video associated with the retrieved image onto the position of that image in the scene, achieving the augmentation; wherein the image retrieval uses the invariant ORB descriptor to detect and describe feature points in the images, with the following retrieval flow:
Step 1: apply the invariant ORB feature point detector to each image in the database and to the image to be retrieved, and describe each feature point with the invariant ORB descriptor, obtaining a 256-bit binary feature;
Step 2: for a given image in the database, find, for each feature point of the image to be retrieved, its nearest-neighbor feature point in that database image by comparing feature point descriptors;
Step 3: perform an initial screening of the resulting feature point matches to remove mismatched feature points, on the principles that: (1) a feature point match whose Euclidean distance exceeds a certain threshold is removed; (2) a match is removed if the Euclidean distance ratio between the nearest-neighbor and second-nearest-neighbor matches is below a certain threshold; if fewer matches than a certain threshold remain after screening, the database image is declared inconsistent with the image to be retrieved;
Step 4: on the matched point pairs that survive screening, run the RANSAC algorithm to compute the affine matrix and to search for inliers;
Step 5: count the inliers; if fewer matched pairs than a certain threshold remain after the secondary screening, the database image is declared inconsistent with the image to be retrieved;
Step 6: using the affine matrix obtained in step 4, apply the affine transformation to five reference points of the database image, namely its four corners and its center, obtaining five transformed points; if the five transformed points do not satisfy the following relations, the database image is declared inconsistent with the image to be retrieved, the relations being that: (1) after the transformation, the center point still lies at the center of the four corners; (2) the number of feature points of the image to be retrieved lying inside the transformed four corner points is greater than a certain threshold; (3) the area lies within a certain allowed range;
Step 7: verify the error using the differences between opposite sides; the database images are all rectangles, and the affine transformation of step 6 yields the projection of the database image on the retrieval image, a quadrilateral whose four edges, in order, are edge1, edge2, edge3, edge4, where edge1 and edge3 are opposite sides, as are edge2 and edge4; with Error denoting the geometric verification error:
Error = max(abs(edge1 - edge3) / (edge1 + edge3), abs(edge2 - edge4) / (edge2 + edge4));
where max denotes the maximum and abs the absolute value;
Step 8: repeat steps 2 to 7 for every image in the database, and take the image with the smallest geometric verification error as the final retrieval result.
2. The system for markerless augmented reality on images according to claim 1, characterized in that it further comprises the step of refining the estimated affine matrix A obtained by the image retrieval, the refinement steps being as follows:
Step 1: apply the affine matrix A to the image to be retrieved, obtaining the transformed image;
Step 2: extract feature points and descriptors again from the transformed image obtained in step 1;
Step 3: estimate the affine matrix B between the transformed image and the database image;
Step 4: multiply matrix A with matrix B to obtain the final refined affine matrix C; and
Step 5: read each frame of the video associated with the retrieval result, apply the affine matrix C to each frame, and render it through OpenGL onto the position of the image in the scene, achieving the augmentation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510471459.4A CN105069754B (en) | 2015-08-05 | 2015-08-05 | System and method based on unmarked augmented reality on the image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105069754A CN105069754A (en) | 2015-11-18 |
CN105069754B true CN105069754B (en) | 2018-06-26 |
Family
ID=54499112
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510471459.4A Expired - Fee Related CN105069754B (en) | 2015-08-05 | 2015-08-05 | System and method based on unmarked augmented reality on the image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105069754B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105488541A (en) * | 2015-12-17 | 2016-04-13 | 上海电机学院 | Natural feature point identification method based on machine learning in augmented reality system |
CN105719522A (en) * | 2016-01-25 | 2016-06-29 | 成都趣动力教育科技有限公司 | Dual-client-terminal speech communication method, device and system |
CN106204743B (en) * | 2016-06-28 | 2020-07-31 | Oppo广东移动通信有限公司 | Control method and device for augmented reality function and mobile terminal |
CN106250938B (en) * | 2016-07-19 | 2021-09-10 | 易视腾科技股份有限公司 | Target tracking method, augmented reality method and device thereof |
CN106251404B (en) * | 2016-07-19 | 2019-02-01 | 央数文化(上海)股份有限公司 | Orientation tracking, the method and relevant apparatus, equipment for realizing augmented reality |
CN106447643A (en) * | 2016-09-19 | 2017-02-22 | 西安你的主意电子商务有限公司 | AR technology based interactive image processing method |
CN106845435A (en) * | 2017-02-10 | 2017-06-13 | 深圳前海大造科技有限公司 | A kind of augmented reality Implementation Technology based on material object detection tracing algorithm |
CN106874865A (en) * | 2017-02-10 | 2017-06-20 | 深圳前海大造科技有限公司 | A kind of augmented reality implementation method based on image recognition |
CN106897982B (en) * | 2017-02-23 | 2019-06-14 | 淮阴工学院 | Real Enhancement Method based on the unmarked identification of image |
CN109614859B (en) * | 2018-11-01 | 2021-01-12 | 清华大学 | Visual positioning feature extraction and matching method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010142896A1 (en) * | 2009-06-08 | 2010-12-16 | Total Immersion | Methods and devices for identifying real objects, for following up the representation of said objects and for augmented reality in an image sequence in a client-server mode |
CN102142005A (en) * | 2010-01-29 | 2011-08-03 | 株式会社泛泰 | System, terminal, server, and method for providing augmented reality |
CN103218854A (en) * | 2013-04-01 | 2013-07-24 | 成都理想境界科技有限公司 | Method for realizing component marking during augmented reality process and augmented reality system |
CN103389978A (en) * | 2012-05-07 | 2013-11-13 | 联想(北京)有限公司 | Method and system for acquiring information through augmented reality technologies |
CN104508697A (en) * | 2012-05-31 | 2015-04-08 | 英特尔公司 | Method, server, and computer-readable recording medium for providing augmented reality service |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010088772A1 (en) * | 2009-02-06 | 2010-08-12 | Magna International Inc. | Module load enabling bracket |
- 2015-08-05: CN201510471459.4A filed; granted as CN105069754B; status not active (Expired - Fee Related)
Non-Patent Citations (2)
- Rublee, E. et al., "ORB: an efficient alternative to SIFT or SURF," New York: IEEE, 2011, pp. 2564-2571. *
- Xu Hongke et al., "Image feature point matching based on improved ORB," Science Technology and Engineering, vol. 14, no. 18, 2014, pp. 105-128. *
Also Published As
Publication number | Publication date |
---|---|
CN105069754A (en) | 2015-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105069754B (en) | System and method based on unmarked augmented reality on the image | |
CN112581629B (en) | Augmented reality display method, device, electronic equipment and storage medium | |
CN109753885B (en) | Target detection method and device and pedestrian detection method and system | |
Dash et al. | Designing of marker-based augmented reality learning environment for kids using convolutional neural network architecture | |
Dutta et al. | A color edge detection algorithm in RGB color space | |
CN109740572B (en) | Human face living body detection method based on local color texture features | |
US8442327B2 (en) | Application of classifiers to sub-sampled integral images for detecting faces in images | |
CN109064525B (en) | Picture format conversion method, device, equipment and storage medium | |
CN108960012B (en) | Feature point detection method and device and electronic equipment | |
CN111539238B (en) | Two-dimensional code image restoration method and device, computer equipment and storage medium | |
CN106897982B (en) | Real Enhancement Method based on the unmarked identification of image | |
CN111667005A (en) | Human body interaction system adopting RGBD visual sensing | |
CN112651953A (en) | Image similarity calculation method and device, computer equipment and storage medium | |
Niu et al. | Image retargeting quality assessment based on registration confidence measure and noticeability-based pooling | |
CN114758145A (en) | Image desensitization method and device, electronic equipment and storage medium | |
KR20110087620A (en) | Layout based page recognition method for printed medium | |
TWI536280B (en) | Text localization system for street view image and device thereof | |
CN111179281A (en) | Human body image extraction method and human body action video extraction method | |
CN104331912B (en) | A kind of garment material method for quickly filling based on matrix of edge | |
CN116862920A (en) | Portrait segmentation method, device, equipment and medium | |
CN114820681A (en) | RGB camera-based library position detection method and system | |
CN111105394B (en) | Method and device for detecting characteristic information of luminous pellets | |
JP7144384B2 (en) | Object detection device, method and program | |
Lee et al. | Hand gesture recognition using blob detection for immersive projection display system | |
CN111325194B (en) | Character recognition method, device and equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20180626 Termination date: 20200805 |