CN106933355A - Method for quickly obtaining moving object information in real time in augmented reality - Google Patents

Method for quickly obtaining moving object information in real time in augmented reality

Info

Publication number
CN106933355A
CN106933355A CN201710107963.5A
Authority
CN
China
Prior art keywords
information
site
moving object
site information
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710107963.5A
Other languages
Chinese (zh)
Inventor
刘子豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Fu Fei Technology Co Ltd
Original Assignee
Beijing Fu Fei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Fu Fei Technology Co Ltd filed Critical Beijing Fu Fei Technology Co Ltd
Publication of CN106933355A
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for quickly obtaining moving object information in real time in augmented reality. The method comprises: setting, on a background map, a plurality of sites each containing site information, and placing the moving object on the background map; acquiring the window image corresponding to the position of the moving object, extracting the site information, and transmitting it to a terminal control module on a display terminal; and processing the site information with the terminal control module to determine the information of the moving object on the background map. In the present invention the site information includes the position of each site, and the site information associates the real world with the position information in the augmented reality, so that the information of the moving object in the real world can be obtained quickly, accurately and in real time and the object can be positioned on the display terminal.

Description

Method for quickly obtaining moving object information in real time in augmented reality
Technical field
The present invention relates to the field of augmented reality, and more particularly to a method for quickly obtaining moving object information in real time in augmented reality.
Background art
Augmented reality (Augmented Reality, AR) is a new technology developed on the basis of virtual reality (Virtual Reality, VR). Using computer graphics and visualization, multimedia and interaction techniques, virtual objects (graphics, images, text, sound, etc.) rendered by a computer or an intelligent terminal are superimposed, in a way consistent with reality, onto the real world that the user perceives directly, so that the two blend together and produce the so-called "augmentation" effect. At present, a video-based augmented reality system generally requires four steps: image acquisition of the real scene; real-time tracking and registration; drawing and rendering of the virtual objects; and fused display of the virtual and real scenes. In this approach, a camera first acquires video of the real scene, scene position information is obtained from markers in the video stream, the coordinate transformation between the virtual-object coordinates and the camera view plane is then computed by the graphics system, the virtual object is drawn on the view plane through the transformation matrix and registered into the real scene perceived by the user, and finally the fused scene is shown on an output device.
Augmented reality is regarded as one of the most popular research fields of recent years, with potential applications in medicine, the military, industrial design and public entertainment. In particular, games built on augmented reality can rival online games: they keep the appeal of online games (dazzling visual effects) while encouraging players to take part in social activities (because augmented reality has to rely on real objects), thereby reducing dependence on online games. Whichever field is considered, people are gradually no longer satisfied with augmenting only static objects or scenes, and the demand for augmented reality of moving objects or scenes keeps growing.
However, current augmented reality still lags in the tracking and registration of moving objects, mainly because the determination of moving object information is subject to delay. The delay arises as follows: (1) in one approach, a camera continuously captures images of the moving object and of a certain range around its position, and adjacent frames are compared to obtain the object information; the algorithms involved are complex and reduce the timeliness of acquiring the moving object information; or (2) in another approach, multiple sensors are used to obtain the motion information of the object, but because some sensors themselves have long latency, the acquired data carry a certain delay.
Therefore, the present inventors have studied systems for obtaining moving object information in depth, in order to provide a method that obtains moving object information quickly, in real time and with high accuracy, and that enhances the fusion of the virtual information with the real scene.
Summary of the invention
To solve the above problems, the present inventors have made intensive studies and found the following: the moving object is placed on a background map provided with sites, and an image acquisition module, a data decoding module and a first communication module are installed on the moving object; the image acquisition module and the data decoding module on the moving object collect and extract the moving object information, which is transmitted via the first communication module to the terminal control module of the display terminal; the site information associates the real world with the position information in the augmented reality, so that the information of the moving object is acquired and the object is positioned on the display terminal. The present invention has thereby been completed.
The object of the present invention is to provide the following technical solution:
1. A method for quickly obtaining moving object information in real time in augmented reality, the method comprising the following steps:
1) setting, on a background map 100, a plurality of sites 110 containing recognizable site information, and placing the moving object on the background map;
2) acquiring, with an image acquisition module 200, the window image containing a plurality of sites 110 corresponding to the position of the moving object, extracting the recognizable site information in the window image, and transmitting the site information to a terminal control module 400 of a display terminal;
3) processing the site information with the terminal control module 400 and determining the information of the moving object.
The method for quickly obtaining moving object information in real time in augmented reality provided by the present invention has the following beneficial effects:
(1) The present invention determines the position, speed and direction of the moving object from a background map carrying sites. This approach is entirely different from the prior art, in which moving object information is determined by image comparison or by sensors; it removes the lag present in the prior-art acquisition of moving object information, obtains the information of the moving object quickly, accurately and in real time, and enhances the fusion of the virtual information with the real scene;
(2) The sites on the background map are arranged as coordinate points, which facilitates identification of the position of the moving object. A safe range is also delimited on the background map: when an acquired window image contains a prompting site, the terminal control module issues a prompt on the display terminal, which helps the user steer the moving object and avoids the situation in which the moving object information cannot be obtained on the display terminal;
(3) The image acquisition module may include a first camera device and a second camera device installed at different positions on the moving object, which facilitates acquisition of the position, speed and direction information of the moving object;
(4) The first communication module and the second communication module may be Bluetooth communication modules, which, while meeting the information transmission requirements, consume little energy and cost little;
(5) The data decoding module converts the transmitted site information to a higher base, which effectively shortens the transmission time of the site information and further improves the timeliness with which the moving object information is obtained;
(6) The system for obtaining moving object information can handle blurred window images: when a blurred image occurs, the determination of the moving object information and its correct positioning on the display terminal are still effectively guaranteed.
Brief description of the drawings
Fig. 1 shows a flow chart of the method for obtaining moving object information in the present invention;
Fig. 2 shows a schematic diagram of the background map in a preferred embodiment of the invention;
Fig. 3 shows a schematic diagram of the modules installed on the moving object in the present invention.
Explanation of reference numerals:
100 - background map;
110 - site;
200 - image acquisition module;
210 - first camera device;
220 - second camera device;
310 - first communication module;
320 - second communication module;
400 - terminal control module;
500 - data decoding module.
Specific embodiment
The present invention is described in detail below; its features and advantages will become clearer and more definite through this description.
As shown in Fig. 1, the invention provides a method for quickly obtaining moving object information in real time in augmented reality, the method comprising the following steps:
1) setting, on a background map 100, a plurality of sites 110 containing recognizable site information, and placing the moving object on the background map;
2) acquiring, with an image acquisition module 200, the window image containing a plurality of sites 110 corresponding to the position of the moving object, extracting the recognizable site information in the window image, and transmitting the site information to a terminal control module 400 of a display terminal;
3) processing the site information with the terminal control module 400 and determining the information of the moving object.
In step 1), as shown in Fig. 2, the sites 110 on the background map 100 are arranged regularly on a two-dimensional plane with a set spacing; preferably, the sites 110 are arranged as coordinate points with equal spacing between adjacent sites 110. For example, an XY coordinate system is set on the two-dimensional plane of the background map 100, the positive X direction pointing to the right, the positive Y direction pointing upwards, and the origin having coordinates (0, 0). The site density is chosen according to the required precision; for example, 1,000,000 sites 110 are placed on a 1 m × 1 m map, i.e. the site density is 1,000,000 sites/m², so the first site in the positive X direction represents (1 mm, 0), the first site in the positive Y direction represents (0, 1 mm), and so on, so that the position of every site 110 on the background map 100 is determined. Preferably, the site density is not less than 60,000 sites/m².
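The relationship between the chosen site density and the site coordinates can be illustrated with a short sketch (Python, for illustration only; the density value follows the example above, and the function names are assumptions rather than part of the disclosure):

SITE_DENSITY = 1_000_000                      # sites per square metre, as in the example above
PITCH_MM = 1000.0 / SITE_DENSITY ** 0.5       # spacing between neighbouring sites: 1 mm

def site_coordinate(ix, iy):
    """Position in millimetres of the site at grid index (ix, iy); origin at (0, 0),
    X positive to the right, Y positive upwards."""
    return (ix * PITCH_MM, iy * PITCH_MM)

print(site_coordinate(1, 0))   # (1.0, 0.0)  -> first site along +X lies at (1 mm, 0)
print(site_coordinate(0, 1))   # (0.0, 1.0)  -> first site along +Y lies at (0, 1 mm)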
The background map 100 is made of paper or of a polymer film material. Each site 110 is a microscopic region invisible to the naked eye, in which recognizable site information is provided by printing, spraying, etching or similar means.
In a preferred embodiment, the site information of a site 110 includes the coordinate data of the site 110, which represent the position of that site 110 on the background map 100. The site information is expressed as an ordered combination of printed patterns with two or more levels of reflected-light intensity, each level representing a binary character (1, 0); the patterns may have the same or different shapes, and their reflected-light intensities differ enough to be easily recognized and distinguished. For example, the site information may be expressed with circles of two reflected-light intensities representing the binary characters: a circle whose reflected-light intensity is above a set threshold corresponds to the character 1 (or 0), and a circle whose reflected-light intensity is below the threshold corresponds to the character 0 (or 1).
In another preferred embodiment, the site information is an ordered combination of microscopic patterns of two or more shapes representing the binary characters. When the site information is an ordered combination of microscopic patterns of two shapes, a pattern of one shape corresponds to the character 1 and a pattern of the other shape corresponds to the character 0; when the site information is an ordered combination of microscopic patterns of more than two shapes, a pattern of one particular shape corresponds to the character 1 (or 0) and the remaining patterns correspond to the character 0 (or 1).
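The reading of a site's pattern sequence as a binary character string can be sketched as follows (an illustration under assumptions: the reflectance threshold, the shape-to-bit mapping and all names are examples, not taken from the disclosure):

def marks_to_bits(reflectances):
    """Reflectance coding: marks brighter than the threshold read as '1', darker as '0'."""
    THRESHOLD = 0.5          # assumed normalized reflectance threshold
    return "".join("1" if r > THRESHOLD else "0" for r in reflectances)

def shapes_to_bits(shapes):
    """Shape coding with two shapes, e.g. circle -> '1', square -> '0'."""
    mapping = {"circle": "1", "square": "0"}
    return "".join(mapping[s] for s in shapes)

print(marks_to_bits([0.9, 0.8, 0.1, 0.9]))              # '1101'
print(shapes_to_bits(["circle", "square", "circle"]))   # '101'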
Considering the length of the site information and the area of the site 110, the site information is preferably laid out within the site 110 in multiple rows and columns.
In a further preferred embodiment, a safe range is delimited on the background map at a distance from its front edge set according to the size of the moving object. Within the safe range, the outermost sites 110, i.e. those closest to its edge, are prompting sites; when an acquired window image contains a prompting site, the terminal control module 400 issues a prompt on the display terminal so that the user steers the moving object back inside the safe range. If the moving object leaves the safe range, there is a considerable risk that it leaves the background map 100 altogether, in which case the window image may contain no site information and the information of the moving object can no longer be obtained on the display terminal. A minimal sketch of this prompt is given below.
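This sketch assumes the decoder marks each recognized site with a flag indicating whether it is a prompting site; the flag name and the message text are illustrative only:

def check_safe_range(window_sites):
    """Return a prompt string when the window image contains a prompting site, else None."""
    if any(site.get("is_prompting") for site in window_sites):
        return "Moving object is approaching the edge of the safe range - steer it back."
    return None

print(check_safe_range([{"is_prompting": False}, {"is_prompting": True}]))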
In step 2), as shown in Fig. 3, in order to acquire the window image, extract the site information and transmit it to the terminal control module 400, an image acquisition module 200, a data decoding module 500 and a first communication module 310 are installed on the moving object. The image acquisition module 200 acquires the window image containing a plurality of sites 110 corresponding to the position of the moving object and passes it to the data decoding module 500; the data decoding module 500 recognizes the site information in the window image, translates it into a binary character string (extraction), and then passes the site information to the first communication module 310.
In a preferred embodiment, the image acquisition module 200 may consist of a single camera device whose lens faces the background map 100 and which acquires, by capturing images/photographs, the window image containing a plurality of sites 110 corresponding to the position of the moving object. The camera device is mounted on the moving object, preferably fixed at the centre of its underside. In this embodiment, the window image acquired by the single camera device at the current moment determines the position information of the moving object, and the window images acquired at the previous moment (or several previous moments) together with the current moment determine its speed and direction information.
In another preferred embodiment, as shown in Fig. 3, the image acquisition module 200 may comprise two camera devices, a first camera device 210 and a second camera device 220, whose lenses face the background map 100 to acquire the window images containing a plurality of sites 110 corresponding to the position of the moving object. The first camera device 210 and the second camera device 220 are mounted on an axis of the moving object, preferably its axis of symmetry, the direction of the axis coinciding with the moving direction of the object. Preferably, the first camera device 210 is fixed at the middle of the axis and its window image is used to determine the position and speed information of the moving object; the second camera device 220 is fixed at a forward position on the axis, and the combination of the window images acquired by the two camera devices determines the direction information of the moving object.
The image acquisition module 200 preferably comprises two camera devices; this arrangement greatly reduces the difficulty of the subsequent determination of the moving object information by the terminal control module 400.
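How the two-camera arrangement could yield position, speed and direction can be sketched as follows (an interpretation under assumptions: the coordinates are the decoded site positions in millimetres, the default frame interval is the 0.02 s figure given later, and the function names are illustrative):

import math

def speed_mm_per_s(prev_pos, curr_pos, dt=0.02):
    """Speed from two consecutive positions read by the first camera device, dt seconds apart."""
    return math.hypot(curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1]) / dt

def heading_degrees(first_cam_pos, second_cam_pos):
    """Direction of motion (0 deg = +X axis) from the positions read by the first (centre)
    and second (forward) camera devices mounted on the moving object's axis."""
    return math.degrees(math.atan2(second_cam_pos[1] - first_cam_pos[1],
                                   second_cam_pos[0] - first_cam_pos[0]))

print(speed_mm_per_s((100.0, 50.0), (102.0, 50.0)))   # 100.0 mm/s
print(heading_degrees((100.0, 50.0), (105.0, 55.0)))  # 45.0 degrees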
In a preferred embodiment, the image acquisition module 200 uses camera devices of high magnification and high definition. In the present invention the camera devices have a resolution of not less than 300,000 pixels; the camera model is not limited here, and any model in the art suitable for the present invention may be selected, for example an SNC5500 camera.
In a preferred embodiment, the window image acquired by the image acquisition module 200 may be square, rectangular or circular, preferably square, so that the numbers of sites 110 along the two intersecting edge directions of the picture are close to each other. The size of the window image is chosen so that every window image always contains at least 6 sites 110. For example, the site 110 at the upper-left, lower-left, upper-right or lower-right corner of the window image is taken as the effective site (the site 110 used to compute the position, speed and direction information of the moving object). When the effective site obtained is a blurred site, i.e. the site is unclear and its site information cannot be recognized, the site information of the effective site can be obtained from the site information of the other sites 110 near the blurred site. When all the sites 110 in the window image acquired at a certain moment are blurred, the site information of the effective site at that moment can be obtained from the site information of the effective sites in the window images acquired at the previous moment or several previous moments and at the next moment or several following moments.
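One possible reading of this selection and fall-back rule is sketched below (the record fields 'pos' and 'clear' are assumed to be produced by the decoder; they are not named in the disclosure):

def effective_site(window_sites):
    """Pick the upper-left site of the window image as the effective site; if it is blurred,
    fall back to the nearest clearly readable neighbour; if every site is blurred, return
    None so the caller uses the neighbouring frames instead."""
    if not window_sites:
        return None
    corner = min(window_sites, key=lambda s: (s["pos"][0], -s["pos"][1]))  # upper-left corner
    if corner["clear"]:
        return corner
    readable = [s for s in window_sites if s["clear"]]
    if not readable:
        return None
    cx, cy = corner["pos"]
    return min(readable, key=lambda s: (s["pos"][0] - cx) ** 2 + (s["pos"][1] - cy) ** 2)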
In the present invention, the data decoding module 500 receives the window image transmitted by the image acquisition module 200, recognizes the site information formed in the image, and translates it into a binary character string. For example, when the site information is expressed as an ordered combination of printed patterns of different reflected-light intensities representing binary characters, a pattern whose reflected-light intensity is above a set threshold corresponds to the character 1 (or 0), and a pattern whose intensity is below the threshold corresponds to the character 0 (or 1). The data decoding module 500 transcodes the patterns corresponding to the character 1 into the character 1 and the patterns corresponding to the character 0 into the character 0, so that the site information in the window image is transcoded into a binary character string (binary site information); the site information is thereby recognized and extracted.
In a preferred embodiment, the data decoding module 500 also has a base-conversion function. The base conversion converts the extracted binary site information into decimal or hexadecimal site information, preferably hexadecimal site information. Thanks to this conversion function of the data decoding module 500, the transmission time of the site information is effectively shortened, which further improves the timeliness with which the moving object information is obtained.
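The effect of this base conversion can be shown with a short sketch (the bit string is the one used in the worked example further below; the function name is an illustrative assumption):

def bits_to_hex(bits):
    """Pack a binary site-information string (a multiple of 8 bits) into hexadecimal bytes."""
    assert len(bits) % 8 == 0
    return " ".join(format(int(bits[i:i + 8], 2), "02x") for i in range(0, len(bits), 8))

bits = ("01100000" "10000000" "00000001" "00010001" "00100000"
        "00010001" "11000000" "00111010" "00010010")
print(bits_to_hex(bits))   # '60 80 01 11 20 11 c0 3a 12' - 18 hex digits instead of 72 bits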
In a preferred embodiment, the data decoding module 500 is installed on the moving object; the data decoding module 500 is preferably a decoding chip, more preferably a decoding chip of model sonixSNC7312. In the present invention, the first communication module 310 receives the site information transmitted by the data decoding module 500 and transfers it to the terminal control module 400 via the second communication module 320 installed on the display terminal.
In a preferred embodiment, the first communication module 310 and the second communication module 320 may be selected from Bluetooth communication modules, Wi-Fi communication modules, 4G communication modules or mobile communication modules. Considering that the distance between the moving object and the terminal control module 400 is short and that Bluetooth communication modules consume little energy, the first communication module 310 and the second communication module 320 are preferably Bluetooth communication modules; in that case the first communication module 310 is a Bluetooth transmitter and the second communication module 320 is a Bluetooth receiver. The models of the Bluetooth transmitter and receiver are not limited in the present invention; any models in the art suitable for the present invention may be used.
In step 3), when the image acquisition module 200 contains only one camera device, the terminal control module 400 receives one group of site information at each moment. The terminal control module 400 obtains the position information of the current moment from the site information received at that moment, and by calculation from the site information of the previous moment (or several previous moments) together with the current moment it determines the speed and direction information of the moving object at the current moment.
When the image acquisition module 200 contains two camera devices, the terminal control module 400 receives two different groups of site information at each moment. The terminal control module 400 determines the position information of the moving object from the site information extracted from the window image of the first camera device 210, determines its speed information from the position information obtained within a set time interval, and determines its direction information from the site information extracted from the window images of the first camera device 210 and the second camera device 220.
Further, a base-conversion unit is provided in the terminal control module 400 so that it also has a base-conversion function, converting the received non-decimal site information, such as binary or hexadecimal site information, into decimal site information that is easy for the user to read.
In a preferred embodiment, the display terminal on which the terminal control module 400 is configured is an electronic device with a display screen, including a smartphone, laptop computer, tablet computer, head-mounted display and the like.
In a further preferred embodiment, the terminal control module 400 also has a verification function, verification meaning monitoring the validity of the site information. The verification function of the terminal control module 400 further ensures the accuracy of the data transmission.
Since the terminal control module 400 also has the verification function, the site information may include print check data, coordinate data and length check data, where the print check data, identical in every piece of site information, determine whether the site information of a site 110 is usable; the coordinate data represent the position information of the site 110; and the length check data give the number of characters of the site information after it has been converted from a binary character string to a higher-base character string.
The layout of the site information within a site 110 and the base conversions performed as it is transmitted between the modules are illustrated by the following example:
(a) The window image gathered by the image acquisition module 200 contains the following site information of a site 110:
01100000 10000000 00000001 00010001 00100000 00010001 11000000 00111010 00010010; here the site information is expressed as an ordered combination of printed patterns of different reflected-light intensities, pattern 1 being a pattern of high reflected-light intensity corresponding to the binary character 1, and pattern 0 a pattern of low reflected-light intensity corresponding to the binary character 0, grouped eight patterns at a time;
(b) After the above site information has been recognized and extracted by the data decoding module 500, the site information in binary character string form is:
01100000 10000000 00000001 00010001 00100000 00010001 11000000 00111010 00010010; the site information is now a binary character string, grouped eight digits at a time;
(c) The data decoding module 500 converts the binary site information into hexadecimal site information:
60 80 01 11 20 11 c0 3a 12, where:
60 80 (01100000 10000000) are the print check data; any site whose value is not 6080 is misprinted (blurred sites excepted);
01 11 (00000001 00010001) are the X-coordinate data;
20 11 (00100000 00010001) are the Y-coordinate data;
c0 3a (11000000 00111010) are the secondary print check data; any site whose value is not c03a is misprinted;
12 (00010010) are the length check data, i.e. the number of digits of the hexadecimal string, equal to 18. The site information is now a hexadecimal string, grouped two digits at a time.
(d) After the hexadecimal site information has been transferred to the terminal control module 400 via the first communication module 310 and the second communication module 320, it is converted into readable decimal site information:
24704 273 8209 49210 18, where:
24704 (60 80) are the print check data, now meaningless;
273 (01 11) are the X-coordinate data;
8209 (20 11) are the Y-coordinate data;
49210 (c0 3a) are the secondary print check data, now meaningless;
18 (12) are the length check data, now meaningless.
The site information is now a decimal character string.
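The example above can be replayed with a short parsing sketch (the field layout follows the example; the function and field names are illustrative only):

def parse_site_record(hex_bytes):
    """Split one hexadecimal site record into print check, coordinates and length check."""
    b = hex_bytes.split()
    record = {
        "print_check": b[0] + b[1],        # must equal '6080' for a usable site
        "x": int(b[2] + b[3], 16),         # X-coordinate data
        "y": int(b[4] + b[5], 16),         # Y-coordinate data
        "secondary_check": b[6] + b[7],    # must equal 'c03a'
        "length_check": int(b[8], 16),     # number of hexadecimal digits, 0x12 = 18
    }
    record["valid"] = (record["print_check"] == "6080"
                       and record["secondary_check"] == "c03a"
                       and record["length_check"] == len("".join(b)))
    return record

print(parse_site_record("60 80 01 11 20 11 c0 3a 12"))
# {'print_check': '6080', 'x': 273, 'y': 8209, 'secondary_check': 'c03a', 'length_check': 18, 'valid': True}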
The data decoding module 500 transmits all the site information extracted from the window image, and the first communication module 310 and the second communication module 320 forward all of it, so the terminal control module 400 also receives all the site information obtained by the data decoding module 500. The terminal control module 400, however, processes only the site information extracted from the single site 110 at a set position in the window image to determine the information of the moving object; this single site 110 at the set position is called the effective site. The set position may be any given position in the window image, for example the upper-left, lower-left, upper-right or lower-right corner of a square window image.
In special cases, for example when a site 110 on the background map 100 is covered or contaminated by debris, the window image acquired by the image acquisition module 200 at a certain moment may be partly or wholly unclear; the data decoding module 500 then transcodes the site information of the unclear part into a binary character string of all 0s or all 1s. A site 110 covered or contaminated by debris is called a blurred site.
When blurred sites are present in the window image, the present invention processes the site information in the window image by the following steps:
(1) the terminal control module 400 also has a site-information recognition function and judges whether the blurred sites include the effective site;
(2) if the blurred sites do not include the effective site, they have no effect on the data processing of the terminal control module 400, and the terminal control module 400 does not process the site information of the blurred sites;
(3) if the blurred sites include the effective site, the terminal control module 400 determines whether the moving object is stationary or moving from the site information received in the frame before and the frame after the moment at which the blurred site appears;
(3a) if all the site information received in the previous frame and in the following frame is identical and the corresponding sites are not blurred, the moving object is judged to be stationary;
(3b) if the site information received in the previous frame differs from that of the following frame, the moving object is judged to be moving;
(3c) if the site information received in the previous and following frames is identical but all the sites are blurred, the terminal control module 400 cannot judge the state of the moving object and instead uses the site information received two frames before and two frames after; if that information is also identical and also comes from blurred sites, it continues with site information received still earlier and later; if the site information received in the preceding ten frames and/or the following ten frames is identical and all from blurred sites, the terminal control module 400 prompts the user to stop controlling the moving object so that the moving object and the relevant modules can be checked;
(4) if the moving object is stationary, the terminal control module 400 obtains the site information of the effective site from the site information of the other sites 110 near the effective site;
(5) if the moving object is moving, the terminal control module 400 uses the site information of the effective sites obtained in a set number of frames before and after the moment at which the blurred site information appears (the blurred moment) to determine whether the moving object is moving in a straight line or along a curve at the blurred moment; the set number of frames is 1 to 3;
(5a) if the moving object is moving in a straight line at the blurred moment, no processing is applied to the site information of the effective site at the blurred moment;
(5b) if the moving object is moving along a curve at the blurred moment, the site information of the effective site at the blurred moment is determined from the site information of the effective sites obtained in the set number of frames before and after the blurred moment;
wherein the set number of frames is 1 to 3.
The persistence of human vision lasts about 0.1 to 0.4 s, while the time between two adjacent frames of the image acquisition module 200 is about 0.02 s; therefore, determining the site information of the effective site of the blurred moment from the effective sites of the 1 to 3 frames before and after it and then correcting the position of the moving object at the blurred moment does not affect the visual effect.
(6) the terminal control module 400 corrects the position of the moving object at the blurred moment according to the determined site information of the effective site.
Through the above steps the system of the present invention handles blurred images: when blurred sites appear, the determination of the moving object information and its correct positioning on the display terminal are still effectively guaranteed. A much simplified sketch of this handling follows.
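The sketch below is an interpretation, not the exact algorithm of the invention: it only distinguishes the stationary and moving cases and estimates the blurred-moment position from the two neighbouring frames.

def handle_blurred_frame(prev_pos, next_pos, tol=1e-6):
    """prev_pos/next_pos: effective-site positions (x, y) decoded from the frames
    immediately before and after the blurred moment (about 0.02 s apart each)."""
    dx, dy = next_pos[0] - prev_pos[0], next_pos[1] - prev_pos[1]
    if abs(dx) < tol and abs(dy) < tol:
        # stationary: reuse the position read in the neighbouring frames
        return {"state": "stationary", "position": prev_pos}
    # moving: correct the blurred moment with the midpoint of the neighbouring frames
    midpoint = ((prev_pos[0] + next_pos[0]) / 2.0, (prev_pos[1] + next_pos[1]) / 2.0)
    return {"state": "moving", "position": midpoint}

print(handle_blurred_frame((10.0, 0.0), (10.0, 0.0)))   # stationary at (10.0, 0.0)
print(handle_blurred_frame((10.0, 0.0), (12.0, 2.0)))   # moving, corrected to (11.0, 1.0)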
Embodiment
Embodiment 1
A method for quickly obtaining moving object information in real time in augmented reality, used to obtain the information of a moving object in an augmented reality game; the moving object may be a mobile tank, and the method comprises the following steps:
1) the tank is placed on a background map on which a plurality of sites printed with recognizable site information are distributed; the site information is expressed as an ordered combination of printed patterns of two reflected-light intensities representing binary characters and includes the position information of the site on the map;
2) the window image corresponding to the position of the tank is acquired by a first camera and a second camera mounted on the bottom of the tank, and the window image is sent to a decoding chip (sonixSNC7312);
wherein the first camera is fixed at the middle of the tank's bottom axis of symmetry and acquires the window image used to determine the position and speed information of the tank; the second camera is fixed on the axis of symmetry near the front of the tank, and the combination of the window images acquired by the two cameras determines the direction information of the moving object;
the decoding chip receives the window images transmitted by the first and second cameras, recognizes and extracts the site information in the window images, transcodes the ordered combination of printed patterns into a binary character string, further converts the binary character string into a hexadecimal string, and transfers it to the terminal processor (terminal control module) via the Bluetooth transmitter installed on the tank and the Bluetooth receiver installed on the mobile phone (display terminal);
3) the terminal processor processes the acquired hexadecimal site information, determines the information of the tank on the background map, positions the tank on the phone's display screen, and presents the position information of the tank to the user as readable decimal information.
In the description of the present invention it should be noted that terms such as "upper", "lower", "inner" and "outer" indicate orientations or positional relationships based on the working state of the present invention; they are used only to facilitate and simplify the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore are not to be construed as limiting the invention.
The present invention has been described above in conjunction with preferred embodiments, but these embodiments are merely exemplary and serve only an illustrative purpose. On this basis, various substitutions and improvements may be made to the present invention, and all of them fall within the scope of protection of the invention.

Claims (10)

1. A method for quickly obtaining moving object information in real time in augmented reality, characterized in that it comprises the following steps:
1) setting, on a background map (100), a plurality of sites (110) containing recognizable site information, and placing the moving object on the background map;
2) acquiring, with an image acquisition module (200), the window image containing a plurality of sites (110) corresponding to the position of the moving object, extracting the recognizable site information in the window image, and transmitting the site information to a terminal control module (400) of a display terminal;
3) processing the site information with the terminal control module (400) and determining the information of the moving object.
2. The method according to claim 1, characterized in that in step 1) the sites (110) are arranged as coordinate points, preferably with equal spacing between the sites (110); and/or
the site information of a site (110) includes the position information of that site (110) on the background map (100); preferably, the site information is expressed as an ordered combination of printed patterns of two or more reflected-light intensities representing binary characters; and/or
the site information is expressed as an ordered combination of microscopic patterns of two or more shapes representing binary characters.
3. The method according to claim 1 or 2, characterized in that in step 2) the image acquisition module (200) comprises a first camera device (210) and a second camera device (220) whose lenses face the background map (100) to acquire the window images, wherein
the first camera device (210) and the second camera device (220) are mounted on an axis of the moving object whose direction coincides with the moving direction of the object; preferably, the first camera device (210) is fixed at the middle of the axis and acquires the window image used to determine the position and speed information of the moving object, and the second camera device (220) is fixed at a forward position on the axis, the combination of the window images acquired by the two camera devices determining the direction information of the moving object.
4. The method according to any one of claims 1 to 3, characterized in that in step 2) a data decoding module (500) is provided on the moving object, the data decoding module (500) recognizing the site information in the window image, translating it into a binary character string, and transmitting the site information in the form of a binary character string.
5. The method according to any one of claims 1 to 4, characterized in that the data decoding module (500) also has a base-conversion function, converting the extracted binary site information, i.e. the binary character string, into decimal or hexadecimal site information, preferably hexadecimal site information, and transmitting the site information in decimal or hexadecimal form.
6. The method according to any one of claims 1 to 5, characterized in that in step 2) a first communication module (310) is also provided on the moving object,
the first communication module (310) receiving the site information transmitted by the data decoding module (500) and transferring it to the terminal control module (400) via a second communication module (320) installed on the display terminal.
7. The method according to claim 5, characterized in that the first communication module (310) and the second communication module (320) are selected from Bluetooth communication modules, Wi-Fi communication modules, 4G communication modules or mobile communication modules; more preferably the first communication module (310) and the second communication module (320) are Bluetooth communication modules.
8. The method according to any one of claims 1 to 7, characterized in that in step 3) the terminal control module (400) determines the position information of the moving object from the site information extracted from the window image of the first camera device (210), determines the speed information of the moving object from the position information obtained within a set time interval, and determines the direction information of the moving object from the site information extracted from the window images of the first camera device (210) and the second camera device (220);
preferably, the terminal control module (400) has a base-conversion function and converts the acquired non-decimal site information into decimal site information.
9. The method according to any one of claims 1 to 8, characterized in that in step 3) the site information received by the terminal control module (400) is all the site information obtained by the data decoding module (500), and the terminal control module (400) performs data processing with the site information extracted from the single site (110) at a set position in the window image in order to determine the information of the moving object, the single site (110) at the set position being called the effective site.
10. The method according to any one of claims 1 to 9, characterized in that the terminal control module (400) also has a site-information recognition function,
and, when blurred sites are present in the window image, the terminal control module (400) judges whether the blurred sites include the effective site.
CN201710107963.5A 2017-01-24 2017-02-27 Method for quickly obtaining moving object information in real time in augmented reality Pending CN106933355A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017100627914 2017-01-24
CN201710062791 2017-01-24

Publications (1)

Publication Number Publication Date
CN106933355A (en) 2017-07-07

Family

ID=59424424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710107963.5A Pending CN106933355A (en) 2017-01-24 2017-02-27 Method for quickly obtaining moving object information in real time in augmented reality

Country Status (1)

Country Link
CN (1) CN106933355A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107462741A (en) * 2017-07-26 2017-12-12 武汉船用机械有限责任公司 Moving object speed and acceleration measuring device
CN108427501A (en) * 2018-03-19 2018-08-21 网易(杭州)网络有限公司 Method and device for controlling movement in virtual reality
CN109189210A (en) * 2018-08-06 2019-01-11 百度在线网络技术(北京)有限公司 Mixed reality interaction method, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1465115A2 (en) * 2003-03-14 2004-10-06 British Broadcasting Corporation Method and apparatus for generating a desired view of a scene from a selected viewpoint
CN101777123A (en) * 2010-01-21 2010-07-14 北京理工大学 System for tracking visual positions on basis of infrared projection mark points
CN101969548A (en) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 Active video acquiring method and device based on binocular camera shooting
CN104268882A (en) * 2014-09-29 2015-01-07 深圳市热活力科技有限公司 High-speed moving object detecting and speed measuring method and system based on double-linear-array cameras
CN105403235A (en) * 2014-09-15 2016-03-16 吴旻升 Two-dimensional positioning system and method
CN105987683A (en) * 2015-04-16 2016-10-05 北京蚁视科技有限公司 Visual positioning system and method based on high-reflective infrared identification
CN106028001A (en) * 2016-07-20 2016-10-12 上海乐相科技有限公司 Optical positioning method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1465115A2 (en) * 2003-03-14 2004-10-06 British Broadcasting Corporation Method and apparatus for generating a desired view of a scene from a selected viewpoint
CN101777123A (en) * 2010-01-21 2010-07-14 北京理工大学 System for tracking visual positions on basis of infrared projection mark points
CN101969548A (en) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 Active video acquiring method and device based on binocular camera shooting
CN105403235A (en) * 2014-09-15 2016-03-16 吴旻升 Two-dimensional positioning system and method
CN104268882A (en) * 2014-09-29 2015-01-07 深圳市热活力科技有限公司 High-speed moving object detecting and speed measuring method and system based on double-linear-array cameras
CN105987683A (en) * 2015-04-16 2016-10-05 北京蚁视科技有限公司 Visual positioning system and method based on high-reflective infrared identification
CN106028001A (en) * 2016-07-20 2016-10-12 上海乐相科技有限公司 Optical positioning method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107462741A (en) * 2017-07-26 2017-12-12 武汉船用机械有限责任公司 A kind of moving object speed and acceleration measurement device
CN107462741B (en) * 2017-07-26 2019-12-31 武汉船用机械有限责任公司 Moving object speed and acceleration measuring device
CN108427501A (en) * 2018-03-19 2018-08-21 网易(杭州)网络有限公司 Control method for movement and device in virtual reality
CN108427501B (en) * 2018-03-19 2022-03-22 网易(杭州)网络有限公司 Method and device for controlling movement in virtual reality
CN109189210A (en) * 2018-08-06 2019-01-11 百度在线网络技术(北京)有限公司 Mixed reality exchange method, device and storage medium
US11138777B2 (en) 2018-08-06 2021-10-05 Baidu Online Network Technology (Beijing) Co., Ltd. Mixed reality interaction method, apparatus, device and storage medium

Similar Documents

Publication Publication Date Title
CN107589758A (en) A kind of intelligent field unmanned plane rescue method and system based on double source video analysis
CN105512628B (en) Vehicle environmental sensory perceptual system based on unmanned plane and method
CN109492507A (en) The recognition methods and device of the traffic light status, computer equipment and readable medium
CN110443898A (en) A kind of AR intelligent terminal target identification system and method based on deep learning
CN111881861B (en) Display method, device, equipment and storage medium
CN108292141A (en) Method and system for target following
CN110188749A (en) Designated vehicle Vehicle License Plate Recognition System and method under a kind of more vehicles
CN106097794A (en) The Chinese phonetic alphabet based on augmented reality combination is recognized reading learning system and recognizes reading method
CN104268498A (en) Two-dimension code recognition method and terminal
CN107273816B (en) Traffic speed limit label detection recognition methods based on vehicle-mounted forward sight monocular camera
CN106933355A (en) The quick method for obtaining moving object information in real time in augmented reality
CN102831380A (en) Body action identification method and system based on depth image induction
CN108510545A (en) Space-location method, space orientation equipment, space positioning system and computer readable storage medium
CN107038420A (en) A kind of traffic lights recognizer based on convolutional network
CN111767831B (en) Method, apparatus, device and storage medium for processing image
CN103971087B (en) Method and device for searching and recognizing traffic signs in real time
CN110858414A (en) Image processing method and device, readable storage medium and augmented reality system
CN107943077A (en) A kind of method for tracing, device and the unmanned plane of unmanned plane drop target
CN109919157A (en) A kind of vision positioning method and device
CN103977539A (en) Cervical vertebra rehabilitation and health care training aiding system
CN108804989A (en) Painting and calligraphy device, painting and calligraphy equipment and painting and calligraphy householder method
CN106991821A (en) Vehicles peccancy hand-held mobile terminal data collecting system
CN112665588A (en) Ship navigation situation sensing method based on augmented reality
CN111524339B (en) Unmanned aerial vehicle frequency alignment method and system, unmanned aerial vehicle and remote controller
CN108803426A (en) A kind of vehicle device control system based on TOF gesture identifications

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170707

WD01 Invention patent application deemed withdrawn after publication