CN104298965A - Intelligent scenic spot and scenery recognition system and method oriented to mobile terminal - Google Patents
- Publication number
- CN104298965A CN104298965A CN201410468259.9A CN201410468259A CN104298965A CN 104298965 A CN104298965 A CN 104298965A CN 201410468259 A CN201410468259 A CN 201410468259A CN 104298965 A CN104298965 A CN 104298965A
- Authority
- CN
- China
- Prior art keywords
- sight spot
- scenery
- module
- scene image
- mobile terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/48—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
Abstract
The invention provides a mobile-terminal-oriented intelligent scenic spot and scenery recognition system and method, belonging to the technical field of GPS/base-station positioning and image recognition. By combining GPS positioning with base-station positioning, the system largely overcomes the incomplete coverage that results from using GPS alone. The system and method are easy to operate and widely applicable: anyone who can use a mobile phone can quickly become proficient with them. They are also highly extensible; the recognized scenic spot and scenery information can be supplemented with voice, image, and video commentary, greatly enriching the content and improving visitors' travel quality.
Description
Technical field
The present invention relates to GPS positioning technology, base-station positioning technology, image recognition technology, and wireless data transmission technology, and specifically to a mobile-terminal-oriented intelligent scenic spot and scenery recognition system and method.
Background art
With the growth of the national economy and rising living standards, the tourism industry is booming, and more and more people travel independently to visit scenic spots across the country. When visitors are in an unfamiliar place, they often need to learn their current location, the scenic spot they are in, and information about the scenery they are viewing. Existing market solutions mostly obtain scenic spot and scenery information in one of two ways: via a GPS positioning system, or via fixed sensing devices.
First, GPS positioning accuracy suffers from signal obstruction by tall buildings and trees in large cities and mountainous areas, and the many powerful radio transmitters and high-voltage transmission lines found in cities also affect it. In rainy weather, thick cloud cover may even prevent the receiver from acquiring positioning satellites at all. More importantly, a GPS positioning system can only determine the visitor's latitude and longitude; inside a multi-storey space such as a museum, GPS positioning is completely ineffective.
Second, fixed sensing devices (using infrared, radio frequency, Bluetooth, and similar technologies) can supply scenic spot and scenery information at close range. However, where the scenery of a scenic area is densely placed (as with museum exhibits), the devices interfere with one another and disturb normal information acquisition. Moreover, sensing devices installed at each scenic spot or piece of scenery require frequent maintenance and battery replacement, greatly increasing the difficulty and cost of managing a scenic area.
Finally, neither approach can obtain information about distant scenic spots or scenery. When a visitor stands in one place and looks around, they often notice an interesting spot or piece of scenery in the distance; GPS can only report the visitor's own position and cannot serve as the visitor's "eyes", while a fixed sensing device is simply out of range.
Summary of the invention
To address the shortcomings of the prior art, the present invention proposes a mobile-terminal-oriented intelligent scenic spot and scenery recognition system and method. By combining GPS and base-station positioning with scenic spot and scenery feature recognition, it overcomes the problems of insufficient GPS accuracy, the inability to identify scenery effectively inside multi-storey buildings, and poor system universality, so as to provide visitors with scenic spot and scenery information quickly and accurately.
A mobile-terminal-oriented intelligent scenic spot and scenery recognition system comprises a feature storage module, a feature collection module, a network communication module, a feature recognition module, and a positioning module, wherein the feature storage module is installed on a server and the feature collection module, network communication module, feature recognition module, and positioning module are installed on the mobile terminal:
Feature storage module: stores the latitude and longitude of every scenic area, the fingerprint information of all scenic spots and scenery within each scenic area, and the scenic spot and scenery profile information;
Feature collection module: sends start signals to the mobile terminal's camera and to the positioning module; receives the image of the target scenic spot or scenery in the current field of view captured by the camera; converts the image to a grayscale map to obtain a two-dimensional pixel array; applies a two-dimensional discrete cosine transform to the pixel array; converts the transformed array to a one-dimensional array row by row; computes the mean pixel value of the one-dimensional array; normalizes the one-dimensional array against that mean to obtain the fingerprint information of the scenic spot or scenery image; and sends the fingerprint information to the network communication module;
Network communication module: handles data communication between the feature collection module, the feature recognition module, and the feature storage module;
Feature recognition module: determines the scenic area containing the current mobile terminal position from the known latitude and longitude of each scenic area, matches the fingerprint of the current scenic spot or scenery image against the fingerprints the feature storage module holds for that scenic area, and obtains the final scenic spot and scenery profile information;
Positioning module: obtains the mobile terminal's latitude and longitude and sends it to the feature recognition module.
The recognition method carried out with the mobile-terminal-oriented intelligent scenic spot and scenery recognition system comprises the following steps:
Step 1: the feature collection module signals the mobile terminal's camera to capture an image of the target scenic spot or scenery in the current field of view, and simultaneously signals the positioning module to obtain the current latitude and longitude of the mobile terminal;
Step 2: the feature collection module generates the fingerprint information of the target scenic spot or scenery image, as follows:
Step 2-1: convert the color image of the scenic spot or scenery to a grayscale map, obtaining the image pixel two-dimensional array;
Step 2-2: apply a two-dimensional discrete cosine transform to the pixel array, obtaining the transformed scenic spot or scenery image pixel array;
Step 2-3: convert the transformed two-dimensional array to a one-dimensional array row by row;
Step 2-4: compute the mean pixel value of the one-dimensional array;
Step 2-5: normalize the one-dimensional array against the mean, obtaining the fingerprint information of the scenic spot or scenery image;
Step 3: the fingerprint information obtained is sent to the feature storage module through the network communication module and stored; the positioning module sends the current latitude and longitude to the feature recognition module, which determines the scenic area it belongs to from the known scenic area coordinates and sends that scenic area to the feature storage module through the network communication module; the scenic spot and scenery profile information is likewise stored in the feature storage module;
Step 4: repeat steps 1 to 3 until the fingerprint information, scenic area, and profile information of every scenic spot and scenery in the required scenic area have been collected and stored in the feature storage module;
Step 5: after a user arrives at the scenic area, the feature collection module signals the mobile terminal's camera to capture an image of the current scenic spot or scenery and computes the fingerprint information of the image, while the positioning module obtains the current latitude and longitude of the mobile terminal;
Step 6: the positioning module sends the current latitude and longitude to the feature recognition module, which determines the scenic area it belongs to from the known scenic area coordinates; the feature collection module sends the fingerprint of the current image to the feature recognition module through the network communication module; the feature recognition module matches it against the fingerprints the feature storage module holds for that scenic area to obtain the final scenic spot or scenery, as follows:
Step 6-1: compute the Hamming distance between the fingerprint of the current image and the fingerprint of every scenic spot and scenery in this scenic area;
Step 6-2: check whether any Hamming distance is less than or equal to 5; if so, go to step 6-3, otherwise go to step 6-4;
Step 6-3: record each such scenic spot or scenery together with its Hamming distance, choose the one with the smallest Hamming distance as the final result, and go to step 7;
Step 6-4: conclude that this scenic spot or scenery is not in this scenic area, and go to step 7;
Step 7: the user's mobile terminal displays the profile of the final scenic spot or scenery, or reports that the scenery is not in this scenic area.
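The matching logic of steps 6-1 to 7 can be sketched as follows. This is a minimal illustration rather than the patented implementation; the function names `hamming` and `match_spot`, the dictionary-based fingerprint store, and the sample spot names are assumptions.

```python
def hamming(a, b):
    # Step 6-1: number of positions at which two fingerprints differ.
    return sum(x != y for x, y in zip(a, b))

def match_spot(query, database, threshold=5):
    """Steps 6-2 to 6-4: return the name of the stored spot whose
    fingerprint is nearest to `query` in Hamming distance, provided the
    distance does not exceed `threshold`; otherwise None, meaning the
    spot is not in this scenic area."""
    best_name, best_dist = None, threshold + 1
    for name, fingerprint in database.items():
        d = hamming(query, fingerprint)
        if d < best_dist:  # initializing best_dist to threshold+1 enforces d <= threshold
            best_name, best_dist = name, d
    return best_name
```

With toy 5-bit fingerprints, a query differing from a stored entry in one bit matches it, while lowering `threshold` to 0 rejects every inexact candidate and returns None.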
The method by which the positioning module in step 1 obtains the current latitude and longitude of the mobile terminal is as follows:
Step 1-1: the positioning module searches for GPS satellites; if four or more satellites are found, the latitude and longitude of the mobile terminal is obtained directly from GPS, otherwise go to step 1-2;
Step 1-2: obtain the latitude and longitude of the mobile terminal by base-station positioning, i.e. from the coordinates of each base station and the distance between each base station and the mobile terminal, compute several candidate latitude-and-longitude coordinates for the mobile terminal;
Step 1-3: average the candidate coordinates to obtain the latitude and longitude of the mobile terminal.
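The fallback between GPS and base-station positioning in steps 1-1 to 1-3 can be sketched as follows; the function name `locate` and the representation of positions as (latitude, longitude) tuples are assumptions for illustration:

```python
def locate(gps_satellite_count, gps_fix, base_station_fixes):
    """Return a (latitude, longitude) fix per steps 1-1 to 1-3.

    gps_fix: the (lat, lon) reported by the GPS receiver, or None.
    base_station_fixes: candidate (lat, lon) solutions derived from
    base-station distances, averaged when GPS is unavailable.
    """
    # Step 1-1: with four or more satellites visible, trust the GPS fix.
    if gps_satellite_count >= 4 and gps_fix is not None:
        return gps_fix
    # Steps 1-2 and 1-3: fall back to averaging the base-station solutions.
    n = len(base_station_fixes)
    lat = sum(p[0] for p in base_station_fixes) / n
    lon = sum(p[1] for p in base_station_fixes) / n
    return (lat, lon)
```

For example, with only two satellites visible, the candidate base-station fixes are averaged into a single position.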
The conversion in step 2-1 of the color image of the scenic spot or scenery to a grayscale map, yielding the grayscale image pixel two-dimensional array, proceeds as follows:
Step 2-1-1: obtain the two-dimensional pixel array s[i, j] of the scenic spot or scenery image;
Step 2-1-2: compute the gray value g[i, j] of the pixel array s[i, j];
The formula is as follows:
g[i,j] = ((s[i,j] ∧ 0x00FF0000) >> 16) × 0.3 + ((s[i,j] ∧ 0x0000FF00) >> 8) × 0.59 + (s[i,j] ∧ 0x000000FF) × 0.11 (1)
where g[i, j] is the gray value of the pixel array of the scenic spot or scenery image; 0x00FF0000, 0x0000FF00, and 0x000000FF are the hexadecimal masks of the red, green, and blue channels respectively; ∧ denotes bitwise AND and >> denotes a right shift; i is the pixel abscissa and j the pixel ordinate;
Step 2-1-3: compute the transparency a[i, j] of the pixel array s[i, j];
The formula is as follows:
a[i,j] = s[i,j] ∧ 0xFF000000 (2)
where a[i, j] is the transparency of the pixel array of the scenic spot or scenery image and ∧ denotes bitwise AND;
Step 2-1-4: from the gray value and transparency of the pixel array of the scenic spot or scenery image, compute the grayscale image pixel two-dimensional array;
The formula is as follows:
p[i,j] = a[i,j] ∨ (g[i,j] << 16) ∨ (g[i,j] << 8) ∨ g[i,j] (3)
where p[i, j] is the grayscale image pixel two-dimensional array, ∨ denotes bitwise OR, and << denotes a left shift.
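Formulas (1) to (3) operate on a packed 32-bit ARGB pixel. A sketch of the per-pixel conversion, with the channel masks applied by bitwise AND as in the formulas (the function name `to_gray_pixel` is an assumption):

```python
def to_gray_pixel(argb):
    # Formula (1): extract R, G, B with the hexadecimal masks and shifts,
    # then take their weighted sum as the gray value.
    r = (argb & 0x00FF0000) >> 16
    g = (argb & 0x0000FF00) >> 8
    b = argb & 0x000000FF
    gray = int(r * 0.3 + g * 0.59 + b * 0.11)
    # Formula (2): keep the alpha (transparency) byte unchanged.
    a = argb & 0xFF000000
    # Formula (3): rebuild an ARGB pixel whose R, G, and B are all `gray`.
    return a | (gray << 16) | (gray << 8) | gray
```

Applied to every element of s[i, j], this yields the grayscale array p[i, j] of formula (3).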
In step 2-2, a two-dimensional discrete cosine transform is applied to the pixel array to obtain the transformed scenic spot or scenery image pixel array;
The discrete cosine transform matrix A(i, j) is computed as
A(i,j) = c(i) · cos[(2j + 1)iπ / (2N)], where c(i) = √(1/N) when i = 0 and c(i) = √(2/N) otherwise (4)
where A(i, j) is the discrete cosine transform matrix and N is the dimension of p[i, j];
The transformed scenic spot or scenery image pixel array q[i, j] is computed as
q[i,j] = A[i,j] · p[i,j] · A^T[i,j] (5)
where q[i, j] is the transformed scenic spot or scenery image pixel array.
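A sketch of formulas (4) and (5) on a small matrix, assuming the standard orthonormal DCT-II transform matrix (the helper names `dct_matrix`, `matmul`, and `dct2` are illustrative):

```python
import math

def dct_matrix(n):
    # Formula (4): the orthonormal DCT-II transform matrix,
    # A(i,j) = c(i) * cos((2j+1) * i * pi / (2N)).
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        c = math.sqrt(1.0 / n) if i == 0 else math.sqrt(2.0 / n)
        for j in range(n):
            a[i][j] = c * math.cos((2 * j + 1) * i * math.pi / (2 * n))
    return a

def matmul(x, y):
    # Plain matrix product, sufficient for this illustration.
    n, m, k = len(x), len(y[0]), len(y)
    return [[sum(x[i][t] * y[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def dct2(p):
    # Formula (5): q = A . p . A^T
    a = dct_matrix(len(p))
    at = [list(row) for row in zip(*a)]
    return matmul(matmul(a, p), at)
```

For a constant 2×2 image, all the energy ends up in the DC coefficient q[0][0], as expected of a DCT.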
The fingerprint information of the scenic spot or scenery image in step 2-5 is obtained as follows: compare each pixel value in the one-dimensional array with the mean pixel value; if the value is greater than or equal to the mean, set it to 1; if it is less than the mean, set it to 0.
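The flattening and thresholding of steps 2-3 to 2-5 can be sketched as follows (the function name `fingerprint` is an assumption):

```python
def fingerprint(dct_coeffs):
    """Steps 2-3 to 2-5: flatten the transformed array row by row,
    compute the mean, and emit 1 where a value is >= the mean and
    0 otherwise, producing the bit-array fingerprint c[m]."""
    flat = [v for row in dct_coeffs for v in row]   # step 2-3
    avg = sum(flat) / len(flat)                     # step 2-4
    return [1 if v >= avg else 0 for v in flat]     # step 2-5
```

The resulting bit array is the fingerprint that is later compared by Hamming distance in step 6.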
Advantages of the present invention:
The present invention proposes a mobile-terminal-oriented intelligent scenic spot and scenery recognition system and method, a brand-new design for intelligent scenic spot and scenery recognition. Combining GPS with base-station positioning largely compensates for the incomplete coverage of a GPS positioning system used alone. The invention is easy to operate and widely applicable: anyone who can use a mobile phone can quickly learn to use it proficiently. It is also highly extensible; the recognized scenic spot and scenery information can be supplemented with voice, image, and video commentary, greatly enriching the content and improving visitors' travel quality.
Brief description of the drawings
Fig. 1 is a block diagram of the mobile-terminal-oriented intelligent scenic spot and scenery recognition system of an embodiment of the present invention;
Fig. 2 is a flow chart of the mobile-terminal-oriented intelligent scenic spot and scenery recognition method of an embodiment of the present invention;
Fig. 3 is a flow chart of the method for generating the fingerprint information of a target scenic spot or scenery image in an embodiment of the present invention;
Fig. 4 is a flow chart of the method for obtaining the final scenic spot or scenery in an embodiment of the present invention.
Detailed description of the embodiments
An embodiment of the present invention is described further below with reference to the accompanying drawings.
As shown in Fig. 1, the mobile-terminal-oriented intelligent scenic spot and scenery recognition system of the embodiment comprises a feature storage module, a feature collection module, a network communication module, a feature recognition module, and a positioning module. The feature storage module is installed on a server; the feature collection module, network communication module, feature recognition module, and positioning module are installed on the mobile terminal. The feature storage module stores the latitude and longitude of every scenic area together with the fingerprint and profile information of all scenic spots and scenery within each scenic area. The feature collection module sends start signals to the mobile terminal's camera and to the positioning module, receives the captured image of the target scenic spot or scenery in the current field of view, converts the image to a grayscale map to obtain a two-dimensional pixel array, applies a two-dimensional discrete cosine transform, converts the transformed array to a one-dimensional array row by row, computes the mean pixel value of the one-dimensional array, normalizes the array against that mean to obtain the fingerprint information of the scenic spot or scenery image, and sends the fingerprint to the network communication module. The network communication module handles data communication between the feature collection module, the feature recognition module, and the feature storage module. The feature recognition module determines the scenic area containing the current mobile terminal position from the known latitude and longitude of each scenic area, matches the fingerprint of the current image against the fingerprints the feature storage module holds for that scenic area, and obtains the final scenic spot and scenery profile information.
The positioning module obtains the mobile terminal's latitude and longitude and sends it to the feature recognition module.
In the embodiment, the feature collection module, installed on the mobile terminal, photographs a still image of the current scenic spot or scenery, generates its fingerprint, obtains its position via the positioning module, and transfers the fingerprint, position, and detailed information (text description, voice, video, and so on) through the network communication module to the feature storage module installed on the server. The feature recognition module, also installed on the mobile terminal, obtains the visitor's position via the positioning module, generates a fingerprint from a photograph of the current scenic spot or scenery, sends the fingerprint and position to the feature storage module through the network communication module, and receives back the basic information of the current scenic spot or scenery found by the search-and-match algorithm.
In the embodiment, the system uses a client/server architecture. The feature storage module is developed with J2EE, with business processing, data access, validity checking, and system interfaces placed in the business logic layer; the database is the MySQL relational database. The feature collection and feature recognition modules are implemented with Android technology and invoke server-side servlets to realize their respective functions.
In the embodiment, the recognition method carried out with the mobile-terminal-oriented intelligent scenic spot and scenery recognition system, whose flow is shown in Fig. 2, comprises the following steps:
Step 1: the feature collection module signals the mobile terminal's camera to capture an image of the target scenic spot or scenery in the current field of view, and simultaneously signals the positioning module to obtain the current latitude and longitude of the mobile terminal;
In the embodiment, when an information collector enters a scenic area (e.g. the Shenyang Imperial Palace Museum), they open the feature collection module installed on the mobile terminal (a mobile phone, tablet, etc.); after the network communication module and positioning module are activated, the current latitude and longitude is acquired automatically, a server-side servlet is called to send it to the feature storage module, and the ID of the current scenic area (e.g. 2401) is obtained and recorded. When the collector decides on the scenic spot or scenery to enter (e.g. the Dazheng Hall), they photograph it with the feature collection module (several photographs are allowed) and enter the detailed information of the scenic spot or scenery;
The positioning module obtains the current latitude and longitude of the mobile terminal as follows:
Step 1-1: the positioning module searches for GPS satellites; if four or more satellites are found, the latitude and longitude (x, y) of the mobile terminal is obtained directly from GPS, otherwise go to step 1-2;
Step 1-2: obtain the latitude and longitude (x, y) of the mobile terminal by base-station positioning, i.e. from the coordinates of each base station and the distance between each base station and the mobile terminal, compute several candidate latitude-and-longitude coordinates for the mobile terminal;
The base-station positioning method specifically comprises:
Step 1-2-1: obtain the latitude and longitude coordinates (x_i, y_i) of the nearby connectable base stations (more than two in number) and the distance d_i from each base station to the current position; the current position (x, y) and the base stations then satisfy formula 6:
(x − x_i)² + (y − y_i)² = d_i² (6)
Step 1-2-2: from the coordinates of the nearby connectable base stations and their distances to the mobile terminal, assemble a system of equations and solve it to obtain the current-position result array r[2, n−1];
where n is the number of base stations and i = 1...n;
Step 1-2-3: average each row of the result array obtained in step 1-2-2 to obtain the current latitude and longitude (x, y).
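Formula (6) can be solved as in steps 1-2-1 to 1-2-3 by linearizing the circle equations. A sketch for exactly three base stations, where subtracting the later equations from the first yields a 2×2 linear system (the function name `trilaterate`, the three-station restriction, and the planar coordinates are assumptions for illustration):

```python
def trilaterate(stations, dists):
    """Solve (x - x_i)^2 + (y - y_i)^2 = d_i^2 (formula 6) for three
    base stations. Subtracting the 2nd and 3rd equations from the 1st
    cancels the quadratic terms, leaving two linear equations in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = stations
    d1, d2, d3 = dists
    # Linearized rows: a*x + b*y = c
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Solve the 2x2 system by Cramer's rule.
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

With more than three stations, each triple yields one candidate solution, and step 1-2-3 averages the candidates.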
Step 1-3: average the candidate coordinates to obtain the latitude and longitude of the mobile terminal.
Step 2: the feature collection module generates the fingerprint information of the target scenic spot or scenery image; the method flow, shown in Fig. 3, is as follows:
Step 2-1: convert the color image of the scenic spot or scenery to a grayscale map, obtaining the image pixel two-dimensional array p[i, j], as follows:
Step 2-1-1: obtain the two-dimensional pixel array s[i, j] of the scenic spot or scenery image;
Step 2-1-2: compute the gray value g[i, j] of the pixel array s[i, j];
The formula is as follows:
g[i,j] = ((s[i,j] ∧ 0x00FF0000) >> 16) × 0.3 + ((s[i,j] ∧ 0x0000FF00) >> 8) × 0.59 + (s[i,j] ∧ 0x000000FF) × 0.11 (1)
where g[i, j] is the gray value of the pixel array of the scenic spot or scenery image; 0x00FF0000, 0x0000FF00, and 0x000000FF are the hexadecimal masks of the red, green, and blue channels respectively; ∧ denotes bitwise AND and >> denotes a right shift; i is the pixel abscissa and j the pixel ordinate;
Step 2-1-3: compute the transparency a[i, j] of the pixel array s[i, j];
The formula is as follows:
a[i,j] = s[i,j] ∧ 0xFF000000 (2)
where a[i, j] is the transparency of the pixel array of the scenic spot or scenery image and ∧ denotes bitwise AND;
Step 2-1-4: from the gray value and transparency of the pixel array of the scenic spot or scenery image, compute the grayscale image pixel two-dimensional array;
The formula is as follows:
p[i,j] = a[i,j] ∨ (g[i,j] << 16) ∨ (g[i,j] << 8) ∨ g[i,j] (3)
where p[i, j] is the grayscale image pixel two-dimensional array, ∨ denotes bitwise OR, and << denotes a left shift.
Step 2-2: apply a two-dimensional discrete cosine transform to the pixel array p[i, j], obtaining the transformed scenic spot or scenery image pixel array;
The discrete cosine transform matrix A(i, j) is computed as
A(i,j) = c(i) · cos[(2j + 1)iπ / (2N)], where c(i) = √(1/N) when i = 0 and c(i) = √(2/N) otherwise (4)
where A(i, j) is the discrete cosine transform matrix and N is the dimension of p[i, j];
The transformed scenic spot or scenery image pixel array q[i, j] is computed as
q[i,j] = A[i,j] · p[i,j] · A^T[i,j] (5)
where q[i, j] is the transformed scenic spot or scenery image pixel array.
Step 2-3: convert the transformed scenic spot or scenery image pixel two-dimensional array to a one-dimensional array row by row;
Step 2-4: compute the mean pixel value of the one-dimensional array;
Step 2-5: normalize the one-dimensional array against the mean, obtaining the fingerprint information c[m] of the scenic spot or scenery image, where m is the number of elements in the array;
Specifically: compare the one-dimensional array v[m] with the mean pixel value avg; when v[i] ≥ avg, set the fingerprint array element c[i] = 1; when v[i] < avg, set c[i] = 0; comparing every element yields the fingerprint array c[m].
In the embodiment, the feature collection module automatically generates the fingerprint of the scenic spot or scenery (e.g. 10110001111101111001111010110101) and sends the fingerprint and the detailed information to the feature storage module through the network communication module;
Step 3: the fingerprint information obtained is sent to the feature storage module through the network communication module and stored; the positioning module sends the current latitude and longitude to the feature recognition module, which determines the scenic area it belongs to from the known scenic area coordinates and sends that scenic area to the feature storage module through the network communication module; the scenic spot and scenery profile information is likewise stored in the feature storage module;
In the embodiment, the latitude and longitude of each scenic area is a known quantity stored in the feature storage module; by determining the scenic area containing the current mobile terminal position, the current position is classified and stored.
Step 4: repeat steps 1 to 3 until the fingerprint information, scenic area, and profile information of every scenic spot and scenery in the required scenic area have been collected and stored in the feature storage module;
Step 5: after a user arrives at the scenic area, the feature collection module signals the mobile terminal's camera to capture an image of the current scenic spot or scenery and computes the fingerprint information of the image, while the positioning module obtains the current latitude and longitude of the mobile terminal;
In the embodiment of the present invention, after visitor arrives scenic spot (as Shenyang Imperial Palace Museum), open native system at mobile terminal, activate feature identification module, locating module and network communication module; After network successful connection, the current latitude and longitude information of automatic acquisition, and send current place latitude and longitude information to characteristic storage module, obtain the scenic spot ID (as 2401) at current place; When visitor becomes interested to sight spot in front, scenery (as large political affairs hall), but when can not find related introduction, by the camera function of feature identification module, (individual) is taken pictures to sight spot, scenery;
Step 6: the locating module sends the current latitude/longitude position of the mobile terminal to the feature identification module, which determines the scenic area the terminal is located in from the known latitude/longitude positions of the scenic areas; the feature collection module sends the fingerprint information of the current attraction/scenery image through the network communication module to the feature identification module, which matches it against the fingerprint information of the attractions and scenery of that scenic area saved in the feature storage module to obtain the final attraction or scenery.
In the embodiment of the present invention, the feature identification module automatically generates the fingerprint information of the attraction or scenery (e.g. 10110001111101111001111010110100) and sends it, together with the scenic area ID, through the network communication module to the feature storage module for matching.
The method flow is shown in Figure 4 and proceeds as follows:
Step 6-1: obtain the Hamming distance between the fingerprint information of the current attraction/scenery image and the fingerprint information of every attraction and scenery image of this scenic area.
Step 6-2: determine whether any of the Hamming distances is less than or equal to 5; if so, perform step 6-3, otherwise perform step 6-4.
In the embodiment of the present invention:
Step a: when a Hamming distance is less than or equal to 5, the identifier of this attraction or scenery and its Hamming distance are appended to the array r[2, m].
Step b: when a Hamming distance is greater than 5, no operation is performed.
Step c: determine whether the result array r[2, m] is empty. When r[2, m] is empty, there is no matching result, i.e. the feature storage module holds no record of this attraction or scenery. When r[2, m] is not empty, the attraction or scenery with the minimum Hamming distance is found, and its information is obtained from the feature storage module.
Step 6-3: save these attractions/scenery and their Hamming distances, choose among them the attraction or scenery with the minimum Hamming distance as the final result, and perform step 7.
In the embodiment of the present invention, the feature storage module traverses all attractions and scenery of the scenic area, finds the fingerprint information with the highest matching degree (e.g. 10110001111101111001111010110101), and returns the description information of this attraction or scenery to the feature identification module.
Step 6-4: take as the final result that this scenic area contains no such attraction or scenery, and perform step 7.
Step 7: the user's mobile terminal displays the profile data of the final attraction or scenery, or indicates that this scenic area contains no such attraction or scenery; the visitor can view the detailed description information of the attraction or scenery through the feature identification module.
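The matching of steps 6-1 to 6-4 can be sketched as follows. The first stored fingerprint echoes the embodiment's 32-bit example; the site names and the second fingerprint are illustrative assumptions:

```python
def hamming(fp1, fp2):
    # Step 6-1: number of differing bits between two fingerprints
    return sum(b1 != b2 for b1, b2 in zip(fp1, fp2))

def match(query_fp, stored):
    # Steps 6-2/6-3/6-4: keep candidates within distance 5 and return the
    # closest; None means this scenic area has no record of the site
    candidates = sorted((hamming(query_fp, fp), name)
                        for name, fp in stored.items())
    if candidates and candidates[0][0] <= 5:
        return candidates[0][1]
    return None

# Hypothetical fingerprint table for one scenic area
stored = {
    "Dazheng Hall": "10110001111101111001111010110101",
    "Phoenix Tower": "01001110000010000110000101001010",
}
query = "10110001111101111001111010110100"   # differs from the first by one bit
print(match(query, stored))   # Dazheng Hall
```

The threshold of 5 bits tolerates small photographic variation (angle, exposure) while a non-matching fingerprint, which typically differs in roughly half of its bits, is rejected.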
Claims (6)
1. A mobile-terminal-oriented intelligent tourist attraction and scenery recognition system, characterized by comprising a feature storage module, a feature collection module, a network communication module, a feature identification module and a locating module, the feature storage module being arranged in a server and the feature collection module, network communication module, feature identification module and locating module being arranged in a mobile terminal, wherein:
Feature storage module: stores the latitude/longitude positions of the scenic areas and the fingerprint information and profile information of all attractions and scenery in the scenic areas;
Feature collection module: sends an enabling signal to the mobile terminal camera and the locating module; receives the image of the target attraction or scenery in the current area captured by the camera; converts the acquired image into a grayscale image to obtain a two-dimensional image pixel array; performs a two-dimensional discrete cosine transform on that array; converts the transformed attraction/scenery image pixel array into a one-dimensional array row by row; computes the mean pixel value of this one-dimensional array; normalizes the one-dimensional array against the mean, thereby obtaining the fingerprint information of the attraction/scenery image; and sends the fingerprint information to the network communication module;
Network communication module: realizes the data communication between the feature collection module, the feature identification module and the feature storage module;
Feature identification module: determines the scenic area the mobile terminal is located in from the latitude/longitude positions of the scenic areas known to the feature storage module, matches the fingerprint information of the current attraction/scenery image against the fingerprint information of the attractions and scenery of that scenic area saved in the feature storage module, and obtains the final attraction/scenery profile information;
Locating module: acquires the latitude/longitude position of the mobile terminal and sends it to the feature identification module.
2. A recognition method using the mobile-terminal-oriented intelligent tourist attraction and scenery recognition system of claim 1, characterized by comprising the following steps:
Step 1: the feature collection module signals the mobile terminal camera to capture an image of the target attraction or scenery in the current area, and simultaneously signals the locating module to acquire the current latitude/longitude position of the mobile terminal;
Step 2: the feature collection module generates the fingerprint information of the target attraction/scenery image, specifically:
Step 2-1: convert the color image of the attraction or scenery into a grayscale image to obtain the two-dimensional image pixel array;
Step 2-2: perform a two-dimensional discrete cosine transform on the image pixel array to obtain the transformed attraction/scenery image pixel array;
Step 2-3: convert the transformed two-dimensional array into a one-dimensional array row by row;
Step 2-4: compute the mean pixel value of the one-dimensional attraction/scenery image pixel array;
Step 2-5: normalize the one-dimensional array against the obtained mean, thereby obtaining the fingerprint information of the attraction/scenery image;
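Steps 2-1 through 2-5 follow the well-known perceptual-hash (pHash) scheme. A minimal pure-Python sketch under that reading — the 4×4 image size, the pixel values and the function names are illustrative assumptions, and a real implementation would first downscale the grayscale image:

```python
import math

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix A used in step 2-2
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        c = math.sqrt(1.0 / n) if i == 0 else math.sqrt(2.0 / n)
        for j in range(n):
            a[i][j] = c * math.cos((2 * j + 1) * i * math.pi / (2 * n))
    return a

def matmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def fingerprint(gray):
    # Step 2-2: q = A . p . A^T (two-dimensional discrete cosine transform)
    n = len(gray)
    a = dct_matrix(n)
    a_t = [[a[j][i] for j in range(n)] for i in range(n)]
    q = matmul(matmul(a, gray), a_t)
    # Step 2-3: flatten the transformed array row by row
    flat = [v for row in q for v in row]
    # Step 2-4: mean of the flattened coefficients
    mean = sum(flat) / len(flat)
    # Step 2-5: threshold each coefficient against the mean -> bit string
    return ''.join('1' if v >= mean else '0' for v in flat)

# Illustrative 4x4 grayscale patch; a real image would be much larger
gray = [[52, 55, 61, 66],
        [70, 61, 64, 73],
        [63, 59, 55, 90],
        [67, 61, 68, 104]]
fp = fingerprint(gray)
print(fp, len(fp))   # a 16-bit fingerprint
```

Because the DCT concentrates the image's energy into its low-frequency coefficients, two photographs of the same attraction under slightly different conditions yield fingerprints that differ in only a few bits.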
Step 3: send the acquired fingerprint information of the attraction/scenery image through the network communication module to the feature storage module for storage; the locating module sends the current latitude/longitude position of the mobile terminal to the feature identification module, which determines the scenic area the terminal is located in from the known latitude/longitude positions of the scenic areas and sends that scenic area through the network communication module to the feature storage module for storage; store the profile information of the attraction/scenery in the feature storage module;
Step 4: repeat steps 1 to 3 until the fingerprint information, the scenic area and the profile information of all attractions and scenery in the required scenic areas have been acquired and stored in the feature storage module;
Step 5: after the user arrives at a scenic area, the feature collection module signals the mobile terminal camera to capture an image of the current attraction or scenery and obtains the fingerprint information of that image, while the locating module acquires the current latitude/longitude position of the mobile terminal;
Step 6: the locating module sends the current latitude/longitude position of the mobile terminal to the feature identification module, which determines the scenic area the terminal is located in from the known latitude/longitude positions of the scenic areas; the feature collection module sends the fingerprint information of the current attraction/scenery image through the network communication module to the feature identification module, which matches it against the fingerprint information of the attractions and scenery of that scenic area saved in the feature storage module to obtain the final attraction or scenery, specifically:
Step 6-1: obtain the Hamming distance between the fingerprint information of the current attraction/scenery image and the fingerprint information of every attraction and scenery image of this scenic area;
Step 6-2: determine whether any of the Hamming distances is less than or equal to 5; if so, perform step 6-3, otherwise perform step 6-4;
Step 6-3: save these attractions/scenery and their Hamming distances, choose among them the attraction or scenery with the minimum Hamming distance as the final result, and perform step 7;
Step 6-4: take as the final result that this scenic area contains no such attraction or scenery, and perform step 7;
Step 7: the user's mobile terminal displays the profile data of the final attraction or scenery, or indicates that this scenic area contains no such attraction or scenery.
3. The recognition method of claim 2, characterized in that the locating module of step 1 acquires the current latitude/longitude position of the mobile terminal as follows:
Step 1-1: the locating module searches for GPS satellites and determines whether the number of satellites found is greater than or equal to 4; if so, the latitude/longitude position of the mobile terminal is obtained directly by GPS; otherwise step 1-2 is performed;
Step 1-2: the latitude/longitude position of the mobile terminal is obtained by base-station positioning, i.e. several groups of candidate latitude/longitude coordinates of the mobile terminal are derived from the latitude/longitude coordinates of the base stations and the distance between each base station and the mobile terminal;
Step 1-3: the groups of candidate coordinates are averaged to obtain the latitude/longitude position of the mobile terminal.
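The fallback of steps 1-1 to 1-3 can be sketched as follows. The function name, the coordinates and the shape of the base-station fixes are illustrative assumptions; the derivation of each candidate fix from base-station coordinates and distances (step 1-2) is elided and the candidates are taken as given:

```python
def locate(num_satellites, gps_fix, base_station_fixes):
    # Step 1-1: with 4 or more GPS satellites, use the GPS fix directly
    if num_satellites >= 4:
        return gps_fix
    # Steps 1-2/1-3: otherwise average the candidate fixes derived
    # from the base-station coordinates and distances
    lats = [lat for lat, lon in base_station_fixes]
    lons = [lon for lat, lon in base_station_fixes]
    return (sum(lats) / len(lats), sum(lons) / len(lons))

# Only 3 satellites visible: fall back to base-station averaging
fix = locate(3, None, [(41.796, 123.451), (41.798, 123.449), (41.797, 123.453)])
print(fix)
```

Averaging several base-station-derived candidates smooths out the per-station ranging error, which is adequate here because the fix only has to select the correct scenic area, not the exact attraction.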
4. The recognition method of claim 2, characterized in that converting the color image of the attraction or scenery into a grayscale image in step 2-1, obtaining the grayscale image pixel two-dimensional array, is performed as follows:
Step 2-1-1: obtain the two-dimensional pixel array s[i, j] of the attraction/scenery image;
Step 2-1-2: compute the gray value g[i, j] of the two-dimensional pixel array s[i, j] of the attraction/scenery image;
The formula is as follows:
g[i,j] = ((s[i,j]∧0x00FF0000)>>16)×0.3 + ((s[i,j]∧0x0000FF00)>>8)×0.59 + (s[i,j]∧0x000000FF)×0.11 (1)
Wherein, g[i, j] is the gray value of the two-dimensional pixel array of the attraction/scenery image; 0x00FF0000, 0x0000FF00 and 0x000000FF are the hexadecimal masks of the red, green and blue channels; ∧ denotes bitwise AND and >> denotes a right shift; i is the horizontal pixel coordinate and j the vertical pixel coordinate;
Step 2-1-3: compute the transparency a[i, j] of the two-dimensional pixel array s[i, j] of the attraction/scenery image;
The formula is as follows:
a[i,j] = s[i,j]∧0xFF000000 (2)
Wherein, a[i, j] is the transparency of the two-dimensional pixel array of the attraction/scenery image; ∧ denotes bitwise AND;
Step 2-1-4: from the gray value and the transparency of the two-dimensional pixel array of the attraction/scenery image, compute the grayscale image pixel two-dimensional array;
The formula is as follows:
p[i,j] = a[i,j]∨(g[i,j]<<16)∨(g[i,j]<<8)∨g[i,j] (3)
Wherein, p[i, j] is the grayscale image pixel two-dimensional array; ∨ denotes bitwise OR; << denotes a left shift.
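The bit-level operations of formulas (1)-(3) can be sketched as follows; the sample pixel value is an illustrative assumption. Note that the channel masks are combined with bitwise AND (the `&` operator), matching the ∧ operator of formulas (2) and (3):

```python
def to_gray_pixel(s):
    # Formula (1): weighted channel mix; masks are combined with bitwise AND
    g = int(((s & 0x00FF0000) >> 16) * 0.3
            + ((s & 0x0000FF00) >> 8) * 0.59
            + (s & 0x000000FF) * 0.11)
    # Formula (2): keep the transparency (alpha) channel
    a = s & 0xFF000000
    # Formula (3): repack alpha with the gray value in all three channels
    return a | (g << 16) | (g << 8) | g

# Hypothetical fully opaque ARGB pixel 0xFF804020 (R=128, G=64, B=32)
p = to_gray_pixel(0xFF804020)
print(hex(p))   # 0xff4f4f4f (gray value 79 in every channel)
```

Writing the same gray value into all three color channels keeps the pixel in standard ARGB layout, so the grayscale array can be displayed or processed with the same routines as the original image.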
5. The recognition method of claim 2, characterized in that the two-dimensional discrete cosine transform performed on the image pixel array in step 2-2 obtains the transformed attraction/scenery image pixel array as follows:
The formula calculating the two-dimensional discrete cosine transform matrix A(i, j) is as follows:
A(i,j) = c(i)×cos[(2j+1)iπ/(2N)], with c(0) = √(1/N) and c(i) = √(2/N) for i > 0 (4)
Wherein, A(i, j) is the two-dimensional discrete cosine transform matrix; N is the dimension of the grayscale image pixel two-dimensional array p[i, j];
The formula calculating the transformed attraction/scenery image pixel array q[i, j] is as follows:
q[i,j] = A[i,j]·p[i,j]·A^T[i,j] (5)
Wherein, q[i, j] is the transformed attraction/scenery image pixel array and A^T is the transpose of A.
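Formulas (4) and (5) can be checked with a small sketch. Assuming the orthonormal DCT-II basis (consistent with formula (4)), the DC coefficient q[0][0] of the transform equals sum(p)/N, which gives a quick sanity test:

```python
import math

def dct_basis(n):
    # Formula (4): orthonormal DCT-II basis matrix A(i, j)
    return [[(math.sqrt(1.0 / n) if i == 0 else math.sqrt(2.0 / n))
             * math.cos((2 * j + 1) * i * math.pi / (2 * n))
             for j in range(n)] for i in range(n)]

def dct2(p):
    # Formula (5): q = A . p . A^T
    n = len(p)
    a = dct_basis(n)
    ap = [[sum(a[i][k] * p[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[sum(ap[i][k] * a[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

p = [[1, 2], [3, 4]]
q = dct2(p)
# For the orthonormal basis the DC coefficient equals sum(p) / N
print(round(q[0][0], 6))   # 5.0
```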
6. The recognition method of claim 2, characterized in that obtaining the fingerprint information of the attraction/scenery image in step 2-5 is specifically: compare each pixel value in the one-dimensional attraction/scenery image pixel array with the mean pixel value; if a value is greater than or equal to the mean, set it to 1; if it is less than the mean, set it to 0.
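The normalization of claim 6 (thresholding against the mean) can be sketched in isolation; the coefficient values are illustrative assumptions for a transformed one-dimensional array:

```python
coeffs = [267.0, -12.4, 3.1, 0.8, -5.6, 9.9, -0.2, 4.4]   # illustrative DCT output
mean = sum(coeffs) / len(coeffs)                           # step 2-4: mean value
fp = ''.join('1' if v >= mean else '0' for v in coeffs)    # claim 6 thresholding
print(mean, fp)   # only the large DC coefficient exceeds the mean
```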
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410468259.9A CN104298965A (en) | 2014-09-12 | 2014-09-12 | Intelligent scenic spot and scenery recognition system and method oriented to mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104298965A true CN104298965A (en) | 2015-01-21 |
Family
ID=52318686
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410468259.9A Pending CN104298965A (en) | 2014-09-12 | 2014-09-12 | Intelligent scenic spot and scenery recognition system and method oriented to mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104298965A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1620003A (en) * | 2004-11-15 | 2005-05-25 | 北京交通大学 | Anti interference information hidding method based on turbo code and image carrier |
CN101702165A (en) * | 2009-10-30 | 2010-05-05 | 高翔 | Live-action information system and method thereof based on GPS positioning and direction identification technology |
CN101846521A (en) * | 2010-06-04 | 2010-09-29 | 汪海 | Self-service travel mobile terminal and navigation method |
JP2012190244A (en) * | 2011-03-10 | 2012-10-04 | Fujitsu Ltd | Information providing method and information providing device |
CN102915326A (en) * | 2012-08-30 | 2013-02-06 | 杭州藕根科技有限公司 | Mobile terminal scenery identifying system based on GPS (Global Positioning System) and image search technique |
CN103632626A (en) * | 2013-12-03 | 2014-03-12 | 四川省计算机研究院 | Intelligent tour guide realizing method and intelligent tour guide device based on mobile network and mobile client |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106713728A (en) * | 2015-11-12 | 2017-05-24 | 任伟 | Method and system for enhancing scenic spot photographing information |
CN108604298A (en) * | 2016-01-29 | 2018-09-28 | 罗伯特·博世有限公司 | The method of object, especially three dimensional object for identification |
WO2018000299A1 (en) * | 2016-06-30 | 2018-01-04 | Orange | Method for assisting acquisition of picture by device |
CN107742340A (en) * | 2017-10-12 | 2018-02-27 | 湖州华科信息咨询有限公司 | A kind of automatic management method and apparatus of scenic spot guide equipment |
CN108198099A (en) * | 2018-02-06 | 2018-06-22 | 上海尤卡城信息科技有限责任公司 | The guidance method and system of augmented reality, device, server and computer readable storage medium |
CN108563702B (en) * | 2018-03-23 | 2022-02-25 | 美景听听(北京)科技有限公司 | Voice explanation data processing method and device based on exhibit image recognition |
CN108563702A (en) * | 2018-03-23 | 2018-09-21 | 美景听听(北京)科技有限公司 | Speech sound eeplaining data processing method and device based on showpiece image recognition |
CN110856107B (en) * | 2018-08-21 | 2023-08-22 | 上海擎感智能科技有限公司 | Intelligent tour guide method, system, server and vehicle |
CN110856107A (en) * | 2018-08-21 | 2020-02-28 | 上海擎感智能科技有限公司 | Intelligent tour guide method, system, server and vehicle |
CN109737948A (en) * | 2019-01-08 | 2019-05-10 | 青岛一舍科技有限公司 | Real-time positioning system and its localization method in a kind of scenic spot based on image |
CN109765597A (en) * | 2019-03-01 | 2019-05-17 | 广州达安临床检验中心有限公司 | Medicine cold chain localization method, device, equipment and storage medium |
CN112967641A (en) * | 2020-10-22 | 2021-06-15 | 太极计算机股份有限公司 | Automatic identification explanation and enhanced display method for scenic spot based on AR technology |
CN112967641B (en) * | 2020-10-22 | 2023-02-17 | 太极计算机股份有限公司 | Automatic identification explanation and enhanced display method for scenic spot based on AR technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104298965A (en) | Intelligent scenic spot and scenery recognition system and method oriented to mobile terminal | |
CN103913174B (en) | The generation method and system of a kind of navigation information and mobile client and server end | |
US10664708B2 (en) | Image location through large object detection | |
CN107067794B (en) | Indoor vehicle positioning and navigation system and method based on video image processing | |
CN102611747B (en) | Based on guide and the method for protecting of satellite equipment | |
CN107229690A (en) | Dynamic High-accuracy map datum processing system and method based on trackside sensor | |
Xing et al. | Mapping human activity volumes through remote sensing imagery | |
CN110443898A (en) | A kind of AR intelligent terminal target identification system and method based on deep learning | |
CN102829788A (en) | Live action navigation method and live action navigation device | |
CN104145173A (en) | Visual ocr for positioning | |
CN108120436A (en) | Real scene navigation method in a kind of iBeacon auxiliary earth magnetism room | |
CN103827634A (en) | Logo detection for indoor positioning | |
CN102479214B (en) | Based on Bar Code and the localization method of GIS technology and alignment system | |
CN104661300B (en) | Localization method, device, system and mobile terminal | |
CN113239952B (en) | Aerial image geographical positioning method based on spatial scale attention mechanism and vector map | |
Feng et al. | Visual Map Construction Using RGB‐D Sensors for Image‐Based Localization in Indoor Environments | |
CN103593450A (en) | System and method for establishing streetscape spatial database | |
CN209525760U (en) | A kind of vehicle location assistance searching system | |
KR20200025421A (en) | Augmented Reality Based Parking Guidance System in Indoor Parking Lot | |
CN107347209A (en) | Three point on a straight line localization method based on Beacon technologies | |
CN109857826A (en) | A kind of video camera visible range labeling system and its mask method | |
JP2003330953A (en) | Server device, portable terminal, information provision system, information provision method, and information acquisition method | |
CN204613940U (en) | A kind of equipment for reconnaissance trip | |
CN106683473A (en) | Reverse-direction vehicle-finding navigation method and mobile terminal | |
CN207068261U (en) | A kind of indoor vehicle Position Fixing Navigation System based on Computer Vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20150121 |