CN117765839A - Indoor intelligent navigation method, device and storage medium - Google Patents
- Publication number
- CN117765839A CN117765839A CN202311793074.0A CN202311793074A CN117765839A CN 117765839 A CN117765839 A CN 117765839A CN 202311793074 A CN202311793074 A CN 202311793074A CN 117765839 A CN117765839 A CN 117765839A
- Authority
- CN
- China
- Prior art keywords
- audio
- positioning
- video
- position information
- video stream
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention provides an indoor intelligent navigation method, device and storage medium, comprising the following steps: acquiring first positioning information of a real-time mobile body, and performing position calculation in turn with second positioning information of a plurality of indoor fixed bodies to obtain first position information for the first positioning information; acquiring audio and video data corresponding to the first position information, and performing digital signal processing on the audio and video data to obtain an encoded and decoded first audio and video stream; and storing the first audio and video stream in a buffer area, and reading the data in the buffer area with a gradual volume change when subsequently detected second position information differs from the first position information.
Description
Technical Field
The invention relates to the technical field of indoor navigation, in particular to an indoor intelligent navigation method, an indoor intelligent navigation device and a storage medium.
Background
Voice navigation technology has developed against a background spanning several fields, including computer vision, natural language processing, positioning technology, and the spread of mobile devices. Its current popularity rests on the spread of intelligent devices, the maturing of speech and positioning technology, the application of big data and cloud computing, growing demand for barrier-free accessibility, and the diversification of application scenarios; together these factors drive the continuous innovation and improvement of voice navigation technology, providing users with more convenient navigation and information services. Although significant progress has been made, current navigation systems still have drawbacks and challenges. The following are some common ones:
The positioning accuracy of navigation systems in indoor environments is relatively low, because indoor positioning technology is still developing; this makes navigation difficult in indoor places such as shopping malls, hospitals, and airports. Many navigation systems require a stable internet connection to acquire map data, real-time updates, and speech synthesis services, and a missing or unstable connection may affect the usability of the system. In addition, once connected to the internet, navigation systems often require access to the user's location information, which may raise privacy and security concerns and calls for appropriate privacy protection measures.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art, and provides an indoor intelligent navigation method, device and storage medium that achieve accurate positioning and improve the confidentiality of the user's position.
In a first aspect, the present invention provides an indoor intelligent navigation method, including:
acquiring first positioning information of a real-time mobile body, and performing position calculation in turn with second positioning information of a plurality of indoor fixed bodies to obtain first position information for the first positioning information;
acquiring audio and video data corresponding to the first position information, and performing digital signal processing on the audio and video data to obtain an encoded and decoded first audio and video stream;
and storing the first audio and video stream in a buffer area, and reading the data in the buffer area with a gradual volume change when subsequently detected second position information differs from the first position information.
By performing position calculation between the first positioning information of the real-time mobile body and the second positioning information of the fixed bodies, the invention addresses the low indoor positioning accuracy of existing navigation systems. No network connection is needed to acquire position information, so navigation cannot fail because of an unstable network, and the privacy and security problems of fetching position information directly over the internet are avoided. In addition, when the position of the mobile body changes, the data is read out with a gradual volume change, creating an immersive "sound follows you" navigation experience and improving the user's sense of engagement.
With reference to the first aspect, in some embodiments, performing position calculation in turn with the second positioning information of the plurality of indoor fixed bodies to obtain the first position information of the first positioning information includes:
networking the real-time mobile body with a plurality of indoor fixed bodies, establishing a corresponding coordinate system, and obtaining the first position information of the first positioning information from the coordinates of the real-time mobile body in that coordinate system; wherein the first positioning information is the positioning of the real-time mobile body within a corresponding preset area, and each preset area contains a plurality of fixed bodies associated with articles to be explained.
By networking the real-time mobile body with the indoor fixed bodies and deriving its position from coordinates in the established coordinate system, and in contrast to related techniques that obtain position information directly over the network, the method never accesses the user's position information directly and never connects the user directly to the internet. This removes the risk of information leakage and the resulting privacy and security problems, fundamentally strengthens privacy protection, and safeguards the private information of the real-time mobile body, thereby improving the mobile body's indoor sightseeing experience.
With reference to the first aspect, in some embodiments, acquiring the audio and video data corresponding to the first position information includes:
acquiring the first audio and video stream in the preset area corresponding to the first position information, while keeping the second audio and video streams of the remaining preset areas at their defaults.
With reference to the first aspect, in some embodiments, performing digital signal processing on the audio and video data to obtain an encoded and decoded first audio and video stream includes:
processing the sampling rate, audio modulation, and sound effects of the audio data to obtain an encoded and decoded first audio stream;
and compressing and decompressing the video data, enhancing the images and video, detecting and tracking motion, and processing, encoding and decoding the images and video to obtain a first video stream; wherein the first audio and video stream includes the first audio stream and the first video stream.
With reference to the first aspect, in some embodiments, obtaining the encoded first audio and video stream further includes:
performing speech synthesis, through a text input interface, on at least one acquired text and converting it into an audio file; the audio file is used by the real-time mobile body to play speech on demand; wherein the first audio and video stream further comprises the audio file.
With reference to the first aspect, in some embodiments, before detecting that subsequently acquired second position information differs from the first position information, the method includes:
reading the first audio stream, the first video stream, or the text-synthesized audio file from the buffer area as required by the real-time mobile body, and pushing the read data to the media playing device corresponding to the first positioning information for playing.
With reference to the first aspect, in some embodiments, reading the data in the buffer area with a gradual volume change includes:
reading the first audio and video stream from the buffer area and gradually changing its volume on the first media playing device in the preset area corresponding to the first position information, while reading the second audio and video stream from the buffer area and gradually changing its volume on the second media playing device in the preset area corresponding to the second position information.
In a second aspect, the present invention provides an indoor intelligent navigation device, comprising: the system comprises a positioning processing module, a wireless audio processing module, a video processing module, a navigation processing module and a sound-amplifying projection scheduling module; wherein,
the positioning processing module is used for acquiring first positioning information of the real-time mobile body, and performing position calculation in turn with second positioning information of a plurality of indoor fixed bodies to obtain first position information for the first positioning information;
the wireless audio processing module is used for acquiring audio data corresponding to the first position information;
the video processing module is used for acquiring video data corresponding to the first position information; wherein, audio-video data includes: the audio data and video data;
the navigation processing module is used for carrying out digital signal processing on the audio and video data to obtain a first audio and video stream after encoding and decoding;
and the sound-amplifying projection scheduling module is used for storing the first audio and video stream in a buffer area, and reading the data in the buffer area with a gradual volume change when subsequently detected second position information differs from the first position information.
With reference to the second aspect, in some embodiments, performing position calculation in turn with the second positioning information of the plurality of indoor fixed bodies to obtain the first position information of the first positioning information includes:
networking the real-time mobile body with a plurality of indoor fixed bodies, establishing a corresponding coordinate system, and obtaining the first position information of the first positioning information from the coordinates of the real-time mobile body in that coordinate system; wherein the first positioning information is the positioning of the real-time mobile body within a corresponding preset area, and each preset area contains a plurality of fixed bodies associated with articles to be explained.
In a third aspect, the present invention provides a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the indoor intelligent navigation method according to the first aspect.
Drawings
FIG. 1 is a flow chart of an indoor intelligent navigation method according to the present embodiment;
fig. 2 is a schematic structural diagram of an indoor intelligent navigation device according to the present embodiment;
FIG. 3 is a schematic diagram of a complete indoor intelligent navigation device according to the present embodiment;
FIG. 4 is a schematic diagram of a complete indoor intelligent navigation device according to the present embodiment;
fig. 5 is a schematic flow chart of the operation of the wireless audio processing module provided in the present embodiment;
FIG. 6 is a schematic flow chart of the operation of the positioning processing module according to the present embodiment;
fig. 7 is a schematic flow chart of the operation of the sound-amplifying projection scheduling module provided in the present embodiment;
fig. 8 is a schematic flow chart of the operation of the video processing module provided in the present embodiment;
fig. 9 is a schematic diagram showing interaction between each module of the indoor intelligent navigation device according to the present embodiment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the main background elements of current voice navigation technology are as follows:
(1) Popularity of mobile smart devices: the wide spread of smartphones and tablet computers provides a platform for the development of voice navigation technology. These devices are equipped with various sensors such as GPS, accelerometers, gyroscopes, etc. that can be used for positioning and navigation while also having sufficient computing power to support speech synthesis and speech recognition applications.
(2) Maturation of speech technology: speech synthesis and speech recognition techniques have advanced significantly in recent years, becoming more accurate and natural. This enables voice navigation techniques to provide a better user experience through which a user may interact with the navigation system.
(3) Development of positioning technology: the continual improvement in Global Positioning System (GPS) and indoor positioning technology enables voice navigation technology to provide more accurate location information. Indoor positioning technologies, such as Wi-Fi positioning, bluetooth positioning, and RFID technology, extend the range of applicability of navigation technologies, including indoor navigation and location services.
(4) Big data and cloud computing: big data analysis and cloud computing technology provide support for voice navigation. By analyzing large-scale geographic data and user behavior data, the system may provide more intelligent navigation advice and a personalized experience.
(5) Barrier-free and accessibility requirements: As public attention to barrier-free access and accessibility increases, so does the need for voice navigation techniques to aid blind people, visually impaired people, and other groups with special needs. This has motivated development and innovation in this area of technology.
(6) Application scene diversification: the voice navigation technology is not only used for traditional tour guide and map application, but also widely applied to various application scenes such as automatic driving automobiles, intelligent home, intelligent assistants, intelligent glasses and the like so as to provide more diversified navigation and information guidance.
In summary, the development of voice navigation technology benefits from the popularity of mobile devices, advances in voice technology, evolution of positioning technology, and increased social demand for accessibility and intelligent navigation. These factors together motivate the continual innovation and improvement of voice navigation technology, providing more convenient navigation and information services for users.
However, current navigation systems still have some drawbacks and challenges, although significant progress has been made. The following are some common disadvantages of navigation systems:
(1) Indoor positioning problem: the positioning accuracy of navigation systems in indoor environments is relatively low, as indoor positioning technology is still evolving. This may lead to difficulties in navigating indoor sites in large malls, hospitals, airports, etc.
(2) Speech recognition accuracy: Automatic speech recognition still has accuracy problems, especially in noisy environments or with different accents. This may result in a user-provided voice command being misrecognized, producing a wrong navigation instruction.
(3) Degree of naturalness of speech synthesis: although speech synthesis techniques have improved, in some cases, the generated speech may still sound insufficiently natural, lacking in emotion expression. This may affect the user experience.
(4) Relying on internet connection: many navigation systems require a stable internet connection to obtain map data, real-time updates, and speech synthesis services. Lack of an internet connection or an unstable connection may affect the usability of the system.
(5) Privacy and security issues: navigation systems often require access to the user's location information, which can raise privacy and security concerns, requiring appropriate privacy protection measures.
(6) User interface complexity: for some users, the interface using the navigation system may be too complex, especially for those who are not familiar with the technology.
(7) Battery life and resource consumption: Using navigation applications may drain the phone battery faster, because they typically require positioning and an internet connection to remain enabled; this may limit how long a user can navigate.
(8) Not applicable to all users: while navigation systems are very useful in helping blind and visually impaired people, they are not suitable for all users. Some users may prefer visual navigation or other means.
The root causes of these problems include technical limitations, environmental factors, resource requirements, privacy considerations, and diversified user needs. Despite these challenges, continued technical innovation and research effort may help solve these problems, improving performance and user experience of the navigation system.
To address technical limitations, environmental factors, resource requirements, privacy considerations, and diversified user needs, the invention combines advanced audio technology, wireless audio technology, and indoor positioning technology to provide visitors with convenient, fast, interactive, real-time navigation service. The indoor intelligent navigation method, device and storage medium described here are designed for mobile explanation scenarios such as intelligent exhibition halls, museums, and large factories that need guided tours; they provide more intelligent, accurate, and confidential guiding services and build a "sound follows you" immersive guiding atmosphere. The steps of the invention are explained in detail in the following embodiments.
Example 1
Referring to fig. 1, a flow chart of the indoor intelligent navigation method provided in this embodiment, the method includes steps S11 to S13, specifically as follows:
step S11, acquiring first positioning information of a real-time mobile main body, and sequentially performing position calculation with second positioning information of a plurality of indoor fixed main bodies to obtain first position information of the first positioning information.
It should be noted that the real-time mobile body may be a user who needs navigation, and the plurality of fixed bodies may be a plurality of indoor articles to be explained, or a plurality of indoor fixed modules in one-to-one correspondence with those articles, each fixed module holding the position information of its article. The first positioning information is the preset area where the real-time mobile body is located, and each preset area contains at least one article to be explained; the second positioning information is the fixed positions of the plurality of indoor fixed bodies; and the first position information is the position of the real-time mobile body after it joins, according to the preset area it is in, the network of at least one fixed body.
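The entity relationships described above (mobile body, fixed bodies, preset areas) can be sketched with a few data classes. This is an illustrative model only; all class and field names are assumptions, not terms from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FixedBody:
    # Indoor fixed body: its known coordinates play the role of the
    # "second positioning information".
    body_id: str
    x: float
    y: float

@dataclass
class PresetArea:
    # A preset area contains at least one article to be explained,
    # each represented here by a FixedBody.
    area_id: str
    fixed_bodies: List[FixedBody]

@dataclass
class MobileBody:
    # Real-time mobile body (the visitor). area_id models the
    # "first positioning information" (which preset area it is in),
    # and (x, y) the resolved "first position information".
    body_id: str
    area_id: Optional[str] = None
    x: Optional[float] = None
    y: Optional[float] = None
```

A mobile body starts with no resolved position; the positioning step fills in `area_id` and coordinates.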
In some embodiments, performing position calculation in turn with the second positioning information of the plurality of indoor fixed bodies to obtain the first position information of the first positioning information includes: networking the real-time mobile body with a plurality of indoor fixed bodies, establishing a corresponding coordinate system, and obtaining the first position information of the first positioning information from the coordinates of the real-time mobile body in that coordinate system; wherein the first positioning information is the positioning of the real-time mobile body within a corresponding preset area, and each preset area contains a plurality of fixed bodies associated with articles to be explained.
Notably, by networking the real-time mobile body with the indoor fixed bodies and deriving its position from coordinates in the established coordinate system, this embodiment, unlike related techniques that obtain position information directly over the network, never accesses the user's position information directly and never connects directly to the internet. It thereby avoids the risk of user information leakage and the resulting privacy and security problems, fundamentally strengthens privacy protection, and safeguards the private information of the real-time mobile body. Moreover, positioning against the coordinate system can be accurate to within 10 cm, which greatly improves positioning accuracy and so improves the mobile body's indoor sightseeing experience.
In some embodiments, the positioning in this embodiment may instead use Wi-Fi positioning, Bluetooth positioning, RFID, ultrasonic positioning, inertial navigation, visual positioning, millimeter-wave radar, UWB positioning, or similar techniques; the indoor positioning methods of this embodiment include, but are not limited to, any technique achieving the same effect.
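The patent does not fix a particular solver for the position calculation against the fixed bodies, but a common choice with ranging technologies such as UWB is 2-D trilateration from distances to three anchors with known coordinates. The following is a minimal sketch under that assumption (anchor layout and function name are illustrative):

```python
def trilaterate(anchors, distances):
    """Solve a 2-D position from distances to three fixed bodies (anchors).

    anchors: [(x1, y1), (x2, y2), (x3, y3)] with known coordinates.
    distances: [d1, d2, d3] measured ranges to each anchor.

    Subtracting the circle equation of anchor 1 from those of anchors 2
    and 3 cancels the quadratic terms and leaves a 2x2 linear system
    A @ [x, y] = b, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("anchors are collinear; position is not unique")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With anchors at (0, 0), (10, 0), (0, 10) and ranges measured from the point (3, 4), the solver recovers (3, 4); real deployments would use more anchors and a least-squares fit to absorb ranging noise.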
Step S12: acquiring the audio and video data corresponding to the first position information, and performing digital signal processing on the audio and video data to obtain an encoded and decoded first audio and video stream.
In some embodiments, acquiring the audio and video data corresponding to the first position information includes: acquiring the first audio and video stream in the preset area corresponding to the first position information, while keeping the second audio and video streams of the remaining preset areas at their defaults.
In some embodiments, performing digital signal processing on the audio and video data to obtain an encoded and decoded first audio and video stream includes: processing the sampling rate, audio modulation, and sound effects of the audio data to obtain an encoded and decoded first audio stream; and compressing and decompressing the video data, enhancing the images and video, detecting and tracking motion, and processing, encoding and decoding the images and video to obtain a first video stream; wherein the first audio and video stream includes the first audio stream and the first video stream.
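Of the audio processing steps listed above, sample-rate conversion is the easiest to illustrate. The sketch below is a toy linear-interpolation resampler, not the patent's actual DSP pipeline; production code would use a polyphase or windowed-sinc filter to avoid aliasing.

```python
def resample_linear(samples, src_rate, dst_rate):
    """Convert a mono sample sequence from src_rate to dst_rate (Hz)
    by linear interpolation between neighbouring input samples."""
    if not samples:
        return []
    n_out = max(1, int(round(len(samples) * dst_rate / src_rate)))
    # Map each output index back to a fractional input position.
    step = (len(samples) - 1) / max(1, n_out - 1)
    out = []
    for i in range(n_out):
        pos = i * step
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out
```

Upsampling a 4-sample ramp from 4 Hz to 8 Hz yields an 8-sample ramp with the same endpoints.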
In some embodiments, obtaining the encoded first audio and video stream further includes: performing speech synthesis, through a text input interface, on at least one acquired text and converting it into an audio file; the audio file is used by the real-time mobile body to play speech on demand; the first audio and video stream further comprises the audio file.
In some embodiments, the first audio video stream comprises: a first audio stream, a first video stream, and an audio file.
In some embodiments, the first audio stream may be mixed with the first video stream and output.
In some embodiments, speech synthesis is performed by a Text-To-Speech (TTS) technique.
It is worth noting that this embodiment adopts up-to-date speech synthesis technology, providing natural and smooth synthesized speech and supporting multiple languages and speech styles; this improves the naturalness of speech synthesis and the mobile body's indoor visiting experience.
In some embodiments, the wireless audio techniques include, but are not limited to, wireless U-band transmission, 2.4G wireless transmission, 5G wireless transmission, or other techniques achieving the same effect.
Step S13: storing the first audio and video stream in a buffer area, and reading the data in the buffer area with a gradual volume change when subsequently detected second position information differs from the first position information.
In some embodiments, before detecting that the second position information differs from the first position information, the method includes: reading the first audio stream, the first video stream, or the text-synthesized audio file from the buffer area as required by the real-time mobile body, and pushing the read data to the media playing device corresponding to the first positioning information for playing.
In some embodiments, reading the data in the buffer area with a gradual volume change includes: reading the first audio and video stream from the buffer area and gradually changing its volume on the first media playing device in the preset area corresponding to the first position information, while reading the second audio and video stream from the buffer area and gradually changing its volume on the second media playing device in the preset area corresponding to the second position information.
In some embodiments, obtaining the second position information includes: acquiring second positioning information of the real-time mobile body in real time, and performing position calculation in turn with the second positioning information of the indoor fixed bodies to obtain the second position information for that positioning.
In some embodiments, coordinates of the real-time moving body in a coordinate system are acquired in real time, and second position information of the second positioning information is obtained.
In some embodiments, one media playing device corresponds to one preset area.
It should be noted that the first media playing device is a media playing device corresponding to the real-time mobile body at the first position information, and the second media playing device is another media playing device corresponding to the mobile body at the second position information.
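The mapping from a resolved coordinate to a preset area, and from a preset area to its single media playing device, can be sketched as follows. Rectangular zones and the dictionary-based lookup are assumptions for illustration; any point-in-region test and device registry would do.

```python
def locate_preset_area(x, y, areas):
    """Return the id of the preset area whose rectangle contains (x, y),
    or None if the point lies in no area.

    areas: dict mapping area_id -> (xmin, ymin, xmax, ymax).
    Half-open intervals are used so a point on a shared boundary
    belongs to exactly one area.
    """
    for area_id, (xmin, ymin, xmax, ymax) in areas.items():
        if xmin <= x < xmax and ymin <= y < ymax:
            return area_id
    return None

def select_media_device(area_id, device_of_area):
    # One media playing device per preset area, as in the embodiment.
    return device_of_area.get(area_id)
```

As the mobile body's coordinates update, a change in the returned area id is exactly the "second position information differs from the first" condition that triggers the volume crossfade.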
In some embodiments, the media playing device is used together with a display screen; when the mobile body moves into a preset area configured with explanations or other interactions, the preset video or interaction is pushed, according to the mobile body's first position information, to the display screen or other display device in the preset area where the mobile body is located.
In some embodiments, when the real-time mobile body leaves the current preset area and is about to enter the next one, the sound of the current area's playing device starts to fade out while the sound of the next area's device starts to fade in. A transition buffer zone is also arranged between preset areas to keep the fade smooth and continuous, achieving the "sound follows you" effect.
It is worth noting that this embodiment does not target specific users: the audio/video explanation design suits different scenario applications and all user groups, giving it broad applicability. Playing audio in a fade-in/fade-out manner as the real-time moving body moves between preset areas provides the immersive impression of the sound following the person, which in turn improves the moving body's indoor sightseeing experience.
In some embodiments, according to the requirement of the real-time moving body, a first audio stream, a first video stream or an audio file synthesized from text is read from the buffer, and the read data is pushed to the media playing device corresponding to the first position information to be played at an initial volume. When second position information different from the first position information is detected, the volume is updated from the initial volume at a preset fade-out rate, and the updating stops once the volume is not higher than a minimum threshold; the preset fade-out rate takes values in (0, 1). The volume is then updated at a preset fade-in rate until the initial volume is reached; the preset fade-in rate takes values in (1, 1 + the preset fade-out rate).
In some embodiments, the minimum threshold is 20dB.
In some implementations, the volume update at the preset fade-out rate may be expressed as:
s0(t+1) = s0(t) * α,
where α is the preset fade-out rate, and s0(t+1) and s0(t) are the volumes at times t+1 and t, respectively.
In some embodiments, α ∈ [0.5, 1).
In some embodiments, the volume update at the preset fade-in rate may be expressed as:
s0(t+1) = s0(t) * β,
where β is the preset fade-in rate, and s0(t+1) and s0(t) are the volumes at times t+1 and t, respectively.
It should be noted that during fade-out the volume is not decreased at a constant speed but is scaled by the same rate at each step; similarly, during fade-in the volume is scaled up by the same rate rather than increased linearly.
In some embodiments, β ∈ (1, 1+α).
In some embodiments, β=1/α.
Illustratively, with an initial volume of 80 dB, a minimum threshold of 20 dB and a preset fade-out rate of 0.75, the fade-out passes through 60 dB, 45 dB, 33.75 dB and about 25.31 dB before stopping at about 18.98 dB after five steps; the fade-in then starts at a preset fade-in rate of about 1.33 and passes through about 25.31 dB, 33.75 dB, 45 dB and 60 dB back to the initial volume of 80 dB, where the initial volume does not exceed the maximum volume (e.g., 101 dB).
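The worked numbers above can be checked with a short sketch of the geometric fade rule s0(t+1) = s0(t) * α (fade-out) and s0(t+1) = s0(t) * β (fade-in). This is an illustrative reconstruction of the formulas, not the patented implementation.

```python
def fade_out(volume: float, alpha: float, floor: float) -> list[float]:
    """Scale the volume by the fade-out rate alpha each step; stop once
    the volume is no longer above the minimum threshold (floor)."""
    steps = []
    while volume > floor:
        volume *= alpha
        steps.append(volume)
    return steps

def fade_in(volume: float, beta: float, ceiling: float) -> list[float]:
    """Scale the volume by the fade-in rate beta each step, capped at the
    initial volume (ceiling)."""
    steps = []
    while volume < ceiling - 1e-9:
        volume = min(volume * beta, ceiling)
        steps.append(volume)
    return steps

out = fade_out(80.0, 0.75, 20.0)
# out ≈ [60.0, 45.0, 33.75, 25.31, 18.98] — five steps, as in the example
back = fade_in(out[-1], 1 / 0.75, 80.0)
# back climbs through ≈ 25.31, 33.75, 45.0, 60.0 back to 80.0
```

With β = 1/α (here 1/0.75 ≈ 1.33), the fade-in retraces the fade-out steps in reverse, which matches the dB sequence in the text.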
In some embodiments, if the fade-out completes before the data read from the buffer has been fully read, the first audio/video stream corresponding to the current data is skipped and deleted from the buffer, and the next, second audio/video stream is read when the fade-in is performed.
It is worth noting that the independently developed network audio transmission technology supports lossless audio transmission with a playback delay below 5 ms.
In some embodiments, the different audio-video streams are stored in a circular queue.
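A minimal sketch of such a circular queue, using Python's bounded deque. The capacity, stream labels and method names are assumptions for illustration; the patent only states that the different audio/video streams are stored in a circular queue.

```python
from collections import deque

class StreamRing:
    """Bounded ring of pending audio/video streams: when full, pushing a
    new stream overwrites the oldest one, as in a circular queue."""

    def __init__(self, capacity: int):
        self._ring = deque(maxlen=capacity)

    def push(self, stream) -> None:
        self._ring.append(stream)

    def current(self):
        return self._ring[0] if self._ring else None

    def skip_current(self):
        """Drop the current stream without finishing it — mirrors deleting
        a partially read first stream from the buffer after a fade-out."""
        return self._ring.popleft() if self._ring else None

ring = StreamRing(capacity=3)
for s in ("stream-A", "stream-B", "stream-C", "stream-D"):
    ring.push(s)  # "stream-A" is overwritten once "stream-D" arrives
```

`deque(maxlen=...)` gives the overwrite-oldest behavior of a ring buffer for free, so no manual index arithmetic is needed.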
According to this embodiment, performing position calculation between the first positioning information of the real-time moving body and the second positioning information of the fixed bodies addresses the low indoor positioning accuracy of existing navigation systems. Moreover, no network connection is needed to obtain position information, which avoids navigation failures caused by an unstable network as well as the privacy and security problems of obtaining position information directly over the Internet. In addition, when the position of the moving body changes, data is read with a gradual volume change, creating an immersive navigation experience in which the sound follows the person and improving the overall experience.
Example 2
Referring to fig. 2, a schematic structural diagram of the indoor intelligent navigation device according to the present embodiment is shown. The device includes: a positioning processing module 21, a wireless audio processing module 22, a video processing module 25, a navigation processing module 23 and a sound-amplifying projection scheduling module 24.
The positioning processing module 21 is mainly used for positioning the real-time moving body and transmitting the obtained first position information to the wireless audio processing module 22. After receiving the first position information, the wireless audio processing module 22 collects the audio data corresponding to it and transmits the data to the navigation processing module 23; the video processing module 25 collects the video data corresponding to the first position and transmits it to the navigation processing module 23; the navigation processing module 23 performs digital signal processing on the received audio and video data and transmits the resulting first audio/video stream to the sound-amplifying projection scheduling module 24; after receiving the first audio/video stream, the sound-amplifying projection scheduling module 24 stores it in the buffer and reads and plays the data in the buffer in a fade-in/fade-out manner.
The positioning processing module 21 is configured to obtain first positioning information of the real-time moving body, and perform position calculation with second positioning information of a plurality of indoor fixed bodies in sequence, so as to obtain first position information of the first positioning information.
In some embodiments, sequentially performing position calculation with the second positioning information of the multiple indoor fixed bodies to obtain the first position information of the first positioning information includes: networking the real-time moving body with the multiple indoor fixed bodies, establishing a corresponding coordinate system, and obtaining the first position information from the coordinates of the real-time moving body in that coordinate system; the first positioning information is the positioning of the real-time moving body in the corresponding preset area, and a preset area contains a plurality of fixed bodies to be explained.
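The step from a resolved coordinate to its preset area can be sketched as a simple containment lookup. The area names and rectangular shapes below are assumptions for illustration; the patent does not specify area geometry.

```python
# name -> (x_min, y_min, x_max, y_max) in the coordinate system built by
# networking the moving body with the fixed bodies (layout assumed).
PRESET_AREAS = {
    "explanation-area-1": (0.0, 0.0, 5.0, 4.0),
    "explanation-area-2": (5.0, 0.0, 10.0, 4.0),
}

def locate_preset_area(x: float, y: float):
    """Return the name of the preset area containing (x, y), or None
    when the moving body is outside every preset area."""
    for name, (x0, y0, x1, y1) in PRESET_AREAS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None
```

Half-open bounds (`<=` low, `<` high) ensure a point on a shared edge belongs to exactly one area, so the scheduler never addresses two areas at once.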
The wireless audio processing module 22 is configured to collect audio data corresponding to the first location information.
The video processing module 25 is configured to collect video data corresponding to the first location information; wherein, audio-video data includes: the audio data and video data.
In some embodiments, the audio data and the video data are transmitted to the navigation processing module 23 separately.
In some embodiments, based on the first position information, if only audio is required, only audio data is collected by the wireless audio processing module 22; if only video is required, only video data is collected by the video processing module 25.
The navigation processing module 23 is configured to perform digital signal processing on the audio/video data to obtain an encoded and decoded first audio/video stream.
In some embodiments, collecting the audio/video data corresponding to the first position information includes: acquiring the first audio/video stream through the preset area corresponding to the first position information while keeping the second audio/video streams of the remaining preset areas at their defaults.
In some embodiments, performing digital signal processing on the audio and video data to obtain a first audio and video stream after encoding and decoding, including: processing the sampling rate, the audio modulation and the sound effect of the audio data to obtain a first audio stream after encoding and decoding;
in some embodiments, video data is compressed and decompressed, image and video enhanced, motion detected and tracked, image and video processed, and image and video codec processed to obtain a first video stream; wherein, the first audio-video stream includes: the first audio stream and the first video stream.
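As one concrete instance of the image-and-video enhancement step, a linear contrast stretch can be sketched in a few lines. This is an illustrative example only; the patent does not name the enhancement algorithms used.

```python
def stretch_contrast(pixels: list[int]) -> list[int]:
    """Linearly remap pixel intensities to the full 0-255 range, a basic
    image-enhancement operation for dim, low-contrast frames."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # flat frame: nothing to stretch
        return [0] * len(pixels)
    return [round(255 * (p - lo) / (hi - lo)) for p in pixels]

# A dim, low-contrast row of pixels gains full dynamic range:
row = [100, 110, 120, 130]
enhanced = stretch_contrast(row)  # → [0, 85, 170, 255]
```

In practice this would run per-channel on whole frames (e.g., via an image library); the scalar version above just makes the arithmetic of the enhancement visible.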
In some embodiments, obtaining the encoded first audio/video stream further includes: performing speech synthesis on at least one text acquired through the text input interface and converting it into an audio file; the audio file allows the real-time moving body to play speech on demand; the first audio/video stream further includes this audio file.
The sound-amplifying projection scheduling module 24 is configured to store the first audio/video stream into a buffer and, when second position information different from the first position information is detected, read the data in the buffer in a manner of gradual volume change.
In some embodiments, before the second position information different from the first position information is detected, the method includes: reading the first audio stream, the first video stream or the text-synthesized audio file from the buffer according to the requirement of the real-time moving body, and pushing the read data to the media playing device corresponding to the first positioning information for playing.
In some embodiments, reading the data in the buffer in a manner of gradual volume change includes: reading the first audio/video stream from the buffer and fading out its volume on the first media playing device in the preset area corresponding to the first position information, then reading the second audio/video stream from the buffer and fading in its volume on the second media playing device in the preset area corresponding to the second position information.
In some embodiments, referring to fig. 3 and fig. 4, fig. 3 is a schematic structural diagram of the complete indoor intelligent navigation device provided by the present embodiment, and fig. 4 is a schematic flow chart of its operation. In fig. 3, the navigation processing module 23 generates a coordinate system from the positioning system formed by the positioning networking units 101 of the positioning processing module 21; the positioning operation unit 312 of the navigation processing module 23 then processes the positioning measurement data between the positioning beacon unit 203 of the wireless audio processing module 22 and the positioning networking units 101 to calculate the user's accurate indoor position information. Task scheduling is then performed according to the audio data returned by the wireless audio processing module 22 or the video data collected by the video processing module, and the corresponding audio or video is pushed to the sound-amplifying projection scheduling module 24.
Finally, according to the position information pushed by the navigation processing module 23, the sound-amplifying projection scheduling module 24 pushes the audio or video to the corresponding devices at the designated position for sound amplification, projection or screen casting. The sound-amplifying projection scheduling module 24 mainly receives the scheduling information pushed by the navigation processing module 23 and schedules the sound amplification and projection of audio and video in different places accordingly; the positioning processing module 21 mainly builds the positioning network, generates the coordinate system, and pushes the positioning measurement data of each positioning unit in the network to the navigation processing module 23 so that the user's accurate positioning information can be calculated; the wireless audio processing module 22 mainly collects audio, such as the real-time explanation audio of explanation personnel, and forwards it to the navigation processing module 23; the video processing module mainly collects video data and sends the video stream data to the navigation processing module 23 for processing.
In some embodiments, after the task management unit 308 judges the operation on text entered in advance by the user (adding, deleting, modifying or searching), the text is converted into an audio file through the TTS voice unit 307, or voice conversion is performed to obtain sound sources in different languages or from different speakers; the sound sources are then stored in the storage unit 310. The speakers are pre-designated.
In some embodiments, after any of the add, delete, modify or search operations is performed on a document pre-imported by the user, a corresponding audio file or a sound source in another language is generated.
In some embodiments, the collection object of wireless audio is a wireless microphone; in general, the real-time audio of the microphone is transmitted wirelessly to the acquisition unit.
In some embodiments, the wireless audio is acquired by wireless technology, and the overall acquisition process includes: audio acquisition, audio transmission and audio reception.
In some embodiments, the navigation processing module 23, the sound-amplifying projection scheduling module 24, the positioning processing module 21 and the wireless audio processing module 22 may be integrated into one device, although the framework of the whole navigation system then becomes complex, because the wireless audio acquisition unit 201, the sound-amplifying projection scheduling module 24 and the positioning units involve related devices that are not limited to a single device and may be multiple devices.
In some embodiments, the navigation processing module 23 includes: a wireless audio receiving unit 301, a DSP (Digital Signal Processing) processing unit 302, an audio codec unit, a video receiving unit 304, a video codec unit, a network audio/video transmitting unit 306, a TTS voice unit 307, a task management unit 308, a task scheduling unit 309, a storage unit 310, a network communication unit 311, and a positioning operation unit 312.
In some embodiments, the wireless audio receiving unit 301 is configured to receive audio data sent by the wireless audio sending unit 202 of the wireless audio processing module 22.
In some embodiments, the DSP processing unit 302 is configured to perform DSP processing on the audio source collected by the wireless audio processing module 22 or the video collected by the video processing module; audio processing includes, but is not limited to, sampling rate, audio modulation and sound effect processing; video processing includes, but is not limited to, compression and decompression, image and video enhancement, motion detection and tracking, image and video processing algorithms, and image and video codec.
In some embodiments, the audio codec unit is mainly configured to encode and decode the audio processed by the DSP processing unit 302; the video receiving unit 304 is mainly configured to receive the video stream data sent by the video sending unit 502 of the video processing module; the video codec unit is mainly used to encode and decode the video stream data received by the video receiving unit 304; the network audio/video transmitting unit 306 mainly puts the encoded and decoded audio or video into a transmitting buffer and pushes it to the sound-amplifying projection scheduling module 24; the TTS voice unit 307 mainly converts text information into a natural, fluent audio file through speech synthesis technology; the task management unit 308 is mainly responsible for task scheduling, task allocation, task state management, and task creation and modification; task allocation means providing the user with a customized navigation task according to the user's requirements, including the explanation audio of the selected explanation area, the explanation video, and the explanation audio synthesized by the TTS voice unit 307; in addition, the task management unit 308 provides the text input interface of the TTS voice unit 307; the task scheduling unit 309 is configured to schedule the tasks customized by the user.
In some embodiments, the storage unit 310 is a storage medium that mainly stores related media files such as preset audio, TTS-generated audio, and audio/video; the network communication unit 311 mainly handles the communication data between the modules, including those inside the navigation processing module 23. The positioning operation unit 312 includes a position resolving unit and a position navigation unit: the position resolving unit resolves the positioning information between the positioning units and the positioning beacon unit 203 to obtain the position information of the positioning beacon unit 203 and sends the final position information to the task scheduling unit 309 for task scheduling and processing; the position navigation unit mainly generates the position coordinates of the positioning units of the positioning networking unit 101 of the positioning processing module 21 to provide a position navigation interface.
In some embodiments, the positioning processing module 21 includes: the positioning networking unit 101 and the network communication unit 102. The positioning networking unit 101 includes positioning units and synchronization units; the positioning units mainly complete networking with the positioning beacon unit 203 in the wireless audio processing module 22, and the synchronization units mainly synchronize position and time information between the positioning units.
In some embodiments, the wireless audio processing module 22 includes: a wireless audio acquisition unit 201, a wireless audio transmission unit 202, and a positioning beacon unit 203. The audio acquisition unit mainly comprises a wireless transmitting unit and a wireless receiving unit and mainly acquires the audio sources of wireless devices such as a wireless microphone: the wireless transmitting unit acquires the microphone audio, packages it, performs signal modulation, and transmits it to the wireless receiving unit; the wireless receiving unit first performs signal demodulation and finally forwards the received data packet to the navigation processing module 23. The positioning beacon unit 203 is integrated in the wireless audio acquisition unit 201, but is not limited to this arrangement and can be worn independently by the user; its main function is to communicate with the positioning networking unit 101 of the positioning processing module 21, after which the positioning operation unit 312 of the navigation processing module 23 calculates the user's actual position information. Referring to fig. 5, a schematic flow chart of the operation of the wireless audio processing module provided in this embodiment is shown.
In some embodiments, when the positioning networking unit 101 of the positioning processing module 21 is first deployed, networking is completed and a coordinate system is built. When a user carrying the positioning beacon unit walks between indoor explanation areas, the positioning beacon unit 203 communicates with the several positioning units of the positioning networking unit 101 deployed in those areas to measure distances; the synchronization unit then synchronizes and pushes the interaction data between the positioning beacon unit 203 and the positioning units to the positioning operation unit 312, which, after receiving the data, calculates the position of the positioning beacon unit 203 in the coordinate system. Referring to fig. 6, a flow chart of the operation of the positioning processing module provided in this embodiment is shown.
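The ranging-then-solving step can be sketched with standard linearized trilateration for three fixed positioning units. This is an assumed method: the patent describes ranging between the beacon and positioning units but does not name the solving algorithm.

```python
import math

def trilaterate(anchors, distances):
    """Solve for the beacon's (x, y) from ranges to three fixed anchors by
    subtracting the first circle equation from the other two, which yields
    a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # non-zero when anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Anchors at known positions; exact ranges from the true point (1, 1):
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pt = (1.0, 1.0)
dists = [math.dist(true_pt, a) for a in anchors]
x, y = trilaterate(anchors, dists)  # ≈ (1.0, 1.0)
```

With noisy real-world ranges one would use more anchors and a least-squares fit, but the three-anchor case shows the geometry of the position calculation.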
In some embodiments, the sound-amplifying projection scheduling module 24 comprises: a network audio/video receiving unit 401, a video codec processing unit 402, an audio codec processing unit 403, and a playback unit combination 404. The network audio/video receiving unit 401 mainly puts the audio and video data pushed by the navigation processing module 23 into a receiving buffer; the video codec processing unit 402 mainly decodes the video stream data in the receiving buffer and pushes it to the playback unit combination 404; the audio codec processing unit 403 mainly decodes the audio stream data in the receiving buffer and pushes it to the playback unit combination 404; the playback unit combination 404 refers to the playback devices that play the audio and video. Referring to fig. 7, a schematic flow chart of the operation of the sound-amplifying projection scheduling module provided in this embodiment is shown.
In some embodiments, the positioning beacon unit 203 is integrated in the wireless audio acquisition unit 201, specifically in the wireless microphone, so that the wireless microphone itself can be positioned.
It should be noted that integration refers to hardware integration, and the positioning beacon unit 203 is also essentially a wireless device.
In some embodiments, the wireless audio processing module 22 is a device integrating the hardware of its sub-units, the positioning processing module 21 is a device integrating the hardware of its positioning networking unit 101 and the network communication unit 102, the video processing module 25 is a device integrating the hardware of the video acquisition unit 501 and the video transmission unit 502, the sound-amplifying projection scheduling module 24 is a device integrating the hardware of the network audio/video receiving unit 401, the video codec processing unit 402, the audio codec unit 403 and the play unit combination 404, and the navigation processing module 23 is a device integrating the hardware of its internal 12 sub-units.
In some embodiments, the wireless audio processing module 22 and the positioning processing module 21 are not located on the same device; the wireless audio acquisition unit 201 and the positioning units involve related devices that are not limited to a single device and may be multiple devices.
In some embodiments, the positioning beacon unit 203 may be integrated into the wireless audio acquisition unit 201, or may be a separate positioning beacon device, and worn around the user.
In some embodiments, referring to fig. 8, a schematic flow chart of the operation of the video processing module provided in this embodiment is shown. The video processing module 25 includes a video acquisition unit 501 and a video sending unit 502. The video acquisition unit 501 captures video, including real-time video from a monitoring camera, but is not limited to camera capture and may also capture from other video devices. The video sending unit 502 mainly sends the video data acquired by the video acquisition unit 501 to the navigation processing module 23 for data processing and task scheduling. In the whole indoor intelligent navigation device, video is used mainly in two ways: acquiring and displaying explanation video, and collecting and pushing monitoring-camera video. Explanation video mainly improves the navigation effect of the explanation area and enhances the visual and auditory experience of the user in the explanation area. Collecting and pushing monitoring-camera video mainly strengthens the security and response efficiency of the system's security linkage: through the positioning beacon worn by the user, or the positioning beacon unit 203 integrated on the wireless microphone, the real-time video of the camera in the user's area can be obtained, and integrating multiple security devices enhances the security linkage to achieve omnidirectional, multi-layer security monitoring.
Example 3
Based on embodiment 2, referring to fig. 9, an interaction schematic diagram of the modules of the indoor intelligent navigation device provided in this embodiment is shown. The indoor intelligent navigation device realizes different navigation effects according to the different requirements of each preset area, i.e., the different effects the user wants to achieve there. These can be roughly divided into: occasions requiring only microphone explanation; occasions requiring only prerecorded explanation audio; occasions requiring prerecorded explanation audio plus flexible microphone explanation; occasions requiring only prerecorded explanation video; and occasions requiring prerecorded explanation video plus flexible microphone explanation.
When only microphone explanation is needed, after the user walks to the explanation area, the audio of the wireless microphone is collected directly and pushed to the loudspeaker in the explanation area.

When only prerecorded explanation audio is needed, after the user walks to the explanation area, the prerecorded explanation audio is pushed to the loudspeaker of the explanation area.

When prerecorded explanation audio plus flexible microphone explanation is needed, after the user walks to the explanation area, the audio of the wireless microphone is collected and, together with the prerecorded audio, pushed to the loudspeaker of the explanation area.

When only prerecorded explanation video is needed, after the user walks to the explanation area, the prerecorded explanation video is pushed to the display of the explanation area.

When prerecorded explanation video plus flexible microphone explanation is needed, after the user walks to the explanation area, the prerecorded explanation video is pushed to the display of the explanation area while the collected microphone audio is pushed to the loudspeaker. The explanation area is a preset area.
Taking the case where only microphone explanation is needed as an example, the wireless audio processing module 22 mainly collects the audio and positioning information as the user explains in the indoor explanation area. The indoor explanation area is the range of a preset area, set in advance by the user, where explanation is needed; the explanation audio comprises preset explanation audio or real-time microphone audio, and the explanation video mainly refers to preset explanation video. The positioning beacon unit 203 may be integrated with the wireless audio acquisition unit 201, but is not limited to this arrangement and may be worn independently by the user, so as to obtain the user's positioning information at a specific indoor location, which is then combined with the wireless audio acquisition unit 201 and the video acquisition unit 501.
After the user walks to an indoor preset area, on the one hand, preset audio or real-time microphone audio can be acquired and, via the DSP processing unit 302 of the navigation processing module, pushed to the area where the user is located, achieving the effect of the sound following the person and increasing the immersive explanation experience. On the other hand, the preset explanation video can be pushed through the navigation processing module 23 to the video playing unit of the user's area. Throughout this process, the positioning beacon unit 203 first communicates with the positioning networking unit 101 of the positioning processing module 21 to confirm networking and construct the coordinate system; the positioning operation unit 312 then performs position calculation and pushes the resulting position information to the task scheduling unit 309 for task matching and scheduling; finally the relevant audio and video are pushed to the sound-amplifying projection scheduling module 24.
According to the position information, the sound-amplifying projection scheduling module 24 pushes the relevant audio and video only to the playing devices in the user's surrounding area, while the other areas keep their default sound. When the user leaves the current preset area to enter the next one, the sound of the playing device in the current area starts to fade out and the sound of the device in the next preset area starts to fade in. A fade buffer zone is also arranged between preset areas to ensure that the sound is smooth and continuous during the transition, achieving the effect of the sound following the person.
Example 4
The present embodiment provides a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the indoor intelligent navigation method according to the first aspect.
According to the method, the real-time moving body is networked with multiple indoor fixed bodies and its position is obtained from its coordinates in the established coordinate system. Compared with related technologies that obtain position information directly over a network, this approach does not access the user's position information directly and is not connected to the Internet, avoiding the risk of user information leakage and the resulting privacy and security problems. It fundamentally strengthens user privacy protection, protecting the private information of the real-time moving body, and improves the moving body's indoor sightseeing experience.
It will be appreciated by those skilled in the art that embodiments of the present application may also provide a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.
Claims (10)
1. An indoor intelligent navigation method is characterized by comprising the following steps:
acquiring first positioning information of a real-time mobile body, and sequentially performing position calculation against second positioning information of a plurality of indoor fixed bodies to obtain first position information corresponding to the first positioning information;
acquiring audio/video data corresponding to the first position information, and performing digital signal processing on the audio/video data to obtain an encoded and decoded first audio/video stream;
and storing the first audio/video stream in a buffer area, and, when newly detected second position information is different from the first position information, reading the data in the buffer area in a volume-fade manner.
2. The indoor intelligent navigation method according to claim 1, wherein the sequentially performing position calculation with the second positioning information of the plurality of indoor fixed bodies to obtain the first position information of the first positioning information comprises:
networking the real-time mobile body with a plurality of indoor fixed bodies, establishing a corresponding coordinate system, and obtaining the first position information of the first positioning information from the coordinates of the real-time mobile body in the coordinate system; wherein the first positioning information is the positioning of the real-time mobile body in a corresponding preset area, and each preset area contains a plurality of fixed bodies to be narrated.
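The claim specifies the position calculation only at the feature level. As an illustrative sketch only (the patent does not disclose a concrete algorithm), the coordinates of the mobile body in the established coordinate system could be recovered from ranging measurements to three fixed bodies by closed-form 2-D trilateration:

```python
def trilaterate(anchors, distances):
    """Estimate the 2-D position of the mobile body from its distances
    to three fixed bodies at known coordinates (anchors must not be
    collinear, or the linear system below is singular)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Subtracting the first circle equation from the other two
    # linearizes the problem into a 2x2 system A . p = b.
    a = [[2 * (x2 - x1), 2 * (y2 - y1)],
         [2 * (x3 - x1), 2 * (y3 - y1)]]
    b = [r1 ** 2 - r2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2,
         r1 ** 2 - r3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    y = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return x, y
```

With more than three fixed bodies, the same linearization extends to an overdetermined least-squares fit, which is more robust to ranging noise.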
3. The indoor intelligent navigation method according to claim 1, wherein the acquiring the audio/video data corresponding to the first position information comprises:
acquiring a first audio/video stream for the preset area corresponding to the first position information, and keeping the second audio/video streams of the remaining preset areas at their defaults.
4. The indoor intelligent navigation method according to claim 1, wherein the performing digital signal processing on the audio/video data to obtain the encoded and decoded first audio/video stream comprises:
processing the sampling rate, audio modulation, and sound effects of the audio data to obtain an encoded and decoded first audio stream;
compressing and decompressing the video data, enhancing images and video, detecting and tracking motion, and processing and encoding/decoding images and video to obtain a first video stream; wherein the first audio/video stream comprises the first audio stream and the first video stream.
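Claim 4 names sampling-rate processing without specifying a method. As a minimal illustrative sketch (an assumption, not the disclosed implementation), one common sampling-rate conversion step is linear-interpolation resampling of a PCM block:

```python
def resample(samples, src_rate, dst_rate):
    """Convert a block of audio samples from src_rate to dst_rate
    by linear interpolation between neighboring input samples."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        # Fractional position of output sample i in the input block.
        pos = i * src_rate / dst_rate
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

Production audio pipelines normally use polyphase or windowed-sinc resampling instead, since plain linear interpolation aliases when downsampling; the sketch only shows the data flow.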
5. The indoor intelligent navigation method according to claim 4, wherein the acquiring the encoded and decoded first audio/video stream further comprises:
performing speech synthesis on at least one text acquired through a text input interface, and converting the text into an audio file; the audio file is used by the real-time mobile body to play speech on demand; wherein the first audio/video stream further comprises the audio file.
6. The indoor intelligent navigation method according to claim 1, wherein before detecting that newly acquired second position information is different from the first position information, the method comprises:
reading the first audio stream, the first video stream, or the text-synthesized audio file from the buffer area according to the needs of the real-time mobile body, and pushing the read data to the media playing device corresponding to the first positioning information for playback.
7. The indoor intelligent navigation method according to claim 1, wherein the reading the data in the buffer area in a volume-fade manner comprises:
reading the first audio/video stream from the buffer area and playing it with a gradual volume change on the first media playing device in the preset area corresponding to the first position information, and reading the second audio/video stream from the buffer area and playing it with a gradual volume change on the second media playing device in the preset area corresponding to the second position information.
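The claim describes the volume-fade playback only functionally. One plausible reading is a cross-fade between the two media playing devices; as a hedged sketch (the linear gain law and block-based handover are assumptions, not disclosed), a ramp could be applied to the buffered samples handed to each device:

```python
def apply_fade(samples, fade_in=True):
    """Apply a linear volume ramp to one block of PCM samples read
    from the buffer area (gain 0 -> 1 for fade-in, 1 -> 0 for fade-out)."""
    n = len(samples)
    if n <= 1:
        return list(samples)
    out = []
    for i, s in enumerate(samples):
        ramp = i / (n - 1)
        gain = ramp if fade_in else 1.0 - ramp
        out.append(s * gain)
    return out

def crossfade(first_block, second_block):
    """Hand playback over between two media playing devices:
    fade the first device's stream out while the second fades in."""
    return (apply_fade(first_block, fade_in=False),
            apply_fade(second_block, fade_in=True))
```

An equal-power (cosine) ramp would avoid the mid-fade loudness dip of a linear ramp, but the linear version keeps the sketch minimal.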
8. An indoor intelligent navigation device, characterized by comprising: a positioning processing module, a wireless audio processing module, a video processing module, a navigation processing module, and a sound-amplification and projection scheduling module; wherein,
the positioning processing module is used for acquiring first positioning information of the real-time mobile body, and sequentially performing position calculation against second positioning information of a plurality of indoor fixed bodies to obtain first position information corresponding to the first positioning information;
the wireless audio processing module is used for acquiring audio data corresponding to the first position information;
the video processing module is used for acquiring video data corresponding to the first position information; wherein the audio/video data comprises the audio data and the video data;
the navigation processing module is used for performing digital signal processing on the audio/video data to obtain an encoded and decoded first audio/video stream;
and the sound-amplification and projection scheduling module is used for storing the first audio/video stream in a buffer area, and, when newly detected second position information is different from the first position information, reading the data in the buffer area in a volume-fade manner.
9. The indoor intelligent navigation device according to claim 8, wherein the sequentially performing position calculation against the second positioning information of the plurality of indoor fixed bodies to obtain the first position information corresponding to the first positioning information comprises:
networking the real-time mobile body with a plurality of indoor fixed bodies, establishing a corresponding coordinate system, and obtaining the first position information of the first positioning information from the coordinates of the real-time mobile body in the coordinate system; wherein the first positioning information is the positioning of the real-time mobile body in a corresponding preset area, and each preset area contains a plurality of fixed bodies to be narrated.
10. A readable storage medium, characterized in that the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the indoor intelligent navigation method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311793074.0A CN117765839B (en) | 2023-12-25 | 2023-12-25 | Indoor intelligent navigation method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117765839A true CN117765839A (en) | 2024-03-26 |
CN117765839B CN117765839B (en) | 2024-07-16 |
Family
ID=90319636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311793074.0A Active CN117765839B (en) | 2023-12-25 | 2023-12-25 | Indoor intelligent navigation method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117765839B (en) |
Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001125767A (en) * | 1999-10-28 | 2001-05-11 | Sharp Corp | Device and method for presenting information and computer readable recording medium with recorded information presentation program |
WO2010086321A1 (en) * | 2009-01-28 | 2010-08-05 | Auralia Emotive Media Systems, S.L. | Binaural audio guide |
CN103428275A (en) * | 2013-07-30 | 2013-12-04 | 苏州两江科技有限公司 | Indoor object activity routine tracking method based on WSN |
CN104392633A (en) * | 2014-11-12 | 2015-03-04 | 国家电网公司 | Interpretation control method oriented to power system simulating training |
CN106776793A (en) * | 2016-11-23 | 2017-05-31 | 广州酷狗计算机科技有限公司 | A kind of method and apparatus of playing audio-fequency data |
CN107529146A (en) * | 2017-10-12 | 2017-12-29 | 深圳米唐科技有限公司 | With reference to more sensing chamber localization method, device, system and the storage mediums of audio |
JP2018007227A (en) * | 2016-12-06 | 2018-01-11 | 株式会社コロプラ | Information processing method and program for computer to execute the information processing method |
US20190306421A1 (en) * | 2018-03-30 | 2019-10-03 | Ricoh Company, Ltd. | Vr system, communication method, and non-transitory computer-readable medium |
CN110751578A (en) * | 2019-10-30 | 2020-02-04 | 清远博云软件有限公司 | Guide recognition equipment for tourism |
US20200068335A1 (en) * | 2017-06-02 | 2020-02-27 | Nokia Technologies Oy | Switching rendering mode based on location data |
CN111246378A (en) * | 2020-01-10 | 2020-06-05 | 北京腾文科技有限公司 | Navigation method based on iBeacon Bluetooth positioning and related components |
WO2020207577A1 (en) * | 2019-04-10 | 2020-10-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Virtual playout screen for visual media using arrangement of mobile electronic devices |
CN112001817A (en) * | 2020-08-24 | 2020-11-27 | 杭州宣迅电子科技有限公司 | Scenic spot real-time guide explanation management system based on artificial intelligence |
CN212463502U (en) * | 2020-07-21 | 2021-02-02 | 贵阳清文云科技有限公司 | Directional sound explanation system |
JP2021061461A (en) * | 2019-10-02 | 2021-04-15 | 株式会社Grit | Program, information processing device, information processing method, and information processing system |
CN113359986A (en) * | 2021-06-03 | 2021-09-07 | 北京市商汤科技开发有限公司 | Augmented reality data display method and device, electronic equipment and storage medium |
CN113727272A (en) * | 2021-07-26 | 2021-11-30 | 和美(深圳)信息技术股份有限公司 | Distributed intelligent interaction method and device, electronic equipment and storage medium |
CN114253504A (en) * | 2021-12-28 | 2022-03-29 | 京东方科技集团股份有限公司 | Audio device volume control method, device and system and electronic device |
WO2022160743A1 (en) * | 2021-01-29 | 2022-08-04 | 稿定(厦门)科技有限公司 | Video file playing system, audio/video playing process, and storage medium |
WO2022166173A1 (en) * | 2021-02-02 | 2022-08-11 | 深圳市慧鲤科技有限公司 | Video resource processing method and apparatus, and computer device, storage medium and program |
CN115167806A (en) * | 2022-07-11 | 2022-10-11 | 广州市保伦电子有限公司 | Network audio broadcast gradual change audio processing method and server |
CN115278273A (en) * | 2022-06-13 | 2022-11-01 | 北京达佳互联信息技术有限公司 | Resource display method and device, electronic equipment and storage medium |
CN116300629A (en) * | 2023-03-23 | 2023-06-23 | 广东保伦电子股份有限公司 | Task scene linkage method, device and medium for custom programming |
CN116795273A (en) * | 2022-03-15 | 2023-09-22 | 广州视源电子科技股份有限公司 | Interactive screen display method, device, medium and electronic equipment |
CN116974416A (en) * | 2023-02-21 | 2023-10-31 | 腾讯科技(深圳)有限公司 | Data processing method, device, equipment and readable storage medium |
Non-Patent Citations (1)
Title |
---|
He Li: "Research on the Application of Interactive Space Design in Museum Exhibition Spaces", 《居社》, 30 September 2023 (2023-09-30) * |
Also Published As
Publication number | Publication date |
---|---|
CN117765839B (en) | 2024-07-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||