CN110784818A - Sound navigation interactive system based on intelligent terminal - Google Patents

Sound navigation interactive system based on intelligent terminal

Info

Publication number
CN110784818A
CN110784818A (application CN201911074590.1A)
Authority
CN
China
Prior art keywords
audio
intelligent terminal
sound
sensor
interactive system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911074590.1A
Other languages
Chinese (zh)
Inventor
翁若伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI CONSERVATORY OF MUSIC
Original Assignee
SHANGHAI CONSERVATORY OF MUSIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI CONSERVATORY OF MUSIC filed Critical SHANGHAI CONSERVATORY OF MUSIC
Priority to CN201911074590.1A priority Critical patent/CN110784818A/en
Publication of CN110784818A publication Critical patent/CN110784818A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S 7/303 Tracking of listener position or orientation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/305 Electronic adaptation of stereophonic audio signals to reverberation of the listening space
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/025 Services making use of location information using location based information parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/38 Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Abstract

The invention relates to a sound navigation interactive system based on an intelligent terminal, comprising a sensor end, an intelligent terminal and a mixing output end, wherein the sensor end is distributed at different positions of a venue and the intelligent terminal comprises: a sensor signal analysis module for receiving the sensing signal of the sensor end in the region where the terminal is located and forming a region event trigger signal based on the sensing signal; a sound source material extraction module for extracting the corresponding audio material according to the region event trigger signal; a motion sensor for acquiring real-time position information of the experiencer; an audio digital signal processing module for processing the audio material according to the sensing signal and the motion sensor parameters; and a mixing processing module for mixing the audio signals output by the audio digital signal processing module into a stereo signal. The mixing output end plays back a stereo sound field based on the stereo signal. Compared with the prior art, the invention offers advantages including convenient realization of a three-dimensional sound field and a good immersive experience.

Description

Sound navigation interactive system based on intelligent terminal
Technical Field
The invention relates to navigation equipment, and in particular to a sound navigation interactive system based on an intelligent terminal.
Background
Guided tours play an extremely important role in exhibitions of all kinds. Against the background of electronic music increasingly emphasizing and extending the 'interactive experience', the demands placed on sound installations in new-media exhibitions keep rising, and technical means are needed to integrate the 'functionality' of the guide with the 'artistry' created by the installation.
There are currently two main navigation modes: automatic tour-guide devices used at scenic spots and exhibitions, and interface-based navigation applications used in large museums. An automatic tour-guide device generally stores digital audio files, video files, text, pictures and the like in the terminal playback device, while wireless signal activation devices covering each scenic spot or exhibition area activate any playback device that enters the area; upon receiving the activation signal from an area, the terminal device automatically plays the corresponding content. This arrangement first requires base stations for distributing the wireless signals and a signal distribution configuration, which is a cumbersome process; moreover, the customization capability of the terminal playback device is weak, and in particular no further personalized setting or creative work is possible on the audio side. Interface-based navigation applications in large museums are mostly smart-device applications for public education: audio-visual material and related introductions for a large number of museum works are built in according to the visiting routes recommended by the museum, for visitors to consult and enjoy. Such applications are closer to exhibition-hall introduction e-books that use a positioning system to guide visitors through the exhibition area. Because of their popular-science purpose they are heavily weighted toward image and video content, are mostly custom-designed under independent commissions from the venues, and offer little possibility of user-defined creation based on sound design.
Chinese patent application CN109712034A discloses an intelligent navigation system based on a wearable device, comprising a detection terminal, a background server and a navigation device, wherein: the detection terminal detects whether a target wearable device is present within the preset scenic-spot range corresponding to the detection terminal; if so, it reads the target user information of that wearable device and sends it to the background server; the background server receives the target user information and, according to that information and the scenic-spot information corresponding to the detection terminal, generates matching target navigation information and sends a start instruction to the navigation device accordingly; the navigation device, once started by the instruction, presents a navigation scene matching the target navigation information so that the target user obtains the explanation corresponding to the scenic-spot information within that scene. This system achieves the purpose of navigation, but it does not process sound, and the experience it provides is limited.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provide a sound navigation interactive system based on an intelligent terminal that is easy to use and facilitates the immersive experience of new-media sound installations. The system can be used for guiding visitors through new-media installation exhibitions, for art installations guided mainly by the sound experience, for soundscape-map construction projects in specific scenes such as virtual cities, and for soundscape-map guide projects for special groups such as the blind.
The purpose of the invention can be realized by the following technical scheme:
the utility model provides a sound guide interactive system based on intelligent terminal, includes sensor end, intelligent terminal and audio mixing output, the sensor end distributes in the different positions in venue, intelligent terminal includes:
the sensor signal analysis module is used for receiving a sensing signal of a sensor end of a region where the sensor is located and forming a region event trigger signal based on the sensing signal;
the sound source material extraction module is used for extracting and obtaining corresponding audio materials according to the region event trigger signals;
the motion sensor is used for receiving real-time position information of an experiencer;
the audio digital signal processing module is used for carrying out human voice language processing and music environment processing on the audio material according to the sensing signal and the motion sensor parameter;
the audio mixing processing module is used for carrying out audio mixing processing on the audio signals output by the audio digital signal processing module to form stereo signals;
the human-computer interface is used for acquiring control parameters;
and the sound mixing output end realizes the playing of a stereo field based on the stereo signal.
Further, the sensor end comprises a plurality of iBeacon-based Bluetooth sensors distributed at different positions of the venue.
Further, the positions and signal coverage parameters of the Bluetooth sensors are set according to the sound landscape map of the venue.
Further, the audio material is a single track file or a multi-track file.
Further, the audio digital signal processing module is implemented based on Faust language.
Further, the audio digital signal processing module comprises:
the human voice language processing unit is used for obtaining the relative position of the experiencer from the sensing signal and, based on the motion sensor parameters, simulating the three-dimensional positioning information of the sound source;
and the music effect processing unit is used for obtaining directivity parameters from the sensing signals and mapping them to the effector parameters.
Further, the mixing process includes adjusting the ratio between different audio materials.
Furthermore, the mixing output end is an earphone end, and the human voice part is processed with three-dimensional sound to form a virtual sound field environment in the earphones.
Further, the control parameters include an audio material selection parameter and an effector selection parameter.
Compared with the prior art, the invention has the following beneficial effects:
1) The sensor end triggers region events, so the position of the experiencer can be located more accurately, a more accurate three-dimensional sound field is formed, and the experience is improved.
2) The sensors are simple to configure, so the same creator, or even a creative team, can conveniently create and perform in different environments, which greatly satisfies the need to upgrade, transform and personalize works according to their different characteristics.
3) The invention processes and mixes the audio materials based on the sensing signals, constructing an immersive sound field environment in which the experiencer combines sound art, spatial art and hearing while walking, giving a good experience.
4) Under this platform architecture, the invention helps electronic music creators to conveniently and rapidly create and produce soundscape-map experience works and installations.
Drawings
FIG. 1 is a schematic structural diagram of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
As shown in fig. 1, the present embodiment provides an intelligent-terminal-based sound navigation interactive system comprising a sensor end 1, an intelligent terminal 2 and a mixing output end 3; the nodes cooperate to implement music creation and effect processing and finally present an immersive sound experience.
The sensor end 1 is distributed at different positions of the venue: a plurality of iBeacon-based Bluetooth sensors 11 are placed at different positions, and the positions and signal coverage parameters of the Bluetooth sensors 11 are set according to the sound landscape map of the venue, finally forming the different effect areas and path directions of the sound landscape map. iBeacon is a Bluetooth low energy technology based on tag modules; its working principle is similar to earlier Bluetooth technology: the iBeacon transmits a signal, and iOS/Android devices receive the positioning signal and respond to it. An iBeacon deployment consists of one or more iBeacon beacon devices broadcasting their unique identifier within a certain range. The transmission distance of an iBeacon falls into three ranges, short, medium and long, from a few centimeters to more than ten meters. When a user enters, leaves or wanders within an area, the iBeacon broadcast reaches the user's device, which can interact according to the distance between the user and the beacon. The technology is compatible with most smartphones, including iOS and Android, and many scene-specific applications can be built on this simple positioning technique.
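As a hedged illustration of how the terminal side might turn beacon advertisements into a region decision (the patent does not fix the classification rule), the following Python sketch uses hypothetical beacon identifiers and a simple strongest-beacon rule:

```python
# Hypothetical sketch: classify the listener's region from iBeacon advertisements.
# Beacon identifiers and the nearest-beacon rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class BeaconReading:
    beacon_id: str   # e.g. "venue-uuid/major/minor"
    rssi: float      # received signal strength in dBm (stronger = closer)

# Illustrative mapping from beacon IDs to region numbers in the sound landscape map.
REGION_OF_BEACON = {
    "venue/1/1": 1,   # entrance hall
    "venue/1/2": 2,   # corridor
    "venue/1/3": 3,   # main exhibition room
}

def detect_region(readings: list[BeaconReading], min_rssi: float = -80.0) -> int | None:
    """Return the region number of the strongest known beacon, or None if none is near."""
    candidates = [r for r in readings if r.beacon_id in REGION_OF_BEACON and r.rssi >= min_rssi]
    if not candidates:
        return None
    nearest = max(candidates, key=lambda r: r.rssi)
    return REGION_OF_BEACON[nearest.beacon_id]

# Example: the terminal hears two beacons; region 2 wins because its beacon is strong enough.
print(detect_region([BeaconReading("venue/1/1", -85.0), BeaconReading("venue/1/2", -62.0)]))  # -> 2
```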
The intelligent terminal 2 comprises a sensor signal analysis module 21, a sound source material extraction module 22, a motion sensor 26, an audio digital signal processing module 23, a mixing processing module 24 and a human-computer interface 25. The sensor signal analysis module 21 receives the sensing signal from the sensor end of the region where the terminal is located and forms a region event trigger signal based on it; the sound source material extraction module 22 extracts the corresponding audio material according to the region event trigger signal; the motion sensor 26 acquires the experiencer's real-time position information; the audio digital signal processing module 23 performs human voice language processing and music environment processing on the audio material according to the sensing signal and the motion sensor parameters; the mixing processing module 24 mixes the audio signals output by the audio digital signal processing module, including adjusting the proportions between different audio materials, to form a stereo signal; and the human-computer interface 25 obtains control parameters.
In this embodiment, the intelligent terminal is a smartphone. When the smartphone enters a given effect area, the sensor signal analysis module analyzes the received signal, determines the number of the area in which the phone is located, and promptly triggers the event response of the corresponding area.
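Purely as a hedged sketch (the debouncing rule is an assumption; the patent only states that the event response of the corresponding area is triggered promptly), the detected region number can be turned into a one-shot area event as follows:

```python
# Hypothetical sketch: trigger a region event only when the detected region changes,
# so the event response fires once on entry rather than on every beacon scan.

class RegionEventTrigger:
    def __init__(self):
        self.current_region = None

    def update(self, detected_region: int | None) -> int | None:
        """Return the region number when a new region is entered, else None."""
        if detected_region is not None and detected_region != self.current_region:
            self.current_region = detected_region
            return detected_region  # region event trigger signal
        return None

trigger = RegionEventTrigger()
print(trigger.update(2))  # -> 2 (entering region 2 fires the event)
print(trigger.update(2))  # -> None (still in region 2, no new event)
```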
After receiving the trigger event response, the sound source material extraction module extracts the corresponding audio material from the playback library according to the preset event mapping. The audio material may be a single-track or multi-track file, and a variety of audio file formats are supported.
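A minimal sketch of the event-to-material lookup, assuming a simple in-memory playback library keyed by region number (the region numbers and file names are illustrative, not taken from the patent):

```python
# Hypothetical sketch: map a region event to audio material in a preset playback library.
# Region numbers and file names are illustrative assumptions.

PLAY_LIBRARY = {
    1: ["voice_intro_hall.wav"],                    # single-track narration
    2: ["corridor_ambience.wav", "footsteps.wav"],  # multi-track material
    3: ["exhibit_voice.wav", "exhibit_music.wav"],
}

def extract_materials(region_event: int) -> list[str]:
    """Return the list of audio files associated with a triggered region event."""
    return PLAY_LIBRARY.get(region_event, [])

print(extract_materials(2))  # -> ['corridor_ambience.wav', 'footsteps.wav']
```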
The audio signal then enters an audio digital signal processing module written in the Faust language, which applies the designed effect processing to the audio material. Faust (Functional Audio Stream), developed at GRAME, the French national center for music creation in Lyon, is a functional programming language for sound synthesis and audio processing that emphasizes the design of synthesizers, musical instruments, audio effects and the like. It is mainly used for high-performance signal-processing applications and audio plug-ins and can be compiled for a wide range of platforms, including smartphones.
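The patent specifies Faust for the effect chain itself; purely as an illustration, the Python sketch below shows one way a directivity parameter derived from the sensing signal could be mapped to effector parameters (the reverb wet/dry mix and low-pass cutoff are assumed examples of such effects):

```python
# Hypothetical sketch (a Python stand-in for the Faust effect chain described in the patent):
# map a directivity parameter from the sensing signal to effector parameters.

def map_directivity_to_effect(directivity: float) -> dict[str, float]:
    """directivity in [0, 1]: 0 = facing away from the sound region, 1 = facing it.
    Returns illustrative effector parameters (reverb wet mix and low-pass cutoff)."""
    directivity = max(0.0, min(1.0, directivity))
    return {
        "reverb_wet": 0.6 * (1.0 - directivity),               # more diffuse when facing away
        "lowpass_cutoff_hz": 2000.0 + 14000.0 * directivity,   # duller when facing away
    }

print(map_directivity_to_effect(0.25))
```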
The audio digital signal processing module comprises a human voice language processing unit and a music effect processing unit. The human voice language processing unit obtains the relative position of the experiencer from the sensing signal and, based on the motion sensor parameters, simulates the three-dimensional positioning of the sound source; the music effect processing unit derives directivity parameters from the sensing signals and maps them to effector parameters, achieving a real-time interactive experience.
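A minimal sketch of the voice-positioning idea under simple assumptions: the listener's heading comes from the motion sensor, the virtual source bearing comes from the region layout, and constant-power panning stands in for full binaural rendering, which the patent leaves unspecified:

```python
# Hypothetical sketch: derive stereo gains for the voice part from the listener's
# heading (motion sensor) and the virtual source bearing (from the region layout).
# Constant-power panning is an illustrative simplification of 3D/binaural rendering.

import math

def stereo_gains(listener_heading_deg: float, source_bearing_deg: float) -> tuple[float, float]:
    """Return (left_gain, right_gain) for a source at source_bearing_deg relative to north,
    heard by a listener facing listener_heading_deg."""
    # Angle of the source relative to the direction the listener faces, in (-180, 180].
    rel = (source_bearing_deg - listener_heading_deg + 180.0) % 360.0 - 180.0
    # Clamp to the frontal arc and map to a pan position in [-1 (left), +1 (right)].
    pan = max(-1.0, min(1.0, rel / 90.0))
    theta = (pan + 1.0) * math.pi / 4.0      # 0 .. pi/2
    return math.cos(theta), math.sin(theta)  # constant-power pan law

# Listener faces east (90 degrees); the source is due north (0 degrees), i.e. to the left.
print(stereo_gains(90.0, 0.0))  # -> left gain ~1.0, right gain ~0.0
```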
In this embodiment, the mixing output end 3 is an earphone end, and the human voice part is processed with three-dimensional sound to form a virtual sound field environment in the earphones.
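Before headphone playback, the mixing module blends the processed layers into one stereo signal; the sketch below shows ratio-based mixing with NumPy, assuming all layers are already stereo buffers at the same sample rate and the mixing weights are illustrative:

```python
# Hypothetical sketch: mix processed stereo layers into one stereo signal
# by adjusting the proportions between different audio materials.

import numpy as np

def mix_layers(layers: list[np.ndarray], ratios: list[float]) -> np.ndarray:
    """layers: stereo buffers of shape (n_samples, 2); ratios: one weight per layer.
    Returns the weighted sum, normalized only if it would clip."""
    total = sum(r * layer for r, layer in zip(ratios, layers))
    peak = np.max(np.abs(total))
    return total / peak if peak > 1.0 else total

# Example with two one-second 48 kHz layers: the voice is kept louder than the ambience.
voice = np.zeros((48000, 2)); ambience = np.zeros((48000, 2))
stereo_out = mix_layers([voice, ambience], ratios=[0.8, 0.4])
print(stereo_out.shape)  # (48000, 2)
```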
Through the human-computer interface of the intelligent terminal, the experiencer can enter control parameters, including audio material selection parameters and effector selection parameters, which facilitates the creation of works and the updating of content.
In some embodiments, as a platform system carried on earphones and emphasizing guidance through auditory experience, the human-computer interface only displays a diagram of the sensor signals in the current environment and offers no complicated interface operations, keeping it convenient and concise.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions that can be obtained by a person skilled in the art through logic analysis, reasoning or limited experiments based on the prior art according to the concept of the present invention should be within the protection scope determined by the present invention.

Claims (9)

1. A sound navigation interactive system based on an intelligent terminal, characterized in that it comprises a sensor end, an intelligent terminal and a mixing output end, wherein the sensor end is distributed at different positions of a venue, and the intelligent terminal comprises:
the sensor signal analysis module is used for receiving the sensing signal from the sensor end of the region where the terminal is located and forming a region event trigger signal based on the sensing signal;
the sound source material extraction module is used for extracting the corresponding audio material according to the region event trigger signal;
the motion sensor is used for acquiring real-time position information of the experiencer;
the audio digital signal processing module is used for carrying out human voice language processing and music environment processing on the audio material according to the sensing signal and the motion sensor parameters;
the mixing processing module is used for mixing the audio signals output by the audio digital signal processing module to form a stereo signal;
the human-computer interface is used for acquiring control parameters;
and the mixing output end plays back a stereo sound field based on the stereo signal.
2. The intelligent terminal-based sound navigation interactive system according to claim 1, wherein the sensor end comprises a plurality of iBeacon-based Bluetooth sensors distributed at different locations of the venue.
3. The intelligent terminal-based sound navigation interactive system according to claim 2, wherein the positions and signal coverage parameters of the Bluetooth sensors are set according to a sound landscape map of the venue.
4. The intelligent terminal-based sound navigation interactive system of claim 1, wherein the audio material is a single track file or a multi-track file.
5. The intelligent terminal-based sound navigation interactive system according to claim 1, wherein the audio digital signal processing module is implemented based on the Faust language.
6. The intelligent terminal based sound navigation interactive system according to claim 1, wherein the audio digital signal processing module comprises:
the human voice language processing unit is used for obtaining the relative position of the experiencer from the sensing signal and, based on the motion sensor parameters, simulating the three-dimensional positioning information of the sound source;
and the music effect processing unit is used for obtaining directivity parameters from the sensing signals and mapping them to the effector parameters.
7. The intelligent terminal-based sound navigation interactive system of claim 1, wherein the mixing process comprises adjusting the ratio between different audio materials.
8. The intelligent terminal based sound navigation interactive system according to claim 1, wherein the audio mixing output end is an earphone end, and the human voice part is processed by three-dimensional sound to form an earphone virtual sound field environment.
9. The intelligent terminal-based sound navigation interactive system according to claim 1, wherein the control parameters include an audio material selection parameter and an effector selection parameter.
CN201911074590.1A 2019-11-06 2019-11-06 Sound navigation interactive system based on intelligent terminal Pending CN110784818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911074590.1A CN110784818A (en) 2019-11-06 2019-11-06 Sound navigation interactive system based on intelligent terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911074590.1A CN110784818A (en) 2019-11-06 2019-11-06 Sound navigation interactive system based on intelligent terminal

Publications (1)

Publication Number Publication Date
CN110784818A (en) 2020-02-11

Family

ID=69389368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911074590.1A Pending CN110784818A (en) 2019-11-06 2019-11-06 Sound navigation interactive system based on intelligent terminal

Country Status (1)

Country Link
CN (1) CN110784818A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160342303A1 (en) * 2007-10-24 2016-11-24 Sococo, Inc. Shared virtual area communication environment based apparatus and methods
CN202307060U (en) * 2011-10-14 2012-07-04 傅航 Intelligent exhibition system
CN108141684A (en) * 2015-10-09 2018-06-08 索尼公司 Audio output device, sound generation method and program
US20190306647A1 (en) * 2015-12-27 2019-10-03 Philip Scott Lyren Switching Binaural Sound
CN107360541A (en) * 2016-05-09 2017-11-17 上海华博信息服务有限公司 A kind of exhibition section wisdom guide system
US20190045318A1 (en) * 2017-08-07 2019-02-07 Google Inc. Spatial audio triggered by a users physical environment
CN109885778A (en) * 2018-12-27 2019-06-14 福建农林大学 A kind of tourism guide to visitors application method and system based on augmented reality
CN110211222A (en) * 2019-05-07 2019-09-06 谷东科技有限公司 A kind of AR immersion tourism guide method, device, storage medium and terminal device
CN110379340A (en) * 2019-06-19 2019-10-25 北京邮电大学 Outdoor positioning tourism guide system based on iBeacon and GPS

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
汤善雯 (Tang Shanwen): "互动设计在博物馆展示中的应用" [The Application of Interactive Design in Museum Exhibitions], 中国优秀硕士学位论文全文数据库 信息科技辑 [China Master's Theses Full-text Database, Information Science and Technology Series] *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112289288A (en) * 2020-11-03 2021-01-29 上海音乐学院 Music creation system based on intelligent mobile terminal

Similar Documents

Publication Title
US7068290B2 (en) Authoring system
Rozier Here&There: an augmented reality system of linked audio
CN108540542B (en) Mobile augmented reality system and display method
CN106465008B (en) Terminal audio mixing system and playing method
CN109640125B (en) Video content processing method, device, server and storage medium
CN102985901A (en) Method and apparatus for rendering a perspective view of objects and content related thereto for location-based services on mobile device
US9092894B2 (en) Display control device and display control program for grouping related items based upon location
KR101421695B1 (en) System and method for providing cheering service using luminous devices
US20190130644A1 (en) Provision of Virtual Reality Content
CN111916039A (en) Music file processing method, device, terminal and storage medium
KR20120009131A (en) Apparatus and method for providing augmented reality service using sound
EP3989083A1 (en) Information processing system, information processing method, and recording medium
KR101413794B1 (en) Edutainment contents mobile terminal using augmented reality and method for controlling thereof
CN112165628A (en) Live broadcast interaction method, device, equipment and storage medium
CN111402844B (en) Song chorus method, device and system
CN110493635B (en) Video playing method and device and terminal
CN110784818A (en) Sound navigation interactive system based on intelligent terminal
CN110309238A (en) Point of interest interactive approach, system, electric terminal and storage medium in music
Andolina et al. Exploitation of mobile access to context-based information in cultural heritage fruition
CN114693890A (en) Augmented reality interaction method and electronic equipment
Marynowsky et al. 'The Ghosts of Roller Disco', a Choreographed, Interactive Performance for Robotic Roller Skates
JP6792666B2 (en) Terminal device, terminal device control method, computer program
CN113343022A (en) Song teaching method, device, terminal and storage medium
Dastageeri et al. Approach and evaluation of a mobile video-based and location-based augmented reality platform for information brokerage
KR20190026264A (en) Mixed reality contents providing system capable of freely marker register

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200211)