CN108810838A - Room-level localization method based on smartphone perception of indoor background sound - Google Patents

Room-level localization method based on smartphone perception of indoor background sound

Info

Publication number
CN108810838A
CN108810838A
Authority
CN
China
Prior art keywords
room
background sound
smartphone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810560130.9A
Other languages
Chinese (zh)
Inventor
王玫
昂晨
仇洪冰
宋浠瑜
罗丽燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201810560130.9A
Publication of CN108810838A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/33 Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0252 Radio frequency fingerprinting
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/21 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters, the extracted parameters being power information
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/45 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of analysis window

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Telephone Function (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a room-level localization method based on smartphone perception of indoor background sound, comprising two stages: offline collection and training of indoor room background sound, and online positioning at the place to be located. Indoor ambient sound is recorded with a smartphone, the 5th percentile of the sound power is extracted as the acoustic feature and fed into an RNN-LSTM learning algorithm, and a background sound localization model for the given rooms is trained. Comparing the model output with the room labels of the real environment yields the room recognition rate, and recognizing the room achieves room-level positioning. Compared with traditional indoor localization systems based on similar acoustic features, the method of the present invention not only meets the requirement of room-level positioning but also improves the room recognition rate, making it better suited to room background sound localization scenarios.

Description

Room-level localization method based on smartphone perception of indoor background sound
Technical field
The present invention relates to indoor room-level localization methods, and specifically to a room-level localization method based on smartphone perception of indoor background sound.
Background technology
Since its appearance, positioning technology represented by GPS has, with its efficient, fast, and accurate characteristics, drastically changed the way people live, and has driven the flourishing of its services and applications, bringing great convenience to people's lives. However, traditional outdoor positioning technologies such as GPS perform far from ideally indoors because of the limitations of their principles, so an efficient, convenient, and accurate indoor positioning technology is urgently needed to fill the gap.
The relatively mature indoor positioning technologies at present are based on WiFi, Bluetooth, infrared, ultrasound, and the like. WiFi-based positioning uses infrastructure that is easy to install, but is easily interfered with by other signals and consumes considerable power; Bluetooth-based positioning has low power consumption and is easy to integrate, but its range is short, its stability poor, and it is easily disturbed by noise; infrared-based positioning is accurate but cannot penetrate obstacles, and its cost and power consumption are high; ultrasound-based indoor positioning offers high overall accuracy and a simple structure, but suffers from multipath effects and obvious attenuation, is easily affected by temperature, and is costly.
The technical advantage of background sound positioning is that no additional infrastructure needs to be deployed in advance and background sound is easy to collect. Background sound is a form of acoustic wave propagation with a particular distribution in time and space; acting on the human auditory system, it produces auditory perception effects with a certain regularity. Background sound is also an information carrier, reflecting key properties of the sounding body such as its physical attributes and external excitation forces, among many other environmental factors. In addition, the field of architectural acoustics holds that the persistent sound of a room combined with the room's impulse response forms a background sound unique to each room. Even for two rooms that sound similar to the human ear, the persistent sound produced by room equipment still makes it possible to distinguish the two rooms fairly accurately. Positioning with background sound is therefore feasible.
Some indoor fingerprint localization systems already exist; most use smartphone sensing to collect WiFi, sound, visual image, and accelerometer data as fingerprints for multi-information fusion positioning. A few works specialize in methods of indoor positioning with environmental background sound, for example background sound indoor positioning via background sound fingerprint extraction and the KNN algorithm. However, affected by the choice of acoustic features and by classification algorithms that do not discriminate sounds well, their positioning accuracy is generally low.
Invention content
To address the shortcoming that traditional indoor positioning requires pre-deployed infrastructure, the present invention provides a room-level localization method based on smartphone perception of indoor background sound. It only requires a smartphone to collect indoor room background sound, extracts the background sound fingerprint, and builds a background sound model; a localization model suited to the room background sound scenario is trained with the RNN-LSTM learning algorithm and applied to indoor room-level positioning.
The room-level localization method based on smartphone perception of indoor background sound of the present invention comprises two stages: (1) offline collection and training of indoor room background sound, and (2) online positioning at the place to be located.
Stage (1), offline collection and training of indoor room background sound, specifically comprises the following steps:
(1.1) Collect indoor room background sound and extract features:
Collect sufficient room background sound data offline with a smartphone, perform background sound feature extraction, and derive the background sound fingerprint via 5th percentile power extraction;
(1.2) Build the background sound fingerprint database:
Combine the background sound fingerprints with manually annotated room label information to form the room background sound fingerprint database;
(1.3) Training process:
After the background sound fingerprint database is built, use it as the training set data and train, with the RNN-LSTM deep learning algorithm, a localization model suited to the background-sound indoor positioning scenario; this model must have good generalization ability and reflect the characteristics of the entire sample space well.
Stage (2), online positioning at the place to be located, specifically comprises the following steps:
(2.1) Obtain the background sound test set fingerprint data of the place to be located:
Perform 5th percentile power extraction on the background sound recorded at the place to be located; the resulting background sound fingerprints serve as the test set data;
(2.2) Feed the test set data online into the trained background sound localization model; the output is the room label information. Comparing it with the room label of the real environment yields the room recognition rate, and recognizing the room achieves room-level positioning.
By extracting the background sound fingerprint via the 5th percentile of power and building the room background sound localization model with the RNN-LSTM algorithm, the present invention raises the room recognition rate by a relatively large margin.
The 5th percentile power extraction that produces the background sound fingerprint in step (1.1) comprises the following steps:
(1.1.1) Apply framing and windowing to the collected raw audio sequence to obtain short-time-stationary background sound signals; the window function is:
(1.1.2) Apply the FFT to each framed and windowed audio frame, keep the first half of the FFT output, and multiply it by its conjugate to obtain the power spectrum;
The FFT formula is:
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}, k = 0, 1, ..., N-1
(1.1.3) Discard the audio signal components whose frequency exceeds 7 kHz;
(1.1.4) Sort the remaining data by power magnitude;
(1.1.5) Extract the 5th percentile of the power and take its logarithm to obtain the background sound fingerprint.
The 5th percentile power extraction in step (2.1) is identical in method to step (1.1). The first two steps are the standard power spectrum calculation. After the power spectrum has been obtained, a highly robust feature vector must be extracted from it to characterize the room background sound. Since what is to be extracted is the room's background sound, the feature should be stationary in time, and transient noise must therefore be suppressed. The background sound spectrum could be extracted by taking, at each frequency, the minimum of the background sound power observed during the sampling window. However, the minimum is easily disturbed by external noise and by preprocessing, so a group of feature values close to the power minimum is chosen instead of the minimum itself, namely the 5th percentile feature vector of the power.
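To make steps (1.1.1)-(1.1.5) concrete, the following is a minimal NumPy sketch of the 5th percentile power fingerprint. It is an illustration rather than the patent's reference implementation: the 16 kHz sampling rate, the Hamming window, and the frame and hop lengths are assumptions introduced here.

```python
import numpy as np

def extract_fingerprint(audio, fs=16000, frame_len=1024, hop=512):
    """Background sound fingerprint via steps (1.1.1)-(1.1.5).

    fs, frame_len, hop and the Hamming window are illustrative assumptions.
    """
    # (1.1.1) framing + windowing -> short-time-stationary frames
    window = np.hamming(frame_len)
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([audio[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # (1.1.2) FFT per frame, keep the first half, multiply by the conjugate -> power spectrum
    spec = np.fft.fft(frames, axis=1)[:, :frame_len // 2]
    power = (spec * np.conj(spec)).real
    # (1.1.3) discard frequency components above 7 kHz
    freqs = np.fft.fftfreq(frame_len, d=1.0 / fs)[:frame_len // 2]
    power = power[:, freqs <= 7000.0]
    # (1.1.4)-(1.1.5) per-frequency 5th percentile of power over all frames, then log
    p5 = np.percentile(power, 5, axis=0)
    return np.log(p5 + 1e-12)  # small epsilon guards against log(0)
```

Taking a low percentile of the power at each frequency, rather than the minimum, is precisely the transient-noise suppression argued for above.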
The training of the localization model by the RNN-LSTM learning algorithm in step (1.3) comprises the following steps:
(1.3.1) Determine the parameters: initialize the weight matrices of the input layer, hidden layer, and output layer;
(1.3.2) Forward propagation: compute the output value of each neuron in the forward direction;
(1.3.3) Backpropagation: compute the error term of each neuron in the backward direction. The error terms of the RNN-LSTM propagate in two directions: one along the time axis, i.e., starting from the current time t, the error term at each moment is computed; the other propagates the error terms up to the preceding layer;
(1.3.4) Update the parameter weights iteratively according to the corresponding error terms.
RNN-LSTM is an advanced version of the RNN that replaces the conventional network unit with the LSTM cell. The basic principle of the LSTM cell is to control the information flow in the network with different types of gates. Through its gates, an LSTM cell can decide when to remember input information, when to forget it, and when to output it. It is therefore a network unit that can protect stored information over long periods; RNN-LSTM can resolve the short-term dependence problem caused by vanishing or exploding gradients and thereby achieve long-term memory.
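For concreteness, a minimal RNN-LSTM room classifier can be sketched with the Keras API as below. The 64 LSTM units, the sequence length, and the use of a sequence of successive fingerprints as the LSTM input are assumptions introduced for illustration; the 15-room output matches the experiments reported later, and the fingerprint length follows the extraction sketch above.

```python
from tensorflow import keras

N_ROOMS = 15   # the experiments reported below use 15 rooms
SEQ_LEN = 20   # assumed number of successive fingerprints fed to the LSTM
N_BINS = 449   # fingerprint length from the extraction sketch above (bins <= 7 kHz)

# The LSTM cell's gates decide what to remember, forget, and output at each
# step; the softmax output layer maps the final hidden state to one label per room.
model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(SEQ_LEN, N_BINS)),
    keras.layers.Dense(N_ROOMS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training covers steps (1.3.1)-(1.3.4): Keras initializes the weight matrices,
# runs forward propagation, backpropagates error terms both along the time axis
# and to the preceding layer, and updates the weights iteratively, e.g.:
#   model.fit(X_train, y_train, epochs=50, validation_split=0.1)
```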
The room recognition rate in step (2.2) is calculated by comparing the results output by the model with the room labels of the real environment; the recognition rate p of the rooms is
p = (1/N) Σ_{i=1}^{N} 1(y_i = ŷ_i)
where N is the number of test samples, y_i denotes the room label computed with the model, ŷ_i denotes the room label in the real environment, and the indicator 1(y_i = ŷ_i) equals 1 when y_i = ŷ_i and 0 otherwise.
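In code, the recognition rate is simply the fraction of matching labels; a short NumPy equivalent with hypothetical label arrays:

```python
import numpy as np

y_pred = np.array([0, 2, 1, 1, 3])  # hypothetical room labels output by the model
y_true = np.array([0, 2, 2, 1, 3])  # hypothetical room labels of the real environment
p = np.mean(y_pred == y_true)       # p = (1/N) * Σ 1(y_i = ŷ_i) -> 0.8 here
```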
The room-level localization method based on smartphone perception of indoor background sound of the present invention requires no additional pre-deployed infrastructure; it only needs a smartphone to collect room background sound and extracts the 5th percentile power as the background sound fingerprint feature. This feature extraction is computationally simple compared with feature extraction methods such as MFCC, the model trained with the RNN-LSTM deep learning algorithm achieves a high recognition rate, a large improvement over conventional models, and the method is thus better suited to room background sound localization scenarios.
Description of the drawings
Fig. 1 is a block diagram of the offline collection and training process for indoor room background sound in the localization method of the present invention;
Fig. 2 is a block diagram of the online positioning process at the place to be located in the localization method of the present invention.
Specific embodiments
The content of the present invention is further described below with reference to the accompanying drawings, but the description is not a limitation of the invention.
Referring to Figs. 1-2, the room-level localization method based on smartphone perception of indoor background sound of the present invention comprises the following steps:
(1) Offline collection and training of indoor room background sound
(1.1) Collect indoor room background sound and extract features:
Collect sufficient room background sound data offline with a smartphone, perform background sound feature extraction, and derive the background sound fingerprint via 5th percentile power extraction;
(1.2) Build the background sound fingerprint database:
Combine the background sound fingerprints with manually annotated room label information to form the room background sound fingerprint database;
(1.3) Training process:
After the background sound fingerprint database is built, use it as the training set data and train, with the RNN-LSTM deep learning algorithm, a localization model suited to the background-sound indoor positioning scenario; this model must have good generalization ability and reflect the characteristics of the entire sample space well;
(2) Online positioning at the place to be located
(2.1) Obtain the background sound test set fingerprint data of the place to be located:
Perform 5th percentile power extraction on the background sound recorded at the place to be located; the resulting background sound fingerprints serve as the test set data;
(2.2) Feed the test set data online into the trained background sound localization model; the output is the room label information. Comparing it with the room label of the real environment yields the room recognition rate, and recognizing the room achieves room-level positioning, as sketched below.
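Putting the pieces together, the online stage of steps (2.1) and (2.2) might look like the following sketch, which reuses the illustrative extract_fingerprint, model, and SEQ_LEN defined in the sketches above; the random signal is a stand-in for a real smartphone recording:

```python
import numpy as np

# Stand-in for a 30 s smartphone recording of the place to be located.
audio = np.random.randn(16000 * 30)

# (2.1) split the recording and extract one fingerprint per segment.
segments = np.array_split(audio, SEQ_LEN)
x = np.stack([extract_fingerprint(seg) for seg in segments])[None, ...]  # (1, SEQ_LEN, N_BINS)

# (2.2) the trained localization model outputs the room label information.
room_label = int(np.argmax(model.predict(x)))
print(f"predicted room: {room_label}")
```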
The 5th percentile power extraction that produces the background sound fingerprint in step (1.1) comprises the following steps:
(1.1.1) Apply framing and windowing to the collected raw audio sequence to obtain short-time-stationary background sound signals; the window function is:
(1.1.2) Apply the FFT to each framed and windowed audio frame, keep the first half of the FFT output, and multiply it by its conjugate to obtain the power spectrum;
The FFT formula is:
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}, k = 0, 1, ..., N-1
(1.1.3) Discard the audio signal components whose frequency exceeds 7 kHz;
(1.1.4) Sort the remaining data by power magnitude;
(1.1.5) Extract the 5th percentile of the power and take its logarithm to obtain the background sound fingerprint.
The 5th percentile power extraction in step (2.1) is identical in method to step (1.1).
The training of the localization model by the RNN-LSTM learning algorithm in step (1.3) comprises the following steps:
(1.3.1) Determine the parameters: initialize the weight matrices of the input layer, hidden layer, and output layer;
(1.3.2) Forward propagation: compute the output value of each neuron in the forward direction;
(1.3.3) Backpropagation: compute the error term of each neuron in the backward direction. The error terms of the RNN-LSTM propagate in two directions: one along the time axis, i.e., starting from the current time t, the error term at each moment is computed; the other propagates the error terms up to the preceding layer;
(1.3.4) Update the parameter weights iteratively according to the corresponding error terms.
The room recognition rate in step (2.2) is calculated by comparing the results output by the model with the room labels of the real environment; the recognition rate p of the rooms is
p = (1/N) Σ_{i=1}^{N} 1(y_i = ŷ_i)
where N is the number of test samples, y_i denotes the room label computed with the model, ŷ_i denotes the room label in the real environment, and the indicator 1(y_i = ŷ_i) equals 1 when y_i = ŷ_i and 0 otherwise.
The present invention records indoor ambient sound with a smartphone, extracts the 5th percentile of the sound power as the acoustic feature, feeds it into the RNN-LSTM learning algorithm, and trains a background sound localization model for the given rooms. Compared with traditional indoor localization systems based on similar acoustic features, the recognition rate of the method of the present invention across 15 rooms reaches more than 90%, which not only meets the requirement of room-level positioning but also improves the room recognition rate.

Claims (4)

1. A room-level localization method based on smartphone perception of indoor background sound, comprising (1) offline collection and training of indoor room background sound and (2) online positioning at the place to be located, characterized in that:
Stage (1), offline collection and training of indoor room background sound, specifically comprises the following steps:
(1.1) collect indoor room background sound and extract features:
collect sufficient room background sound data offline with a smartphone, perform background sound feature extraction, and derive the background sound fingerprint via 5th percentile power extraction;
(1.2) build the background sound fingerprint database:
combine the background sound fingerprints with manually annotated room label information to form the room background sound fingerprint database;
(1.3) training process:
after the background sound fingerprint database is built, use it as the training set data and train, with the RNN-LSTM deep learning algorithm, a localization model suited to the background-sound indoor positioning scenario;
stage (2), online positioning at the place to be located, specifically comprises the following steps:
(2.1) obtain the background sound test set fingerprint data of the place to be located:
perform 5th percentile power extraction on the background sound recorded at the place to be located, the resulting background sound fingerprints serving as the test set data;
(2.2) feed the test set data online into the trained background sound localization model; the output is the room label information; comparing it with the room label of the real environment yields the room recognition rate, and recognizing the room achieves room-level positioning.
2. The room-level localization method based on smartphone perception of indoor background sound according to claim 1, characterized in that the 5th percentile power extraction that produces the background sound fingerprint in step (1.1) comprises the following steps:
(1.1.1) apply framing and windowing to the collected raw audio sequence to obtain short-time-stationary background sound signals, the window function being:
(1.1.2) apply the FFT to each framed and windowed audio frame, keep the first half of the FFT output, and multiply it by its conjugate to obtain the power spectrum;
the FFT formula is:
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}, k = 0, 1, ..., N-1
(1.1.3) discard the audio signal components whose frequency exceeds 7 kHz;
(1.1.4) sort the remaining data by power magnitude;
(1.1.5) extract the 5th percentile of the power and take its logarithm to obtain the background sound fingerprint.
3. The room-level localization method based on smartphone perception of indoor background sound according to claim 1, characterized in that the training of the localization model by the RNN-LSTM learning algorithm in step (1.3) comprises the following steps:
(1.3.1) determine the parameters: initialize the weight matrices of the input layer, hidden layer, and output layer;
(1.3.2) forward propagation: compute the output value of each neuron in the forward direction;
(1.3.3) backpropagation: compute the error term of each neuron in the backward direction, the error terms of the RNN-LSTM propagating in two directions: one along the time axis, i.e., starting from the current time t, the error term at each moment is computed; the other propagates the error terms up to the preceding layer;
(1.3.4) update the parameter weights iteratively according to the corresponding error terms.
4. The room-level localization method based on smartphone perception of indoor background sound according to claim 1, characterized in that the room recognition rate in step (2.2) is calculated by comparing the results output by the localization model with the room labels of the real environment; the recognition rate p of the rooms is
p = (1/N) Σ_{i=1}^{N} 1(y_i = ŷ_i)
where N is the number of test samples, y_i denotes the room label computed with the model, ŷ_i denotes the room label in the real environment, and the indicator 1(y_i = ŷ_i) equals 1 when y_i = ŷ_i and 0 otherwise.
CN201810560130.9A 2018-06-03 2018-06-03 Room-level localization method based on smartphone perception of indoor background sound Pending CN108810838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810560130.9A CN108810838A (en) 2018-06-03 2018-06-03 Room-level localization method based on smartphone perception of indoor background sound

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810560130.9A CN108810838A (en) 2018-06-03 2018-06-03 Room-level localization method based on smartphone perception of indoor background sound

Publications (1)

Publication Number Publication Date
CN108810838A true CN108810838A (en) 2018-11-13

Family

ID=64090138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810560130.9A Pending CN108810838A (en) Room-level localization method based on smartphone perception of indoor background sound

Country Status (1)

Country Link
CN (1) CN108810838A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020097882A1 (en) * 2000-11-29 2002-07-25 Greenberg Jeffry Allen Method and implementation for detecting and characterizing audible transients in noise
CN105976827A (en) * 2016-05-26 2016-09-28 南京邮电大学 Integrated-learning-based indoor sound source positioning method
CN106535134A (en) * 2016-11-22 2017-03-22 上海斐讯数据通信技术有限公司 Multi-room locating method based on wifi and server
CN107703486A (en) * 2017-08-23 2018-02-16 南京邮电大学 A kind of auditory localization algorithm based on convolutional neural networks CNN

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TARZIA S P ET AL: "Indoor Localization without Infrastructure Using the Acoustic Background Spectrum", 《INTERNATIONAL CONFERENCE ON MOBILE SYSTEMS, APPLICATIONS AND SERVICES, ACM》 *
陈文婧 (CHEN Wenjing): "Design and Implementation of an Environment-Perception-Based Smartphone Indoor Positioning System", 《China Masters' Theses Full-text Database, Information Science and Technology》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109547936A (en) * 2018-12-29 2019-03-29 桂林电子科技大学 Indoor orientation method based on Wi-Fi signal and environmental background sound
CN111415678A (en) * 2019-01-07 2020-07-14 意法半导体公司 Open or closed space environment classification for mobile or wearable devices
CN111415678B (en) * 2019-01-07 2024-02-27 意法半导体公司 Classifying open or closed space environments for mobile or wearable devices
CN110333484A (en) * 2019-07-15 2019-10-15 桂林电子科技大学 The room area grade localization method with analysis is known based on environmental background phonoreception
CN110333484B (en) * 2019-07-15 2021-04-13 桂林电子科技大学 Indoor area level positioning method based on environmental background sound perception and analysis
CN112040408A (en) * 2020-08-14 2020-12-04 山东大学 Multi-target accurate intelligent positioning and tracking method suitable for supervision places
CN112040408B (en) * 2020-08-14 2021-08-03 山东大学 Multi-target accurate intelligent positioning and tracking method suitable for supervision places
US20220317272A1 (en) * 2021-03-31 2022-10-06 At&T Intellectual Property I, L.P. Using Scent Fingerprints and Sound Fingerprints for Location and Proximity Determinations
CN114339600A (en) * 2022-01-10 2022-04-12 浙江德清知路导航科技有限公司 Electronic equipment indoor positioning system and method based on 5G signal and sound wave signal

Similar Documents

Publication Publication Date Title
CN108810838A (en) The room-level localization method known based on smart mobile phone room background phonoreception
CN103310789B (en) A kind of sound event recognition method of the parallel model combination based on improving
CN101023469B (en) Digital filtering method, digital filtering equipment
CN109839612A (en) Sounnd source direction estimation method based on time-frequency masking and deep neural network
CN107610707A (en) A kind of method for recognizing sound-groove and device
CN109192213A (en) The real-time transfer method of court's trial voice, device, computer equipment and storage medium
CN112163461B (en) Underwater target identification method based on multi-mode fusion
CN109949823A (en) A kind of interior abnormal sound recognition methods based on DWPT-MFCC and GMM
CN108630209B (en) Marine organism identification method based on feature fusion and deep confidence network
CN101923855A (en) Test-irrelevant voice print identifying system
CN101710490A (en) Method and device for compensating noise for voice assessment
CN110415728A (en) A kind of method and apparatus identifying emotional speech
CN108922513A (en) Speech differentiation method, apparatus, computer equipment and storage medium
CN103559879A (en) Method and device for extracting acoustic features in language identification system
CN111341319B (en) Audio scene identification method and system based on local texture features
CN108922541A (en) Multidimensional characteristic parameter method for recognizing sound-groove based on DTW and GMM model
CN104978507A (en) Intelligent well logging evaluation expert system identity authentication method based on voiceprint recognition
CN108520753A (en) Voice lie detection method based on the two-way length of convolution memory network in short-term
CN108597505A (en) Audio recognition method, device and terminal device
CN109192224A (en) A kind of speech evaluating method, device, equipment and readable storage medium storing program for executing
CN107202559B (en) Object identification method based on indoor acoustic channel disturbance analysis
CN108877809A (en) A kind of speaker's audio recognition method and device
CN109767760A (en) Far field audio recognition method based on the study of the multiple target of amplitude and phase information
CN108198561A (en) A kind of pirate recordings speech detection method based on convolutional neural networks
CN105825857A (en) Voiceprint-recognition-based method for assisting deaf patient in determining sound type

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181113