CN106250435B - User scene identification method based on mobile terminal noise map - Google Patents


Info

Publication number
CN106250435B
CN106250435B
Authority
CN
China
Prior art keywords
scene
sensor
user
data
mobile phone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610594847.6A
Other languages
Chinese (zh)
Other versions
CN106250435A (en)
Inventor
舒磊
霍志强
周长兵
陈媛芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Petrochemical Technology
Original Assignee
Guangdong University of Petrochemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Petrochemical Technology
Priority to CN201610594847.6A
Publication of CN106250435A
Application granted
Publication of CN106250435B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques

Abstract

The invention discloses a user scene recognition method based on a mobile terminal noise map. A smartphone collects built-in sensor data, and the scenes in which a user records noise decibels are divided at coarse granularity, exploiting the fact that sound data are easily affected by friction and vibration. A test data set is obtained by semi-manual labelling, sensor scene decision thresholds are calculated for the characteristics of each sensor, and a mobile terminal scene recognition algorithm classifies the decibel data records collected in different scenes. By inferring the scene in which a user collected each decibel record and analysing the quality of the data, the method improves the quality of the visualised noise map data and supports the user incentive system, encouraging more people to participate in environmental monitoring tasks for communities and even whole cities.

Description

User scene identification method based on mobile terminal noise map
Technical Field
The invention relates to a user scene identification method based on a mobile terminal noise map, and belongs to the technical field of user scene identification for crowd sensing in sensor networks.
Background
Crowd sensing uses the power of crowdsourcing to collect and share physical environment data more efficiently, and innovative services and applications for environmental monitoring and smart cities are researched and developed on top of the collected data. Crowd sensing has three characteristics: 1) low cost: citizen volunteers act as monitors, and contributing environmental data collected with their own phones saves human resources and equipment maintenance costs; 2) fast updates: random, dynamic monitoring by citizens accelerates the refresh of the data and therefore offers better timeliness; 3) data sharing: participating users are not only contributors of noise data but can also share in the collected results. When large numbers of citizens take part in environmental and social monitoring tasks for communities and even cities, combined with timely and effective countermeasures, the power of crowdsourcing can be applied to innovative applications such as environmental monitoring, safety early warning, and real-time medical mutual assistance. Despite these broad application prospects and advantages, crowd sensing still faces technical difficulties and challenges in actual deployment. First, the hardware of different smartphones differs, so the data collected by different users differ to some extent; second, users' purposes and modes of participation are uncertain, and how to design incentive policies that encourage more users to join crowd sensing applications, thereby improving the efficiency and quality of data collection, is also a problem to be solved.
Aiming at the uncertainty of the purpose, quality and measurement scene of user data collection in a mobile terminal noise map application, the invention classifies the scenes of sensor data contributed by users with a user scene recognition algorithm. Effective classification of collection scenes and identification of data quality help an administrator preprocess and filter the data, visualise only the effective data, and design appropriate user incentive policies.
To classify user acquisition scenes in crowd sensing, researchers at home and abroad have proposed a variety of recognition algorithms; the relevant literature is as follows:
1. In 2010, Nicolas et al. proposed automatic scene tagging in "Participatory noise pollution monitoring using mobile phones", using a timestamp and manual user tags to identify the scene of each single measurement. The method divides scenes into 4 coarse-grained types: geographic location (city, street), timestamp (weekday, weekend), weather, and user behaviour (stationary, moving).
2. In 2013, Rajib et al. proposed a scene mining algorithm in "Ear-Phone: a context-aware noise mapping using smart phones", using a three-axis accelerometer and the front light sensor to divide the collection position of the phone at coarse granularity: in the hand, in a bag, and in a pocket. The researchers analysed and extracted accelerometer and front light sensor features in the three scenes, matched and classified the collected accelerometer data with the kNN algorithm, and judged whether the phone was in an enclosed state by whether the front light sensor was occluded.
3. In 2015, Juezen proposed the concept of user collection scene atmospheres, namely home, work, outdoor and car scenes, in "Research and implementation of key technologies for data collection on a participatory sensing platform". The user's geographic position and motion state, timestamp, terminal network connection state, and history records are used to infer the user's collection scene, and the probability of the user being in each scene is inferred by counting Wi-Fi connections and consulting the history records.
Disclosure of Invention
The problem the invention aims to solve is: in a crowd-sensing noise map application, to judge and recognise the scene in which a user collects decibel data from the data of the mobile terminal's built-in sensors, and to attach a scene label to each measured data record, thereby inferring the scene of user data collection.
In order to achieve the purpose, the invention is realized by the following technical scheme:
A user scene identification method based on a mobile terminal noise map comprises the following steps:
Step one: determine and acquire the built-in sensor data of the mobile terminal. According to the application characteristics of the crowd-sensing noise map and the different abilities of the sensors to monitor the motion state of the mobile terminal, four types of sensors are selected and used: a linear acceleration sensor, a GPS sensor, a gyroscope sensor and a proximity sensor.
Step two: coarse-grained classification of user collection scenes. The microphone is highly sensitive to environmental changes, and experimental analysis shows that vibration, friction and the position of the microphone strongly influence the measurement results, so the method divides user acquisition scenes at coarse and fine granularity.
Step three: acquire the decision thresholds of the fine-grained scene sensors. The method uses four types of sensors for scene recognition: the linear acceleration sensor recognises the user's low-speed moving or stationary state, the gyroscope sensor recognises the rotation angle of the terminal, the proximity sensor recognises whether the terminal is physically covered or wrapped, and the GPS sensor recognises the user's high-speed or low-speed moving state.
Step four: a user collection scene inference and classification algorithm. The invention designs a scene classification and recognition process for noise decibel data collection that refines the scenes into six categories and uses the linear acceleration sensor, gyroscope sensor, proximity sensor and GPS sensor to divide the physical states of the mobile terminal into different scenes.
In step one, the method for determining and acquiring the built-in sensor data of the mobile terminal is as follows:
The four types of sensors are used as follows: the linear acceleration sensor provides the acceleration values (m/s²) along the X, Y and Z axes, excluding the acceleration of gravity; the GPS sensor provides the user's geographic position information in real time, from which the user's moving speed (m/s) is calculated; the gyroscope sensor provides the angular velocities around the X, Y and Z axes, from which the rotation angle (°) of the phone is calculated; and the proximity sensor provides the distance from an object to the phone.
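A minimal sketch of how these four sensor readings and the derived rotation angle might be represented (Python; the record layout, units, and the rectangular integration of gyroscope angular velocity into a rotation angle are illustrative assumptions, not the patent's exact representation):

```python
import math
from dataclasses import dataclass

@dataclass
class SensorRecord:
    """One sample of the four built-in sensor readings (assumed layout)."""
    acc: tuple        # linear acceleration (acc_x, acc_y, acc_z) in m/s^2, gravity excluded
    gyro: tuple       # angular velocities around the X, Y, Z axes in rad/s
    proximity: float  # distance from an object to the phone (0 means covered)
    lon: float        # GPS longitude in degrees
    lat: float        # GPS latitude in degrees

def rotation_angle_deg(gyro_samples, dt):
    """Approximate the phone's rotation angle (degrees) by integrating the
    magnitude of the angular velocity over a window of samples taken dt
    seconds apart (simple rectangular integration)."""
    total_rad = sum(math.sqrt(gx * gx + gy * gy + gz * gz) * dt
                    for gx, gy, gz in gyro_samples)
    return math.degrees(total_rad)
```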
In step two, the coarse-grained classification method for user collection scenes is as follows:
First, the coarse-grained division is made according to how strongly a scene influences the microphone decibel data, and the sensor types used in scene recognition are determined. Then, according to the characteristics of user collection scenes, the coarse-grained scenes are refined into the following six scene types:
Scene 1: the phone lies still on a vibration-free physical surface, face up, with no covering object or physical wrapping;
Scene 2: the phone is held in the user's hand, the user is not moving, and the phone is not frequently tilted or strongly shaken;
Scene 3: the phone is placed inside a stationary object that wraps it, without large-amplitude turning or vibration;
Scene 4: the phone is placed in or on a fast-moving object, in an open environment without physical wrapping or strong vibration;
Scene 5: the phone is placed in or on a slowly moving object, in an open environment without physical wrapping or strong vibration;
Scene 6: the phone is placed inside a moving, enclosed object and is wrapped;
The quality of the decibel data acquired in each scene is graded according to how much external vibration and friction affect the smartphone during collection; according to the experimental analysis, the decibel data collected while the smartphone is stationary or in the user's hand receive the higher grades.
In step three, the method for obtaining the decision thresholds of the fine-grained scene sensors is as follows:
For the four sensor scene decision thresholds, the different sensor thresholds are obtained using a gradient descent method.
First, a smartphone is used to measure several mixed groups of test data across the six scenes, and the data of the six scenes are scene-labelled semi-automatically by manual marking, in order to verify the accuracy of the later scene classification by the algorithm.
Second, the decision thresholds of the different sensors in the different scenes are calculated.
The distance S (km) between two points A(long1, lat1) and B(long2, lat2) is calculated from the GPS longitude and latitude data by the formula

S = 2 * 6378.137 * arcsin( sqrt( sin^2(a/2) + cos(lat1) * cos(lat2) * sin^2(b/2) ) )

wherein a = lat1 - lat2, b = long1 - long2, and 6378.137 (km) is the radius of the Earth; the moving speed of the user is then obtained as v = S / interval, wherein S and interval are respectively the distance and the time interval between the two points. For the linear acceleration sensing data (acc_x, acc_y, acc_z), the motion state of the user is calculated using the combined acceleration accValue and the average energy accEnergy, i.e.

accValue = sqrt(acc_x^2 + acc_y^2 + acc_z^2), accEnergy = mean(accValue^2) over the measurement window
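These computations can be sketched in runnable form as follows (Python; the haversine form is assumed from the stated a, b and Earth-radius definitions, degree inputs are converted to radians, and accEnergy is taken as the windowed mean of the squared combined acceleration; the function names are illustrative):

```python
import math

EARTH_RADIUS_KM = 6378.137  # Earth radius used in the distance formula

def distance_km(long1, lat1, long2, lat2):
    """Distance S (km) between A(long1, lat1) and B(long2, lat2), in degrees."""
    a = math.radians(lat1 - lat2)    # latitude difference
    b = math.radians(long1 - long2)  # longitude difference
    h = (math.sin(a / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(b / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def speed_mps(long1, lat1, long2, lat2, interval_s):
    """User moving speed v = S / interval, converted from km to metres."""
    return distance_km(long1, lat1, long2, lat2) * 1000.0 / interval_s

def acc_energy(samples):
    """Average energy accEnergy over a window of (acc_x, acc_y, acc_z)
    samples: the mean of the squared combined acceleration accValue."""
    squared = [x * x + y * y + z * z for x, y, z in samples]  # accValue^2
    return sum(squared) / len(squared)
```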
The walking and stationary acceleration average-energy thresholds are judged using the linear acceleration average energy. The linear acceleration threshold is calculated using a gradient descent method on the horizontal cut line y = b (min < b < max), where b represents the intercept of the cut line on the y-axis, and min and max represent respectively the minimum and maximum average energy in the data to be measured. For each y = b, the sensor scene classification error rate Δ = 1 - S_correct / S is calculated, where S represents the total number of data records of the different scenes labelled semi-manually, and S_correct represents the number of records whose scene is correctly classified when judged using the acceleration energy threshold. In the threshold calculation, b is initialised to the value max, and the intercept b is then iteratively decreased; when the error rate Δ ≤ 0.01, the iterative calculation stops, and the values of b meeting the condition are respectively the decision thresholds of the walking and stationary states. When Δ > 0.05, b decreases by 0.1 each time; when Δ ≤ 0.05, b decreases by 0.01 each time. The gyroscope measurement value is used to distinguish whether the phone is stationary or in the user's hand. The Android proximity sensor returns two kinds of values: when the value is 0, an object covers the phone; when it is not 0, there is no physical covering and the phone is in open space. Through iterative calculation over the mixed-scene data, the decision thresholds of the sensors are obtained.
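The iterative intercept search described above can be sketched as follows (Python; the labelled-sample representation, the function name and the label strings are assumptions made for illustration):

```python
def find_energy_threshold(energies, labels):
    """Lower a horizontal cut y = b from the maximum average energy until the
    scene classification error rate delta <= 0.01, stepping by 0.1 while
    delta > 0.05 and by 0.01 afterwards, as in the description above."""
    b = max(energies)     # initialise the intercept at max
    floor = min(energies)
    while b > floor:
        predicted = ["walking" if e > b else "stationary" for e in energies]
        correct = sum(p == t for p, t in zip(predicted, labels))
        delta = 1.0 - correct / len(labels)   # classification error rate
        if delta <= 0.01:
            return b      # threshold separating walking from stationary
        b -= 0.1 if delta > 0.05 else 0.01    # coarse-to-fine descent
    return b              # fallback when no cut reaches the target error rate
```

On cleanly separated data the search stops at the first intercept that classifies at least 99% of the labelled records correctly.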
The experimental results show that when the mobile terminal is absolutely stationary, the gyroscope and acceleration values are not 0, and the offset differs with the phone hardware.
In step four, the user collection scene inference and classification algorithm is as follows:
First, each sensor decision threshold is initialised. Then the user's moving speed is calculated with the GPS. If the user is in the high-speed moving state (v ≥ 4.6 m/s), whether the proximity value is 0 is judged; if it is not 0, there is no covering, and the record is marked as scene 4. If the user is in the low-speed moving state (v < 4.6 m/s), whether the average energy exceeds the walking threshold is judged; if it does, whether the proximity value is 0 is judged: if not 0, the record is marked as scene 5, and if 0, as scene 6. If the average energy does not exceed the walking threshold, whether the proximity value is 0 is judged: if 0, the record is marked as scene 3; if not, whether the average energy exceeds the stationary average-energy threshold and, at the same time, the gyroscope rotation angle exceeds the stationary gyroscope angle is judged: if not, the record is marked as scene 1, otherwise as scene 2.
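The decision flow of this step can be sketched as follows (Python; the 4.6 m/s boundary comes from the text, while the other threshold values are placeholders for the handset-specific values the method derives, and the covered high-speed branch is an assumption, since the text only states the uncovered case):

```python
# Placeholder thresholds; the method derives the real, handset-specific values.
V_HIGH = 4.6          # m/s, high-/low-speed boundary stated in the text
WALK_ENERGY = 2.0     # walking average-energy threshold (assumed value)
STILL_ENERGY = 0.05   # stationary average-energy threshold (assumed value)
STILL_GYRO = 0.02     # stationary gyroscope rotation angle (assumed value)

def classify_scene(speed, avg_energy, gyro_angle, proximity):
    """Map one record to scenes 1-6; proximity == 0 means the phone is covered."""
    covered = (proximity == 0)
    if speed >= V_HIGH:                 # high-speed movement
        return 6 if covered else 4      # covered case assumed to fall in scene 6
    if avg_energy > WALK_ENERGY:        # low speed but walking-level energy
        return 6 if covered else 5
    if covered:                         # low energy and covered
        return 3
    if avg_energy > STILL_ENERGY and gyro_angle > STILL_GYRO:
        return 2                        # in the user's hand
    return 1                            # stationary, face up, uncovered
```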
The invention has the following beneficial effects: the user scene recognition algorithm based on the mobile terminal noise map can infer, in real time, the decibel data collection scenes of the user's different behaviour modes; it can effectively analyse the quality of the data users contribute to environmental sensing; based on the inferred collection scenes, effective filtering and preprocessing of the data can markedly improve the visualisation of the noise map; and it provides reference for crowd-sensing user incentive measures, thereby encouraging more people to participate in environmental monitoring tasks for communities and even whole cities.
Drawings
Fig. 1 is a schematic view of the mobile terminal scene recognition process according to the invention.
Detailed Description
The invention will be further explained with reference to the drawings.
a user scene identification method based on a mobile terminal noise map comprises the following steps:
Step one: determine and acquire the built-in sensor data of the mobile terminal. According to the application characteristics of the crowd-sensing noise map and the different abilities of the sensors to monitor the motion state of the mobile terminal, four types of sensors are selected and used: a linear acceleration sensor, a GPS sensor, a gyroscope sensor and a proximity sensor.
Step two: coarse-grained classification of user collection scenes. The microphone is highly sensitive to environmental changes, and experimental analysis shows that vibration, friction and the position of the microphone strongly influence the measurement results, so the method divides user acquisition scenes at coarse and fine granularity.
Step three: acquire the decision thresholds of the fine-grained scene sensors. The method uses four types of sensors for scene recognition: the linear acceleration sensor recognises the user's low-speed moving or stationary state, the gyroscope sensor recognises the rotation angle of the terminal, the proximity sensor recognises whether the terminal is physically covered or wrapped, and the GPS sensor recognises the user's high-speed or low-speed moving state.
Step four: a user collection scene inference and classification algorithm. The invention designs a scene classification and recognition process for noise decibel data collection that refines the scenes into six categories and uses the linear acceleration sensor, gyroscope sensor, proximity sensor and GPS sensor to divide the physical states of the mobile terminal into different scenes.
In step one, the method for determining and acquiring the built-in sensor data of the mobile terminal is as follows:
The four types of sensors are used as follows: the linear acceleration sensor provides the acceleration values (m/s²) along the X, Y and Z axes, excluding the acceleration of gravity; the GPS sensor provides the user's geographic position information in real time, from which the user's moving speed (m/s) is calculated; the gyroscope sensor provides the angular velocities around the X, Y and Z axes, from which the rotation angle (°) of the phone is calculated; and the proximity sensor provides the distance from an object to the phone.
In step two, the coarse-grained classification method for user collection scenes is as follows:
First, the coarse-grained division is made according to how strongly a scene influences the microphone decibel data, and the sensor types used in scene recognition are determined, as shown in Table 1.
Table 1. Coarse-grained classification of user scenes
Then, according to the characteristics of user collection scenes, the coarse-grained scenes of Table 1 are refined into six scene types, as shown in Table 2.
Table 2. Fine-grained division of user collection scenes and decibel data grading criteria
The quality of the decibel data acquired in each scene is graded according to how much external vibration and friction affect the smartphone during collection; according to the experimental analysis, the decibel data collected while the smartphone is stationary or in the user's hand receive the higher grades.
In step three, the method for obtaining the decision thresholds of the fine-grained scene sensors is as follows:
For the four sensor scene decision thresholds, the different sensor thresholds are obtained using a gradient descent method.
First, a smartphone is used to measure several mixed groups of test data across the six scenes, and the data of the six scenes are scene-labelled semi-automatically by manual marking, in order to verify the accuracy of the later scene classification by the algorithm.
Second, the decision thresholds of the different sensors in the different scenes are calculated.
The distance S (km) between two points A(long1, lat1) and B(long2, lat2) is calculated from the GPS longitude and latitude data by the formula

S = 2 * 6378.137 * arcsin( sqrt( sin^2(a/2) + cos(lat1) * cos(lat2) * sin^2(b/2) ) )

wherein a = lat1 - lat2, b = long1 - long2, and 6378.137 (km) is the radius of the Earth; the moving speed of the user is then obtained as v = S / interval, wherein S and interval are respectively the distance and the time interval between the two points. For the linear acceleration sensing data (acc_x, acc_y, acc_z), the motion state of the user is calculated using the combined acceleration accValue and the average energy accEnergy, i.e.

accValue = sqrt(acc_x^2 + acc_y^2 + acc_z^2), accEnergy = mean(accValue^2) over the measurement window
In the algorithm, the walking and stationary acceleration average-energy thresholds are judged using the linear acceleration average energy. The linear acceleration threshold is calculated using a gradient descent method on the horizontal cut line y = b (min < b < max), where b represents the intercept of the cut line on the y-axis, and min and max represent respectively the minimum and maximum average energy in the data to be measured. For each y = b, the sensor scene classification error rate Δ = 1 - S_correct / S is calculated, where S represents the total number of data records of the different scenes labelled semi-manually, and S_correct represents the number of records whose scene is correctly classified when judged using the acceleration energy threshold. In the threshold calculation, b is initialised to the value max, and the intercept b is then iteratively decreased; when the error rate Δ ≤ 0.01, the iterative calculation stops, and the values of b meeting the condition are respectively the decision thresholds of the walking and stationary states. When Δ > 0.05, b decreases by 0.1 each time; when Δ ≤ 0.05, b decreases by 0.01 each time. In the algorithm of the invention, the gyroscope measurement value is used to distinguish whether the phone is stationary or in the user's hand. The Android proximity sensor returns two kinds of values: when the value is 0, an object covers the phone; when it is not 0, there is no physical covering and the phone is in open space.
Through iterative calculation over the mixed-scene data, the sensor decision thresholds obtained by the algorithm are shown in Table 3.
Table 3. Android sensor scene decision thresholds
The experimental results show that when the mobile terminal is absolutely stationary, the gyroscope and acceleration values are not 0, and the offset differs with the phone hardware.
In step four, the user collection scene inference and classification algorithm is as follows:
In the user acquisition scene recognition algorithm, the scene recognition flow is shown in Fig. 1. First, each sensor decision threshold is initialised. Then the user's moving speed is calculated with the GPS. If the user is in the high-speed moving state (v ≥ 4.6 m/s), whether the proximity value is 0 is judged; if it is not 0, there is no covering, and the record is marked as scene 4. If the user is in the low-speed moving state (v < 4.6 m/s), whether the average energy exceeds the walking threshold is judged; if it does, whether the proximity value is 0 is judged: if not 0, the record is marked as scene 5, and if 0, as scene 6. If the average energy does not exceed the walking threshold, whether the proximity value is 0 is judged: if 0, the record is marked as scene 3; if not, whether the average energy exceeds the stationary average-energy threshold and, at the same time, the gyroscope rotation angle exceeds the stationary gyroscope angle is judged: if not, the record is marked as scene 1, otherwise as scene 2.

Claims (4)

1. A user scene identification method based on a mobile terminal noise map, characterised in that it comprises the following steps:
Step one: determining and acquiring the built-in sensor data of a mobile terminal;
determining and using four types of sensors, a linear acceleration sensor, a GPS sensor, a gyroscope sensor and a proximity sensor, according to the application characteristics of the crowd-sensing noise map and the different abilities of the sensors to monitor the motion state of the mobile terminal;
Step two: coarse-grained classification of user collection scenes;
dividing the user acquisition scenes at coarse and fine granularity;
Step three: acquiring the decision thresholds of the fine-grained scene sensors;
wherein the linear acceleration sensor identifies the user's low-speed moving or stationary state, the gyroscope sensor identifies the rotation angle of the terminal, the proximity sensor identifies whether the terminal is physically covered or wrapped, and the GPS sensor identifies the user's high-speed or low-speed moving state;
the method for acquiring the decision thresholds of the fine-grained scene sensors comprises the following steps:
acquiring the decision thresholds of the different sensors for the four types of sensor scene decisions;
(4-1) measuring several mixed groups of test data across the six scenes with a smartphone, and scene-labelling the data of the six scenes semi-automatically by manual marking, in order to check the accuracy of the later scene classification by the algorithm;
(4-2) calculating and acquiring the decision thresholds of the different sensors in the different scenes;
GPS sensor:
for the GPS longitude and latitude data, the distance S (km) between the two points A(long1, lat1) and B(long2, lat2) is calculated by the formula
S = 2 * 6378.137 * arcsin( sqrt( sin^2(a/2) + cos(lat1) * cos(lat2) * sin^2(b/2) ) )
wherein a = lat1 - lat2, b = long1 - long2, and 6378.137 (km) is the radius of the Earth, thereby obtaining the moving speed of the user as v = S / interval, wherein S and interval are respectively the distance and the time interval between the two points;
Linear acceleration sensor:
for the linear acceleration sensing data (acc_x, acc_y, acc_z), the motion state of the user is calculated using the combined acceleration accValue = sqrt(acc_x^2 + acc_y^2 + acc_z^2) and the average energy accEnergy, the mean of accValue^2 over the measurement window;
judging the walking and stationary acceleration average-energy thresholds using the linear acceleration average energy, the linear acceleration threshold being calculated using a gradient descent method on the cut line
y = b (min < b < max)
wherein b represents the intercept of the horizontal cut line on the y-axis, and min and max represent respectively the minimum and maximum average energy in the data to be measured; for each y = b, the sensor scene classification error rate Δ = 1 - S_correct / S is calculated, wherein S represents the total number of data records of the different scenes labelled semi-manually, and S_correct represents the number of records whose scene is correctly classified when judged using the acceleration energy threshold; in the threshold calculation, b is initialised to the value max, and the intercept b is then iteratively decreased; when the error rate Δ ≤ 0.01, the iterative calculation stops, and the values of b meeting the condition are respectively the decision thresholds of the walking and stationary states; when Δ > 0.05, b decreases by 0.1 each time, and when Δ ≤ 0.05, b decreases by 0.01 each time;
Gyroscope sensor:
the gyroscope measurement value is used to distinguish whether the phone is in a stationary state or in the user's hand;
Proximity sensor:
for the Android proximity sensor there are two kinds of values: when the value is 0, an object covers the phone and the phone is in an enclosed state; when the value is not 0, there is no physical covering and the phone is in open space;
the sensor decision thresholds are obtained through iterative calculation over the mixed-scene data;
Step four: a user collection scene inference and classification algorithm;
adopting a scene classification and recognition process for noise decibel data collection, refining the scenes into six categories, and using the linear acceleration sensor, the gyroscope sensor, the proximity sensor and the GPS sensor to divide the physical states of the mobile terminal into different scenes.
2. The method for recognizing the user scene based on the noise map of the mobile terminal as claimed in claim 1, wherein: in the first step, the method for acquiring the built-in sensor data of the mobile terminal comprises the following steps:
a linear acceleration sensor is used to obtain the acceleration values in the three axis directions X, Y and Z, excluding the acceleration of gravity;
a GPS sensor is used to acquire the geographical position of the user in real time, from which the moving speed of the user is calculated;
a gyroscope sensor is used to obtain the angular velocities about the three axes X, Y and Z, from which the rotational angular velocity of the mobile phone is calculated;
and a proximity sensor is used to acquire the distance from an object to the mobile phone.
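For illustration, the user's moving speed in the step above can be derived from two successive GPS fixes via the haversine great-circle distance (the function names and the fix format are assumptions for this sketch, not part of the patent):

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def speed_mps(fix1, fix2):
    """Each fix is (lat_deg, lon_deg, unix_time_s); returns speed in m/s."""
    d = haversine_m(fix1[0], fix1[1], fix2[0], fix2[1])
    dt = fix2[2] - fix1[2]
    return d / dt if dt > 0 else 0.0
```

On Android the same figure is usually available directly from the location fix, so this calculation is only needed when raw coordinates are logged.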
3. The method for recognizing the user scene based on the noise map of the mobile terminal as claimed in claim 1, wherein: in the second step, the coarse-grained classification method for the user collection scene is as follows:
(3-1) firstly, coarse-grained division is carried out according to the degree to which each scene affects the microphone decibel data, and the types of sensors used in scene identification are determined;
(3-2) then, according to the characteristics of the user's collection scenes and on the basis of the coarse-grained scenes, the scenes are refined into the following six types:
scene 1: the mobile phone is placed statically on a non-vibrating physical surface, front side facing up, with no covering object and no physical wrapping;
scene 2: the mobile phone is held in the user's hand, the user is not in motion, and the phone is not frequently turned over or strongly shaken;
scene 3: the mobile phone is placed inside a static object that wraps it, with no large-amplitude turning or vibration;
scene 4: the mobile phone is placed in or on a fast-moving object, in an open environment, with no physical wrapping and no strong vibration;
scene 5: the mobile phone is placed in or on a slowly moving object, in an open environment, with no physical wrapping and no strong vibration;
scene 6: the mobile phone is placed inside a moving, enclosed object and is wrapped by it;
(3-3) the quality of the decibel data collected in each scene is graded according to the influence of external vibration and friction on the smartphone during collection.
4. The method for recognizing the user scene based on the noise map of the mobile terminal as claimed in claim 1, wherein: in the fourth step, the classification algorithm for inferring the user's collection scene is as follows:
firstly, the decision threshold of each sensor is initialized; the moving speed of the user is calculated using the GPS; if the user is in a high-speed moving state (v ≥ 4.6 m/s), it is judged whether the proximity sensor reads 0; if not, there is no obstruction, and the scene is marked as scene 4; if the user is in a low-speed moving state (v < 4.6 m/s), it is judged whether the average energy exceeds the walking threshold; if it does, it is judged whether the proximity sensor reads 0: if not, the scene is marked as scene 5, otherwise as scene 6; if the average energy does not exceed the walking threshold, it is judged whether the proximity sensor reads 0: if it does, the scene is marked as scene 3; if not, it is judged whether the average energy exceeds the static average-energy threshold and, at the same time, the gyroscope rotation angle is greater than the gyroscope angle in the static state; if not, the scene is marked as scene 1, otherwise as scene 2.
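The decision tree in this claim can be sketched as follows (a non-authoritative Python illustration: the function name, parameter names, and the −1 return for the high-speed branch the claim leaves unspecified are assumptions; the 4.6 m/s threshold and scene numbering follow the claim):

```python
V_HIGH = 4.6  # m/s, high-speed threshold from the claim

def classify_scene(speed, avg_energy, proximity, gyro_angle,
                   walk_energy_thr, static_energy_thr, static_gyro_thr):
    """Return the scene number (1..6); -1 where the claim leaves the branch open."""
    if speed >= V_HIGH:
        # high-speed movement: proximity non-zero means no obstruction -> scene 4
        return 4 if proximity != 0 else -1  # covered high-speed case not named
    if avg_energy > walk_energy_thr:
        # walking-level energy: open -> scene 5, covered -> scene 6
        return 5 if proximity != 0 else 6
    if proximity == 0:
        return 3  # stationary and covered / wrapped
    if avg_energy > static_energy_thr and gyro_angle > static_gyro_thr:
        return 2  # stationary overall but held in the user's hand
    return 1      # static on a surface, uncovered
```

The three thresholds would come from the iterative intercept calculation on labelled mixed-scene data.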
CN201610594847.6A 2016-07-26 2016-07-26 user scene identification method based on mobile terminal noise map Expired - Fee Related CN106250435B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610594847.6A CN106250435B (en) 2016-07-26 2016-07-26 user scene identification method based on mobile terminal noise map


Publications (2)

Publication Number Publication Date
CN106250435A CN106250435A (en) 2016-12-21
CN106250435B true CN106250435B (en) 2019-12-06

Family

ID=57603615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610594847.6A Expired - Fee Related CN106250435B (en) 2016-07-26 2016-07-26 user scene identification method based on mobile terminal noise map

Country Status (1)

Country Link
CN (1) CN106250435B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886643A (en) * 2017-02-20 2017-06-23 中国环境监测总站 A kind of method for drafting and drawing system of noise pollution distribution map
US10481044B2 (en) * 2017-05-18 2019-11-19 TuSimple Perception simulation for improved autonomous vehicle control
CN110020576A (en) * 2018-01-10 2019-07-16 中兴通讯股份有限公司 A kind of recognition methods, device and the computer readable storage medium of house scene
CN109302684B (en) * 2018-11-07 2020-08-11 麦片科技(深圳)有限公司 Scene determination method for terminal device, cloud server, and storage medium
CN110516023B (en) * 2019-08-26 2023-04-25 广东石油化工学院 Noise map drawing method based on mobile perception
CN111246415B (en) * 2019-12-18 2022-05-13 广州市梦享网络技术有限公司 User scene position change judgment method
CN113837512A (en) * 2020-06-23 2021-12-24 中国移动通信集团辽宁有限公司 Abnormal user identification method and device
CN114745465A (en) * 2022-03-24 2022-07-12 马斌斌 Interactive noise self-prior sensing analysis system for smart phone

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202385159U (en) * 2011-04-22 2012-08-15 喜讯无限(北京)科技有限责任公司 Multiple mobile device position matching system based on sensors and mobile positioning technology
CN102741840A (en) * 2010-02-03 2012-10-17 诺基亚公司 Method and apparatus for modelling personalized contexts
CN103458361A (en) * 2013-08-13 2013-12-18 西安乾易企业管理咨询有限公司 Scene acquisition and identification method based on mobile terminal
CN104202466A (en) * 2014-08-19 2014-12-10 厦门美图移动科技有限公司 Method of using mobile phone to carry out safety instruction when moving
CN104457751A (en) * 2014-11-19 2015-03-25 中国科学院计算技术研究所 Method and system for recognizing indoor and outdoor scenes

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5573443B2 (en) * 2010-07-14 2014-08-20 ソニー株式会社 Information processing apparatus, information processing method, and program




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191206

Termination date: 20200726