WO2012147970A1 - Position context acquisition device, computer-readable recording medium on which a position context acquisition program is recorded, and position context acquisition method
- Publication number
- WO2012147970A1 (Application PCT/JP2012/061483)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- position context
- context acquisition
- environment
- sensor
- model data
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
Definitions
- the present invention relates to a position context acquisition apparatus, a computer-readable recording medium in which a position context acquisition program is recorded, and a position context acquisition method.
- a specific example is a car navigation system that detects a position using GPS (Global Positioning System) and displays the detected position and a route to a destination on a map.
- there is also known a navigation system that has a position database storing identification information for identifying radio base stations in association with positions where radio waves can be received from those base stations, and that uses a position retrieved based on the identification information obtained from received radio waves.
- Patent Document 1 discloses an apparatus that identifies the range of a user's behavior using a sensor that detects beacon signals periodically transmitted by wireless LAN (Local Area Network) access points. This apparatus stores the characteristics of each of a plurality of beacon signals together with movement paths that pass through the positions at which those beacon signals can be received, and identifies a travel route based on the characteristics detected from the beacon signals.
- car navigation using GPS can detect a position and a route, but it could not identify in what kind of environment the detected position or a position on the route lies.
- here, the environment is described by a context relating to facilities, equipment, and the like at the position.
- the present invention has been made in view of such problems, and an object of the present invention is to provide a position context acquisition device capable of acquiring a position context that describes the environment of the position where a sensor exists, a computer-readable recording medium on which a position context acquisition program is recorded, and a position context acquisition method.
- in order to achieve the above object, a position context acquisition apparatus according to the present invention comprises: storage means for storing a plurality of model data representing features of environments in association with position contexts describing those environments; sensor information acquisition means for acquiring sensor information representing a measurement result output by a sensor that measures the environment; and position context acquisition means for identifying model data representing a feature whose similarity, which indicates how similar it is to the feature of the measurement result represented by the sensor information acquired by the sensor information acquisition means, is equal to or greater than a predetermined value, and for acquiring the position context associated with the identified model data.
- a position context acquisition program recorded on a computer-readable recording medium according to the present invention causes a computer to function as: storage means for storing a plurality of model data representing features of environments in association with position contexts describing those environments; and position context acquisition means for identifying model data representing a feature whose similarity to the feature of a measurement result output by a sensor that measures the environment is equal to or greater than a predetermined value, and for acquiring the position context associated with the identified model data.
- a position context acquisition method according to the present invention includes: a sensor information acquisition step of acquiring sensor information representing a measurement result output by a sensor that measures the environment; and a position context acquisition step of identifying, from storage means that stores a plurality of model data representing features of environments in association with position contexts describing those environments, model data whose similarity, which indicates how similar the feature it represents is to the feature of the measurement result represented by the sensor information acquired in the sensor information acquisition step, is equal to or greater than a predetermined value, and of acquiring the position context associated with the identified model data.
- according to the position context acquisition device, the computer-readable recording medium on which the position context acquisition program is recorded, and the position context acquisition method of the present invention, a position context that describes the environment of the position where a sensor exists can be acquired.
- FIG. 3 is a flowchart illustrating the position context acquisition process executed by the position context acquisition apparatus according to the first embodiment.
- A block diagram showing the position context acquisition apparatus according to the second embodiment.
- FIG. 6 is a flowchart illustrating the position context acquisition process executed by the position context acquisition apparatus according to the second embodiment.
- A diagram showing the sensor operation stop table stored by the position context acquisition device.
- A flowchart illustrating the position context acquisition process executed by the position context acquisition apparatus according to the third embodiment.
- A flowchart illustrating the position related information acquisition process executed by the position context acquisition apparatus 101 according to the third embodiment and the position related information output process executed by the position context acquisition apparatus 102.
- A diagram showing the data format of the position related information acquisition request output by the position related information acquisition process.
- the position context acquisition device 100 is connected to an external device 900, which is a server.
- the position context acquisition apparatus 100 collects sound and, based on the collected sound, identifies the facility in which the sound was generated or the equipment that emitted the sound.
- for example, the position context acquisition apparatus 100 acquires a position context such as "the environment where the position context acquisition device 100 is located is an environment in which a toilet is within a predetermined distance (including inside a toilet stall)".
- the position context acquisition device 100 uses the common name of the specified facility or equipment as the position context, and outputs information representing the position context (hereinafter referred to as position context information) together with identification information for identifying the position context acquisition device 100 to the external device 900.
- the external device 900 specifies the movement status of the user carrying the position context acquisition device 100 based on the output identification information, the position context information, and the output date and time of these pieces of information, and displays the specified movement status.
- the position context acquisition device 100 includes a sensor 110, a sensor information acquisition unit 120, a storage unit 130, a control unit 140, and a communication unit 150.
- the sensor 110 is an electrodynamic microphone (that is, a sound sensor).
- the sensor 110 converts ambient sound into an electrical signal and outputs the electrical signal (hereinafter referred to as an acoustic signal) to the sensor information acquisition unit 120.
- the sensor information acquisition unit 120 includes an A / D (Analog / Digital) converter.
- the sensor information acquisition unit 120 converts the acoustic signal output from the sensor 110 into digital data (hereinafter referred to as sensor information) by sampling and quantizing the acoustic signal at a predetermined period in accordance with a request from the control unit 140.
- the storage unit 130 is a storage device including an external storage device that is a hard disk and an internal storage device that is a RAM (Random Access Memory).
- the external storage device of the storage unit 130 stores various data such as a program executed by the control unit 140 and a position context analogy dictionary 1310 used for execution of the program.
- the internal storage device of the storage unit 130 functions as a work memory when the control unit 140 executes a program.
- the position context analogy dictionary 1310 consists of a plurality of data entries, each of which associates data representing the features of sound generated in a specific environment (hereinafter referred to as model data) with a position context that describes that specific environment.
- such specific environments include environments having specific facilities or equipment such as toilets, kitchens, bathrooms, living rooms, bedrooms, schoolyards, gymnasiums, pools, schools, roads, stations, ports, and airports.
- model data is a feature amount common to a plurality of learning data (hereinafter referred to as sample data) representing sounds collected in a specific environment, such as the "sound pressure average", "sound pressure maximum value", and "sound pressure variance" of the sample data.
- This model data is acquired as model parameters learned from sample data using a hidden Markov model (HMM: Hidden Markov Model) used in speech recognition, and is stored in the storage unit 130 in advance before shipment from the factory.
- the position context analogy dictionary 1310 shown in FIG. 2 contains data associating the model data "microwave_beep.hmm", which represents the feature amount of the button press sound of a microwave oven, with a position context explaining that the environment in which this sound often occurs is an environment in which a kitchen is within a predetermined range. The position context analogy dictionary 1310 further contains data associating the model data "toilet_flushing.hmm", which represents the feature amount of the running water sound of a toilet, with a position context explaining that the environment in which this sound often occurs is an environment in which a toilet is within a predetermined range.
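- As an illustration, the dictionary can be pictured as a list of (model data, position context) pairs. The following minimal Python sketch uses the two entries described above; the data structure is an assumption for illustration, not the patent's storage format.

```python
# Minimal sketch of the position context analogy dictionary 1310:
# each entry pairs model data (here, the name of an HMM parameter
# file learned from sample data) with the position context that
# describes the environment where the modeled sound often occurs.
POSITION_CONTEXT_ANALOGY_DICTIONARY = [
    ("microwave_beep.hmm",  "kitchen"),  # microwave button-press sound
    ("toilet_flushing.hmm", "toilet"),   # running-water sound of a toilet
]
```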
- the storage unit 130 is provided with an area (hereinafter referred to as an acoustic data storage area) for storing acoustic data that is generated from sensor information by the control unit 140 described later and compared with sample data.
- the control unit 140 includes a processor. In accordance with a program stored in the storage unit 130, the control unit 140 converts the sensor information output from the sensor information acquisition unit 120 into WAV format or RAW format data with a predetermined playback time (for example, 5 seconds; hereinafter referred to as acoustic data).
- this acoustic data is also referred to as measurement result data because it is data representing the result of the sensor 110 measuring the sound environment at the position where the position context acquisition device 100 is located.
- the control unit 140 stores the creation date and time of the measurement result data in the acoustic data storage area of the storage unit 130 in association with the measurement result data (that is, acoustic data) as the measurement date and time of the measurement result.
- control unit 140 executes a position context acquisition process for acquiring a position context using the acoustic data and the position context analogy dictionary 1310 according to the program.
- the communication unit 150 includes a communication device such as a wireless LAN card.
- the communication unit 150 communicates various types of information such as position context information with an external device in accordance with a request from the control unit 140.
- the sensor 110 converts sound into an acoustic signal and outputs the acoustic signal to the sensor information acquisition unit 120.
- the sensor information acquisition unit 120 converts the acoustic signal into sensor information that is digital data in accordance with a request from the control unit 140.
- the control unit 140 converts the sensor information into acoustic data, and stores the acoustic data and the creation date of the acoustic data in the storage unit 130 in association with each other.
- the communication unit 150 receives from the external device 900 a position context transmission command instructing it to transmit position context information acquired based on the acoustic data stored in the storage unit 130.
- when the communication unit 150 receives the position context transmission command, the control unit 140 starts executing the position context acquisition process illustrated in FIG. 3.
- control unit 140 acquires the latest acoustic data stored in the acoustic data storage area from the storage unit 130 (step S101).
- control unit 140 acquires a plurality of model data stored in the position context analogy dictionary 1310 (step S102).
- the control unit 140 extracts, from the acoustic data acquired in step S101, feature amounts such as the sound pressure average, sound pressure maximum value, and sound pressure variance of the sound represented by the acoustic data (step S103).
- the control unit 140 calculates, using a pattern recognition method based on the hidden Markov model, a similarity indicating how similar the feature amount extracted in step S103 is to the feature amount represented by each model data acquired in step S102 (step S104).
- the control unit 140 identifies the model data having the highest similarity among the calculated similarities, and acquires the position context associated with the identified model data from the position context analogy dictionary 1310 illustrated in FIG. 2 (step S105).
- for example, the control unit 140 calculates the similarity between the feature amount represented by the model data "microwave_beep.hmm" in the position context analogy dictionary 1310 and the feature amount of the acoustic data, and the similarity between the feature amount represented by the model data "toilet_flushing.hmm" and the feature amount of the acoustic data. Next, the control unit 140 determines that the similarity between the model data "microwave_beep.hmm" and the acoustic data is higher than the similarity between the model data "toilet_flushing.hmm" and the acoustic data. Thereafter, the control unit 140 acquires the position context "kitchen" associated with the model data "microwave_beep.hmm", and determines that the position context acquisition apparatus 100 is likely to be in a kitchen.
- the control unit 140 reads identification information for identifying the position context acquisition device 100 from the storage unit 130, and outputs the identification information and the position context information representing the acquired position context to the communication unit 150 (step S106). Thereafter, the control unit 140 ends the position context acquisition process. The communication unit 150 then transmits the position context information and the identification information to the external device 900.
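- The flow of steps S101 to S106 can be summarized in the following sketch. It assumes the dictionary holds pre-computed feature vectors rather than HMM parameter files, and substitutes a simple distance-based similarity for the HMM pattern recognition of step S104; both are simplifications for illustration, not the patent's implementation.

```python
import numpy as np

def extract_features(samples: np.ndarray) -> np.ndarray:
    # Step S103: feature amounts named in the text -- sound pressure
    # average, maximum value, and variance of the acoustic data.
    return np.array([samples.mean(), samples.max(), samples.var()])

def similarity(features: np.ndarray, model: np.ndarray) -> float:
    # Stand-in for the HMM-based pattern recognition of step S104:
    # negative Euclidean distance, so higher means more similar.
    return -float(np.linalg.norm(features - model))

def acquire_position_context(acoustic_data: np.ndarray, dictionary) -> str:
    feats = extract_features(acoustic_data)            # S103
    scored = [(similarity(feats, model), context)      # S104
              for model, context in dictionary]        # S102
    best_score, best_context = max(scored)             # S105
    return best_context                                # S106

# Here, dictionary entries are (feature vector, position context) pairs,
# e.g. [(np.array([0.1, 0.9, 0.2]), "kitchen"), ...]
```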
- the position context acquisition apparatus 100 collects sound and specifies a facility or equipment in which sound having characteristics similar to the characteristics of the collected sound is generated.
- the position context acquisition apparatus 100 outputs position context information explaining that the environment of the position where the position context acquisition apparatus 100 exists is an environment in which the specified facility or equipment is present. Since the position context information describes the environment of the position where the sensor 110 exists using the common name of a facility or equipment, the position and environment of the sensor 110 are conceptually easier for the user to understand than latitude, longitude, or an address.
- a microwave oven is usually installed in a kitchen.
- a button press sound of the microwave oven is a sound normally generated by using the microwave oven.
- the running water sound of a toilet is a sound normally generated by using the toilet.
- the model data of the button press sound represents a feature amount common to a plurality of sample data representing the button press sounds sampled in the environment of the kitchen.
- the model data of the running water sound of the toilet represents a feature amount common to a plurality of sample data representing the running water sounds sampled in the environment of the toilet. For these reasons, the position context acquisition apparatus 100 can acquire a position context that describes the environment of the position to which it is carried, without storing model data for every position to which it may be carried.
- since the position context acquisition device 100 acquires the position context based on the similarity calculated from the model data and the acoustic data, the position context can be acquired even if there is a difference between the feature amount represented by the sample data and the feature amount of the acoustic data. For example, even if the model data represents features common to running water sounds generated by the toilets of several manufacturers while the acoustic data represents a running water sound generated by the toilet of another manufacturer, the position context acquisition device 100 can specify that the facility or equipment generating the sound represented by the acoustic data is a toilet.
- the position context has been described as representing a common name of a facility or equipment, but is not limited thereto.
- the position context information may be information representing a sentence that describes the environment using the common name of a facility or equipment, such as "the environment where the position context acquisition device 100 is located is an environment in which a toilet is within a predetermined distance".
- the position context acquisition apparatus 100 has been described as including the sensor 110 that is a microphone. However, the present invention is not limited to this. In this modification, the position context acquisition apparatus 100 includes an infrared sensor.
- the storage unit 130 of the position context acquisition apparatus 100 stores model data representing the feature amount of the control signal output from a TV remote controller in association with a position context explaining that a TV is within a predetermined range.
- the control unit 140 calculates the degree of similarity between the infrared feature quantity received by the infrared sensor and the feature quantity represented by the model data.
- when the model data representing the feature quantity with the highest similarity to the infrared feature quantity received by the infrared sensor represents the feature quantity of the control signal output from a television remote controller, the control unit 140 acquires the position context describing an environment in which a TV is within the predetermined range.
- in the above, the position context acquisition apparatus 100 has been described as using the sound pressure average, the sound pressure maximum value, and the sound pressure variance as the audio feature amounts represented by the acoustic data.
- the position context acquisition apparatus 100 may use as audio feature values not only the sound pressure average, sound pressure maximum value, and sound pressure variance, but also any one or more of the rise time, steepness, and decay time of the sound, the envelope of the acoustic signal, the number of zero crossings, the peak frequency, the pitch, the frequency spectrum, the frequency spectrum envelope, filter bank analysis values of the frequency spectrum, MFCC (Mel-Frequency Cepstral Coefficients), MP (Matching Pursuit), and their first-order and second-order differences.
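- A few of these alternative feature values can be computed directly from the sampled waveform. The sketch below is an illustration under the assumption of a numpy sample array; it is not the patent's feature definitions.

```python
import numpy as np

def extra_audio_features(samples: np.ndarray, sample_rate: int) -> dict:
    # Illustrative versions of some feature amounts listed above.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return {
        # number of sign changes in the waveform
        "zero_crossings": int(np.sum(np.diff(np.sign(samples)) != 0)),
        # frequency bin with the largest magnitude
        "peak_frequency": float(freqs[np.argmax(spectrum)]),
        # crude envelope descriptor: maximum absolute amplitude
        "envelope_max": float(np.max(np.abs(samples))),
    }
```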
- the position context acquisition apparatus 100 may change the type of audio feature amount extracted from the acoustic data to match the feature amount type represented by the model data being compared.
- in the above, the position context acquisition apparatus 100 has been described as calculating the "similarity", which indicates how similar the audio feature amount represented by the acoustic data and the feature amount represented by the model data are, using a pattern recognition method based on the hidden Markov model.
- the position context acquisition apparatus 100 may calculate “similarity” or “likelihood” using another pattern recognition method.
- for example, the position context acquisition apparatus 100 may calculate a likelihood using a likelihood calculation method based on a statistical model such as a GMM (Gaussian Mixture Model), an SVM (Support Vector Machine), or Bayesian estimation.
- the similarity and the likelihood may be represented by a value from 0 to 1 (a numerical value including decimals) or by a value greater than 1.
- the model data is learned from a plurality of learning data (that is, sample data) representing sounds collected in a specific environment, using a method corresponding to the pattern recognition method used for calculating the similarity or likelihood; it represents a feature amount common to the sample data and is stored in the storage unit 130 in advance before shipment from the factory.
- the position context acquisition apparatus 100 has been described as acquiring the position context associated with the model data used for calculating the highest similarity.
- the position context acquisition device 100 determines that the position context cannot be acquired when all the similarities calculated for each model data are below a predetermined threshold. Thereafter, the position context acquisition apparatus 100 does not acquire the position context and returns information indicating that the position context has not been acquired to the external apparatus 900.
- the position context acquisition device 100 does not acquire a position context when all the similarities calculated for each model data are below a predetermined threshold. For this reason, when all of the model data represent features of sounds that occur in environments different from the current environment, the position context acquisition apparatus 100 can avoid acquiring a position context that erroneously describes the environment, and can therefore acquire the position context with high accuracy.
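- A sketch of this rejection rule follows; the threshold value is an assumed figure for illustration.

```python
THRESHOLD = 0.5  # predetermined threshold (an assumed value)

def acquire_with_threshold(scored):
    # scored: list of (similarity, position_context) pairs.
    # If every similarity falls below the threshold, report that the
    # position context could not be acquired instead of guessing.
    best_similarity, best_context = max(scored)
    if best_similarity < THRESHOLD:
        return None  # "position context not acquired"
    return best_context
```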
- the position context acquisition device 100 does not acquire a position context when all the similarities calculated for each model data are below a predetermined threshold.
- in this modification, when all the similarities calculated for each model data are below the predetermined threshold, the position context acquisition device 100 executes the position context acquisition process again using acoustic data other than the acoustic data used for calculating the similarities.
- as the other acoustic data, the position context acquisition device 100 uses, for example, acoustic data associated with an older creation date and time, or newly stored acoustic data. According to this configuration, even when a sound that normally occurs in the environment where the position context acquisition device 100 is present does not occur at the time the position context acquisition process is performed, the process is executed using sounds generated in the past, or is repeated until such a sound occurs. For this reason, the position context acquisition apparatus 100 can reliably acquire a position context.
- the position context acquisition apparatus 100 acquires a position context by executing the position context acquisition process once.
- in this modification, the position context acquisition device 100 executes the position context acquisition process a plurality of times at predetermined time intervals using a timer interrupt or the like.
- the position context acquisition device 100 then acquires, for example, the position context acquired with the highest similarity among the plurality of acquired position contexts as the position context that most accurately describes the environment of the position where it is located.
- alternatively, the position context acquisition device 100 may acquire the position context acquired most frequently among a plurality of position contexts acquired within a certain time as the position context that most accurately describes the environment of the position where it is located. According to these configurations, the position context acquisition apparatus 100 can suppress the influence of noise in the sensor information, accidental events, and the like, and thus acquire the position context with high accuracy.
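- A sketch of the most-frequent selection over repeated runs (illustration only):

```python
from collections import Counter

def most_frequent_context(acquired_contexts):
    # Of the position contexts acquired over repeated executions within
    # a certain time, return the one acquired most often; this suppresses
    # the influence of noise and accidental events.
    return Counter(acquired_contexts).most_common(1)[0][0]

# e.g. most_frequent_context(["kitchen", "toilet", "kitchen"]) == "kitchen"
```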
- in the above, the position context acquisition apparatus 100 has been described as calculating the similarity between the audio feature amount represented by the acoustic data and the feature amount represented by the model data in step S104 of FIG. 3 using a pattern recognition method based on the hidden Markov model. However, the present invention is not limited to this, and the position context acquisition apparatus 100 may calculate the similarity using a similarity calculation method based on distance estimation between data, such as DP (Dynamic Programming) matching or DTW (Dynamic Time Warping).
- the position context acquisition apparatus 100 may have a function of updating (that is, rewriting) the position context analogy dictionary 1310 or adding entries to it.
- for example, the position context acquisition apparatus 100 may acquire a new position context analogy dictionary (hereinafter referred to as update information) via a network.
- the position context acquisition apparatus 100 may have a data card reader and acquire update information from a data card inserted by the user into the data card reader.
- according to this configuration, even if the equipment or facilities generally installed in a specific environment change, the position context acquisition device 100 can acquire the position context based on a position context analogy dictionary that represents the features of the sounds occurring in the changed environment. For this reason, by updating the position context analogy dictionary 1310, the position context acquisition apparatus 100 can continue to acquire position contexts with high accuracy.
- the position context acquisition apparatus 100 may further include a display unit, such as a display, that displays the position context acquired by executing the position context acquisition process (hereinafter referred to as an execution result). The position context acquisition device 100 may also include an interface unit such as a USB (Universal Serial Bus) port and store the acquired position context information in the storage unit 130. In this configuration, when a cable is inserted into the interface unit, the position context acquisition device 100 transmits the position context information stored in the storage unit 130 to another device connected by the cable. According to these configurations, the user can visually check the position context on the display unit or on the other device, which improves convenience.
- the position context acquisition apparatus 100 described in the first embodiment has been described as operating in accordance with a command received from the external apparatus 900.
- in this modification, the position context acquisition device 100 is a stand-alone device that operates independently of other devices such as the external device 900.
- the position context acquisition apparatus 100 further includes a timer that outputs an interrupt signal at a predetermined time interval to the control unit 140.
- each time the interrupt signal is input from the timer, the control unit 140 executes the position context acquisition process illustrated in FIG. 3.
- the control unit 140 stores the acquired position context information and information indicating the acquisition date and time in association with each other in the storage unit 130.
- the position context acquisition device 100 may further include a display unit and, each time the position context acquisition process ends, display the acquired position context information and the information indicating the acquisition date and time in association with each other on the display unit.
- in the above, the position context acquisition apparatus 100 has been described as including the sensor 110, which is a microphone, and as storing position contexts describing environments in association with model data representing the feature amounts of the sounds generated in those environments.
- the position context acquisition apparatus 100 includes an image sensor that captures an image or video and outputs a signal representing the captured image or video.
- the position context analogy dictionary 1310 stored in the position context acquisition apparatus 100 includes a plurality of data in which a position context that describes an environment and model data that represents a feature amount of a video captured in the environment are associated with each other.
- the position context acquisition apparatus 100 includes a radio sensor that includes a radio antenna that receives radio waves and a signal processing circuit that converts the received radio waves into electrical signals and outputs the electrical signals.
- the position context analogy dictionary 1310 stored in the position context acquisition apparatus 100 has a plurality of data in which a position context that describes the environment and model data that represents a feature quantity of radio waves that can be received in the environment are associated with each other.
- the position context acquisition apparatus 100 has a composite sensor that measures any one or more of temperature, humidity, and illuminance, and outputs any one or more of a signal representing the measured temperature, a signal representing the humidity, and a signal representing the illuminance.
- the position context analogy dictionary 1310 stored in the position context acquisition apparatus 100 has a plurality of data entries in which a position context describing an environment is associated with model data representing one or more feature quantities among the temperature, humidity, and illuminance of the environment.
- the communication unit 150 of the position context acquisition apparatus 100 includes a wireless LAN card that performs wireless communication with the external apparatus 900.
- the protocol used for wireless communication may be Bluetooth (registered trademark), Zigbee (registered trademark), or another communication protocol.
- the communication unit 150 of the position context acquisition apparatus 100 may communicate with the external device 900 via Ethernet (registered trademark), via a power line (that is, PLC (Power Line Communications)), or via a USB cable.
- the position context acquisition device 100 has been described as including the sensor 110 and the sensor information acquisition unit 120.
- in this modification, the position context acquisition device 100 is a server that does not include the sensor 110 or the sensor information acquisition unit 120.
- in this configuration, another apparatus independent of the position context acquisition apparatus 100 includes the sensor 110 and the sensor information acquisition unit 120 and has an interface and a function for transmitting sensor information, and the position context acquisition apparatus 100 acquires the sensor information from that apparatus.
- in the first embodiment, the position context acquisition apparatus 100 has one sensor 110 and acquires a position context based on the features of the sound collected by the sensor 110.
- in the second embodiment, the position context is acquired based on the features of the sound collected by the sensor 111 and the features of the video captured by the sensor 112.
- differences from the first embodiment will be mainly described.
- the sensor 111 is a microphone.
- the sensor 112 is an image sensor such as a charge coupled device (CCD).
- the sensor 111 converts ambient sound into an acoustic signal, and outputs the acoustic signal to the sensor information acquisition unit 120.
- the sensor 112 converts ambient light into an electrical signal (hereinafter referred to as a video signal) and outputs the video signal to the sensor information acquisition unit 120.
- the sensor 112 may be an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) sensor.
- the sensor information acquisition unit 120 includes an A / D converter for the sensor 111 and an A / D converter for the sensor 112.
- the sensor information acquisition unit 120 converts the acoustic signal output from the sensor 111 and the video signal output from the sensor 112 into digital data in accordance with requests from the control unit 140, and outputs the digital data to the control unit 140 as sensor information.
- the storage unit 130 is provided with an acoustic data storage area for storing acoustic data, and a video data storage area for storing video data that represents video for a predetermined time and is generated by the control unit 140 based on the digital data representing video.
- as shown in FIG. 5, the position context analogy dictionary 1310 is data generated by associating model data representing features of a specific environment and a position context describing that environment with the type of the sensor that measures the environment (hereinafter referred to as the sensor type).
- for example, in the position context analogy dictionary 1310, the sensor type information "sound", indicating that the sensor type is a microphone, is associated with the model data "microwave_beep.hmm", which represents the feature amount of the button press sound collected by the microphone, and with position context information representing "kitchen". Likewise, the sensor type information "sound", the model data "toilet_flushing.hmm", and position context information representing "toilet" are associated with each other.
- further, in the position context analogy dictionary 1310, the sensor type information "video", indicating that the sensor type is an image sensor, is associated with the model data "microwave_image.hmm", which represents feature values of video of a microwave oven, and with position context information representing "kitchen". The sensor type information "video" is also associated with the model data "stove_image.hmm", which represents the feature amount of video of a gas stove, and position context information representing "kitchen", as well as with the model data "toilet_image.hmm", which represents the feature amount of video of a toilet, and position context information representing "toilet".
- other configurations of the position context acquisition apparatus 100 are the same as those in the first embodiment, and their description is therefore omitted. The operation of the position context acquisition apparatus 100 in the present embodiment is described below.
- the sensor 111 and the sensor 112 convert surrounding sound and light into an acoustic signal and a video signal, respectively, and output them to the sensor information acquisition unit 120.
- the sensor information acquisition unit 120 converts the audio signal and the video signal into sensor information that is digital data in accordance with a request from the control unit 140.
- the control unit 140 converts the sensor information into audio data and video data, and stores them in the audio data storage area and the video data storage area of the storage unit 130, respectively.
- the control unit 140 starts execution of the position context acquisition process illustrated in FIG. 6.
- first, the control unit 140 acquires the latest acoustic data from the storage unit 130 by executing the same process as step S101 of the first embodiment illustrated in FIG. 3.
- next, the control unit 140 acquires from the position context analogy dictionary 1310 a plurality of model data associated with "sound", which represents the sensor type of the sensor 111 (step S201).
- next, the control unit 140 executes the same processing as steps S103 and S104 shown in FIG. 3. As a result, the control unit 140 extracts the feature amount of the sound represented by the acoustic data and calculates the similarity between that feature amount and the feature amount represented by each of the plurality of model data read in step S201.
- next, the control unit 140 acquires the latest video data from the video data storage area of the storage unit 130 (step S202). Then, the control unit 140 acquires from the position context analogy dictionary 1310 a plurality of model data associated with "video", which represents the sensor type of the sensor 112 (step S203). Subsequently, the control unit 140 extracts the feature amount of the video represented by the video data from the video data (step S204).
- as the feature amount of the video, the control unit 140 calculates any one or more of the luminance average, luminance variance, and luminance distribution of the still images constituting the video, the average values and distributions of the RGB color components, and the first-order and second-order differences in the main scanning direction and the sub-scanning direction.
- control unit 140 calculates the similarity between the feature amount represented by each model data acquired in step S203 and the feature amount of the video data by a pattern recognition method using a hidden Markov model (step S205).
- next, the control unit 140 identifies, for each position context, combinations of model data that are associated with the same position context but with different sensor types (step S206a). Specifically, referring to the position context analogy dictionary 1310 illustrated in FIG. 5, the control unit 140 identifies combination 1 of the model data "microwave_beep.hmm" and the model data "microwave_image.hmm", both associated with the position context representing "kitchen". The control unit 140 also identifies combination 2 of the model data "microwave_beep.hmm" and the model data "stove_image.hmm", likewise associated with the position context representing "kitchen". Further, the control unit 140 identifies combination 3 of the model data "toilet_flushing.hmm" and the model data "toilet_image.hmm", both associated with the position context representing "toilet".
- next, for each identified combination, the control unit 140 calculates the sum of the two similarities calculated using the two model data constituting the combination (step S206b). Specifically, for combination 1, the control unit 140 calculates the sum of the similarity calculated using the model data "microwave_beep.hmm" and the similarity calculated using the model data "microwave_image.hmm" (hereinafter referred to as the similarity sum of combination 1). For combination 2, the control unit 140 calculates the sum of the similarity calculated using the model data "microwave_beep.hmm" and the similarity calculated using the model data "stove_image.hmm" (hereinafter, the similarity sum of combination 2). For combination 3, the control unit 140 calculates the sum of the similarity calculated using the model data "toilet_flushing.hmm" and the similarity calculated using the model data "toilet_image.hmm".
- next, the control unit 140 sets, for each position context, the maximum value of the calculated sums as a new similarity (step S206c). Specifically, since combination 1 and combination 2 are both combinations of model data associated with the position context representing "kitchen", the control unit 140 sets the larger of the similarity sums of combination 1 and combination 2 as the new similarity for "kitchen".
- next, the control unit 140 acquires the position context having the highest new similarity (step S207). Thereafter, by performing the same processing as step S106 illustrated in FIG. 3, the control unit 140 outputs the information representing the position context and the identification information of the position context acquisition device 100 to the communication unit 150, and then ends the position context acquisition process.
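- The combination logic of steps S206a to S207 can be sketched as follows; the data layout is an assumption for illustration, not the patent's internal representation.

```python
from collections import defaultdict
from itertools import product

def combine_and_select(scored):
    # scored: list of (sensor_type, position_context, similarity), e.g.
    # [("sound", "kitchen", 0.8), ("video", "kitchen", 0.6), ...].
    sound = defaultdict(list)
    video = defaultdict(list)
    for sensor_type, context, sim in scored:
        (sound if sensor_type == "sound" else video)[context].append(sim)
    new_similarity = {}
    for context in sound.keys() & video.keys():
        # S206a/S206b: every (sound model, video model) combination that
        # shares the position context, and the sum of its two similarities.
        sums = [s + v for s, v in product(sound[context], video[context])]
        # S206c: the maximum sum becomes the context's new similarity.
        new_similarity[context] = max(sums)
    # S207: acquire the position context with the highest new similarity.
    return max(new_similarity, key=new_similarity.get)
```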
- the communication unit 150 transmits the output information to the external device 900, and the external device 900 displays the received information.
- the position context acquisition device 100 acquires a position context based on sensor information from a plurality of sensors, so that the position context can be acquired with high accuracy.
- in this modification, the position context acquisition apparatus 100 stores a sensor operation stop table.
- the sensor operation stop table stores a plurality of data in which a position context and a sensor type of a sensor to be stopped in an environment represented by the position context are associated with each other.
- after executing the position context acquisition process, the position context acquisition apparatus 100 determines whether the position context acquired in step S207 of FIG. 6 matches any of the position contexts stored in the sensor operation stop table. If it determines that the acquired position context matches one of the stored position contexts, the position context acquisition apparatus 100 acquires the sensor type associated with that position context. Thereafter, the position context acquisition device 100 stops all sensors of the acquired sensor type.
- for example, the sensor operation stop table associates position context information representing the bathroom, the changing room, and the toilet with the sensor type "video", whose operation is to be stopped in the bathroom, the changing room, and the toilet.
- in this case, when the position context acquisition apparatus 100 determines that the current environment is an environment in which a bathroom, changing room, or toilet is within a predetermined range (including inside the bathroom, changing room, or toilet), it stops the operation of all sensors of the sensor type "video".
- according to this configuration, the position context acquisition device 100 can avoid acquiring unnecessary sensor information and can suppress power consumption. In addition, it can prevent infringement of the privacy of other people who are in the same environment as the sensor 111 or the sensor 112.
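- A sketch of the sensor operation stop table and the lookup described above; the table contents follow the example in this modification, while the Python structure is an assumption for illustration.

```python
# Position context -> sensor types whose operation is stopped there.
SENSOR_OPERATION_STOP_TABLE = {
    "bathroom":      {"video"},
    "changing room": {"video"},
    "toilet":        {"video"},
}

def sensor_types_to_stop(acquired_context: str) -> set:
    # If the acquired position context matches an entry in the table,
    # return the associated sensor types to stop; otherwise stop none.
    return SENSOR_OPERATION_STOP_TABLE.get(acquired_context, set())
```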
- the storage unit 130 stores a predetermined weight “0.3” for the similarity calculated based on the acoustic data and the model data (hereinafter referred to as acoustic similarity).
- the storage unit 130 stores a weight “0.7” that is predetermined for the similarity calculated based on the video data and the model data (hereinafter referred to as video similarity).
- a case will be described as an example in which the control unit 140 calculates the acoustic similarity "A" based on the acoustic data and the model data "microwave_beep.hmm", and the video similarity "B" based on the video data and the model data "microwave_image.hmm". In this case, the control unit 140 reads the acoustic similarity weight "0.3" and the video similarity weight "0.7" from the storage unit 130, and uses the read weights to calculate the weighted average "0.3 × A + 0.7 × B" of the acoustic similarity "A" and the video similarity "B". Thereafter, the control unit 140 sets the calculated weighted average as the new similarity.
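- A sketch of this weighted average; the weights 0.3 and 0.7 are the values stored in the storage unit 130 in this example.

```python
ACOUSTIC_WEIGHT = 0.3  # weight stored for the acoustic similarity
VIDEO_WEIGHT = 0.7     # weight stored for the video similarity

def weighted_similarity(acoustic_sim: float, video_sim: float) -> float:
    # New similarity as the weighted average 0.3 * A + 0.7 * B.
    return ACOUSTIC_WEIGHT * acoustic_sim + VIDEO_WEIGHT * video_sim
```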
- control unit 140 of the position context acquisition apparatus 100 calculates the sum of the acoustic similarity and the video similarity as a new similarity.
- the control unit 140 may calculate a new similarity based on another calculation formula or statistical method using the acoustic similarity and the video similarity.
- for example, the control unit 140 may use the product of the acoustic similarity and the video similarity as the new similarity, or may use a weighted product of the acoustic similarity and the video similarity as the new similarity.
- in the above, the control unit 140 of the position context acquisition apparatus 100 has been described as calculating the sum of the acoustic similarity and the video similarity as a new similarity and acquiring the position context corresponding to the model data used for calculating the highest new similarity. In the present modification, the control unit 140 acquires the position context corresponding to the model data used for calculating the highest similarity among the calculated plurality of acoustic similarities and plurality of video similarities.
- specifically, the control unit 140 calculates the acoustic similarity with the acoustic data for each of the plurality of model data associated with the sensor type "sound", and determines whether any one or more of the calculated acoustic similarities exceed a predetermined threshold. If it determines that one or more of the acoustic similarities exceed the threshold, the control unit 140 acquires the position context corresponding to the model data used for calculating the highest acoustic similarity.
- similarly, the control unit 140 calculates the video similarity with the video data for each of the plurality of model data associated with the sensor type "video", and determines whether any one or more of the calculated video similarities exceed a predetermined threshold. If it determines that one or more of the video similarities exceed the threshold, the control unit 140 acquires the position context corresponding to the model data used for calculating the highest video similarity. On the other hand, if it determines that all the video similarities are equal to or less than the predetermined threshold, the control unit 140 determines that the position context cannot be acquired and does not acquire it.
- note that when the control unit 140 determines that any one or more of the plurality of acoustic similarities exceed the predetermined threshold, it may acquire the position context corresponding to the model data used for calculating the highest acoustic similarity without calculating the video similarities.
- similarly, when the control unit 140 determines that one or more of the plurality of video similarities exceed the predetermined threshold, it may acquire the position context corresponding to the model data used for calculating the highest video similarity without calculating the acoustic similarities.
- in the above, the control unit 140 has been described as calculating the similarity between the plurality of audio feature amounts represented by the acoustic data and the plurality of feature amounts represented by the model data, and the similarity between the plurality of video feature amounts represented by the video data and the plurality of feature amounts represented by the model data.
- in the present modification, the control unit 140 regards a vector whose elements are the plurality of audio feature amounts represented by the acoustic data (hereinafter referred to as an acoustic feature vector) and a vector whose elements are the plurality of video feature amounts represented by the video data (hereinafter referred to as a video feature vector) as a single feature vector, and calculates the similarity.
- a case will be described as an example in which the acoustic feature vector "VA" is an a-dimensional vector with elements (m1, m2, ..., ma), and the video feature vector "VB" is a b-dimensional vector with elements (n1, n2, ..., nb).
- in this case, the control unit 140 regards the acoustic feature vector "VA" and the video feature vector "VB" as a single feature vector and generates the (a+b)-dimensional vector "VAB" = (m1, m2, ..., ma, n1, n2, ..., nb).
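- A sketch of this concatenation, assuming numpy arrays for illustration:

```python
import numpy as np

def concatenate_feature_vectors(va: np.ndarray, vb: np.ndarray) -> np.ndarray:
    # VA = (m1, ..., ma) and VB = (n1, ..., nb) are regarded as one
    # feature vector VAB = (m1, ..., ma, n1, ..., nb) of dimension a + b.
    return np.concatenate([va, vb])
```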
- also, whereas the position context analogy dictionary 1310 described above stores the model data "microwave_beep.hmm" and the model data "microwave_image.hmm", which are associated with the position context representing "kitchen" but with different sensor types, the position context analogy dictionary 1310 of this modification associates with the position context representing "kitchen" new model data that represents both the plurality of feature amounts represented by "microwave_beep.hmm" and the plurality of feature amounts represented by "microwave_image.hmm". The control unit 140 then calculates the similarity using the new feature vector and the new model data.
- in the above, the control unit 140 of the position context acquisition apparatus 100 has been described as first processing the acoustic data and one or more of the model data associated with the sensor type information "sound" (steps S101, S201, S103, and S104), and then processing the video data and one or more of the model data associated with the sensor type information "video" (steps S202, S203, S204, and S205).
- however, the control unit 140 of the position context acquisition apparatus 100 may first execute steps S202, S203, S204, and S205, and then execute steps S101, S201, S103, and S104. That is, the control unit 140 may first process the video data and one or more of the model data associated with the sensor type information "video", and then process the acoustic data and one or more of the model data associated with the sensor type information "sound".
- alternatively, the control unit 140 of the position context acquisition apparatus 100 may execute steps S202, S203, S204, and S205 and steps S101, S201, S103, and S104 shown in FIG. 6 in parallel. That is, the control unit 140 may process the video data and the model data associated with the sensor type information "video" in parallel with the acoustic data and the model data associated with the sensor type information "sound", using multiple threads or multiple processes.
- the position context is acquired by one position context acquisition apparatus 100.
- in the third embodiment, the position context acquisition apparatuses 101 and 102 cooperate with each other to acquire a position context that describes the position of the position context acquisition apparatus 101 using the position of the position context acquisition apparatus 102.
- differences from the first embodiment will be mainly described.
- the two position context acquisition apparatuses 101 and 102 shown in FIG. 8 are normally carried by two users separated by a distance that can be covered on foot, and together constitute a position context acquisition system 200.
- the position context acquisition apparatus 101 includes a sensor 111, a sensor information acquisition unit 121, a storage unit 131, a control unit 141, and a communication unit 151.
- the position context acquisition device 102 includes a sensor 112, a sensor information acquisition unit 122, a storage unit 132, a control unit 142, and a communication unit 152.
- the storage unit 131 constituting the position context acquisition apparatus 101 stores a position context analogy dictionary 1311.
- the position context analogy dictionary 1311 includes a plurality of pieces of data in which model data representing the feature amount of a sound generated by the movement of a person, animal, or machine is associated with a position context that describes the environment in which the sound is generated.
- for example, the position context analogy dictionary 1311 includes data associating the model data "slippers.hmm", which represents the feature amount of the sound generated when a person wearing slippers walks, with the position context "an environment in which a person is walking in slippers", which describes the environment in which the sound is generated.
- the storage unit 132 configuring the position context acquisition apparatus 102 stores a position context analogy dictionary 1312 having data similar to the position context analogy dictionary 1311.
- in addition to the position context acquisition process for acquiring a position context, the control unit 141 constituting the position context acquisition device 101 executes a position related information acquisition process for acquiring the position related information that the communication unit 151 receives from the position context acquisition device 102.
- the control unit 142 constituting the position context acquisition apparatus 102 executes a position related information output process that outputs position related information to the communication unit 152.
- the communication unit 152 transmits the position related information output from the control unit 142 to the position context acquisition apparatus 101.
- the position related information is information used for acquiring the position context, or the position context information itself; it includes sensor information, acoustic data generated from the sensor information, information representing feature amounts extracted from the acoustic data, and position context information.
- other configurations of the position context acquisition devices 101 and 102 are the same as the configuration of the position context acquisition device 100 according to the first embodiment, and their description is omitted.
- the sensors 111 and 112 convert the sound around the position context acquisition device 101 and the sound around the position context acquisition device 102, respectively, into acoustic signals, and output the acoustic signals to the sensor information acquisition units 121 and 122, respectively.
- the sensor information acquisition units 121 and 122 convert acoustic signals into sensor information in accordance with requests from the control units 141 and 142, respectively.
- the control units 141 and 142 acquire sensor information from the sensor information acquisition units 121 and 122, respectively, convert the acquired sensor information into acoustic data, and sequentially store the acoustic data and the creation date and time of the acoustic data in the storage units 131 and 132, respectively.
- the control unit 141 of the position context acquisition apparatus 101 starts executing the position context acquisition process.
- the position context acquisition process executed by the control unit 141 will be described using an example in which the control unit 141 acquires acoustic data as position related information.
- By executing the same processing as steps S101 to S104 in FIG. 3, the control unit 141 calculates the similarity between the feature amount of the sound represented by the latest acoustic data and the feature amount represented by each model data.
- The control unit 141 identifies the model data used for calculating the highest of the calculated similarities (hereinafter referred to as first model data) (step S301). Subsequently, the control unit 141 stores the first model data, the acoustic data used for calculating that highest similarity (hereinafter referred to as first acoustic data), and its creation date and time in the storage unit 131 (step S302).
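- Step S305 below names a pattern recognition method using a hidden Markov model; under that assumption, the identification of the first model data in step S301 might look like the following sketch, which uses the hmmlearn library's GaussianHMM and synthetic feature sequences as stand-ins for real stored models such as "slippers.hmm".

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def train_dummy_model(seed: int) -> hmm.GaussianHMM:
    # Stand-in for a stored model such as "slippers.hmm": a small
    # Gaussian HMM fitted on synthetic 13-dimensional feature vectors.
    rng = np.random.default_rng(seed)
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                            random_state=seed)
    model.fit(rng.normal(size=(200, 13)))
    return model

def identify_first_model(features: np.ndarray, models: dict) -> tuple:
    # Score the feature sequence extracted from the latest acoustic data
    # against every model and keep the one with the highest similarity
    # (here: log-likelihood), i.e., the first model data of step S301.
    scores = {name: m.score(features) for name, m in models.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

models = {"slippers.hmm": train_dummy_model(0), "door.hmm": train_dummy_model(1)}
observed = np.random.default_rng(2).normal(size=(50, 13))  # stand-in features
print(identify_first_model(observed, models))
```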
- Next, the control unit 141 of the position context acquisition device 101 executes the position related information acquisition process shown in FIG. Thereby, the control unit 141 acquires, from the other position context acquisition device 102, acoustic data created within a predetermined time of the creation date and time of the first acoustic data (hereinafter referred to as the first creation date and time), together with the creation date and time of that acoustic data.
- Specifically, the control unit 141 generates a position related information acquisition request requesting the position related information created by the position context acquisition device 102 between a date and time before the first creation date and time stored in step S302 and a date and time after it.
- The control unit 141 outputs the generated position related information acquisition request to the communication unit 151 (step S401). Thereafter, the communication unit 151 transmits the position related information acquisition request to the position context acquisition device 102.
- When receiving the position related information acquisition request, the position context acquisition device 102 acquires from the storage unit 132 a plurality of acoustic data items created between a date and time before the first creation date and time and a date and time after it, together with their creation dates and times. Thereafter, the position context acquisition device 102 generates a position related information acquisition response including the acquired acoustic data items and creation dates and times, and transmits the generated response to the position context acquisition device 101.
- After step S401, the control unit 141 of the position context acquisition device 101 determines whether the communication unit 151 has received a position related information acquisition response from the position context acquisition device 102 (step S402). If the control unit 141 determines that the communication unit 151 has not received the response (step S402; No), it repeats the process of step S402 after a predetermined time has elapsed.
- When determining that the communication unit 151 has received the position related information acquisition response (step S402; Yes), the control unit 141 acquires the response from the communication unit 151. Next, the control unit 141 stores the position related information included in the response and its creation date and time in the storage unit 131 (step S403), and then ends the position related information acquisition process.
- Next, the control unit 141 of the position context acquisition device 101 acquires from the storage unit 131 the plurality of acoustic data items acquired from the position context acquisition device 102 by the position related information acquisition process (hereinafter referred to as acquired acoustic data) and their creation dates and times (step S303).
- The control unit 141 extracts a feature amount from each of the plurality of acquired acoustic data items (step S304). Thereafter, the control unit 141 acquires the first model data stored in step S301 from the storage unit 131. Next, the control unit 141 calculates, for each acquired acoustic data item, the similarity to the first model data by a pattern recognition method using a hidden Markov model (step S305).
- The control unit 141 specifies, among the acquired acoustic data with the highest similarity to the first model data, the data created at the date and time closest to the first creation date and time (hereinafter referred to as second acoustic data) (step S306a).
- The control unit 141 acquires the position context associated with the first model data from the position context analogy dictionary 1311 illustrated in FIG. 9 (step S306b).
- The control unit 141 calculates the time difference between the first creation date and time, which is the creation date and time of the first acoustic data, and the creation date and time of the second acoustic data (hereinafter referred to as the second creation date and time) (step S306c).
- The control unit 141 corrects the position context acquired in step S306b into a context that explains in more detail, using the calculated time difference, the environment of the position where the position context acquisition device 101 is located (step S306d). Alternatively, the control unit 141 may add such a more detailed context to the position context acquired in step S306b.
- The case where the first model data identified in step S301 is the model data "slippers.hmm", representing the feature amount of the sound generated when a person wearing slippers walks, will be described as an example.
- In step S306b, the control unit 141 acquires, from the position context analogy dictionary 1311 shown in FIG. 9, the position context "environment where a person walks in slippers" associated with the model data "slippers.hmm".
- In step S306c, assume that the control unit 141 calculates a time difference of "1 second" between the first creation date and time saved in the storage unit 131 in step S302 and the creation date and time of the second acoustic data identified in step S306a as having the highest similarity.
- In step S306d, the control unit 141 corrects the position context "environment where a person walks in slippers" to "environment where a person walks, at a distance of 1 second of walking from the position of the position context acquisition device 102".
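- A sketch of the correction in steps S306c and S306d, assuming the creation dates and times are held as datetime values and the corrected wording follows the example above (the function name is illustrative):

```python
from datetime import datetime

def correct_context_by_time_difference(context: str,
                                       first_created: datetime,
                                       second_created: datetime,
                                       other_device: str) -> str:
    # Step S306c: time difference between the first and second creation
    # dates and times. Step S306d: refine the position context with it.
    delta = abs((second_created - first_created).total_seconds())
    return (f"{context}, at a distance of {delta:.0f} second(s) of walking "
            f"from the position of {other_device}")

ctx = correct_context_by_time_difference(
    "environment where a person walks in slippers",
    datetime(2011, 4, 28, 12, 0, 0),
    datetime(2011, 4, 28, 12, 0, 1),
    "position context acquisition device 102")
print(ctx)  # ... at a distance of 1 second(s) of walking ...
```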
- After step S306d, the position context acquisition device 101 outputs position context information representing the corrected position context to the communication unit 151 (step S106), and then ends the position context acquisition process. The communication unit 151 transmits the position context information to the external device 200.
- When receiving the position related information acquisition request transmitted from the position context acquisition device 101 in step S401, the position context acquisition device 102 starts executing the position related information output process shown in FIG. This process transmits acoustic data as position related information to the position context acquisition device 101 in accordance with the position related information acquisition request.
- The control unit 142 of the position context acquisition device 102 acquires the received position related information acquisition request from the communication unit 152 and analyzes it (step S411).
- The position related information acquisition request includes transmission source device information identifying the device that transmitted the request, represented, for example, by the IP (Internet Protocol) address of the position context acquisition device 101. The request further includes destination device information identifying the destination device of the request, represented by the IP address of the position context acquisition device 102. Furthermore, the request includes position related information type information indicating the type of position related information whose transmission is requested; here, it indicates that the requested type is acoustic data.
- the position related information acquisition request further includes first creation date and time information representing the first creation date and time described above.
- The position related information type information may further specify a sensor type as the type of position related information to be requested.
- The control unit 142 specifies, based on the position related information type information included in the position related information acquisition request, that the position related information requested for transmission is acoustic data. Next, the control unit 142 specifies the first creation date and time represented by the first creation date and time information included in the request.
- Next, the control unit 142 acquires from the storage unit 132 a plurality of acoustic data items created between a predetermined time (for example, 1 minute) before the first creation date and time and a predetermined time after it, together with their creation dates and times (step S412).
- The control unit 142 creates a position related information acquisition response according to the data format shown in FIG. 13, and stores the acquired plurality of acoustic data items and their creation dates and times in the response (step S413a).
- The control unit 142 outputs the position related information acquisition response to the communication unit 152 (step S413b), and the communication unit 152 transmits the response to the position context acquisition device 101. Thereafter, the control unit 142 ends the position related information output process.
- The transmitted position related information acquisition response includes transmission source device information represented by the IP address of the position context acquisition device 102, destination device information represented by the IP address of the position context acquisition device 101, and notification information consisting of the acquired plurality of acoustic data items and their creation dates and times.
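- Gathering the fields enumerated above, the request and response might be serialized as in the following sketch. JSON and the exact field names are assumptions; the patent specifies only the information items and refers to the data format of FIG. 13.

```python
import json
from datetime import datetime

def build_acquisition_request(src_ip: str, dst_ip: str,
                              first_created: datetime) -> str:
    # Request: source/destination device information (IP addresses), the
    # type of position related information wanted, and the first creation
    # date and time around which data is requested.
    return json.dumps({
        "source_device": src_ip,
        "destination_device": dst_ip,
        "related_info_type": "acoustic_data",
        "first_creation_datetime": first_created.isoformat(),
    })

def build_acquisition_response(src_ip: str, dst_ip: str, records) -> str:
    # Response: the notification information, i.e., acoustic data items
    # created around the first creation date and time, each paired with
    # its creation date and time (data assumed base64-encoded here).
    return json.dumps({
        "source_device": src_ip,
        "destination_device": dst_ip,
        "records": [
            {"created_at": created.isoformat(), "acoustic_data_b64": data}
            for created, data in records
        ],
    })

req = build_acquisition_request("192.0.2.101", "192.0.2.102",
                                datetime(2011, 4, 28, 12, 0, 0))
```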
- In this manner, the position context acquisition system 200 can acquire a position context that describes the environment of the position of the position context acquisition device 101 using position related information from the position context acquisition device 102.
- As described above, the storage unit 131 stores the position context analogy dictionary 1311, in which model data representing the features of a sound generated by the action of a person or the like is associated with a position context that describes the environment in which the sound is generated.
- The control unit 141 of the position context acquisition device 101 acquires the position context associated with the first model data in step S306b of FIG. 10.
- In step S306c, the control unit 141 calculates the time difference between the creation date and time of the first acoustic data and that of the second acoustic data; in step S306d, as described above, it corrects the position context describing the environment of the position of the position context acquisition device 101 using the calculated time difference.
- In this modification, the position context acquisition device 101 corrects the position context describing the environment at its position based on the magnitude relationship between the intensity of the acoustic signal used to create the first acoustic data and the intensity of the acoustic signal used to create the second acoustic data. Hereinafter, differences from the third embodiment will be mainly described.
- The storage unit 131 stores the position context analogy dictionary 1311, in which model data representing the feature amount of a sound generated by equipment or the like installed in a specific environment is associated with a position context describing the environment in which the sound is generated.
- The control unit 141 of the position context acquisition device 101 acquires the position context associated with the first model data in step S306b of FIG. 10.
- In step S400, the control unit 141 acquires from the position context acquisition device 102 the acoustic data, the sensor information used to generate the acoustic data, and the creation date and time of the acoustic data. Thereafter, in step S306c, the control unit 141 calculates the time difference between the creation date and time of the first acoustic data and that of the second acoustic data, and then determines whether the time difference is "0".
- If the control unit 141 determines that the time difference is not "0", it ends the position context acquisition process after executing the processes of steps S306d and S106. If the control unit 141 determines that the time difference is "0", it compares the sensor information used for creating the first acoustic data (hereinafter referred to as first sensor information) with the sensor information used for creating the second acoustic data (hereinafter referred to as second sensor information), as follows.
- Specifically, the control unit 141 determines whether the intensity of the acoustic signal represented by the first sensor information (hereinafter referred to as the first acoustic signal) is greater than the intensity of the acoustic signal represented by the second sensor information (hereinafter referred to as the second acoustic signal).
- If the control unit 141 determines that the intensity of the first acoustic signal is greater than that of the second acoustic signal, it determines that the position context acquisition device 101 is closer to the sound source represented by these acoustic signals than the position context acquisition device 102. If the control unit 141 determines that the intensity of the first acoustic signal is smaller than that of the second acoustic signal, it determines that the position context acquisition device 101 is farther from the sound source than the position context acquisition device 102. If the control unit 141 determines that the intensity of the first acoustic signal is the same as that of the second acoustic signal, it determines that the position context acquisition devices 101 and 102 are at the same distance from the sound source.
- The control unit 141 corrects the position context acquired in step S306b so as to add a description of the determination result (step S306d), and then ends the position context acquisition process after executing step S106.
- As an example, consider a case where, in step S306b, the control unit 141 acquires a position context describing the environment at the position of the position context acquisition device 101 as an environment where a toilet is within a predetermined range.
- If the control unit 141 determines that the intensity of the first acoustic signal is greater than that of the second acoustic signal, it corrects the position context to one explaining that the position context acquisition device 101 is closer to the toilet than the position context acquisition device 102. If it determines that the intensity of the first acoustic signal is smaller, it corrects the position context to one explaining that the device is farther from the toilet than the position context acquisition device 102. If it determines that the intensities are the same, it corrects the position context to one explaining that the device is as far from the toilet as the position context acquisition device 102.
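- A sketch of the intensity comparison, using the RMS amplitude of the sampled signal as one plausible measure of acoustic signal intensity (the patent does not fix a particular measure):

```python
import numpy as np

def compare_distance_by_intensity(first_signal: np.ndarray,
                                  second_signal: np.ndarray) -> str:
    # Louder at device 101 than at device 102 implies device 101 is
    # nearer to the common sound source, and vice versa; equal intensity
    # implies (roughly) equal distance.
    rms1 = np.sqrt(np.mean(first_signal.astype(float) ** 2))
    rms2 = np.sqrt(np.mean(second_signal.astype(float) ** 2))
    if np.isclose(rms1, rms2):
        return "as far from the source as device 102"
    return ("nearer to the source than device 102" if rms1 > rms2
            else "farther from the source than device 102")
```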
- the position context acquisition system 200 includes position context acquisition apparatuses 101, 102, and 103 connected to each other.
- the control unit 141 of the position context acquisition apparatus 101 estimates the distance from the sound source of the facility or equipment to the position context acquisition apparatus 101 based on the intensity of the first acoustic signal.
- The control unit 141 specifies the second sensor information, that is, the sensor information used to create the acoustic data having the highest similarity to the first model data (the second acoustic data) among the plurality of acoustic data items acquired from the position context acquisition device 102.
- the control unit 141 identifies the intensity of the second acoustic signal used for creating the second sensor information, and estimates the distance from the sound source to the position context acquisition device 102.
- Similarly, the control unit 141 estimates the distance from the sound source to the position context acquisition device 103.
- The control unit 141 then specifies the position of the sound source by three-point positioning, and corrects the position context acquired in step S306b so as to add a description of the specified position.
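- The three-point positioning step can be sketched as 2D trilateration: given the positions of the devices 101, 102, and 103 (assumed known here) and the three distances estimated from signal intensity, the sound source position is the least-squares solution of the linearized circle equations. All coordinates and distances below are hypothetical.

```python
import numpy as np

def trilaterate(positions: np.ndarray, distances: np.ndarray) -> np.ndarray:
    # positions: (3, 2) array of device coordinates; distances: (3,)
    # estimated device-to-source distances. Subtracting the first circle
    # equation from the other two yields a linear system A @ p = b.
    (x1, y1), (x2, y2), (x3, y3) = positions
    d1, d2, d3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.lstsq(A, b, rcond=None)[0]

devices = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # devices 101-103
source = trilaterate(devices, np.array([2.5, 2.5, 2.0]))
print(source)  # estimated sound-source coordinates
```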
- As described above, the control unit 141 of the position context acquisition device 101 corrects the position context using the time difference, calculated in step S306c, between the creation date and time of the first acoustic data and that of the second acoustic data.
- In this modification, the position context is corrected using the time difference between the creation date and time of the first acoustic data and the creation date and time of acoustic data representing a sound that a person, thing, or animal further produces after producing the sound represented by the first acoustic data.
- In this modification, the storage unit 131 stores the position context analogy dictionary 1311 shown in FIG. This dictionary includes a plurality of data entries in which model data representing the features of a sound generated by the action of a person or the like, a position context describing the environment in which that sound is generated, and related model data representing the features of a sound generated after the person or the like produced the first sound are associated with one another.
- differences from the third embodiment will be mainly described.
- The control unit 141 of the position context acquisition device 101 identifies the first model data in step S301 of FIG. 10, and then acquires the related model data associated with the first model data from the position context analogy dictionary 1311.
- The case where the control unit 141 specifies the model data "door.hmm", representing the features of a door opening/closing sound, as the first model data in step S301 will be described as an example.
- After step S301, the control unit 141 acquires the related model data "slippers.hmm", representing the features of the footsteps of a person wearing slippers, which have a high probability of following the door opening/closing sound. This is because, after a person opens a door and enters a room, the person often puts on slippers and moves around the room.
- Next, the control unit 141 stores in the storage unit 131 the first model data "door.hmm", the creation date and time of the first acoustic data (that is, the first creation date and time), and the related model data associated with the first model data (hereinafter referred to as first related model data) "slippers.hmm".
- The control unit 141 executes steps S400, S303, and S304. After that, in step S305, the control unit 141 calculates the similarity between the feature amount represented by the first related model data "slippers.hmm", not the first model data "door.hmm", and the feature amounts of the acoustic data acquired from the position context acquisition device 102.
- In step S306a, the control unit 141 identifies the acoustic data having the highest similarity as the second acoustic data.
- In step S306b, the control unit 141 acquires the position context "environment where the door is within a predetermined range" associated with the first model data "door.hmm" and the position context "environment where a person walks in slippers" associated with the related model data "slippers.hmm".
- In step S306c, the control unit 141 calculates the time difference between the creation date and time of the first acoustic data and that of the second acoustic data; here, assume the time difference is calculated as "3 seconds".
- control unit 141 corrects the position context “environment where the door is within a predetermined range” by using the position context “environment where a person walks in a slipper” and the time difference “3 seconds”. Specifically, the control unit 141 acquires a position context that explains that the environment of the position where the position context acquisition apparatus 101 is “an environment away from the door by a distance of 3 seconds”.
- The control unit 141 ends the position context acquisition process after executing the process of step S106.
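- A sketch of the dictionary structure assumed in this variant: each model is stored together with its position context and the related model expected to follow it, and step S305 matches the other device's acoustic data against the related model. The structure and names are assumptions based on the "door.hmm"/"slippers.hmm" example above.

```python
# Each entry: model -> (position context, related model expected to follow).
# "door.hmm" -> "slippers.hmm" encodes that footsteps in slippers often
# follow a door opening/closing sound.
RELATED_MODEL_DICTIONARY = {
    "door.hmm": ("environment where the door is within a predetermined range",
                 "slippers.hmm"),
    "slippers.hmm": ("environment where a person walks in slippers", None),
}

def pick_matching_model(first_model: str) -> str:
    # In this variant, step S305 scores the acquired acoustic data against
    # the related model data, not the first model data itself.
    _, related = RELATED_MODEL_DICTIONARY[first_model]
    return related if related is not None else first_model
```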
- The position context acquisition devices 101 and 102 have been described as transmitting and receiving acoustic data as position related information in the position related information acquisition process and the position related information output process illustrated in FIG.
- However, the position context acquisition devices 101 and 102 may transmit and receive the feature amounts of acoustic data as the position related information.
- In this case, since the position context acquisition device 101 can omit the process of extracting feature amounts from received acoustic data, the processing executed by the position context acquisition device 101 to acquire the position context can be distributed and lightened.
- the position context acquisition apparatus 101 and the position context acquisition apparatus 102 have been described as transmitting and receiving acoustic data as position related information.
- However, the position context acquisition devices 101 and 102 may transmit and receive position context information as the position related information.
- In this case, the position context acquisition device 102 executes the position context acquisition process shown in FIG. 3 every time acoustic data is generated, and stores the acquired position context in the storage unit 132 in association with the creation date and time of the acoustic data.
- When the position context acquisition device 102 receives the position related information acquisition request from the position context acquisition device 101, it acquires from the storage unit 132 the position contexts associated with creation dates and times from a predetermined time before the first creation date and time to a predetermined time after it. Thereafter, the position context acquisition device 102 transmits to the position context acquisition device 101 a position related information acquisition response including a plurality of data entries in which each acquired position context is associated with its creation date and time.
- The position context acquisition device 101 identifies, among the acquired position contexts, one or more position contexts that are the same as the position context associated with the first model data. Next, the position context acquisition device 101 identifies, among the creation dates and times associated with the identified position contexts, the one closest to the first creation date and time (hereinafter referred to as the second creation date and time).
- The position context acquisition device 101 then calculates the time difference between the first creation date and time and the second creation date and time, and corrects the identified position context using the calculated time difference.
- In this manner, the position context acquisition device 101 can acquire the position context, and the processing executed by the position context acquisition device 101 can be distributed and lightened.
- The position context acquisition device 102 has been described as executing the position context acquisition process every time acoustic data is generated and storing the acquired position context in the storage unit 132 in association with the creation date and time of the acoustic data.
- Alternatively, when the position context acquisition device 102 receives a position related information acquisition request from the position context acquisition device 101, it may acquire from the storage unit 132 a plurality of acoustic data items associated with creation dates and times from a predetermined time before the first creation date and time to a predetermined time after it.
- In this case, the position context acquisition device 102 performs the position context acquisition process shown in FIG. 3 for each of the acquired acoustic data items, and then transmits to the position context acquisition device 101 a position related information acquisition response including a plurality of data entries in which each acquired position context is associated with its creation date and time.
- The position context acquisition devices 101 and 102 of the first to sixth modifications, and the position context acquisition device 103 of the second modification of the third embodiment, can be realized using an ordinary computer system rather than a dedicated system.
- For example, the position context acquisition devices 100 to 103 may be configured by storing and distributing a program for executing the above-described operations on a computer-readable recording medium, installing the program on a computer, and executing the above-described processing.
- The above-described operations may also be realized by joint operation of an OS and application software. In this case, only the non-OS part may be stored and distributed on a medium, or may be downloaded to a computer.
- Computer-readable recording media for recording the above program include USB memories, flexible disks, CDs, DVDs, Blu-ray Discs (registered trademark), MOs, SD cards, Memory Sticks (registered trademark), magnetic disks, optical disks, magneto-optical disks, semiconductor memories, and magnetic tapes. A recording medium that is usually fixed to a system or apparatus, such as an HDD (hard disk drive) or an SSD (solid state drive), can also be used.
- Embodiment 1 through Modification 16 of Embodiment 1 can be combined with one another.
- The position context acquisition method according to the present invention can be implemented using the position context acquisition devices 100 to 103 according to the present invention.
- (Appendix 1) A position context acquisition device comprising: storage means for storing a plurality of model data representing features of an environment and position contexts describing the environment in association with each other; sensor information acquisition means for acquiring sensor information representing a measurement result output by a sensor that measures the environment; and position context acquisition means for specifying model data representing features whose similarity to the features of the measurement result represented by the sensor information acquired by the sensor information acquisition means is equal to or greater than a predetermined value, and for acquiring the position context associated with the specified model data.
- (Appendix 2) The position context acquisition device according to Appendix 1, wherein the storage means further stores the type of the sensor that measures the environment in association with the model data representing the features of the environment and the position context describing the environment, and the position context acquisition means specifies the model data based on the type of the sensor that output the measurement result represented by the sensor information and on the similarity between the features represented by the model data associated with that sensor type and the features of the measurement result represented by the sensor information.
- (Appendix 3) The position context acquisition device according to Appendix 1 or 2, wherein the storage means stores a plurality of position contexts describing environments in association with sensors whose operation is to be stopped in those environments, and the position context acquisition means stops the operation of a sensor when the acquired position context is associated with that sensor.
- (Appendix 4) The position context acquisition device according to any one of Appendices 1 to 3, wherein the position context acquisition means updates the model data and the position contexts stored in the storage means.
- (Appendix 5) The position context acquisition device according to any one of Appendices 1 to 4, further comprising communication means, wherein the position context acquisition means corrects the acquired position context based on the features of the measurement result received by the communication means and the features of the measurement result represented by the sensor information acquired by the sensor information acquisition means.
- (Appendix 6) The position context acquisition device according to Appendix 5, wherein the communication means receives from a communication destination device a plurality of measurement results represented by sensor information acquired by that device, each in association with the measurement date and time of the measurement result; the position context acquisition means specifies, from the features of the plurality of measurement results received by the communication means, a measurement result having features whose similarity to the features represented by the specified model data is equal to or greater than a predetermined value; and the acquired position context is corrected so as to explain an environment reflecting the time difference between the measurement date and time of the specified measurement result and the measurement date and time of the measurement result represented by the sensor information acquired by the sensor information acquisition means.
- (Appendix 7) The position context acquisition device according to Appendix 6, wherein the communication means transmits to the communication destination device a plurality of measurement results represented by the sensor information acquired by the sensor information acquisition means, each in association with the measurement date and time of the measurement result.
- (Appendix 8) A computer-readable recording medium on which is recorded a position context acquisition program that causes a computer to function as: storage means for storing a plurality of model data representing features of an environment and position contexts describing the environment in association with each other; and position context acquisition means for specifying model data representing features whose similarity to the features of a measurement result output by a sensor that measures the environment is equal to or greater than a predetermined value, and for acquiring the position context associated with the specified model data.
- the present invention is suitable for a position context acquisition device that acquires a position context that describes an environment of a position where a sensor exists.
- 100, 101, 102, 103 Position context acquisition device; 110, 111, 112 Sensor; 120, 121, 122 Sensor information acquisition unit; 130, 131, 132 Storage unit; 1310, 1311, 1312 Position context analogy dictionary; 140, 141, 142 Control unit; 150, 151, 152 Communication unit; 200 External device
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Telephone Function (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
The present invention relates to a position context acquisition device (100) provided with a storage unit (130) that associates and stores multiple model data representing the features of an environment together with position contexts describing that environment. The position context acquisition device (100) is provided with: a sensor information acquisition unit (120) that acquires sensor information representing the measurement results output by a sensor that measures the environment; and a control unit (140) that specifies the model data representing features whose degree of similarity, indicating how similar they are to the features of the measurement results represented by the sensor information acquired by the sensor information acquisition unit (120), is equal to or greater than a given value, and that acquires the position context associated with the specified model data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013512499A JPWO2012147970A1 (ja) | 2011-04-28 | 2012-04-27 | 位置コンテキスト取得装置、位置コンテキスト取得プログラム、及び位置コンテキスト取得方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-101969 | 2011-04-28 | ||
JP2011101969 | 2011-04-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012147970A1 true WO2012147970A1 (fr) | 2012-11-01 |
Family
ID=47072481
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/061483 WO2012147970A1 (fr) | 2011-04-28 | 2012-04-27 | Dispositif d'acquisition de contexte de position, support d'enregistrement lisible par ordinateur sur lequel est enregistré un programme d'acquisition de contexte de position, et procédé d'acquisition de contexte de position |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2012147970A1 (fr) |
WO (1) | WO2012147970A1 (fr) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017508340A (ja) * | 2014-01-10 | 2017-03-23 | クアルコム,インコーポレイテッド | 近位ピアツーピアデバイスのパターン照合を使用した屋内ロケーションの判定 |
CN111132000A (zh) * | 2018-10-15 | 2020-05-08 | 上海博泰悦臻网络技术服务有限公司 | 一种位置共享的方法及系统 |
JP2021157544A (ja) * | 2020-03-27 | 2021-10-07 | Kddi株式会社 | 情報処理装置、情報処理方法、及びプログラム |
JP2022024384A (ja) * | 2020-07-28 | 2022-02-09 | Kddi株式会社 | 情報処理装置及び情報処理方法 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010011079A (ja) * | 2008-06-26 | 2010-01-14 | Kyocera Corp | 携帯電子機器及び通信システム |
JP2010178200A (ja) * | 2009-01-30 | 2010-08-12 | Nec Corp | 携帯端末装置、シチュエーション推定方法及びプログラム |
- 2012-04-27 JP JP2013512499A patent/JPWO2012147970A1/ja active Pending
- 2012-04-27 WO PCT/JP2012/061483 patent/WO2012147970A1/fr active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010011079A (ja) * | 2008-06-26 | 2010-01-14 | Kyocera Corp | 携帯電子機器及び通信システム |
JP2010178200A (ja) * | 2009-01-30 | 2010-08-12 | Nec Corp | 携帯端末装置、シチュエーション推定方法及びプログラム |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017508340A (ja) * | 2014-01-10 | 2017-03-23 | クアルコム,インコーポレイテッド | 近位ピアツーピアデバイスのパターン照合を使用した屋内ロケーションの判定 |
CN111132000A (zh) * | 2018-10-15 | 2020-05-08 | 上海博泰悦臻网络技术服务有限公司 | 一种位置共享的方法及系统 |
CN111132000B (zh) * | 2018-10-15 | 2023-05-23 | 上海博泰悦臻网络技术服务有限公司 | 一种位置共享的方法及系统 |
JP2021157544A (ja) * | 2020-03-27 | 2021-10-07 | Kddi株式会社 | 情報処理装置、情報処理方法、及びプログラム |
JP7297712B2 (ja) | 2020-03-27 | 2023-06-26 | Kddi株式会社 | 情報処理装置、情報処理方法、及びプログラム |
JP2022024384A (ja) * | 2020-07-28 | 2022-02-09 | Kddi株式会社 | 情報処理装置及び情報処理方法 |
JP7352523B2 (ja) | 2020-07-28 | 2023-09-28 | Kddi株式会社 | 情報処理装置及び情報処理方法 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2012147970A1 (ja) | 2014-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6731894B2 (ja) | デバイス制御方法及び電子機器 | |
JP6107409B2 (ja) | 位置特定処理装置及び位置特定処理プログラム | |
TWI500003B (zh) | 基於虛擬地標之定位及地圖繪製技術 | |
US20190383620A1 (en) | Information processing apparatus, information processing method, and program | |
JP4840395B2 (ja) | 情報処理装置、プログラム、情報処理方法、および情報処理システム | |
US9268006B2 (en) | Method and apparatus for providing information based on a location | |
CN110088833A (zh) | 语音识别方法和装置 | |
US20170307393A1 (en) | Information processing apparatus, information processing method, and program | |
WO2015194081A1 (fr) | Appareil, procédé et programme pour positionner une infrastructure de bâtiment par le biais d'informations utilisateur | |
JP2016507747A (ja) | 言語入力によるランドマークベースの測位 | |
JP2008271465A (ja) | 携帯通信端末、位置特定システム、位置特定サーバ | |
JP6681940B2 (ja) | ユーザの位置及び空間に適した情報を能動的に提供する方法及び装置 | |
WO2012147970A1 (fr) | Dispositif d'acquisition de contexte de position, support d'enregistrement lisible par ordinateur sur lequel est enregistré un programme d'acquisition de contexte de position, et procédé d'acquisition de contexte de position | |
JP2012103223A (ja) | 移動端末の位置情報判別方法および装置 | |
KR20180134628A (ko) | 무빙 디바이스를 이용하여 사용자의 위치 및 공간에 알맞은 정보를 제공하는 방법 및 장치 | |
CN113574906A (zh) | 信息处理设备、信息处理方法和信息处理程序 | |
EP2629261A1 (fr) | Système service multimédia et son procédé de fonctionnement | |
KR102012927B1 (ko) | 인공지능 기기의 자동 불량 검출을 위한 방법 및 시스템 | |
US20180139592A1 (en) | Information processing apparatus, information processing method, and program | |
US20210224066A1 (en) | Information processing device and information processing method | |
EP3678126A1 (fr) | Dispositif de traitement d'informations, procédé de traitement d'informations et programme | |
EP4131266A1 (fr) | Dispositif, procédé et programme de traitement d'informations | |
JP2014119995A (ja) | 情報処理装置及び情報処理プログラム | |
KR20210119966A (ko) | 정보 기기, 정보 처리 방법, 정보 처리 프로그램, 제어 장치, 제어 방법 및 제어 프로그램 | |
JP2014109601A (ja) | 音声処理システム、音声処理装置、音声処理方法及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12776220 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013512499 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12776220 Country of ref document: EP Kind code of ref document: A1 |