WO2015178065A1 - Information processing apparatus and information processing method - Google Patents
Information processing apparatus and information processing method
- Publication number
- WO2015178065A1 (PCT/JP2015/056207)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- action
- information
- speed
- unit
- user
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/025—Services making use of location information using location based information parameters
- H04W4/027—Services making use of location information using location based information parameters using movement velocity, acceleration information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1694—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/535—Tracking the activity of the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
Definitions
- This disclosure relates to an information processing apparatus and an information processing method.
- A technology for recognizing a user's actions from sensor information acquired using various sensing technologies has been proposed.
- The recognized user actions are automatically recorded as an action log.
- Such an action log can be presented in various ways, for example by reproducing it as an animation of an avatar, showing the user's movement trajectory on a map, or expressing various actions with abstracted indicators.
- Patent Document 1 proposes a method of generating an action log using a small recording device equipped with sensors, such as a smartphone, and presenting the log to the user.
- In this method, action data indicating the user's actions is analyzed based on the sensor information, and action segments expressed by the semantic content of the actions are generated from the action data.
- By using such action segments, information can be presented to the user in an easy-to-understand manner.
- However, depending on the accuracy of the action recognition process, the action log presented to the user may contain errors.
- In particular, the accuracy of speed information has a large influence on the result, so the accuracy of the action recognition process can be improved by taking speed information into account.
- According to the present disclosure, there is provided an information processing apparatus including an action recognition unit that recognizes a user's action based on sensor information, a speed acquisition unit that acquires speed information indicating the user's moving speed, and a correction unit that corrects the action recognition result based on a comparison of the speed information with a set speed threshold.
- FIG. 12 is an explanatory diagram illustrating a state in which the position information of point D in FIG. 11 is excluded and the position information of point E is matched with the position information of point C by the stay filter processing. Further explanatory diagrams illustrate the concentric filter processing, the state in which the position information of point C in FIG. 13 is excluded by the concentric filter processing, and the resulting trajectory.
- FIG. 2 is a hardware configuration diagram illustrating a hardware configuration of a client terminal according to the embodiment.
- It is a block diagram showing an example of an information processing terminal provided with the functions of the action log display system.
- FIG. 1 is an explanatory diagram showing the schematic configuration of the action log display system according to the present embodiment.
- FIG. 2 is an explanatory diagram illustrating a display example of an action log.
- the behavior log display system is a system that analyzes a user's behavior by behavior recognition processing based on information related to the user's behavior and presents it to the user.
- the behavior log system according to the present embodiment includes a client terminal 100, a log server 200, and an analysis server 300 that are communicably connected via a network.
- the client terminal 100 acquires information related to the user's behavior and presents the behavior log acquired by the behavior recognition to the user.
- the client terminal 100 is an information communication terminal such as a smartphone.
- the client terminal 100 includes information acquisition functions such as an acceleration sensor, a GPS, an imaging device, and a gyro sensor, for example, in order to collect information related to user behavior.
- the client terminal 100 includes a display unit for presenting an action log to the user and an input unit for inputting correction information for correcting the action log.
- the client terminal 100 may be constructed by a plurality of different terminals such as a log collection terminal that collects information related to the user's action and a browsing terminal that presents the action log to the user.
- As the log collection terminal, a wearable device such as a pendant-type or wristband-type device can be used in addition to an information communication terminal such as the smartphone described above.
- As the browsing terminal, a personal computer, a tablet terminal, or the like can be used in addition to an information communication terminal such as the smartphone described above.
- The client terminal 100 transmits measurement data acquired by the various sensors as sensor information to the log server 200 at a predetermined timing.
- the sensor information used for recognizing each user's action is accumulated in the log server 200.
- the analysis server 300 calculates an action log representing the content of the user's action by action recognition processing based on the sensor information stored in the log server 200.
- the action log is a record of actions such as “meal”, “movement”, and “sleep” together with action time, position information, and the like.
- the behavior log display system according to the present embodiment further analyzes the behavior log representing the operation content by the analysis server 300 to recognize the meaning of the behavior, and generates information (behavior segment) to which the meaning of the behavior is added.
- the action segment is unit information in which the action log is expressed in a manner that is easy for the user to understand.
- the behavior log can be presented so that the meaning of the behavior can be understood rather than simply presenting the behavior log to the user by the behavior segment.
- FIG. 2 shows an example of the action log display screen 400 displayed on the display unit of the client terminal 100.
- The action log display screen 400 includes, for example, an action display area 410 in which an action recognition result object 414 representing the contents of an action recognition result is displayed, and a summary area 420 in which an outline of actions in a predetermined period (for example, one day) is shown.
- the action display area 410 has a time axis 412 set in the horizontal direction of the screen, and the action recognition result object 414 is displayed so as to correspond to the time position on the time axis 412 where the action is performed.
- the action recognition result object 414 indicates that the user has walked past 20:30.
- the time position displayed in the action display area 410 is changed.
- the display of the action recognition result object 414 displayed in the action display area 410 is also changed to the action recognition result at the time position.
- the display of the action recognition result object 414 is switched for each action segment.
- The action segment is represented by the action start time, end time, and action content, for example "walking", "running", "moving by bicycle", "moving by train", "moving by bus", "moving by car", "moving by other vehicles", "stay", and so on.
- In addition, the time spent on each action such as "walking", "running", "moving by bicycle", "moving by car", and "sleep", the number of steps, the calories consumed by the actions, and the number of data items such as photos and bookmarks are displayed.
- In such an action log display system, the content of the action log presented to the user may be incorrect, or the user may want more detailed action content than what is presented. Therefore, in the action log display system according to the present embodiment, determination processing is performed to make the content of the presented action log more correct, and corrections made by the user to the action log are reflected in subsequent action recognition processing. Thereby, the action log can be presented to the user with correct content that matches the user's intention.
- the configuration and function of the action log display system according to the present embodiment will be described in detail.
- FIG. 3 shows a functional configuration of the action log display system according to the present embodiment.
- the action log display system includes the client terminal 100, the log server 200, and the analysis server 300.
- the client terminal 100 includes a sensor unit 110, a control unit 120, a communication unit 130, an input unit 140, a display processing unit 150, and a display unit 160.
- the sensor unit 110 is a detection unit that acquires movement information related to user behavior such as the position and movement of the user.
- Examples of the sensor unit 110 include an acceleration sensor, a gyro sensor, an imager, and other sensors. Measurement data such as acceleration, angular velocity, imaging data, audio data, and biological information is acquired. Measurement data acquired by the sensor unit 110 is output to the control unit 120 and transmitted to the log server 200 via the communication unit 130.
- the control unit 120 is a functional unit that controls the overall functions of the client terminal 100. For example, the control unit 120 transmits the measurement data acquired by the sensor unit 110 to the communication unit 130 in association with user information including a user ID that identifies the user. Further, the control unit 120 receives the operation input from the user, and controls the client terminal 100 to execute a function corresponding to the content of the operation input. Furthermore, when acquiring display information such as an action log, the control unit 120 controls the client terminal 100 to display on the display unit 160.
- the communication unit 130 is a functional unit that transmits and receives information to and from a server connected via a network.
- the communication unit 130 of the client terminal 100 transmits the sensor information acquired by the sensor unit 110 to the log server 200.
- the communication unit 130 receives an action log provided from the analysis server 300 based on the action log acquisition information, and outputs the action log to the control unit 120.
- the input unit 140 is an operation unit for a user to input information.
- As the input unit 140, a touch panel, a keyboard, a button, or the like can be used.
- the user uses the input unit 140 to start an application that displays an action log, to perform an action log display operation, to input action log correction information, and the like.
- the display processing unit 150 displays the action log provided from the analysis server 300 on the display unit 160.
- the display processing unit 150 represents the behavior log using a behavior recognition result object 414 and causes the display unit 160 to display the behavior log.
- the display processing unit 150 also changes the display content of the display unit 160 in accordance with the input information from the input unit 140.
- the display unit 160 is provided to display information, and for example, a liquid crystal display, an organic EL display, or the like can be used.
- the display unit 160 displays a UI or the like that has been subjected to display processing by the display processing unit 150.
- the log server 200 includes a communication unit 210, a control unit 220, and a log DB 230.
- the communication unit 210 is a functional unit that transmits and receives information to and from terminals and servers connected via a network.
- the communication unit 210 of the log server 200 outputs the sensor information received from the client terminal 100 to the control unit 220.
- the communication unit 210 receives an information presentation request from the analysis server 300 and transmits the sensor information acquired by the control unit 220 to the analysis server 300.
- the control unit 220 is a functional unit that controls the overall functions of the log server 200. For example, the control unit 220 records the sensor information received from the client terminal 100 in the log DB 230 for each user. The control unit 220 acquires sensor information based on a request from the analysis server 300 and transmits the sensor information to the analysis server 300 via the communication unit 210.
- the log DB 230 is a storage unit that stores sensor information acquired as information related to user behavior.
- the log DB 230 stores sensor information for each user.
- the analysis server 300 includes a communication unit 310, a control unit 320, an action recognition unit 330, a correction unit 340, an action log DB 350, and an analysis DB 360.
- the communication unit 310 is a functional unit that transmits and receives information to and from terminals and servers connected via a network.
- the communication unit 310 of the analysis server 300 acquires sensor information from the log server 200 and outputs the sensor information to the control unit 320.
- The communication unit 310 receives an action log presentation request from the client terminal 100 and transmits the corresponding user's action log to the client terminal 100.
- the control unit 320 is a functional unit that controls the overall functions of the analysis server 300. For example, the control unit 320 outputs the sensor information acquired from the log server 200 to the behavior recognition unit 330. Further, the control unit 320 receives an action log presentation request from the client terminal 100, acquires a corresponding action log from the action log DB 350, and transmits the action log to the client terminal 100 via the communication unit 310. Further, the control unit 320 outputs the correction information of the action log received from the client terminal 100 to the correction unit 340.
- the behavior recognition unit 330 performs behavior recognition processing based on the sensor information received from the log server 200, and analyzes the user behavior.
- the behavior recognition unit 330 records the behavior recognition result in the behavior log DB 350 as a behavior log.
- the correction unit 340 corrects the action recognition result acquired by the action recognition process by the action recognition unit 330 based on the correction information of the action log received from the client terminal 100. Details of the correction unit 340 will be described later.
- the behavior log DB 350 stores the behavior recognition result analyzed by the behavior recognition unit 330 as a behavior log.
- The action log stored in the action log DB 350 is provided to the client terminal 100 in response to an action log presentation request from the client terminal 100.
- the analysis DB 360 is a storage unit that stores various types of information used in processing performed in the action recognition unit 330 and the correction unit 340.
- the analysis DB 360 stores, for example, threshold information used for the vehicle determination process in the action recognition process performed by the action recognition unit 330 and various types of information used for the correction process performed by the correction unit 340.
- Various information stored in the analysis DB 360 is set in advance, but can be changed as appropriate.
- FIG. 4 is a timing chart showing an outline of the action log display process according to the present embodiment.
- The action log display process according to the present embodiment includes processes related to sensor information acquisition (S10, S20), an action recognition process (S30), an action log presentation process (S40 to S60), an action log correction process by the user (S70), and a personalized learning process based on the correction information (S80).
- sensor information is acquired as information regarding the user's action.
- the sensor information is acquired by the sensor unit 110 of the client terminal 100 (S10).
- the client terminal 100 is a terminal that a user who receives the service of the action log display system holds on a daily basis.
- the sensor unit 110 obtains information such as the user's position and movement from time to time, and associates it with time information to obtain sensor information.
- the client terminal 100 has an authentication function, and the acquired sensor information is used as information related to the action of the authenticated user.
- the client terminal 100 transmits the acquired sensor information to the log server 200 at a predetermined timing.
- the sensor information is transmitted at a predetermined time interval, or when the user explicitly instructs the transmission of the sensor information.
- the log server 200 that has received the sensor information associates the user ID of the user with the sensor information and records it in the log DB 230 (S20).
- the analysis server 300 performs behavior recognition processing based on sensor information recorded in the log server 200 at a predetermined timing (S30). For example, the analysis server 300 acquires sensor information from the log server 200 at predetermined time intervals, and analyzes the behavior of each user. In the behavior recognition process, sensor information signal processing, statistical processing, and the like are performed to recognize the user behavior and situation.
- the action recognition process may be performed using a well-known technique such as the technique described in Patent Document 1 above.
- The action recognition unit 330 holds in advance a correspondence relationship between behavior parameters, which are information related to the user's behavior obtained by processing sensor information, and action contents.
- When the action recognition unit 330 obtains a behavior parameter by processing the sensor information, it specifies the action content corresponding to that behavior parameter.
- The action recognition unit 330 associates the specified action content with the action time, position information, user ID, and the like as an action log and records it in the action log DB 350.
- FIG. 5 is a block diagram illustrating each functional unit that performs a filtering process on position information and performs an action recognition result determination process.
- FIG. 6 is a flowchart showing the filtering process for position information and the process for calculating the average speed of a segment section. FIGS. 7 to 16 are explanatory diagrams explaining the contents of the filter processing for the position information.
- FIG. 17 is a flowchart showing the action recognition result determination process based on the speed information calculated from the filtered position information.
- As functional units that perform the filtering process for position information and the process for determining the action recognition result, a speed acquisition unit 332 and a vehicle determination unit 334 are provided in the analysis server 300.
- the speed acquisition unit 332 performs a filtering process on the position information and acquires speed information.
- the position information that is filtered by the speed acquisition unit 332 includes time information and longitude / latitude information at the time, and is specified by, for example, GPS, network information, or information acquired by an acceleration sensor.
- the speed acquisition unit 332 calculates the average speed of the segment section after performing filter processing such as a section filter, an accuracy filter, a speed filter, a stay filter, and a concentric filter described later on the position information.
- the vehicle determination unit 334 performs action recognition result determination processing based on the average speed of the segments acquired by the speed acquisition unit 332.
- the vehicle determination unit 334 determines, based on the average speed of the segment, whether or not the moving means is correct among the behavior contents of the segment specified by the behavior recognition unit 330.
- the moving means output by the vehicle determination unit 334 is determined as the final action content.
- First, the speed acquisition unit 332 performs the section filter process, which identifies the section of the action segment based on the position information specified by GPS, network information, and information acquired by the acceleration sensor (S110).
- the section of the action segment is specified by the start time and end time of the action.
- the speed acquisition unit 332 finally calculates the average speed in the identified section.
- The speed acquisition unit 332 may set the section of the action segment to be longer by a predetermined time than the section specified by the start time and end time of the action. This makes it possible to acquire the speed at the start time and end time of the action, and increases the possibility of detecting that the position information is incorrect when the movement start position or end position has been detected erroneously.
- The predetermined time by which the action segment section is lengthened may be about several seconds (for example, 3 seconds) before and after the section, but the present technology is not limited to this example. If position information cannot be acquired within ± several seconds for system reasons, the number of data points added before and after can be reduced, but reducing the number of data points increases the possibility of overlooking errors in the start and end positions. On the other hand, increasing the number of data points added before and after increases the possibility of detecting the start and end positions correctly, but also increases the amount of position information that must be stored and processed. Taking these into consideration, the predetermined time by which the action segment section is lengthened is set appropriately.
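- As an illustration only (not part of the patent text), the following Python sketch shows one way the position information handled by the speed acquisition unit 332 could be represented, and how the section filter of step S110 could extend the segment section by a few seconds; the record fields and function names are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PositionSample:
    """One piece of position information: a timestamp in seconds, latitude and
    longitude in degrees, and the accuracy value (a radius in meters) attached
    to the position by GPS or similar sources."""
    t: float
    lat: float
    lon: float
    accuracy_m: float

def section_filter(samples: List[PositionSample],
                   seg_start: float, seg_end: float,
                   margin_s: float = 3.0) -> List[PositionSample]:
    """Keep the samples belonging to the action segment, extending the section
    by margin_s seconds before the start time and after the end time so that
    the speed at the segment boundaries can also be evaluated (S110)."""
    lo, hi = seg_start - margin_s, seg_end + margin_s
    return [s for s in samples if lo <= s.t <= hi]
```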
- Next, the speed acquisition unit 332 performs the accuracy filter process, which excludes position information whose position is inaccurate from the position information contained in the section of the action segment (S111).
- the accuracy filter process is performed based on the accuracy value added to the position information.
- the accuracy value is information attached to the position information output from GPS or the like.
- The accuracy of the position information is represented by the probability that the true position exists within a circle centered on the position specified by the latitude/longitude information and whose radius is the accuracy value.
- For example, the accuracy of the position information is expressed as "the probability of being within a circle whose radius is the accuracy value [m] is 85%". Therefore, the greater the accuracy value, the more inaccurate the position.
- That is, the accuracy of the position information decreases as the accuracy value increases.
- When the accuracy value exceeds a predetermined value, the actual accuracy of the position information tends to be even lower than the accuracy represented by the accuracy value. Therefore, the speed acquisition unit 332 excludes position information whose accuracy value exceeds the predetermined value, on the assumption that the accuracy necessary for speed calculation cannot be obtained.
- The threshold of the accuracy value used to exclude position information can be set appropriately according to the system, and may be set to 2000 m, for example. If this threshold is increased, more points are adopted, which helps capture fine changes in position, but erroneous positions are also more easily picked up. The threshold of the accuracy value is set taking these points into consideration.
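- A minimal sketch of the accuracy filter of step S111, assuming the PositionSample record from the previous sketch; the 2000 m threshold is the example value given in the text.

```python
from typing import List

def accuracy_filter(samples: List["PositionSample"],
                    max_accuracy_m: float = 2000.0) -> List["PositionSample"]:
    """Exclude position samples whose accuracy value (the radius of the accuracy
    circle in meters) exceeds the threshold, i.e. samples that are too inaccurate
    to be used for speed calculation (S111)."""
    return [s for s in samples if s.accuracy_m <= max_accuracy_m]
```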
- For example, as shown in FIG. 7, consider an action segment in which the user has moved from point a to point b along a trajectory LR.
- As shown in FIG. 8, it is assumed that ten pieces of position information, from point A to point J, are included in the action segment section in which point a is the start position and point b is the end position.
- The circle Cp inside each of points A to J in FIG. 8 represents a point at which position information was acquired, and the trajectory L is obtained by interpolating these points with straight lines.
- The circle Ca outside each of points A to J represents the accuracy value; the larger the circle, the more inaccurate the position information at that point.
- Accordingly, the speed acquisition unit 332 excludes the position information of point B and resets the trajectory L.
- Next, the speed acquisition unit 332 calculates the speed between two temporally adjacent points based on the position information included in the action segment section (S112), and performs a speed filter process on each calculated speed (S113).
- The speed acquisition unit 332 calculates the average speed between adjacent points from the latitude/longitude information and the time of each point included in the action segment section, and links this average speed, as the speed between the two points, to the position information of the point on the end-point side. For example, the speed between points C and D shown on the left side of FIG. 10 is represented by the average speed calculated from the latitude/longitude information and the time of points C and D, and this speed is linked to point D on the end-point side.
- Then, the speed acquisition unit 332 determines whether each calculated speed exceeds a predetermined speed above which the value is considered suspicious, and excludes the position information linked to any speed determined to be suspicious. For example, in FIG. 10, when the speed at point D exceeds the predetermined speed, the position information of point D is excluded as suspicious.
- The predetermined speed regarded as suspicious may be set to several hundred km/h (for example, 400 km/h). Here, an airplane or the like may actually move at 400 km/h or more.
- However, even if all points of 400 km/h or higher in the action segment section are excluded in step S113, there is no problem, because if the position information before and after them is correct, a speed of 400 km/h or higher can still be calculated correctly in the final speed recalculation (S116).
- The speed threshold is set in consideration of these points.
- In this example, the speed acquisition unit 332 excludes point J from the action segment section as suspicious.
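- A sketch of the speed calculation (S112) and speed filter (S113), again using illustrative names; the haversine distance is one common way to obtain the distance between two latitude/longitude points, although the patent does not specify the method.

```python
import math
from typing import List

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_filter(samples: List["PositionSample"],
                 max_speed_kmh: float = 400.0) -> List["PositionSample"]:
    """Compute the average speed between each pair of temporally adjacent points
    (S112), attribute it to the end-point-side sample, and drop samples whose
    speed is implausibly high and therefore suspicious (S113)."""
    if not samples:
        return []
    kept = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t - prev.t
        if dt <= 0:
            continue  # skip samples with non-increasing timestamps
        speed_kmh = haversine_m(prev.lat, prev.lon, cur.lat, cur.lon) / dt * 3.6
        if speed_kmh <= max_speed_kmh:
            kept.append(cur)
    return kept
```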
- Next, the speed acquisition unit 332 performs the stay filter process, which identifies and aggregates points that remain in the same location for a while among the points included in the action segment (S114).
- In the stay filter processing, when there are a plurality of points that stay in the same place for a while, it is determined that the user is staying, and these points are aggregated into two points: the temporal start point and the temporal end point.
- The left side of FIG. 12 shows the position information before the stay filter processing included in a certain action segment; there are four points, from point E to point H.
- The speed acquisition unit 332 focuses on point E, the earliest point in time, and determines whether the user is staying based on whether the next point F is within a predetermined range from point E.
- The predetermined range may be, for example, within 50 m of the starting point (here, point E). This range can be changed as appropriate.
- If point F is within the range, the speed acquisition unit 332 then determines whether the next point G is within the predetermined range from the starting point E. This is repeated, and it is determined that the user is staying as long as the points remain within the predetermined range of point E. For example, assume that point H falls outside the predetermined range of point E. In this case, the speed acquisition unit 332 determines that the three-point section from point E to point G is a stay.
- The speed acquisition unit 332 then excludes point F, which is neither the start point nor the end point of the three points. Further, the speed acquisition unit 332 corrects the position information of point G, the end point, to the same position information as point E, the start point. As a result, two points, point E and point G, which have the same position information but different time information, remain in the stay section. By leaving two points with different times, the time information of the stay section can be obtained. The position information of the end point is matched to that of the start point so that the stay section can be used in the concentric filter process described later.
- Similarly, in the example described above, the speed acquisition unit 332 leaves point C, the start point, and point E, the end point, excludes the intermediate point D, and matches the position information of end point E to the position information of start point C.
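- A sketch of the stay filter of step S114, reusing the PositionSample record and the haversine_m helper from the sketches above; the 50 m radius is the example value from the text.

```python
from dataclasses import replace
from typing import List

def stay_filter(samples: List["PositionSample"],
                stay_radius_m: float = 50.0) -> List["PositionSample"]:
    """Aggregate runs of points that remain within stay_radius_m of a starting
    point into two points, the temporal start and end of the stay, and copy the
    start position onto the end point so both share the same location (S114)."""
    out: List["PositionSample"] = []
    i = 0
    while i < len(samples):
        start = samples[i]
        j = i + 1
        # extend the stay while the following points remain near the start point
        while j < len(samples) and haversine_m(start.lat, start.lon,
                                               samples[j].lat, samples[j].lon) <= stay_radius_m:
            j += 1
        if j - i >= 2:
            end = samples[j - 1]
            out.append(start)
            out.append(replace(end, lat=start.lat, lon=start.lon))  # snap end to start
        else:
            out.append(start)
        i = max(j, i + 1)
    return out
```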
- Next, the speed acquisition unit 332 performs the concentric filter process, which examines the positional relationship between three temporally consecutive points and determines whether there is an unnatural movement (S115).
- In the concentric filter processing, for three temporally consecutive points, it is determined whether the intermediate point between the start point and the end point lies outside a determination circle that is concentric with a reference circle whose diameter is the straight line connecting the start point and the end point, and whose diameter is larger than that of the reference circle. For example, in FIG. 14, assume that there are three temporally consecutive points: point I, point J (J1, J2), and point K.
- With respect to the reference circle whose diameter d0 is the straight line connecting the start point I and the end point K, a determination circle that is concentric with the reference circle and has a larger diameter d1 is set.
- The diameter d1 of the determination circle only needs to be larger than the diameter d0, and may be, for example, twice the diameter d0.
- One action segment indicates either a single action content or a stay, so an intermediate point of three temporally consecutive points is unlikely to exist outside the region of the determination circle. Therefore, for three temporally consecutive points, the speed acquisition unit 332 excludes the intermediate point when it is outside the determination circle region. For example, in FIG. 14, the intermediate point J1 is left because it is inside the determination circle region, while the intermediate point J2 is excluded because it is outside the determination circle region.
- Suppose that, for points A, C, and F, which are three temporally consecutive points, the intermediate point C is determined to be outside the determination circle region. In this case, the speed acquisition unit 332 excludes the intermediate point C. Point E, which was aggregated to the same position information as point C, is also excluded together with point C.
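- A sketch of the concentric filter of step S115, reusing the earlier helpers; approximating the circle center by the arithmetic mean of the latitudes and longitudes is an assumption that holds for short distances.

```python
from typing import List

def concentric_filter(samples: List["PositionSample"],
                      ratio: float = 2.0) -> List["PositionSample"]:
    """For every three temporally consecutive points, exclude the intermediate
    point when it lies outside the determination circle, which is concentric
    with the reference circle (diameter = start-to-end distance) and `ratio`
    times as large (S115)."""
    pts = list(samples)
    i = 0
    while i + 2 < len(pts):
        start, mid, end = pts[i], pts[i + 1], pts[i + 2]
        d0 = haversine_m(start.lat, start.lon, end.lat, end.lon)  # reference diameter
        # approximate the common center as the midpoint of start and end
        c_lat, c_lon = (start.lat + end.lat) / 2.0, (start.lon + end.lon) / 2.0
        if haversine_m(c_lat, c_lon, mid.lat, mid.lon) > ratio * d0 / 2.0:
            del pts[i + 1]  # unnatural movement: drop the intermediate point
        else:
            i += 1
    return pts
```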
- After the above filter processes, the speed acquisition unit 332 calculates the speed between adjacent points based on the remaining position information, in the same way as in step S112 (S116). Then, the average of the speeds calculated in step S116 is calculated as the speed of the action segment section (S117).
- Next, the vehicle determination unit 334 performs the action recognition result determination process based on the average speed of the segment acquired by the speed acquisition unit 332. The vehicle determination unit 334 determines, based on the average speed of the segment, whether the moving means in the action content of the segment specified by the action recognition unit 330 is correct.
- the action segment is represented by action contents and the start time and end time of the action, and is acquired by the action recognition unit 330 as an action recognition result.
- The vehicle determination unit 334 corrects the moving means of the action recognition result based on the average speed acquired by the processing of the speed acquisition unit 332, on the assumption that the section of the action segment (that is, the start time and end time of the action) obtained by the action recognition unit 330 is correct.
- For example, when a walking state is misrecognized as movement by bicycle, a staying state as movement by a vehicle, or movement by a vehicle as a staying state, the discomfort felt by the user is greater than when a walking state is misrecognized as running, running as walking, walking as movement by a vehicle, walking as staying, or movement by bicycle as walking or running. Therefore, FIG. 17 shows an example of the action recognition result correction process that corrects the action content in such cases.
- In the determination process, the vehicle determination unit 334 first determines whether there are a plurality of pieces of valid position information included in the action segment section (S120).
- The threshold for the number of pieces of valid position information can be set as appropriate by the system; in the present embodiment it is determined whether there are a plurality (that is, two or more), but the present disclosure is not limited to this technique. For example, it may be determined whether the number of pieces of valid position information is zero, or whether it is two or less. Increasing the threshold increases the possibility of leading the action recognition result to the correct answer, while also increasing the possibility that errors pass through uncorrected.
- the threshold value of the number of effective position information is determined in consideration of such points.
- If it is determined in step S120 that the number of pieces of valid position information is one or less, the action recognition result by the action recognition unit 330 is followed (S129), and the process ends. On the other hand, when there are a plurality of pieces of valid position information, the vehicle determination unit 334 next determines whether the average speed of the segment section is slower than a speed V1 and the action content of the action recognition result by the action recognition unit 330 is movement by a vehicle (S121).
- In step S121, the state of being on a vehicle and the staying state are separated.
- The speed V1 is set to a speed low enough that it is unlikely that the user is on a vehicle and the user can be regarded as staying. For example, the speed V1 may be 0.5 km/h.
- When the average speed of the segment section is slower than the speed V1 in step S121 and the action content of the action recognition result by the action recognition unit 330 is movement by a vehicle, the vehicle determination unit 334 corrects the action recognition result from movement by a vehicle to the staying state (S122).
- In step S123, the state of riding a bicycle and the state of walking are separated.
- The speed V2 is a value larger than the speed V1, and is set to a speed low enough that it is unlikely that the user is riding a bicycle and the user can be regarded as walking.
- For example, the speed V2 may be 3 km/h.
- When the average speed of the segment section is slower than the speed V2 in step S123 and the action content of the action recognition result by the action recognition unit 330 is movement by bicycle, the vehicle determination unit 334 corrects the action recognition result from movement by bicycle to the walking state (S124).
- Step S125 is a process for selecting the targets of the later-described processes for determining other vehicles (steps S126 and S128), and extracts the segments considered to be moving by some vehicle. If the average speed of the segment section is equal to or less than a speed V3 in step S125, the vehicle determination unit 334 follows the action recognition result by the action recognition unit 330 (S129), and the process ends. On the other hand, if the average speed of the segment section is greater than V3, the vehicle determination unit 334 determines whether the action content of the action recognition result by the action recognition unit 330 is movement by something other than a vehicle (S126).
- When it is determined in step S126 that the action content of the action recognition result is movement other than by a vehicle, the vehicle determination unit 334 corrects the result to movement by another vehicle (S127). On the other hand, when it is determined in step S126 that the action content is movement by a vehicle, the vehicle determination unit 334 determines whether the average speed of the segment section is faster than a speed V4 and the action content of the action recognition result is movement by bicycle (S128).
- The speed V4 is a value greater than the speed V3 and is set to a speed high enough that it is unlikely that the user is riding a bicycle. For example, the speed V4 may be 40 km/h.
- When the average speed of the segment section is faster than the speed V4 in step S128 and the action content of the action recognition result is movement by bicycle, the vehicle determination unit 334 corrects the action recognition result from movement by bicycle to movement by another vehicle (S127).
- Otherwise, the vehicle determination unit 334 follows the action recognition result by the action recognition unit 330 (S129), and the process ends.
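- The determination flow of steps S120 to S129 could be summarized as in the sketch below; the labels for the action contents and the value of the speed V3, which is not given in this excerpt, are assumptions made for illustration.

```python
def determine_vehicle(avg_speed_kmh: float, recognized: str, num_valid_points: int,
                      v1: float = 0.5, v2: float = 3.0,
                      v3: float = 10.0, v4: float = 40.0) -> str:
    """Correct the moving means of an action segment from its average speed,
    following steps S120 to S129. `recognized` is the action content from the
    action recognition unit 330, simplified here to labels such as "vehicle",
    "bicycle", "walking", "running", and "stay"."""
    if num_valid_points < 2:                             # S120
        return recognized                                # S129: keep the result
    if avg_speed_kmh < v1 and recognized == "vehicle":   # S121
        return "stay"                                    # S122: too slow for a vehicle
    if avg_speed_kmh < v2 and recognized == "bicycle":   # S123
        return "walking"                                 # S124: too slow for a bicycle
    if avg_speed_kmh <= v3:                              # S125
        return recognized                                # S129
    if recognized not in ("vehicle", "bicycle"):         # S126
        return "other_vehicle"                           # S127: fast, so assume a vehicle
    if avg_speed_kmh > v4 and recognized == "bicycle":   # S128
        return "other_vehicle"                           # S127: too fast for a bicycle
    return recognized                                    # S129
```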
- the user can browse the user's action log by operating the client terminal 100.
- the user transmits action log request information for obtaining an action log to the analysis server 300 from the input unit 140 of the client terminal 100 (S40).
- the action log request information includes a user ID of the user and a message requesting to present an action log.
- the analysis server 300 that has received the action log request information acquires the action log of the corresponding user from the action log DB by the control unit 320, and transmits it to the client terminal 100 (S50).
- the client terminal 100 that has received the action log causes the display processing unit 150 to display the content of the action log as a UI and display it on the display unit 160 (S60). For example, the content of the action log is displayed on the display unit 160 of the client terminal 100 as shown in FIG.
- the content of the action log displayed on the client terminal 100 in step S60 can be corrected by the user on the action correction screen (S70).
- As cases where the contents of the action log are corrected, there are cases where the user wants more detailed action content to be displayed, in addition to cases where the action recognition result by the action recognition unit is incorrect. For example, when the content of the displayed action log is "moving by (some) vehicle", the user may want the display to show which vehicle was used for the movement.
- Here, suppose that the action recognition result of the action recognition unit 330 is able to recognize which vehicle the user used for the movement.
- For example, the three instances of "moving by vehicle" shown in the upper part of FIG. 18 are actually two movements by train and one movement by car.
- When the user corrects the displayed content so that the action recognition result indicates movement by train, then, as shown in the lower part of FIG. 18, "movement by train" is displayed for the instances whose movement and behavior are recognized as such. Note that the "moving by car" content, which has not been corrected by the user, is still displayed as "moving by vehicle".
- FIG. 19 is a flowchart showing the action log correction process by the user.
- FIG. 20 is an explanatory diagram for explaining the operation of the action log correction process by the user.
- the action content to be corrected is first selected (S200), and an action correction screen for correcting the action content is displayed (S210).
- The selection of the content to be corrected in step S200 can be done by tapping the action recognition result object 414 displayed in the action display area 410 of the action log display screen 400 shown on the display unit 160 of the client terminal 100. It is assumed that a touch panel capable of detecting the contact or proximity of an operating body such as a finger is provided as the input unit 140 in the display area of the display unit 160.
- the control unit 120 may cause the display unit 160 to display a tap display object so that the tapped position P can be visually recognized.
- Next, the control unit 120 instructs the display processing unit 150 to display a process selection screen 430, on which the user selects the process to be performed on the action content of the action recognition result object 414.
- On the process selection screen 430, for example, as shown in the center of FIG. 20, it is possible to select processes such as browsing detailed information of the action content, editing the action content, and deleting the action content.
- the user taps and selects the process for editing the action content from the process selection screen 430 to display the action correction screen 440.
- When the action correction screen 440 is displayed on the display unit 160, the user inputs the correction content on the action correction screen 440 and corrects the action content (S220).
- The action correction screen 440 includes, for example, a start time input area 441, an end time input area 443, and an action content input area 445 for the action log (action segment), as shown on the right side of FIG. 20. The user corrects the action log by inputting the corrected content in the input area of the content to be corrected.
- The action content input area 445 may include a preset input area 445a in which selectable action contents are displayed as correction candidates, and a direct input area 445b in which the user can directly input the action content.
- In the preset input area 445a, the action content can be corrected by selecting a button associated with an action content such as "walking", "running", "moving by bicycle", "moving by other vehicles", or "stay".
- the user can directly input the action contents in the direct input area 445b to correct the action contents.
- the user taps the completion button 447 to reflect the correction content input to the action correction screen 440 in the action log, and the correction process is completed (S230). In this way, the user can easily correct the action recognition result on the action correction screen 440.
- FIGS. 21 to 24 are explanatory diagrams for explaining the outline of the personalized learning process based on the correction information.
- FIG. 25 is a block diagram illustrating a functional unit that performs personalized learning processing.
- FIG. 26 is a flowchart showing the action recognition result determination process in consideration of the individual model by the correction unit 340.
- FIGS. 27 to 29 are explanatory diagrams explaining the feature vectors.
- FIG. 30 is an explanatory diagram illustrating the action recognition result merging process.
- FIG. 31 is an explanatory diagram illustrating an example of setting selection conditions for action segments used for personalized learning.
- FIG. 21 shows an action log of a user from 8:00 am on Monday to 8:00 am on Wednesday.
- the label information that can be recognized by the user includes the action content (HAct) of the action segment and the correction information (Feedback) from the user.
- As internal information, there is feature vector information used for acquiring the action segments by the action recognition processing.
- The feature vector information includes, for example, the action recognition result per unit time (UnitAct) acquired by the action recognition processing, position information (location), day of the week, time (hour), and the like.
- In addition, other information such as the weather or the application running on the client terminal 100 at the time can be used as a feature vector.
- When the user corrects the action content, the correction unit 340 performs personalized learning for the user, and thereafter action recognition based on the correction content is performed for the same action content in the same action pattern. In the example of FIG. 21, the action content recognized as "moving by train" at around 10 pm on and after Tuesday is corrected to "moving by car".
- Specifically, the correction unit 340 uses the label information and the feature vector information of each action segment in a predetermined period (for example, one day) as teacher data to generate an individual model specific to the user. Then, the correction unit 340 corrects the action content of each action segment in the next predetermined period based on the generated individual model and the feature vector information for that period. Thereby, it becomes possible to recognize action content peculiar to each user.
- FIG. 25 shows a functional unit that performs the personalized learning process of the correcting unit 340.
- the correction unit 340 includes a feature vector generation unit 342, an individual learning unit 344, and a merge unit 346, as shown in FIG.
- the feature vector generation unit 342 generates feature vector information used to generate an individual model.
- The feature vector generation unit 342 generates the feature vector information from the action recognition result per unit time (UnitAct), location information (location), day of the week, time (hour), weather, information on applications started on the client terminal 100, and other information (others). Details of the feature vector information generation process will be described later.
- the individual learning unit 344 performs learning based on the feature vector information generated by the feature vector generation unit 342, and generates an individual model for each user.
- For example, the individual learning unit 344 generates an individual model by a learning method such as linear SVM, SVM with an RBF kernel, k-NN, Naive Bayes, Decision Tree, Random Forest, or AdaBoost.
- The merge unit 346 merges the action recognition result acquired based on the individual model generated by the individual learning unit 344 with the action recognition result acquired by the action recognition unit 330, and determines the final action recognition result. For example, the merge unit 346 linearly combines the two action recognition results with a predetermined weight, and takes the action recognition result that maximizes the score as the final action recognition result.
- the feature vector generation unit 342 of the correction unit 340 generates feature vector information used to generate an individual model for each user (S300).
- the feature vector generation unit 342 generates a feature vector from the following information.
- UnitAct histogram: UnitAct, which is the action recognition result per unit time, is information acquired by the action recognition unit 330, and the action content of the action segment is determined by the time ratio of each UnitAct within the segment section.
- UnitAct represents a plurality of action contents, for example the following action contents.
- the action content represented by UnitAct is not limited to the following contents, and more action contents may be specified.
- movement / elevation / descent on an escalator, a ropeway, a cable car, a motorcycle, a ship, an airplane, or the like may be specified as the action content related to movement as described above.
- Non-movement actions such as meals, telephone calls, watching TV and video, listening to music, operating a mobile communication terminal such as a smartphone, and sports (tennis, skiing, fishing, etc.) may also be specified.
- The feature vector generation unit 342 obtains the time ratio of each of the plurality of UnitActs included in the action segment, normalizes them so that the sum of the time ratios is 1.0, and uses the result as a feature vector.
- For example, as shown in FIG. 27, an action segment whose action content is "moving by train (Train)" includes three UnitActs such as "Still", "Train", and "StillStand" (standing still on a train).
- In this case, the feature vector generation unit 342 calculates and normalizes the time ratio of each UnitAct in the action segment.
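- A sketch of how the normalized UnitAct time-ratio feature could be computed; the vocabulary of UnitAct labels and the example durations are illustrative assumptions.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def unitact_histogram(unit_acts: List[Tuple[str, float]],
                      vocabulary: List[str]) -> List[float]:
    """Build the normalized UnitAct time-ratio feature of one action segment.
    unit_acts is a list of (unit_act_label, duration_seconds) pairs within the
    segment; the returned vector is ordered by `vocabulary` and sums to 1.0."""
    totals: Dict[str, float] = defaultdict(float)
    for label, duration in unit_acts:
        totals[label] += duration
    whole = sum(totals.values()) or 1.0  # guard against an empty segment
    return [totals[label] / whole for label in vocabulary]

# Hypothetical example: a "Train" segment made up of Still / Train / StillStand.
vocab = ["Still", "Walk", "Run", "Bicycle", "Train", "StillStand"]
feature = unitact_histogram([("Still", 120.0), ("Train", 600.0), ("StillStand", 180.0)], vocab)
# feature == [0.133..., 0.0, 0.0, 0.0, 0.666..., 0.2]
```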
- Location information: A feature vector related to location information may be set based on the latitude and longitude information in the action segment. For example, the average value of the latitude and longitude information in the action segment may be used as a feature vector related to position information. Alternatively, the latitude and longitude information of each user may be clustered using a technique such as the k-means method, and a k-dimensional feature vector may be generated by a 1-of-k representation of the cluster id. By using the clustering result as a feature vector, the places the user frequently visits (for example, "home", "company", "supermarket") can be represented by a k-dimensional feature vector.
- Movement information: A movement vector representing the user's moving direction and moving amount may be set as a feature vector using the position information in the action segment. For example, a three-dimensional feature vector can be generated from the movement direction and the movement amount.
- The time length (hour) of the segment may be used as a one-dimensional feature amount.
- The day of the week on which the action content of the action segment was performed may be used as a feature vector.
- For example, the day of the week may be represented by the sine and cosine values of a circle whose circumference corresponds to seven days, as a feature vector (two dimensions each).
- Alternatively, the seven days from Sunday to Saturday may be represented as a 7-dimensional feature vector by 1-of-K representation.
- Weekdays and holidays may also be represented as a two-dimensional feature vector by 1-of-K representation.
- In such a circular representation, each value can be expressed by the x coordinate (1 + sin θ) and the y coordinate (1 + cos θ) on a circle of radius 1 centered at (x, y) = (1, 1).
- The 24 hours from 0:00 to 23:00 may also be represented as a 24-dimensional feature vector by 1-of-K representation.
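A minimal sketch of these two time encodings, assuming the angle θ is simply proportional to the position within the cycle (consistent with the (1 + sin θ, 1 + cos θ) description above):

```python
import math

def cyclic_feature(value, period):
    """Two-dimensional (1 + sin, 1 + cos) encoding on a circle of radius 1
    centered at (1, 1); value is a position within the given period."""
    theta = 2.0 * math.pi * (value / period)
    return (1.0 + math.sin(theta), 1.0 + math.cos(theta))

def one_of_k(index, k):
    """1-of-K encoding, e.g. day of week (k=7) or hour of day (k=24)."""
    vec = [0.0] * k
    vec[index] = 1.0
    return vec

print(cyclic_feature(0, 7))    # start of the 7-day cycle -> (1.0, 2.0), i.e. (1 + sin 0, 1 + cos 0)
print(cyclic_feature(18, 24))  # 18:00 in a 24-hour cycle -> (0.0, 1.0), i.e. (1 + sin 270deg, 1 + cos 270deg)
print(one_of_k(3, 7))          # fourth day of the week as a 7-dimensional vector
```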
- In addition, feature vectors may be set using measurable information in which situations differ from individual to individual.
- For example, the weather at the time the action was performed may be used as a feature vector.
- Based on weather information, it is possible to recognize behavioral characteristics such as moving by bus when it is raining and walking when it is clear.
- The characteristics of an action can also be recognized from the application running on the client terminal 100, the music being listened to, or the like.
- The feature vector generation unit 342 generates the feature vector information based on such information.
- the individual learning unit 344 generates an individual model for recognizing the characteristic behavior of each user by the individual learning process (S310).
- The individual learning unit 344 generates an individual model by a learning method such as the above-described linear SVM, SVM (RBF kernel), k-NN, Naive Bayes, Decision Tree, Random Forest, or AdaBoost.
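As a rough sketch of training such an individual model with one of the listed learners (a linear SVM via scikit-learn), assuming feature vectors and labels reflecting the user's correction information are already prepared; the data below is purely illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical training data: feature vectors from the feature vector generation unit 342
# and action labels that reflect the user's correction information.
X = np.array([[0.8, 0.2, 0.0],   # mostly "Still"-like segment
              [0.1, 0.1, 0.8],   # mostly "Bicycle"-like segment
              [0.7, 0.3, 0.0],
              [0.0, 0.2, 0.8]])
y = np.array(["Train", "Bicycle", "Train", "Bicycle"])

individual_model = LinearSVC(C=1.0).fit(X, y)        # per-user (individual) model
scores = individual_model.decision_function(X[:1])   # signed distances to the separating hyperplane
print(individual_model.predict(X[:1]), scores)
```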
- The merge unit 346 merges the action recognition result acquired based on the individual model generated by the individual learning unit 344 with the action recognition result acquired by the action recognition unit 330 (S320), and determines the final action recognition result (S330).
- The merge unit 346 linearly combines the action recognition result (PAct) acquired based on the individual model and the action recognition result (HAct) acquired by the action recognition unit 330 using a weight a, as shown in formula (1) below,
- to obtain an action recognition result (Merge) that takes the individual model into account.
- The action recognition result (HAct) and the action recognition result (PAct) are normalized so that they can be evaluated on an equal footing.
- For example, the action recognition result (HAct) may be normalized so that the elements (per-minute results) included in the action segment total 1.0.
- The action recognition result (PAct) may be normalized by mapping the distance to the SVM hyperplane onto the range from a minimum of 0 to a maximum of 1.
- The weight a used to linearly combine these action recognition results can be set as appropriate, and may be set to 0.4, for example.
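A minimal sketch of the merge step follows. It assumes that formula (1) is a convex combination Merge = a·PAct + (1 − a)·HAct over per-class scores that have each been normalized as described above; the class names and scores are illustrative, not values from FIG. 30.

```python
def merge_results(hact, pact, a=0.4):
    """Linearly combine the normalized action-recognition scores HAct and PAct
    with weight a, and return the merged scores and the top-scoring action."""
    merged = {}
    for action in set(hact) | set(pact):
        merged[action] = a * pact.get(action, 0.0) + (1.0 - a) * hact.get(action, 0.0)
    best = max(merged, key=merged.get)
    return merged, best

hact = {"Bicycle": 0.7, "Run": 0.3}   # time-ratio scores from the action recognition unit 330
pact = {"Run": 0.9, "Bicycle": 0.1}   # normalized scores from the individual model
merged, best = merge_results(hact, pact, a=0.4)
print(merged, best)
```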
- FIG. 30 shows an example of merge processing by the merge unit 346.
- the result of “movement by bicycle (Bicycle)” is obtained as the action recognition result of the action segment by the action recognition unit 330.
- This action segment is composed of two elements, “bicycle” and “run”, and the score is determined by the time ratio in the action segment. This content is the action recognition result (HAct).
- “Bicycle” has the maximum score in the action recognition result (HAct).
- In this way, the correction unit 340 can improve the accuracy of the action recognition result by using the correction information to reflect actions specific to each user in the action recognition process.
- Merging the action recognition result (HAct) with the action recognition result (PAct) makes it possible to reflect user-specific actions in the action recognition process. However, when the number of individual samples is small, the merged result may be less accurate than the action recognition result (HAct) alone. Therefore, only correction information for patterns that the user has actually corrected in the past may be reflected.
- That is, the filtering process for the correction information to be fed back may use the action recognition result (Merge) that takes the individual model into account only for corrections of patterns that have been made at least once in the past.
- The action recognition result presented to the user is not always checked by the user and is not necessarily corrected appropriately.
- Therefore, the correction unit 340 may use only the information of the predetermined sections in which correction information has been input by the user for learning the individual model.
- That is, the presence or absence of correction information is checked for every predetermined section (for example, one day), and the individual model is learned using only the information of the action segments in sections for which correction information has been input at least once.
- FIG. 32 is a hardware configuration diagram illustrating a hardware configuration of the client terminal 100 according to the present embodiment.
- The client terminal 100 can be realized by a processing device such as a computer, as described above. As shown in FIG. 32, the client terminal 100 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, and a host bus 904a. The client terminal 100 also includes a bridge 904, an external bus 904b, an interface 905, an input device 906, an output device 907, a storage device 908, a drive 909, a connection port 911, and a communication device 913.
- the CPU 901 functions as an arithmetic processing device and a control device, and controls the overall operation in the client terminal 100 according to various programs. Further, the CPU 901 may be a microprocessor.
- the ROM 902 stores programs used by the CPU 901, calculation parameters, and the like.
- the RAM 903 temporarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like. These are connected to each other by a host bus 904a including a CPU bus.
- the host bus 904a is connected to an external bus 904b such as a PCI (Peripheral Component Interconnect / Interface) bus via a bridge 904.
- the host bus 904a, the bridge 904, and the external bus 904b do not necessarily have to be configured separately, and these functions may be mounted on one bus.
- The input device 906 includes input means for the user to input information, such as a mouse, keyboard, touch panel, buttons, microphone, switches, and levers, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 901.
- The output device 907 includes, for example, display devices such as a liquid crystal display (LCD) device, an OLED (Organic Light Emitting Diode) device, and a lamp, and an audio output device such as a speaker.
- the storage device 908 is an example of a storage unit of the client terminal 100 and is a device for storing data.
- the storage device 908 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like.
- the storage device 908 drives a hard disk and stores programs executed by the CPU 901 and various data.
- the drive 909 is a storage medium reader / writer, and is built in or externally attached to the client terminal 100.
- the drive 909 reads information recorded on a mounted removable recording medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and outputs the information to the RAM 903.
- the connection port 911 is an interface connected to an external device, and is a connection port with an external device capable of transmitting data by USB (Universal Serial Bus), for example.
- the communication device 913 is a communication interface configured by a communication device or the like for connecting to the communication network 5, for example.
- The communication device 913 may be a wireless LAN (Local Area Network) compatible communication device, a wireless USB compatible communication device, or a wired communication device that performs wired communication.
- the action log display system including the client terminal 100, the log server 200, and the analysis server 300 has been described, but the present technology is not limited to such an example.
- The functions of the above-described action log display system may be realized by an information processing terminal 500 that integrates the functions of the client terminal 100, the log server 200, and the analysis server 300 into one.
- the information processing terminal 500 is assumed to be a terminal that is held and used by a user, for example.
- The information processing terminal 500 includes a sensor unit 510, a control unit 520, a log DB 530, an input unit 540, a display processing unit 552, a display unit 554, an action recognition unit 560, a correction unit 570, an action log DB 580, and an analysis DB 590.
- The sensor unit 510, the input unit 540, the display processing unit 552, and the display unit 554 function as the client terminal 100 described above.
- The log DB 530 functions as the log server 200 described above.
- The action recognition unit 560, the correction unit 570, the action log DB 580, and the analysis DB 590 function as the analysis server 300 described above.
- The control unit 520 realizes the functions of the control units of the client terminal 100, the log server 200, and the analysis server 300.
- The functions of the client terminal 100, the log server 200, and the analysis server 300 can also be divided appropriately among terminals and servers in accordance with the system configuration, instead of being integrated into one as in the illustrated example.
- (1)
An action recognition unit that recognizes a user's action based on sensor information;
a speed acquisition unit that acquires speed information representing the moving speed of the user; and
a correction unit that corrects the action recognition result based on a comparison result between the speed information and a speed threshold set according to the action recognition result,
An information processing apparatus comprising the above.
- (2) The information processing apparatus according to (1), wherein the speed acquisition unit calculates the speed information based on position information of the user.
- (3) The information processing apparatus according to (2), wherein the speed acquisition unit acquires the speed information while excluding the position information when an accuracy value representing the accuracy of the position information is equal to or greater than a predetermined value.
- (4) The information processing apparatus according to (2) or (3), wherein, when the speed information is equal to or greater than a predetermined value, the correction unit corrects the action recognition result while excluding the position information of the end point of the section for which the speed information was calculated.
- (5) The information processing apparatus according to any one of (2) to (4), wherein, when it is determined that the user stays within a predetermined range for a predetermined time or more, the correction unit corrects the action recognition result while excluding position information other than the start point and the end point in the stay section.
- (6) The information processing apparatus according to (5), wherein the correction unit changes the position information of the end point in the stay section to the position information of the start point.
- (7) The information processing apparatus according to (5) or (6), wherein, when, among three temporally consecutive pieces of position information, the intermediate point is located outside a circular region concentric with the circle whose diameter is the line connecting the start point and the end point, the correction unit corrects the action recognition result while excluding the position information of the intermediate point.
- (8) The information processing apparatus according to any one of (1) to (7), wherein the correction unit calculates, based on the speed information, an average speed in an action segment recognized as a section in which the same action is performed, and corrects the action recognition result when the average speed is equal to or greater than a speed threshold.
- (9) An information processing method including: recognizing a user's action based on sensor information; acquiring speed information representing the moving speed of the user; and correcting the action recognition result based on a comparison result between the speed information and a speed threshold set according to the action recognition result.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Computer Hardware Design (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Computational Linguistics (AREA)
- Traffic Control Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
1. Overview of the action log display system
2. System configuration
2.1. Client terminal
2.2. Log server
2.3. Analysis server
3. Action log display processing
3.1. Processing related to acquisition of sensor information
3.2. Action recognition processing
(1) Overview of action recognition processing
(2) Filtering of position information
(a. Functional configuration)
(b. Processing by the speed acquisition unit)
(c. Processing by the vehicle determination unit)
3.3. Action log presentation processing
3.4. Action log correction processing by the user
3.5. Personalized learning processing based on correction information
(1) Overview
(2) Functional configuration
(3) Action recognition result determination processing considering individual models
(4) Filtering related to feedback of correction information
4. Summary
5. Hardware configuration example
First, an overview of the action log display system according to an embodiment of the present disclosure will be described with reference to FIGS. 1 and 2. FIG. 1 is an explanatory diagram showing the schematic configuration of the action log display system according to the present embodiment. FIG. 2 is an explanatory diagram showing a display example of an action log.
FIG. 3 shows the functional configuration of the action log display system according to the present embodiment. As described above, the action log display system consists of the client terminal 100, the log server 200, and the analysis server 300.
The client terminal 100 includes a sensor unit 110, a control unit 120, a communication unit 130, an input unit 140, a display processing unit 150, and a display unit 160.
As shown in FIG. 2, the log server 200 includes a communication unit 210, a control unit 220, and a log DB 230.
As shown in FIG. 2, the analysis server 300 includes a communication unit 310, a control unit 320, an action recognition unit 330, a correction unit 340, an action log DB 350, and an analysis DB 360.
The action log display processing according to the present embodiment will be described with reference to FIG. 4. FIG. 4 is a timing chart showing an overview of the action log display processing according to the present embodiment. The action log display processing according to the present embodiment consists of processing related to the acquisition of sensor information (S10, S20), action recognition processing (S30), action log presentation processing (S40 to S60), action log correction processing by the user (S70), and personalized learning processing based on the correction information (S80).
In order to acquire the action log to be presented to the user, sensor information is acquired as information about the user's actions. The sensor information is acquired by the sensor unit 110 of the client terminal 100 (S10). The client terminal 100 is a terminal carried in daily life by a user who receives the service of the action log display system; the sensor unit 110 acquires information such as the user's position and movement moment by moment, and keeps recording it as sensor information associated with time information. The client terminal 100 has an authentication function, and the acquired sensor information is used as information about the actions of the authenticated user.
(1) Overview of action recognition processing
The analysis server 300 performs action recognition processing at a predetermined timing based on the sensor information recorded in the log server 200 (S30). For example, the analysis server 300 acquires sensor information from the log server 200 at predetermined time intervals and analyzes each user's actions. In the action recognition processing, signal processing, statistical processing, and the like are applied to the sensor information to recognize the user's actions and situation. The action recognition processing may be performed using a known technique, such as the method described in Patent Literature 1.
Among the action contents recognized by the action recognition unit 330, actions related to means of transportation, such as "walking", "running", "moving by bicycle", "moving by train", "moving by bus", and "moving by car", are identified based on the result of signal processing and statistical processing of the sensor information, or based on speed information calculated from position information. When the speed information is calculated from position information, the position information is acquired, for example, from a GPS provided in the client terminal 100 or from network information such as the Wi-Fi network to which the client terminal 100 is connected. However, depending on the accuracy of the position identification technique, the position information acquired in this way may contain a lot of noise. Speed information calculated from noisy position information has low reliability and hinders accurate determination of the action content.
In the present embodiment, the functional units that filter the position information and determine the action recognition result are provided in the analysis server 300. Specifically, as shown in FIG. 5, a speed acquisition unit 332 and a vehicle determination unit 334 are provided in the analysis server 300.
The filtering of position information and the calculation of the average speed of a segment section by the speed acquisition unit 332 will be described with reference to FIG. 6.
First, the speed acquisition unit 332 performs section filtering, which identifies the section of an action segment based on position information specified from GPS, network information, or information acquired by an acceleration sensor (S110). The section of the action segment is specified by the start time and end time of the action. The speed acquisition unit 332 ultimately calculates the average speed in the identified section.
Next, the speed acquisition unit 332 performs accuracy filtering, which excludes position information whose position is inaccurate from the position information included in the section of the action segment (S111). The accuracy filtering is performed based on an accuracy value attached to the position information. The accuracy value is information that accompanies position information output from a GPS or the like; the accuracy of the position information is expressed, for example, as the probability of being within a circle whose center is the position specified by the latitude and longitude information and whose radius is the accuracy value. For example, the accuracy of the position information is expressed as "there is an 85% probability of being within a circle whose radius is the accuracy value [m]". Therefore, the larger the accuracy value, the more inaccurate the position.
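A minimal sketch of this accuracy filtering follows; the point representation and the threshold value are illustrative assumptions, not values from the specification.

```python
def accuracy_filter(points, max_accuracy_m=50.0):
    """points: list of dicts with 'lat', 'lon', 'time', 'accuracy' (meters).
    Drops points whose accuracy value is at or above the threshold,
    i.e. points whose position is too uncertain."""
    return [p for p in points if p["accuracy"] < max_accuracy_m]
```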
After the accuracy filtering, the speed acquisition unit 332 calculates the speed between two temporally adjacent points based on the position information included in the section of the action segment (S112), and applies speed filtering to each calculated speed (S113).
Next, the speed acquisition unit 332 performs stay filtering, which identifies and aggregates points that remain at the same location for a while among the points included in the action segment (S114). In the stay filtering, when there are multiple points that stay at the same location for a while, the user is determined to be staying, and these points are aggregated into two points: the temporal start point and end point.
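A rough sketch of the stay filtering follows, assuming a simple distance threshold (in meters) decides "the same location"; the threshold and the flat-earth distance approximation are assumptions of this sketch.

```python
import math

def _distance_m(p, q):
    """Approximate distance in meters between two nearby lat/lon points."""
    dlat = (q["lat"] - p["lat"]) * 111_000.0
    dlon = (q["lon"] - p["lon"]) * 111_000.0 * math.cos(math.radians(p["lat"]))
    return math.hypot(dlat, dlon)

def stay_filter(points, stay_radius_m=30.0):
    """Collapse each run of consecutive points that stay within stay_radius_m of the
    run's first point into just that run's first and last points."""
    result, i = [], 0
    while i < len(points):
        j = i
        while j + 1 < len(points) and _distance_m(points[i], points[j + 1]) <= stay_radius_m:
            j += 1
        result.append(points[i])
        if j > i:
            result.append(points[j])  # keep only the temporal start and end of the stay
        i = j + 1
    return result
```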
Furthermore, the speed acquisition unit 332 performs concentric circle filtering, which examines the positional relationship of three temporally consecutive points and determines whether there is any unnatural movement (S115). In the concentric circle filtering, for three temporally consecutive points, it is determined whether the intermediate point between the start point and the end point lies outside the region of a determination circle that is concentric with a reference circle whose diameter is the straight line connecting the start point and the end point and whose diameter is larger than that of the reference circle. For example, in FIG. 14, suppose there are three temporally consecutive points: point I, point J (J1, J2), and point K. In this case, for the reference circle whose diameter d0 is the straight line connecting the start point I and the end point K, a determination circle that is concentric with the reference circle and has a larger diameter d1 is set. The diameter d1 of the determination circle only needs to be larger than the diameter d0, and may be, for example, twice the diameter d0.
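A minimal sketch of the concentric circle filtering follows; the factor of 2 for the determination-circle diameter follows the example above, and the sketch reuses the hypothetical _distance_m helper from the stay-filter sketch.

```python
def concentric_circle_filter(points, diameter_factor=2.0):
    """For each triple of temporally consecutive points (I, J, K), drop the middle
    point J when it lies outside the determination circle: the circle concentric with
    the reference circle whose diameter is the segment I-K, enlarged by diameter_factor."""
    kept = list(points)
    i = 0
    while i + 2 < len(kept):
        start, mid, end = kept[i], kept[i + 1], kept[i + 2]
        center = {"lat": (start["lat"] + end["lat"]) / 2.0,
                  "lon": (start["lon"] + end["lon"]) / 2.0}
        reference_radius = _distance_m(start, end) / 2.0
        judge_radius = reference_radius * diameter_factor
        if _distance_m(center, mid) > judge_radius:
            del kept[i + 1]  # unnatural jump: exclude the intermediate point
        else:
            i += 1
    return kept
```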
After performing the processing of steps S110 to S115, the speed acquisition unit 332 calculates the speed between two adjacent points based on the extracted position information, as in step S112 (S116). Then, the average of the speeds calculated in step S116 is calculated as the speed in the section of the action segment (S117).
When the average speed of the segment section has been calculated by the processing of the speed acquisition unit 332 shown in FIG. 6, the vehicle determination unit 334 performs action recognition result determination processing based on the average speed of the segment acquired by the speed acquisition unit 332. The vehicle determination unit 334 determines, based on the average speed of the segment, whether the means of transportation in the action content of the segment identified by the action recognition unit 330 is correct.
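A rough sketch of how such a determination might compare the segment's average speed against per-action speed thresholds follows; the threshold values and the behavior on an inconsistent result are illustrative assumptions only, not values from the specification.

```python
# Hypothetical speed thresholds (km/h) per recognized means of transportation.
SPEED_THRESHOLDS_KMH = {
    "Walk": 10.0,   # faster than this is unlikely to be walking
    "Run": 20.0,
    "Bicycle": 40.0,
    "Bus": 80.0,
    "Car": 150.0,
    "Train": 300.0,
}

def check_vehicle(recognized_action, average_speed_kmh):
    """Return the recognized action if it is consistent with the segment's average speed,
    otherwise return None so that the recognition result can be corrected."""
    threshold = SPEED_THRESHOLDS_KMH.get(recognized_action)
    if threshold is not None and average_speed_kmh >= threshold:
        return None  # inconsistent: the action recognition result should be corrected
    return recognized_action

print(check_vehicle("Walk", 25.0))  # None -> "Walk" is implausible at 25 km/h
print(check_vehicle("Bus", 25.0))   # "Bus"
```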
Returning to the description of FIG. 4, the user can operate the client terminal 100 to view the user's action log. At this time, the user transmits action log request information for acquiring the action log from the input unit 140 of the client terminal 100 to the analysis server 300 (S40). The action log request information includes the user's user ID and a message requesting presentation of the action log.
The content of the action log displayed on the client terminal 100 in step S60 can be corrected by the user on the action correction screen (S70). The content of the action log may be corrected not only when the action recognition result by the action recognition processing unit is wrong, but also, for example, when the user wants more detailed action content to be displayed. For example, when the content of the displayed action log is "moving by (some) vehicle", the user may want the kind of vehicle used to be displayed as well.
As described above, when the user corrects the action content, this correction information can be used to reflect actions specific to each user in the action recognition processing and to improve the accuracy of the action recognition result (S80). That is, by performing personalized learning of the action recognition result based on the correction information, an appropriate action log can be presented for each user.
First, an overview of the personalized learning processing using the user's correction information for the action recognition result will be described. FIG. 21 shows an example of the action recognition result by the action recognition unit 330. FIG. 21 shows the action log of a certain user from 8:00 a.m. on Monday to 8:00 a.m. on Wednesday. Here, the label information recognizable by the user includes the action content of the action segment (HAct) and the correction information from the user (Feedback). As internal information, there is feature vector information used to acquire the action segments in the action recognition processing. The feature vector information includes, for example, the action recognition result per unit time acquired by the action recognition processing (UnitAct), position information (location), the day of the week, and the hour. Furthermore, other information (others), such as the weather or the application running on the client terminal 100 at that time, can also be used as feature vectors.
FIG. 25 shows the functional units of the correction unit 340 that perform the personalized learning processing. As shown in FIG. 25, the correction unit 340 consists of a feature vector generation unit 342, an individual learning unit 344, and a merge unit 346.
The action recognition result determination processing that takes individual models into account, performed by the correction unit 340, will be described with reference to FIG. 26. First, the feature vector generation unit 342 of the correction unit 340 generates the feature vector information used to generate an individual model for each user (S300). The feature vector generation unit 342 generates feature vectors from the following information.
UnitAct, the action recognition result for each unit time, is information acquired by the action recognition unit 330, and the action content of an action segment is determined by the time ratio of the UnitActs within the segment section. UnitAct represents a plurality of action contents, for example the following.
"moving by bicycle (Bicycle)", "moving by bus (Bus)",
"sitting on a bus (BusSit)",
"standing on a bus (BusStand)",
"moving by car (Car)", "descending in an elevator (ElevDown)",
"ascending in an elevator (ElevUp)", "jumping (Jump)",
"(NotCarry)", "running (Run)", "staying (Still)",
"staying seated (StillSit)", "staying standing (StillStand)",
"moving by train (Train)", "sitting on a train (TrainSit)",
"standing on a train (TrainStand)", "walking (Walk)"
A feature vector related to position information may be set based on the latitude and longitude information in the action segment. For example, the average value of the latitude and longitude information in the action segment may be used as the feature vector related to position information. Alternatively, the average values of each user's latitude and longitude information may be clustered using a technique such as the k-means method, and a k-dimensional feature vector may be generated by 1-of-k representation of each cluster id. By using the clustering result as a feature vector, places where the user frequently stays (for example, "home", "office", "supermarket") can be represented by a k-dimensional feature vector.
A movement vector representing the user's movement direction and movement amount may be set as a feature vector using the position information in the action segment. For example, a three-dimensional feature vector can be generated from the movement direction and the movement amount.
The time length (hour) of the segment may be used as a one-dimensional feature amount.
The day of the week on which the action content of the action segment was performed may be used as a feature vector. For example, the day of the week may be represented by the sine and cosine values of a circle whose circumference corresponds to seven days (two dimensions each). In this case, as shown in FIG. 28, the time on each day of the week can be expressed, for example, by the x coordinate (1 + sin θ) and the y coordinate (1 + cos θ) on a circle of radius 1 centered at (x, y) = (1, 1). For example, midnight on Monday is expressed as (x, y) = (1 + sin 0°, 1 + cos 0°).
Each hour at which the action content of the action segment was performed may also be used as a feature vector. As with the day-of-week information, the time information may be represented, for example, by the sine and cosine values of a circle whose circumference corresponds to 24 hours (two dimensions each). In this case, as shown in FIG. 29, each hour can be expressed, for example, by the x coordinate (1 + sin θ) and the y coordinate (1 + cos θ) on a circle of radius 1 centered at (x, y) = (1, 1). For example, 18:00 is expressed as (x, y) = (1 + sin 270°, 1 + cos 270°). Alternatively, the 24 hours from 0:00 to 23:00 may be represented as a 24-dimensional feature vector by 1-of-K representation.
In addition to the above information, feature vectors may be set using measurable information in which situations differ from person to person. For example, the weather at the time the action was performed may be used as a feature vector. From weather information, it is possible to recognize characteristics of behavior such as moving by bus when it rains and walking when it is sunny. Alternatively, the characteristics of an action can also be recognized from the application running on the client terminal 100 or the music the user is listening to when the action is performed.
Reflecting actions specific to each user in the action recognition processing by using the user's correction information for the action content improves the accuracy of the action recognition result; however, the correction information to be fed back may be filtered so that it is reflected in a way that better matches the user's intention.
The configuration of the action log display system according to the present embodiment and the processing in the system have been described above. The action log display system according to the present embodiment performs determination processing so that the content of the action log presented to the user becomes more correct, and further receives corrections to the action log from the user and reflects them in subsequent action recognition processing. As a result, the action log can be presented to the user with correct content that matches the user's intention.
Finally, a hardware configuration example of the client terminal 100, the log server 200, and the analysis server 300 according to the present embodiment will be described. Since these devices can be configured in the same way, the client terminal 100 will be described below as an example. FIG. 32 is a hardware configuration diagram showing the hardware configuration of the client terminal 100 according to the present embodiment.
(1)
An action recognition unit that recognizes a user's action based on sensor information;
a speed acquisition unit that acquires speed information representing the moving speed of the user; and
a correction unit that corrects the action recognition result based on a comparison result between the speed information and a speed threshold set according to the action recognition result,
An information processing apparatus comprising the above.
(2)
The information processing apparatus according to (1), wherein the speed acquisition unit calculates the speed information based on position information of the user.
(3)
The information processing apparatus according to (2), wherein the speed acquisition unit acquires the speed information while excluding the position information when an accuracy value representing the accuracy of the position information is equal to or greater than a predetermined value.
(4)
The information processing apparatus according to (2) or (3), wherein, when the speed information is equal to or greater than a predetermined value, the correction unit corrects the action recognition result while excluding the position information of the end point of the section for which the speed information was calculated.
(5)
The information processing apparatus according to any one of (2) to (4), wherein, when it is determined that the user stays within a predetermined range for a predetermined time or more, the correction unit corrects the action recognition result while excluding position information other than the start point and the end point in the stay section.
(6)
The information processing apparatus according to (5), wherein the correction unit changes the position information of the end point in the stay section to the position information of the start point.
(7)
The information processing apparatus according to (5) or (6), wherein, when, among three temporally consecutive pieces of position information, the intermediate point is located outside a circular region concentric with the circle whose diameter is the line connecting the start point and the end point, the correction unit corrects the action recognition result while excluding the position information of the intermediate point.
(8)
The information processing apparatus according to any one of (1) to (7), wherein the correction unit calculates, based on the speed information, an average speed in an action segment recognized as a section in which the same action is performed, and corrects the action recognition result when the average speed is equal to or greater than a speed threshold.
(9)
Recognizing a user's action based on sensor information;
acquiring speed information representing the moving speed of the user; and
correcting the action recognition result based on a comparison result between the speed information and a speed threshold set according to the action recognition result,
An information processing method including the above.
110 Sensor unit
120 Control unit
130 Communication unit
140 Input unit
150 Display processing unit
160 Display unit
200 Log server
210 Communication unit
220 Control unit
230 Log DB
300 Analysis server
310 Communication unit
320 Control unit
330 Action recognition unit
340 Correction unit
342 Feature vector generation unit
344 Individual learning unit
346 Merge unit
350 Action log DB
360 Analysis DB
400 Action log display screen
410 Action display area
412 Time axis
414 Action recognition result object
420 Summary area
430 Processing selection screen
440 Action correction screen
500 Information processing terminal
Claims (9)
- An action recognition unit that recognizes a user's action based on sensor information;
a speed acquisition unit that acquires speed information representing the moving speed of the user; and
a correction unit that corrects the action recognition result based on a comparison result between the speed information and a speed threshold set according to the action recognition result,
An information processing apparatus comprising the above. - The information processing apparatus according to claim 1, wherein the speed acquisition unit calculates the speed information based on position information of the user.
- The information processing apparatus according to claim 2, wherein the speed acquisition unit acquires the speed information while excluding the position information when an accuracy value representing the accuracy of the position information is equal to or greater than a predetermined value.
- The information processing apparatus according to claim 2, wherein, when the speed information is equal to or greater than a predetermined value, the correction unit corrects the action recognition result while excluding the position information of the end point of the section for which the speed information was calculated.
- The information processing apparatus according to claim 2, wherein, when it is determined that the user stays within a predetermined range for a predetermined time or more, the correction unit corrects the action recognition result while excluding position information other than the start point and the end point in the stay section.
- The information processing apparatus according to claim 5, wherein the correction unit changes the position information of the end point in the stay section to the position information of the start point.
- The information processing apparatus according to claim 5, wherein, when, among three temporally consecutive pieces of position information, the intermediate point is located outside a circular region concentric with the circle whose diameter is the line connecting the start point and the end point, the correction unit corrects the action recognition result while excluding the position information of the intermediate point.
- The information processing apparatus according to claim 1, wherein the correction unit calculates, based on the speed information, an average speed in an action segment recognized as a section in which the same action is performed, and corrects the action recognition result when the average speed is equal to or greater than a speed threshold.
- Recognizing a user's action based on sensor information;
acquiring speed information representing the moving speed of the user; and
correcting the action recognition result based on a comparison result between the speed information and a speed threshold set according to the action recognition result,
An information processing method including the above.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/302,771 US10165412B2 (en) | 2014-05-22 | 2015-03-03 | Information processing device and information processing method |
EP15796885.0A EP3147831B1 (en) | 2014-05-22 | 2015-03-03 | Information processing device and information processing method |
JP2016520961A JP6572886B2 (ja) | 2014-05-22 | 2015-03-03 | 情報処理装置および情報処理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014106144 | 2014-05-22 | ||
JP2014-106144 | 2014-05-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015178065A1 true WO2015178065A1 (ja) | 2015-11-26 |
Family
ID=54553742
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/056207 WO2015178065A1 (ja) | 2014-05-22 | 2015-03-03 | 情報処理装置および情報処理方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10165412B2 (ja) |
EP (1) | EP3147831B1 (ja) |
JP (1) | JP6572886B2 (ja) |
WO (1) | WO2015178065A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108228379A (zh) * | 2018-01-24 | 2018-06-29 | 广东远峰汽车电子有限公司 | 日志统计方法、收集服务器、分布式服务器及汇总服务器 |
JP2018165700A (ja) * | 2017-03-28 | 2018-10-25 | カシオ計算機株式会社 | 電子機器、位置特定システム、位置特定方法及びプログラム |
US10791420B2 (en) | 2017-02-22 | 2020-09-29 | Sony Corporation | Information processing device and information processing method |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6436148B2 (ja) * | 2016-11-18 | 2018-12-12 | 横河電機株式会社 | 情報処理装置、保全機器、情報処理方法、情報処理プログラム及び記録媒体 |
CN110767960B (zh) * | 2019-11-15 | 2021-01-01 | 广东轻工职业技术学院 | 微生物燃料电池与混合型超级电容器集成的柔性器件及制备方法与应用 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008152655A (ja) * | 2006-12-19 | 2008-07-03 | Ntt Docomo Inc | 情報サービス提供システム、対象行動推定装置、対象行動推定方法 |
JP2012083323A (ja) * | 2010-09-15 | 2012-04-26 | Casio Comput Co Ltd | 測位装置、測位方法およびプログラム |
US20120232432A1 (en) * | 2008-08-29 | 2012-09-13 | Philippe Kahn | Sensor Fusion for Activity Identification |
JP2013003649A (ja) * | 2011-06-13 | 2013-01-07 | Sony Corp | 情報処理装置、情報処理方法およびコンピュータプログラム |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4966722B2 (ja) * | 2007-04-19 | 2012-07-04 | クラリオン株式会社 | 車載地図表示装置 |
CA2739300A1 (en) | 2008-10-09 | 2010-04-15 | University Of Utah Research Foundation | System and method for preventing cell phone use while driving |
EP2721541B1 (en) * | 2011-06-17 | 2022-02-02 | Myotest SA | An athletic performance monitoring device |
JP6003284B2 (ja) * | 2012-06-22 | 2016-10-05 | セイコーエプソン株式会社 | 携帯型機器 |
KR102281233B1 (ko) * | 2013-03-14 | 2021-07-23 | 삼성전자 주식회사 | 화면 제어 방법 및 장치 |
US11093196B2 (en) * | 2013-10-07 | 2021-08-17 | Intel Corporation | Method, system, and device for selecting and displaying information on a mobile digital display device |
CN104807466B (zh) * | 2014-01-24 | 2017-10-10 | 腾讯科技(深圳)有限公司 | 地图信息显示方法及装置 |
WO2015126182A1 (ko) * | 2014-02-21 | 2015-08-27 | 삼성전자 주식회사 | 콘텐츠를 표시하는 방법 및 이를 위한 전자 장치 |
WO2015145544A1 (ja) * | 2014-03-24 | 2015-10-01 | パイオニア株式会社 | 表示制御装置、制御方法、プログラム及び記憶媒体 |
US9529089B1 (en) * | 2014-03-31 | 2016-12-27 | Amazon Technologies, Inc. | Enhancing geocoding accuracy |
US9696428B2 (en) * | 2014-06-20 | 2017-07-04 | Samsung Electronics Co., Ltd. | Electronic device and method for measuring position information of electronic device |
US9836963B1 (en) * | 2015-01-20 | 2017-12-05 | State Farm Mutual Automobile Insurance Company | Determining corrective actions based upon broadcast of telematics data originating from another vehicle |
JP5910903B1 (ja) * | 2015-07-31 | 2016-04-27 | パナソニックIpマネジメント株式会社 | 運転支援装置、運転支援システム、運転支援方法、運転支援プログラム及び自動運転車両 |
KR101667736B1 (ko) * | 2015-09-25 | 2016-10-20 | 엘지전자 주식회사 | 이동단말기 및 그 제어방법 |
-
2015
- 2015-03-03 JP JP2016520961A patent/JP6572886B2/ja active Active
- 2015-03-03 US US15/302,771 patent/US10165412B2/en active Active
- 2015-03-03 EP EP15796885.0A patent/EP3147831B1/en active Active
- 2015-03-03 WO PCT/JP2015/056207 patent/WO2015178065A1/ja active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008152655A (ja) * | 2006-12-19 | 2008-07-03 | Ntt Docomo Inc | 情報サービス提供システム、対象行動推定装置、対象行動推定方法 |
US20120232432A1 (en) * | 2008-08-29 | 2012-09-13 | Philippe Kahn | Sensor Fusion for Activity Identification |
JP2012083323A (ja) * | 2010-09-15 | 2012-04-26 | Casio Comput Co Ltd | 測位装置、測位方法およびプログラム |
JP2013003649A (ja) * | 2011-06-13 | 2013-01-07 | Sony Corp | 情報処理装置、情報処理方法およびコンピュータプログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP3147831A4 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10791420B2 (en) | 2017-02-22 | 2020-09-29 | Sony Corporation | Information processing device and information processing method |
JP2018165700A (ja) * | 2017-03-28 | 2018-10-25 | カシオ計算機株式会社 | 電子機器、位置特定システム、位置特定方法及びプログラム |
US10571282B2 (en) | 2017-03-28 | 2020-02-25 | Casio Computer Co., Ltd. | Electronic apparatus, position specifying system, position specifying method, and storage medium |
CN108228379A (zh) * | 2018-01-24 | 2018-06-29 | 广东远峰汽车电子有限公司 | 日志统计方法、收集服务器、分布式服务器及汇总服务器 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2015178065A1 (ja) | 2017-04-20 |
EP3147831A1 (en) | 2017-03-29 |
US10165412B2 (en) | 2018-12-25 |
US20170026801A1 (en) | 2017-01-26 |
EP3147831B1 (en) | 2020-09-02 |
JP6572886B2 (ja) | 2019-09-11 |
EP3147831A4 (en) | 2017-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10951602B2 (en) | Server based methods and systems for conducting personalized, interactive and intelligent searches | |
CN107172590B (zh) | 基于移动终端的活动状态信息处理方法、装置及移动终端 | |
JP6572886B2 (ja) | 情報処理装置および情報処理方法 | |
US10353476B2 (en) | Efficient gesture processing | |
US20170132821A1 (en) | Caption generation for visual media | |
JP5904021B2 (ja) | 情報処理装置、電子機器、情報処理方法、及びプログラム | |
US10016165B2 (en) | Information processing apparatus, information processing method, and program | |
US8655740B2 (en) | Information providing apparatus and system | |
US9299350B1 (en) | Systems and methods for identifying users of devices and customizing devices to users | |
US9582755B2 (en) | Aggregate context inferences using multiple context streams | |
CN115273252A (zh) | 使用多模态信号分析进行命令处理 | |
JPWO2018116862A1 (ja) | 情報処理装置および方法、並びにプログラム | |
CN102456141A (zh) | 用于识别用户背景的用户装置和方法 | |
CN107408258A (zh) | 在运载工具中呈现广告 | |
JP6742380B2 (ja) | 電子装置 | |
CN107004124B (zh) | 使用生物信号识别用户的方法和设备 | |
CN107241697A (zh) | 用于移动终端的用户行为确定方法、装置及移动终端 | |
CN110799946B (zh) | 多应用用户兴趣存储器管理 | |
WO2015178066A1 (ja) | 情報処理装置および情報処理方法 | |
CN107368553B (zh) | 基于活动状态提供搜索建议词的方法及装置 | |
US9420048B2 (en) | Mobile device, method of activating application, and program | |
KR20180014626A (ko) | 차량 탑승 인식 방법 및 이를 구현한 전자 장치 | |
US20210248490A1 (en) | Method and system for providing a graphical user interface using machine learning and movement of the user or user device | |
JP2018133696A (ja) | 車載装置、コンテンツ提供システムおよびコンテンツ提供方法 | |
JP7478610B2 (ja) | 情報検索装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15796885 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016520961 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15302771 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2015796885 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015796885 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |