WO2022208905A1 - Information processing device, information processing method, information processing program, and information processing system


Info

Publication number
WO2022208905A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information processing
state
estimation unit
estimates
Prior art date
Application number
PCT/JP2021/021259
Other languages
French (fr)
Japanese (ja)
Inventor
明珍 丁
麻衣 今村
英夫 長坂
Original Assignee
Sony Group Corporation
Priority date
Filing date
Publication date
Application filed by Sony Group Corporation
Priority to PCT/JP2021/043548 (WO2022208999A1)
Priority to PCT/JP2022/007705 (WO2022209473A1)
Priority to PCT/JP2022/013213 (WO2022210111A1)
Priority to JP2023511338A (JPWO2022210649A1)
Priority to PCT/JP2022/015297 (WO2022210649A1)
Publication of WO2022208905A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00 Acoustics not otherwise provided for
    • G10K15/02 Synthesis of acoustic waves
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R3/00 Circuits for transducers, loudspeakers or microphones

Definitions

  • the present disclosure relates to an information processing device, an information processing method, an information processing program, and an information processing system that control output to a user.
  • Patent Document 1 describes a technology that recognizes utterances and environmental sounds and selects and outputs content such as music based on the recognized sounds.
  • However, a technology that recognizes speech and environmental sounds is applicable only to environments in which sound is present. Appropriate content therefore may not be selected for a user who does not want to make noise, or in a situation where making noise is undesirable. In addition, natural language processing requires high computational power, which makes local processing difficult.
  • In view of the above, an object of the present disclosure is to provide an information processing device, an information processing method, an information processing program, and an information processing system that appropriately control the output to the user regardless of the situation.
  • An information processing device according to an embodiment of the present disclosure includes: a user state estimation unit that estimates a user state; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and an output control unit that controls output based on the environmental state.
  • The information processing device may further include: a user position estimation unit that estimates a user position based on a detection value of a sensor unit of a wearable device worn by the user; and a location attribute estimation unit that estimates a location attribute, which is an attribute of the location where the user is, based on the user position. The user state estimation unit may estimate the user state based on the location attribute.
  • the user position estimation unit may estimate the user position using PDR (Pedestrian Dead Reckoning).
  • the environment estimation unit may estimate the environmental state based on the location attribute.
  • As a result, for example, when the user is at the desk during telework, the output for the user can be controlled so that the user can focus on the work, and when the user is at a resting place, the output for the user can be controlled so that the user can relax.
  • the sensor unit of the wearable device may include at least one of an acceleration sensor, a gyro sensor, a compass, and a biosensor.
  • The user position estimation unit may have: an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user; an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user and the correction value; and a user position estimation unit that estimates the user position using the azimuth angle.
  • The angle at which the wearable device is worn differs for each user, so the angles of the sensor axes of the acceleration sensor and the gyro sensor also differ for each user. The user position estimation unit can therefore estimate the angle of the sensor axis of the sensor unit for each user and use it as a correction value, so that the direction (angle) can be estimated with high accuracy, independently of individual differences.
  • The user position estimation unit may estimate a movement route of the user position, and the location attribute estimation unit may estimate the location attribute after movement based on the movement route.
  • As a result, when the user is at the desk during telework, the output can be controlled so that the user can focus on the work, and when the user is at a resting place, the output can be controlled so that the user can relax.
  • The location attribute estimation unit may hold a plurality of movement routes, and may estimate the location attribute after movement by matching the estimated movement route against the plurality of held movement routes.
  • The location attribute estimation unit may output a warning when matching fails a predetermined number of times. As a result, when the user moves from his or her home to a different indoor space (for example, a coworking space) and movement routes completely different from the plurality of held movement routes continue to be detected, the user can be notified that the location attribute after movement will be estimated from the new movement routes.
  • the location attribute estimation unit may perform the matching using DTW (dynamic time warping).
  • the location attribute estimation unit may estimate the location attribute by determining the user's stay time at the location where the user is.
  • the user state estimation unit may estimate the user state based on the acquired context.
  • the context may include at least one of location information of the user and terminal information of the information processing device.
  • the user state estimation unit may estimate the user state based on the detected value of the sensor unit of the wearable device and/or the location attribute.
  • the user state may indicate a plurality of activity states of the user.
  • the user state indicates four levels of activity: break time, neutral, DND (Do Not Disturb), and offline. Break time is the most relaxed activity state, Neutral is the normal activity state, DND is the relatively busy activity state, and Offline is the busiest activity state.
  • The output control unit may have a content control unit that reproduces content selected based on the environmental state, and/or a notification control unit that controls the number of notifications to the user based on the environmental state.
  • the content control unit may reproduce content that allows the user to focus and content that allows the user to relax.
  • the notification control unit may reduce or eliminate the number of notifications so that the user can focus, or keep the number of notifications normal if the user is relaxing.
  • An information processing method according to an embodiment of the present disclosure includes: estimating a user state; estimating an environmental state to be presented to the user based on the user state; and controlling output based on the environmental state.
  • An information processing program according to an embodiment of the present disclosure causes the processor of an information processing device to operate as: a user state estimation unit that estimates a user state; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and an output control unit that controls output based on the environmental state.
  • An information processing system according to an embodiment of the present disclosure includes: a wearable device; and an information processing device having a user state estimation unit that estimates a user state of a user wearing the wearable device, an environment estimation unit that estimates an environmental state to be presented to the user based on the user state, and an output control unit that controls output based on the environmental state.
  • FIG. 1 shows the configuration of an information processing system according to an embodiment of the present disclosure.
  • FIG. 2 schematically shows a worn wearable device.
  • FIG. 3 schematically shows individual differences in wearing wearable devices.
  • FIG. 4 schematically shows the concept of angle correction.
  • FIG. 5 shows an operation flow of the angle correction unit.
  • FIG. 6 schematically shows a user's movement.
  • FIG. 7 schematically shows the concept of angle correction.
  • FIG. 8 shows specific processing of the angle correction unit.
  • FIG. 9 shows a specific calculation example.
  • FIG. 10 shows the relationship between initial frames.
  • FIG. 11 shows a method of specifying the natural front.
  • FIG. 12 is a diagram for explaining the processing of the location estimation unit.
  • FIG. 13 shows an application example of the processing of the location estimation unit.
  • FIG. 14 shows a recognition example of the processing of the location estimation unit.
  • FIG. 15 shows an operation flow of the location estimation unit.
  • FIG. 16 shows a supplementary operation flow of the location estimation unit.
  • FIG. 17 shows the operation when different walking styles are identified for the same route.
  • FIG. 18 shows a modification of the method for estimating a location by the location estimation unit.
  • FIG. 19 is a flow for estimating the environmental state presented to the user from the context.
  • FIG. 20 shows the operation of the user state estimation unit.
  • FIG. 21 shows the mapping relationship between context and user state.
  • FIG. 22 shows how the user state estimation unit determines the user state.
  • FIG. 23 shows the operation of the environment estimation unit.
  • FIG. 24 shows the operation of the content control unit of the output control unit.
  • FIG. 25 shows the operation of the notification control unit of the output control unit.
  • FIG. 26 shows the configuration of a content reproduction system according to the present embodiment.
  • FIG. 27 shows an example of a GUI of the preset application.
  • FIG. 28 shows an operation flow of the content playback control application.
  • FIG. 29 shows an example of a table used to select a content providing application.
  • FIG. 1 shows the configuration of an information processing system according to one embodiment of the present disclosure.
  • the information processing system 10 has an information processing device 100 and a wearable device 200 .
  • the information processing device 100 is a terminal device used by an end user, such as a smartphone, tablet computer, or personal computer. Information processing apparatus 100 is connected to a network such as the Internet.
  • the wearable device 200 is a device worn on the user's head.
  • The wearable device 200 is typically a wireless earphone (FIG. 2), but may be a wireless headphone, a wired headphone, a wired earphone, an HMD (Head Mount Display) for AR (Augmented Reality) or VR (Virtual Reality), or the like.
  • Although FIG. 2 shows an open-ear earphone that does not completely cover the ear canal, the wearable device may be a canal-type earphone, a hearing aid, or a sound collector that closes the ear canal.
  • The information processing apparatus 100 and the wearable device 200 are communicably connected to each other by various types of short-range wireless communication such as Bluetooth (registered trademark) (specifically, BLE (Bluetooth Low Energy) GATT (Generic Attribute Profile)) and Wi-Fi (registered trademark).
  • Wearable device 200 has sensor section 210 .
  • the sensor unit 210 includes an acceleration sensor 211 that detects acceleration, a gyro sensor 212 that detects angular velocity, and a compass 213 that detects azimuth.
  • the sensor unit 210 further includes a biosensor 214 such as a heartbeat sensor, blood flow sensor, electroencephalogram sensor, or the like.
  • the wearable device 200 supplies the detection value of the sensor unit 210 to the information processing device 100 .
  • The information processing apparatus 100 has a context acquisition unit 110, a PDR (Pedestrian Dead Reckoning) unit 120 (user position estimation unit), a location estimation unit 130 (location attribute estimation unit), a user state estimation unit 140, an environment estimation unit 150, and an output control unit 160.
  • the context acquisition unit 110 acquires the user's context.
  • the user's context includes location information and terminal information.
  • the context is, for example, a sensor value obtained from the sensor unit 210, user's schedule information obtained from a calendar application, or the like.
  • the context acquisition unit 110 has a device such as a GPS sensor 111 and a beacon transmitter/receiver 112 that acquires location information as a context.
  • Context acquisition section 110 further includes terminal information acquisition section 113 that acquires terminal information as a context.
  • The terminal information acquisition unit 113 acquires, as terminal information serving as context, screen lock information (locked, unlocked), user behavior information (running, bicycling, stationary, walking, riding, etc.), location (a specific location such as home or office), and the like.
  • the PDR section 120 (user position estimation section) estimates the user position based on the detection values (acceleration, angular velocity and azimuth angle) of the sensor section 210 of the wearable device 200 worn by the user.
  • PDR section 120 has angle correction section 121 , angle estimation section 122 , and user position estimation section 123 .
  • the angle correction unit 121 calculates a correction value for the user's azimuth angle based on the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200 worn by the user.
  • the angle estimation unit 122 estimates the azimuth angle of the user based on the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200 worn by the user and the correction value.
  • the user position estimation unit 123 estimates the user position using the corrected azimuth angle.
  • The PDR unit 120 estimates changes in the user position from room to room, that is, the movement route of the user position, based on the acceleration, angular velocity, and azimuth angle detected by the acceleration sensor 211, the gyro sensor 212, and the compass 213.
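  • A minimal sketch of this kind of pedestrian dead reckoning is shown below, assuming steps are detected from peaks in the acceleration magnitude and the position is advanced by a fixed stride length in the current corrected heading. The stride length, the peak threshold, and the function names are illustrative assumptions, not the processing actually performed by the PDR unit 120.

    import math

    STRIDE_M = 0.7          # assumed stride length in meters (hypothetical constant)
    STEP_THRESHOLD = 11.0   # acceleration-magnitude peak threshold in m/s^2 (hypothetical)

    def update_position(x, y, heading_rad, accel_magnitudes, step_length=STRIDE_M):
        """Advance (x, y) by one stride in the current heading for every detected step.

        accel_magnitudes: recent acceleration-magnitude samples (m/s^2).
        heading_rad: corrected azimuth angle of the user, in radians.
        """
        steps = 0
        above = False
        for a in accel_magnitudes:
            # A step is counted on each upward crossing of the threshold.
            if a > STEP_THRESHOLD and not above:
                steps += 1
                above = True
            elif a < STEP_THRESHOLD:
                above = False
        x += steps * step_length * math.sin(heading_rad)
        y += steps * step_length * math.cos(heading_rad)
        return x, y

    # Example: two detected steps while heading due east (heading = pi/2).
    print(update_position(0.0, 0.0, math.pi / 2, [9.8, 12.0, 9.5, 12.3, 9.6]))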
  • the location estimation unit 130 estimates the attribute of the user's location (location attribute) based on the change in the user's position estimated by the PDR unit 120 . In other words, based on the moving route estimated by the PDR unit 120, the location attribute after the user moves is estimated.
  • a location attribute is, for example, a division within a building that is even finer than the building itself.
  • the location attribute is living room, bedroom, toilet, kitchen, washroom, etc. within one house.
  • the location attribute is a desk, conference room, etc. within one co-working space.
  • the location attribute is not limited to this, and the location attribute may indicate the building itself or the like, or may indicate both the building itself and the section within the building.
  • The user state estimation unit 140 estimates the user state based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130.
  • a user state indicates a user's multi-level activity state. For example, the user state indicates four levels of activity: break time, neutral, DND (Do Not Disturb) and offline. Break time is the most relaxed activity state, Neutral is the normal activity state, DND is the relatively busy activity state, and Offline is the busiest activity state. In addition to the four levels described above, it may be possible to set an arbitrary number of levels on the system, or allow the user to set the number of levels as appropriate.
  • the environment estimation unit 150 estimates the environmental state to be presented to the user based on the user state estimated by the user state estimation unit 140 .
  • the environment estimation unit 150 may further estimate the environmental state presented to the user based on the location attributes estimated by the location estimation unit 130 .
  • the environmental state presented to the user is, for example, an environmental state in which the user can focus (concentrate) or an environmental state in which the user can relax.
  • the output control unit 160 controls output based on the environmental state estimated by the environment estimation unit 150 .
  • the output control unit 160 has a content control unit 161 and a notification control unit 162 .
  • the content control unit 161 reproduces content (music, video, etc.) selected based on the environmental state estimated by the environment estimation unit 150 .
  • The content control unit 161 may notify a DSP (Digital Service Provider) of the environmental state via the network, receive content selected by the DSP based on this environmental state (for example, content on which the user can focus, or content with which the user can relax), and reproduce it.
  • the notification control unit 162 controls the number of notifications to the user based on environmental conditions.
  • For example, the notification control unit 162 may reduce or eliminate the number of notifications (e.g., notifications of new arrivals from applications or of messages) so that the user can focus, or may set the number of notifications to normal if the user is relaxing.
  • Fig. 2 schematically shows the worn wearable device.
  • the wearable device 200 is typically a wireless earphone.
  • a wearable device 200 which is a wireless earphone, has a speaker 221, a driver unit 222, and a sound conduit 223 connecting them.
  • the speaker 221 is inserted into the ear canal to position the wearable device 200 against the ear, and the driver unit 222 is located behind the ear.
  • a sensor section 210 including an acceleration sensor 211 and a gyro sensor 212 is built in a driver unit 222 .
  • Fig. 3 schematically shows individual differences in wearable devices worn.
  • the angle of the driver unit 222 of the wearable device 200 with respect to the front of the face differs for each user. Therefore, the angles of the sensor axes of the acceleration sensor 211 and the gyro sensor 212 of the sensor unit 210 built in the driver unit 222 with respect to the front of the face differ for each user.
  • (a) shows the case where the user wears the wearable device 200 shallowly hooked on the ear
  • (b) shows the case where the user wears the wearable device 200 deeply fixed to the ear.
  • the difference between the angle of the user's sensor axis with respect to the front face of (a) and the angle of the user's sensor axis with respect to the front of the face of (b) may be 30° or more. Therefore, the PDR unit 120 estimates the angle of the sensor axis of the sensor unit 210 with respect to the front of the face for each user, and uses this as a correction value to accurately estimate the orientation (angle) of the face without depending on individual differences.
  • FIG. 4 schematically shows the concept of angle correction.
  • Azimuth E is obtained from the three-dimensional posture obtained by integrating sensor values obtained by the gyro sensor 212 that detects angular velocity.
  • the Azimuth Offset differs for each user and cannot be measured just by wearing the device, so it is necessary to estimate the Azimuth Offset for each user.
  • Coordinate system (1) is a global frame (fixed), and is composed of a vertical Z-axis extending overhead, an X-axis connecting both ears and positive in the right direction, and a Y-axis orthogonal to the X-axis and Z-axis.
  • A coordinate system (2) is a sensor frame, that is, a coordinate system (X_E, Y_E, Z_E) fixed with respect to the sensor unit 210 of the wearable device 200.
  • Azimuth Offset, which is a correction value, indicates the amount of rotation of the coordinate system (2) with respect to the coordinate system (1).
  • FIG. 5 shows the operation flow of the angle corrector.
  • FIG. 6 schematically shows user movements.
  • FIG. 7 schematically shows the concept of angle correction.
  • FIG. 8 shows specific processing of the angle corrector.
  • FIG. 9 shows a specific calculation example.
  • the user wears the wearable device 200 and moves the head downward so as to look diagonally downward from the front ((a) of FIG. 6) ((b) of FIG. 6) (step S101).
  • the angle correction unit 121 calculates Pitch and Roll with respect to the global frame coordinate system (X, Y, Z) from the acceleration value when moving the head downward (step S102).
  • The angle correction unit 121 then starts collecting the angular velocity values of the gyro sensor 212; the time at this point is t0 (step S103) (process (2) in FIG. 8). Next, the user slowly raises his or her head so as to look diagonally upward from the front, without swaying to the left or right ((c) in FIG. 6) (step S104).
  • the angle correction unit 121 continues collecting angular velocity values of the gyro sensor 212 (step S105). When the user raises his or her head to the limit, the angle corrector 121 stops collecting the angular velocity values of the gyro sensor 212 . The time at this time is set to t1 (step S106, YES).
  • R_Z(θ), R_X(θ), and R_Y(θ) are the rotation matrices about the Z-axis, X-axis, and Y-axis, respectively.
  • The rotation axis of RotMat, RotMat_axis, is set to [r_X, r_Y, r_Z]^T (step S107). If r_Z deviates from the threshold value (that is, if its difference from 0 is large), the angle correction unit 121 treats the process as failed and redoes it (step S108, NO). If r_Z is within the threshold, the process proceeds to the next step (step S108, YES).
  • The angle correction unit 121 obtains the correction value (Azimuth Offset) from r_X and r_Y (step S109) (process (5) in FIG. 8).
  • The angle correction unit 121 obtains a rotation matrix (RotMat) from the Azimuth Offset, Pitch, and Roll (step S110). This RotMat is referenced to the axis of the front of the face.
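  • The following rough sketch illustrates, under stated assumptions, how such a correction value could be computed from the nod gesture: pitch and roll are taken from a static acceleration sample, the gyro is integrated during the upward nod, the rotation axis of the resulting rotation matrix is extracted, and the Azimuth Offset is derived from the horizontal components of that axis. The numerical conventions (signs, axis order) and the function names are hypothetical and may differ from the actual processing of the angle correction unit 121.

    import numpy as np

    def pitch_roll_from_gravity(acc):
        # Pitch and Roll of the sensor frame from one static acceleration sample
        # (step S102); the sign conventions here are illustrative assumptions.
        ax, ay, az = acc
        pitch = np.arctan2(-ax, np.hypot(ay, az))
        roll = np.arctan2(ay, az)
        return pitch, roll

    def integrate_gyro(gyro_samples, dt):
        # Chain angular-velocity samples (rad/s) into one rotation matrix
        # (first-order updates, adequate for a slow nod; steps S103-S106).
        R = np.eye(3)
        for wx, wy, wz in gyro_samples:
            skew = np.array([[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]])
            R = R @ (np.eye(3) + skew * dt)
        return R

    def rotation_axis(R):
        # Unit rotation axis [r_X, r_Y, r_Z] of a rotation matrix (step S107).
        axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
        return axis / np.linalg.norm(axis)

    def azimuth_offset(r_x, r_y):
        # Correction value from the horizontal components of the nod axis (step S109);
        # the exact sign/quadrant convention is an assumption.
        return np.arctan2(r_y, r_x)

    # Example: a level head gives zero pitch and roll.
    print(pitch_roll_from_gravity((0.0, 0.0, 9.8)))
    # Example: a 30-degree nod measured by a sensor yawed 20 degrees on the head.
    yaw = np.deg2rad(20.0)
    nod_axis_in_sensor = np.array([np.cos(yaw), np.sin(yaw), 0.0])
    gyro = [tuple(nod_axis_in_sensor * np.deg2rad(30.0))] * 100   # 30 deg/s for 1 s
    R = integrate_gyro(gyro, dt=0.01)
    r_x, r_y, r_z = rotation_axis(R)
    assert abs(r_z) < 0.05                                        # step S108 sanity check
    print(np.rad2deg(azimuth_offset(r_x, r_y)))                   # ~20.0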
  • FIG. 10 shows the relationship between initial frames.
  • Fig. 11 shows a method of specifying a natural front view.
  • R_t0, which is the posture of the right sensor (Right Sensor Pose), is obtained by the method of FIG.
  • R_t2 in the new attitude can be obtained from R_t0 and the acceleration sensor value in the new attitude by the method of FIG.
  • FIG. 12 is a diagram for explaining the processing of the location estimation unit.
  • (1) is the route from the living room to the bedroom
  • (2) is the route from the bedroom to the living room
  • (3) is the route from the living room to the toilet
  • (4) is the route from the toilet to the living room, (5) is the route from the living room to the kitchen, and (6) is the route from the kitchen to the living room.
  • For example, suppose the user wears the wearable device 200 and starts working in the living room. After a while, the user goes to the toilet, washes his or her hands in the washroom, and returns to the seat. After another while, the user moves to the kitchen, gets a drink, and returns to the living room.
  • the movement pattern here is as follows. From the living room to the toilet (route (3)). From the toilet to the living room (route (4)). From the living room to the kitchen (route (5)). From the kitchen to the living room (route (6)).
  • the place estimation unit 130 stores these four patterns and their order. The next time the user moves, the movement pattern is matched with the stored pattern. If the matching is successful, the place estimating unit 130 can specify the post-movement place, and if the matching is unsuccessful, the place estimating unit 130 adds it to the route list as a new pattern.
  • the route list includes movement patterns (top row) of "(1) living room to bedroom, (2) bedroom to living room, (5) living room to kitchen", and "(2) bedroom to living room, (5) living room
  • The location estimation unit 130 holds a plurality of movement routes and matches the movement route estimated by the PDR unit 120 against the plurality of held movement routes, so that the location attribute after movement (living room, bedroom, toilet, kitchen, washroom, etc.) can be estimated. The location estimation unit 130 may also estimate the location attribute by determining how long the user stays at the location where the user is. By determining the staying time in addition to the movement route, the location attribute can be estimated more accurately.
  • FIG. 13 shows an application example of the processing of the location estimation unit.
  • the coordinate system of FIG. 13 shows the transition of the user position with the origin as the starting point and the user position plotted periodically (eg, every second) as it progresses from the origin (starting point) to another room.
  • the axis (1) indicates the moving route from the living room (origin) to the bedroom.
  • the axis (2) indicates the movement path (distance) from the bedroom (origin) to the living room.
  • the axis (3) indicates the moving route from the living room (origin) to the toilet.
  • the axis (4) indicates the moving route from the toilet (origin) to the living room.
  • FIG. 14 shows a recognition example of processing by the location estimation unit.
  • the location estimation unit 130 attaches labels indicating attributes when learning routes. As a result, the label indicating the attribute can be automatically displayed when the matching is successful. Next, the operation of the location estimation unit 130 will be described more specifically.
  • FIG. 15 shows the operation flow of the location estimation unit.
  • the PDR unit 120 estimates the change of the user position from room to room, that is, the movement route of the user position (step S201).
  • the place estimating unit 130 detects that the user has stopped based on the change in the user's position detected and estimated by the PDR unit 120 (step S202, YES).
  • the location estimation unit 130 increments (+1) the stop counter (step S203).
  • Matching is performed with a plurality of moving routes (step S205). If the matching is successful (step S206, YES), the place estimating unit 130 identifies the post-movement place (step S207). On the other hand, if the matching fails (step S206, NO), the location estimating unit 130 adds it to the route list as a new pattern (step S208).
  • FIG. 16 shows a supplementary operation flow of the location estimation unit.
  • Even after matching has failed (step S206, NO) and the failure has continued for the predetermined number of times (step S209, YES), new movement routes continue to be added to the route list (step S208). Once enough new movement routes have accumulated in the route list for matching to succeed, matching succeeds (step S206, YES) and the location after movement can be identified (step S207).
  • When the matching failure continues for a predetermined number of times (step S209, YES), the location estimation unit 130 outputs a warning indicating that the user may be at another place not registered in the route list (step S210). This makes it possible to notify the user that the location attribute after movement will be estimated from the new movement route.
  • FIG. 17 shows the operation when different walking styles are identified for the same route. Even when the walking speed differs between traversals of the same route, the location estimation unit 130 can match the estimated movement route against the held movement routes by using DTW (dynamic time warping), which absorbs differences in time scale between the two sequences (a minimal matching sketch follows below).
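  • A minimal sketch of DTW-based route matching is shown below, assuming each movement route is held as a sequence of (x, y) points. The distance threshold, the route labels, and the failure counter that triggers the warning are hypothetical values used only to illustrate the flow of steps S205 through S210.

    import math

    def dtw_distance(route_a, route_b):
        # Classic dynamic-time-warping distance between two point sequences,
        # which absorbs differences in walking speed along the same path.
        n, m = len(route_a), len(route_b)
        INF = float("inf")
        d = [[INF] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = math.dist(route_a[i - 1], route_b[j - 1])
                d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
        return d[n][m]

    class RouteList:
        MATCH_THRESHOLD = 5.0   # hypothetical DTW threshold
        MAX_FAILURES = 3        # hypothetical "predetermined number of times"

        def __init__(self):
            self.routes = []    # list of (label, point sequence)
            self.failures = 0

        def match(self, new_route):
            best = min(self.routes, key=lambda r: dtw_distance(r[1], new_route), default=None)
            if best and dtw_distance(best[1], new_route) < self.MATCH_THRESHOLD:
                self.failures = 0
                return best[0]                                 # matching succeeded
            self.routes.append(("unlabeled", new_route))       # add as a new pattern
            self.failures += 1
            if self.failures >= self.MAX_FAILURES:
                print("warning: possibly a place not registered in the route list")
            return None

    routes = RouteList()
    routes.routes.append(("living room -> toilet", [(0, 0), (1, 0), (2, 0), (2, 1)]))
    # The same path walked more slowly still matches thanks to DTW.
    print(routes.match([(0, 0), (0.5, 0), (1, 0), (1.5, 0), (2, 0), (2, 1)]))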
  • FIG. 18 shows a modification of the method for estimating the location by the location estimating unit.
  • the location estimation unit 130 may estimate the attribute of the location where the user is located (location attribute), especially outdoors, based on the location information acquired by the GPS sensor 111 and the beacon transmitter/receiver 112 .
  • the place estimation unit 130 may estimate the attribute of the place where the user is (place attribute) based on the biometric information acquired by the biosensor 214 . For example, if it is known that the user is falling asleep based on the biometric sensor 214 (heartbeat sensor or the like), the location estimation unit 130 may estimate the bedroom as the location attribute.
  • FIG. 19 is a flow for estimating the environmental state presented to the user from the context.
  • the context acquisition unit 110 acquires the user's context.
  • The user state estimation unit 140 estimates the user state based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130.
  • the environment estimation unit 150 estimates the environmental state (focus (concentration), relaxation, etc.) to be presented to the user.
  • FIG. 20 shows the operation of the user state estimation unit.
  • The user state estimation unit 140 estimates the user state based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130.
  • the user's context includes location information and terminal information.
  • Terminal information includes screen lock information (locked, unlocked), user behavior information (running, bicycling, stationary, walking, riding, etc.), location (a specific location such as home or office, or an unspecified location), calendar application information (meeting scheduled, no meeting), time information (during work hours, outside work hours), phone application information (during a call), voice recognition application information (during speaking), automatic DND (Do Not Disturb) settings (within the time frame, outside the time frame), manual DND settings (on, offline), and so on.
  • a user state indicates a user's multi-level activity state. For example, the user state indicates four levels of activity: break time, neutral, DND (Do Not Disturb) and offline. Break time is the most relaxed activity state, Neutral is the normal activity state, DND is the relatively busy activity state, and Offline is the busiest activity state.
  • FIG. 21 shows the mapping relationship between context and user state.
  • The user state estimation unit 140 estimates the user state by mapping the context to the user state. For example, if the screen lock information serving as the context is unlocked, the user state estimation unit 140 estimates that the user state is DND, and if the screen lock information is locked, it estimates that the user state is neutral. The user state estimation unit 140 similarly estimates user states for the other contexts. The context is not limited to what is shown in FIG. 21, and any information may be used as long as it represents some kind of context.
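  • A sketch of such a context-to-user-state mapping is shown below. Only the screen-lock rule is taken from the example above; the remaining entries are hypothetical placeholders standing in for the mapping of FIG. 21.

    # Each context item maps to a candidate user state.
    CONTEXT_TO_STATE = {
        ("screen_lock", "unlocked"): "DND",        # from the example above
        ("screen_lock", "locked"): "neutral",      # from the example above
        ("manual_dnd", "offline"): "offline",      # hypothetical entries below
        ("calendar", "meeting_scheduled"): "DND",
        ("time", "outside_work_hours"): "break time",
    }

    def states_from_context(context):
        """context: dict of context key -> value; returns the candidate user states."""
        return [CONTEXT_TO_STATE[(k, v)] for k, v in context.items() if (k, v) in CONTEXT_TO_STATE]

    print(states_from_context({"screen_lock": "unlocked", "time": "outside_work_hours"}))
    # ['DND', 'break time']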
  • FIG. 22 shows how the user state estimation unit determines the user state.
  • the user state estimation unit 140 estimates the user state as offline if even one of the contexts includes offline.
  • the user state estimation unit 140 estimates the user state as DND if there are no offline contexts and at least one context includes DND.
  • The user state estimation unit 140 estimates the user state as neutral if none of the contexts includes offline, DND, or break time.
  • The user state estimation unit 140 estimates the user state as break time if no context includes offline or DND and at least one context includes break time (these rules are sketched below).
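  • The determination above amounts to a fixed priority over the candidate states (offline > DND > break time > neutral). A minimal sketch, assuming the candidate states have already been collected from the individual contexts:

    def determine_user_state(candidate_states):
        # Priority order taken from the rules above: offline beats DND,
        # DND beats break time, and neutral is the fallback.
        if "offline" in candidate_states:
            return "offline"
        if "DND" in candidate_states:
            return "DND"
        if "break time" in candidate_states:
            return "break time"
        return "neutral"

    print(determine_user_state(["DND", "break time"]))   # DND
    print(determine_user_state(["break time"]))          # break time
    print(determine_user_state([]))                      # neutral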
  • FIG. 23 shows the operation of the environment estimation unit.
  • the environment estimation unit 150 estimates the environmental state to be presented to the user based on the user state estimated by the user state estimation unit 140 and the location attribute estimated by the location estimation unit 130 .
  • the environmental state presented to the user is, for example, an environmental state in which the user can focus (concentrate) or an environmental state in which the user can relax.
  • For example, (1) when the time period is during work, the user state is neutral, the action is stay, and the location is desk, the environment estimation unit 150 estimates that the environmental state to be presented to the user is focus. (2) If the time period is during work and the user state is break time, the environment estimation unit 150 estimates that the environmental state to be presented to the user is relax. (3) If the time period is outside work and the user state is break time, the environment estimation unit 150 estimates that the environmental state to be presented to the user is relax (a sketch of these rules follows).
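  • A sketch of these three example rules as a simple lookup is shown below; the rule set and the fallback value are illustrative only and do not represent an exhaustive mapping.

    def estimate_environmental_state(time_period, user_state, action=None, location=None):
        # Rules (1)-(3) above; anything not covered falls back to None (no change).
        if time_period == "work" and user_state == "neutral" and action == "stay" and location == "desk":
            return "focus"
        if time_period == "work" and user_state == "break time":
            return "relax"
        if time_period == "non-work" and user_state == "break time":
            return "relax"
        return None

    print(estimate_environmental_state("work", "neutral", action="stay", location="desk"))  # focus
    print(estimate_environmental_state("non-work", "break time"))                           # relax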
  • FIG. 24 shows the operation of the content control section of the output control section.
  • the content control unit 161 of the output control unit 160 reproduces content (music, video, etc.) selected based on the environmental state estimated by the environment estimation unit 150 .
  • The content control unit 161 notifies the DSP (Digital Service Provider) of the environmental state via the network, receives content selected by the DSP based on this environmental state (content that allows the user to focus, or content that allows the user to relax), and reproduces it.
  • For example, if the environmental state is focus, the content control unit 161 plays music that helps the user concentrate, and if the environmental state is relax, it plays music that helps the user relax.
  • the content control unit 161 reproduces sleep-promoting music if the user state is relaxed, and stops the music when the user falls asleep.
  • FIG. 25 shows the operation of the notification control section of the output control section.
  • The notification control unit 162 of the output control unit 160 controls the number of notifications to the user based on the environmental state. For example, the notification control unit 162 may reduce or eliminate the number of notifications (notifications of new arrivals from applications or of messages) so that the user can focus, or may keep the number of notifications normal if the user is relaxing. For example, if the user is at work and the environmental state is focus, the notification control unit 162 reduces the number of notifications, and if the environmental state is relax, it issues the normal number of notifications (see the sketch below).
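  • A sketch of this kind of notification throttling is shown below; the keep_every rate is a hypothetical parameter used only to show that a focus state reduces (or can eliminate) notifications while other states leave them unchanged.

    def filter_notifications(notifications, environmental_state, keep_every=3):
        # focus: let only every keep_every-th notification through (or none at all);
        # relax / anything else: deliver the normal number of notifications.
        if environmental_state == "focus":
            return notifications[::keep_every]
        return notifications

    incoming = [f"message {i}" for i in range(6)]
    print(filter_notifications(incoming, "focus"))   # ['message 0', 'message 3']
    print(filter_notifications(incoming, "relax"))   # all six messages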
  • According to the present embodiment, it is possible to output content that encourages focus (concentration) or relaxation based on the user's location in the house and other user contexts. The output to the user can be controlled appropriately regardless of the situation, such as a situation in which the user does not want to make a sound. For example, based on the user context, focusable content can be output when the user is at the desk while teleworking, and relaxing music can be played when the user is at a resting place.
  • According to the present embodiment, it is possible to identify the position inside the house using the sensor unit 210 (the acceleration sensor 211, the gyro sensor 212, and the compass 213) attached to the wearable device 200, without any external equipment. Specifically, by storing the patterns of moved-to places and their order, the place after the user moves can be identified from the N most recent movement patterns.
  • Telework has become commonplace, and users spend more time at home, not only relaxing but also focusing on work. It is therefore likely that there are now more users who do not want to make noise, and more situations in which they do not want to make noise, than in the past when telework was not widespread. Accordingly, as in the present embodiment, it will be increasingly useful to identify the location in the house, estimate the environmental state to be presented to the user, and control the output to the user without the need to speak.
  • the user state is estimated by mapping the context obtained from each sensor information to the user state, so the user state can be estimated without speaking and making a sound.
  • the context obtained from each sensor information is mapped to the user state, the amount of calculation is much smaller than that of natural language processing, and local processing is easy.
  • FIG. 26 shows the configuration of a content reproduction system according to this embodiment.
  • the content reproduction system 20 has an information processing device 100 and a wearable device 200 .
  • In the information processing apparatus 100, a processor such as a CPU of the control circuit loads a content reproduction control application 300, a content providing application 400, and a preset application 500, which are recorded in a ROM, into a RAM and executes them.
  • the content reproduction control application 300 may be installed in the wearable device 200 instead of the information processing apparatus 100 and executed by the wearable device 200 .
  • the wearable device 200 is, as described above, wireless earphones (see FIG. 2), wireless headphones, wired headphones, wired earphones, or the like.
  • the wearable device 200 has a sensor section 210 and an input device 220 .
  • the sensor unit 210 includes an acceleration sensor 211, a gyro sensor 212, a compass 213, and a biosensor 214 such as a heart rate sensor, a blood flow sensor, an electroencephalogram sensor (see FIG. 1).
  • Wearable device 200 inputs the detection value of sensor unit 210 to content reproduction control application 300 and content providing application 400 .
  • the input device 220 is a touch sensor, a physical button, a non-contact sensor, or the like, and inputs a contact or non-contact operation by the user.
  • the input device 220 is provided on the outer surface of the driver unit 222 (see FIG. 2) of the wearable device 200, for example.
  • the content providing application 400 provides content.
  • a content providing application 400 is an application group including a plurality of different content providing applications 401 and 402 .
  • a plurality of different content providing applications 401 and 402 respectively provide content (specifically, audio content) of different genres such as music, environmental sounds, healing sounds, and radio programs.
  • the content providing application 400 is simply referred to when the different content providing applications 401 and 402 are not distinguished.
  • the content reproduction control application 300 includes the context acquisition unit 110, the PDR (Pedestrian Dead Reckoning) unit 120 (user position estimation unit), the location estimation unit 130 (location attribute estimation unit), and the user state estimation unit 140. , the environment estimation unit 150, and the content control unit 161 of the output control unit 160 (see FIG. 1).
  • the content control unit 161 selects the content providing application 400 based on the environmental state estimated by the environment estimation unit 150 or based on different operations input by the user to the input device 220 of the wearable device 200 .
  • The content control unit 161 generates a cue for the content providing application 400 to select content based on the environmental state, outputs the generated cue to the selected content providing application 400, causes the content providing application 400 to select content based on the cue, and reproduces the content from the wearable device 200.
  • the preset application 500 pre-assigns a plurality of different operations input by the user to the input device 220 of the wearable device 200 to a plurality of different functions related to services provided by the content providing application 400 .
  • For example, the preset application 500 assigns, in advance, a plurality of different operations input by the user to the input device 220 of the wearable device 200 to selection of the plurality of different content providing applications 401 and 402.
  • Preset application 500 may be independent of content reproduction control application 300 or may be included in content reproduction control application 300 .
  • FIG. 27 shows an example of the GUI of the preset application.
  • the preset application 500 has, for example, a playback control GUI 710, a volume control GUI 720, and a quick access control GUI 730. Note that the GUI provided by the preset application 500 and the combination of settable functions and operations differ depending on the model of the wearable device 200 .
  • Using the playback control GUI 710, the user can assign a plurality of different operations input to the input devices 220 of the left and right wearable devices 200 to the functions used during content playback. For example, the user can assign a single-tap operation on the right wearable device 200 to play and pause, a double-tap operation to playing the next song, a triple-tap operation to playing the previous song, and a long-press operation to activating the voice assistant function. Note that the functions assigned to each operation may be functions other than those described above, and functions may be assigned to each operation by default.
  • the user can use the volume control GUI 720 to assign a plurality of different operations that the user inputs to the input devices 220 of the left and right wearable devices 200 to each function of the volume control. For example, the user can assign a single-tap operation of the left wearable device 200 to volume up and a long press operation to volume down.
  • Using the quick access control GUI 730, the user can assign a plurality of different operations input to the input devices 220 of the left and right wearable devices 200 to a quick access function that selects and activates the plurality of different content providing applications 401 and 402. For example, the user can assign a double-tap operation on the left wearable device 200 to launching the content providing application 401 and a triple-tap operation to launching the content providing application 402.
  • In this way, the preset application 500 can assign a plurality of different operations input by the user to the input devices 220 of the left and right wearable devices 200 not only to playback control and volume control while the content providing application 400 is running, but also to the selection and activation of the content providing application 400 itself (a sketch of such an assignment table follows below).
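  • A sketch of the kind of operation-to-function table the preset application 500 could maintain is shown below, assuming each input is reported as a (left/right side, gesture) pair. The concrete assignments are the examples given for the GUIs above; the dictionary-based representation itself is an assumption.

    # (side, gesture) -> assigned function, following the examples given for
    # the playback control, volume control, and quick access GUIs.
    PRESET_ASSIGNMENTS = {
        ("right", "single_tap"): "play_pause",
        ("right", "double_tap"): "next_track",
        ("right", "triple_tap"): "previous_track",
        ("right", "long_press"): "voice_assistant",
        ("left", "single_tap"): "volume_up",
        ("left", "long_press"): "volume_down",
        ("left", "double_tap"): "launch_content_providing_application_401",
        ("left", "triple_tap"): "launch_content_providing_application_402",
    }

    def handle_input(side, gesture):
        return PRESET_ASSIGNMENTS.get((side, gesture), "no_assignment")

    print(handle_input("left", "double_tap"))   # launch_content_providing_application_401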
  • FIG. 28 shows the operational flow of the content playback control application.
  • the context acquisition unit 110 acquires the user's context.
  • The user state estimation unit 140 estimates the user state (the four-level activity state: break time, neutral, DND (Do Not Disturb), and offline) based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130.
  • Next, the environment estimation unit 150 estimates the environmental state (focus (concentration), relax, etc.) to be presented to the user (see FIG. 19).
  • the content control unit 161 of the output control unit 160 detects an appropriate timing to start reproducing content based on the environmental state estimated by the environment estimation unit 150 (step S301).
  • the content control unit 161 of the output control unit 160 selects the content providing application 400 .
  • the content control unit 161 selects the content providing application 400 based on different operations input by the user to the input device 220 of the wearable device 200 .
  • the content control unit 161 selects the content providing application 401 if the operation input by the user to the input device 220 of the wearable device 200 is a double tap, and selects the content providing application 402 if it is a triple tap.
  • the content control unit 161 selects the content providing application 400 based on the environmental state (scenario described later) estimated by the environment estimation unit 150 (step S302).
  • The content control unit 161 may also select the content providing application 400 based on settings made by the user (for example, setting the content providing application 400 to use in advance according to the situation), or based on behavior such as not firing the scenario under the same conditions when the user has repeatedly refused it.
  • FIG. 29 shows an example of a table used for selecting content providing applications.
  • the content control unit 161 refers to the table 600 and selects the content providing application 400 .
  • Table 600 has an ID 601, a scenario 602, a user context 603, and a cue 604.
  • a scenario 602 corresponds to the environmental state estimated by the environment estimation unit 150 .
  • the user context 603 corresponds to the user state estimated by the user state estimation unit 140 based on the user's context acquired by the context acquisition unit 110 .
  • The cue 604 is a cue for the content providing application 400 to select content.
  • A selection flag 605 of the content providing application 401 and a selection flag 606 of the content providing application 402 are recorded in the nine records whose IDs 601 are Music_01 to Music_09.
  • a record in which only the selection flag 605 is recorded means that the content providing application 401 is selected in the scenario 602 (environmental state).
  • A record in which both of the selection flags 605 and 606 are recorded means that either one of the content providing applications 401 and 402 is selected, depending on further conditions, in the scenario 602 (environmental state).
  • the content control unit 161 may learn in advance and select the content providing application 400 that is frequently executed at the current time, the content providing application 400 that is frequently used, and the like.
  • The content control unit 161 of the output control unit 160 generates a cue 604 for the selected content providing application 400 to select content, based on the scenario 602 (environmental state) (step S303).
  • the content control unit 161 outputs the generated cue to the selected content providing application 400, causes the content providing application 400 to select content based on the cue, and reproduces the content from the wearable device 200 (step S304).
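  • A sketch of this table-driven flow (steps S302 and S303) is shown below, with Table 600 reduced to a few hypothetical records and the cue represented as a plain dictionary that the selected content providing application can act on.

    # A few hypothetical records in the spirit of Table 600 (ID, scenario,
    # user context, cue, and selection flags for applications 401 and 402).
    TABLE_600 = [
        {"id": "Music_01", "scenario": "focus", "user_context": "DND",
         "cue": {"mood": "focus"}, "app_401": True, "app_402": False},
        {"id": "Music_02", "scenario": "relax", "user_context": "break time",
         "cue": {"mood": "relax"}, "app_401": False, "app_402": True},
    ]

    def select_app_and_cue(scenario, user_context):
        # Steps S302/S303: pick the record matching the scenario (environmental state)
        # and the user context, then return the selected application and its cue.
        for record in TABLE_600:
            if record["scenario"] == scenario and record["user_context"] == user_context:
                app = ("content_providing_application_401" if record["app_401"]
                       else "content_providing_application_402")
                return app, record["cue"]
        return None, None

    print(select_app_and_cue("relax", "break time"))
    # ('content_providing_application_402', {'mood': 'relax'})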
  • The content providing application 400 may select a plurality of content candidates based on the cue from the content reproduction control application 300, and may select the content to be reproduced from the plurality of candidates based on the detection values input from the sensor unit 210 of the wearable device 200.
  • the content providing application 400 may select content with a fast tempo that matches the user's running speed based on the detected value input from the sensor unit 210 .
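  • As one illustrative way such a selection could work, the sketch below turns a sensor-derived cadence into a tempo preference when narrowing down candidates; the cadence-to-BPM matching heuristic and the candidate metadata are assumptions, not the actual logic of the content providing application 400.

    def pick_candidate_by_tempo(candidates, steps_per_minute):
        # candidates: list of (title, bpm) supplied for the cue; choose the track whose
        # tempo is closest to the user's cadence (a common heuristic, assumed here).
        return min(candidates, key=lambda c: abs(c[1] - steps_per_minute))

    candidates = [("slow piece", 90), ("mid piece", 130), ("fast piece", 170)]
    print(pick_candidate_by_tempo(candidates, steps_per_minute=165))   # ('fast piece', 170)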
  • The content control unit 161 of the content reproduction control application 300 then detects the timing to start reproducing other content based on the environmental state (step S301), selects the content providing application 400 (step S302; this step can be omitted), generates the cue 604 (step S303), and reproduces the content from the wearable device 200 (step S304).
  • the content reproduction control application 300 has user information (that is, user context 603 (user state), scenario 602 (environmental state)) that the content providing application 400 cannot know. Therefore, the content reproduction control application 300 can know cases where it is desirable to change the content being reproduced by the content providing application 400 .
  • Based on the user information that the content reproduction control application 300 knows (that is, the user context 603 (user state) and the scenario 602 (environmental state)), the content reproduction control application 300 can transmit a cue to the content providing application 400 to change the content being reproduced, thereby providing the user with more desirable content (music, healing sounds, etc.).
  • The content control unit 161 of the content reproduction control application 300 may also generate a cue for the content providing application 400 to stop (rather than change) the reproduction of the content based on the scenario 602 (environmental state) (step S303), output the cue to the content providing application 400, and cause the content providing application 400 to stop the reproduction of the content based on the cue (step S304). For example, there are cases where it is better to stop the music due to a state change such as the start of a meeting.
  • the content playback control application 300 detects these states and sends a stop command to the content providing application 400 .
  • For example, the content providing application 400 may select and play content with a fast tempo that matches the running speed of the user, based on the detection values input from the sensor unit 210 (for example, according to predetermined values of heart rate and acceleration).
  • In other words, the content providing application 400 can actively select attributes of the content to be reproduced (tempo, pitch, etc.) based on the detection values input from the sensor unit 210, without receiving a cue from the content control unit 161 of the content reproduction control application 300, and play back the selected content. In short, during content playback, the content providing application 400 can actively change the content to be played back.
  • the content reproduction control application 300 selects the content providing application 400 and outputs a cue to the content providing application 400 . Therefore, it is not necessary for the content providing application 400 to consider content reproduction conflicts between a plurality of different content providing applications 401 and 402 .
  • The content reproduction control application 300 generates the cue for the content providing application 400 to select content based on the environmental state, which is the user's sensitive information. The content reproduction control application 300 therefore does not need to share the environmental state, which is the user's sensitive information, with the content providing application 400, yet content reflecting the environmental state can still be played back. This makes it possible to improve the user experience while reducing the security risk.
  • the content reproduction control application 300 selects the content providing application 400, and the selected content providing application 400 reproduces the content. Furthermore, the preset application 500 allows the content reproduction control application 300 to select the content providing application 400 based on different operations input by the user to the input device 220 of the wearable device 200 . This makes it possible to provide a user experience that integrates the services of a plurality of different content providing applications 401 and 402 without requiring active selection by the user.
  • the present disclosure may have the following configurations.
  • a user state estimation unit that estimates a user state
  • an environment estimation unit that estimates an environmental state to be presented to the user based on the user state
  • an output control unit that controls output based on the environmental state
  • An information processing device comprising the above. (2) The information processing device according to (1) above, further comprising: a user position estimation unit that estimates a user position based on a detection value of a sensor unit of a wearable device worn by the user; and a location attribute estimation unit that estimates a location attribute, which is an attribute of the location where the user is, based on the user position, wherein the user state estimation unit estimates the user state based on the location attribute.
  • The information processing device, wherein the user position estimation unit has: an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user; an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user and the correction value; and a user position estimation unit that estimates the user position using the azimuth angle.
  • The information processing device, wherein the user position estimation unit estimates a movement route of the user position, and the location attribute estimation unit estimates the location attribute after movement based on the movement route.
  • the location attribute estimation unit stores a plurality of movement routes, and estimates the location attribute after movement by matching the estimated movement route with the plurality of held movement routes.
  • The information processing device, wherein the location attribute estimation unit performs the matching using DTW (dynamic time warping).
  • the information processing device according to any one of (1) to (11) above, further comprising a context acquisition unit that acquires the context of the user; The information processing apparatus, wherein the user state estimation unit estimates the user state based on the acquired context.
  • the context includes at least one of location information of the user and terminal information of the information processing device.
  • Information processing apparatus wherein the user state estimation unit estimates the user state based on the detection value of the sensor unit of the wearable device and/or the location attribute.
  • The information processing device, wherein the user state indicates a plurality of activity states of the user.
  • The information processing device, wherein the output control unit includes: a content control unit that reproduces content selected based on the environmental state; and/or a notification control unit that controls the number of notifications to the user based on the environmental state.
  • (17) An information processing method comprising: estimating a user state; estimating an environmental state to be presented to a user based on the user state; and controlling output based on the environmental state.
  • An information processing program that causes the processor of an information processing device to operate as: a user state estimation unit that estimates a user state; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and an output control unit that controls output based on the environmental state.
  • An information processing system comprising: a wearable device; and an information processing device having a user state estimation unit that estimates a user state of a user wearing the wearable device, an environment estimation unit that estimates an environmental state to be presented to the user based on the user state, and an output control unit that controls output based on the environmental state. (20) A non-transitory computer-readable recording medium recording an information processing program that causes the processor of an information processing device to operate as: a user state estimation unit that estimates a user state; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and an output control unit that controls output based on the environmental state.
  • the present disclosure may have the following configurations.
  • wearable devices and a user state estimation unit that estimates a user state of a user wearing the wearable device; an environment estimation unit that estimates an environmental state of the user based on the user state;
  • a content providing application that provides content generates a cue for selecting content based on the environmental state, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue.
  • a content control unit that reproduces the content
  • a content playback control application having an information processing device having a control circuit that executes
  • a content playback system comprising: (2) The content reproduction system according to (1) above, the control circuit of the information processing device executes a plurality of different content providing applications; The content reproduction system, wherein the content control unit selects a predetermined content providing application for reproducing the content based on the environmental state. (3) The content reproduction system according to (1) or (2) above, the control circuit of the information processing device executes a plurality of different content providing applications; The wearable device has an input device, The content reproduction system, wherein the content control unit selects a predetermined content providing application for reproducing the content based on different operations input by a user to the wearable device.
  • the wearable device has a sensor unit
  • The content reproduction system, wherein the content playback control application further has: a user position estimation unit that estimates a user position based on a detection value input from a sensor unit of the wearable device worn by the user; and a location attribute estimation unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user position; and wherein the user state estimation unit estimates the user state based on the location attribute.
  • the sensor unit of the wearable device includes at least one of an acceleration sensor, a gyro sensor, a compass, and a biosensor.
  • The content reproduction system according to (6) or (7) above, wherein the content providing application selects a plurality of content candidates based on the cue and selects the content to be reproduced from the plurality of candidates based on the detection value input from the sensor unit.
  • (9) The content reproduction system according to any one of (6) to (8) above, wherein the content providing application selects an attribute of the content to be reproduced based on the detection value input from the sensor unit and reproduces the selected content during reproduction of the content.
  • The content reproduction system, wherein the content control unit generates a cue for the content providing application to stop reproducing the content based on the environmental state, outputs the cue to the content providing application, and causes the content providing application to stop the reproduction of the content based on the cue.
  • The content reproduction system, wherein the content playback control application further comprises a context acquisition unit that acquires the context of the user, and the user state estimation unit estimates the user state based on the acquired context.
  • a user state estimation unit that estimates a user state of a user wearing the wearable device; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
  • a content providing application that provides content generates a cue for selecting content based on the environmental state, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue.
  • a content control unit that reproduces the content
  • a content playback control application having An information processing device comprising a control circuit for executing (13) The control circuit of the information processing device, a user state estimation unit that estimates a user state of a user wearing the wearable device; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
  • a content providing application that provides content generates a cue for selecting content based on the environmental state, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue.
  • a content reproduction control application that operates as a content control unit that reproduces the content.
  • the control circuit of the information processing device a user state estimation unit that estimates a user state of a user wearing the wearable device; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
  • a content providing application that provides content generates a cue for selecting content based on the environmental state, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue.
  • a non-transitory computer-readable recording medium recording a content reproduction control application that operates as a content control unit that reproduces the content.
  • Reference signs: 10 information processing system; 100 information processing device; 110 context acquisition unit; 111 GPS sensor; 112 beacon transceiver; 113 terminal information acquisition unit; 120 PDR unit; 121 angle correction unit; 122 angle estimation unit; 123 user position estimation unit; 130 location estimation unit; 140 user state estimation unit; 150 environment estimation unit; 160 output control unit; 161 content control unit; 162 notification control unit; 200 wearable device; 210 sensor unit; 211 acceleration sensor; 212 gyro sensor; 213 compass; 214 biosensor

Abstract

[Problem] To provide an information processing device, an information processing method, an information processing program, and an information processing system that appropriately control output to a user regardless of the situation. [Solution] An information processing device comprising: a user state estimation unit for estimating a user state; an environment estimation unit for estimating an environmental state to be presented to the user on the basis of the user state; and an output control unit for controlling output on the basis of the environmental state.

Description

Information processing device, information processing method, information processing program, and information processing system
 The present disclosure relates to an information processing device, an information processing method, an information processing program, and an information processing system that control output to a user.
 There is a technology that recognizes speech and environmental sounds and selects and outputs content such as music based on the recognized sounds (Patent Document 1).
U.S. Patent No. 10,891,948; U.S. Patent No. 9,398,361
 A technology that recognizes speech and environmental sounds is applicable only to environments in which there is sound. Therefore, appropriate content may not be selectable for a user who does not want to make noise or in a situation in which making noise is undesirable. In addition, natural language processing requires high computational power, which makes it difficult to process locally.
 In view of the circumstances described above, an object of the present disclosure is to provide an information processing device, an information processing method, an information processing program, and an information processing system that appropriately control output to the user regardless of the situation.
 An information processing device according to one aspect of the present disclosure includes:
 a user state estimation unit that estimates a user state;
 an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
 an output control unit that controls output based on the environmental state.
 According to this embodiment, it is possible to appropriately control the output to the user regardless of the situation.
 The information processing device may further include: a user position estimation unit that estimates a user position based on a detection value of a sensor unit of a wearable device worn by the user; and a location attribute estimation unit that estimates a location attribute, which is an attribute of the location where the user is, based on the user position. The user state estimation unit may estimate the user state based on the location attribute.
 The user position estimation unit may estimate the user position using PDR (Pedestrian Dead Reckoning).
 According to this embodiment, it is possible to appropriately control the output to the user based on the user's location in the house and other user contexts, regardless of situations such as those in which the user does not want to make noise.
 The environment estimation unit may estimate the environmental state based on the location attribute.
 For example, when the user is at a work desk during telework, the output to the user can be controlled so that the user can focus on the work, and when the user is in a break area, the output to the user can be controlled so that the user can relax.
 The sensor unit of the wearable device may include at least one of an acceleration sensor, a gyro sensor, a compass, and a biosensor.
 Unlike outdoors, indoor spaces are relatively small, and external equipment such as high-precision beacons or cameras is generally required to estimate a specific position. In contrast, according to this embodiment, the position inside a house can be identified using the acceleration sensor, gyro sensor, and/or compass mounted on the wearable device, without any external equipment.
 The user position estimation unit may have: an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user; an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user and the correction value; and a user position estimation unit that estimates the user position using the azimuth angle.
 The angle at which the wearable device is worn differs from user to user. Therefore, the angles of the sensor axes of the acceleration sensor and the gyro sensor differ from user to user. The user position estimation unit therefore estimates the angle of the sensor axis of the sensor unit for each user and uses it as a correction value, so that the orientation (angle) can be estimated with high accuracy without depending on individual differences.
 The user position estimation unit may estimate a movement route of the user position, and the location attribute estimation unit may estimate the location attribute after movement based on the movement route.
 According to this embodiment, for example, when the user is at a work desk during telework, the output can be controlled so that the user can focus on the work, and when the user is in a break area, the output can be controlled so that the user can relax.
 The location attribute estimation unit may store a plurality of movement routes and estimate the location attribute after movement by matching the estimated movement route against the plurality of stored movement routes.
 According to this embodiment, the patterns in which the user moves between places and their order can be stored, and the user's location after movement can be identified from the most recently observed movement pattern.
 The location attribute estimation unit may output a warning when matching fails a predetermined number of times.
 This makes it possible, for example, when the user moves from home to a different indoor space (such as a coworking space) and movement routes completely different from the plurality of stored movement routes continue to be detected, to notify the user that the location attribute after movement will be estimated from the new movement routes.
 The location attribute estimation unit may perform the matching using DTW (dynamic time warping).
 The location attribute estimation unit may estimate the location attribute by also determining how long the user stays at the place where the user is.
 By considering the stay time in addition to the movement route, the location attribute can be estimated more accurately.
 The information processing device may further include a context acquisition unit that acquires a context of the user, and the user state estimation unit may estimate the user state based on the acquired context.
 The context may include at least one of location information of the user and terminal information of the information processing device.
 By estimating the user state based not only on the location attribute but also on the user's context, the user state can be estimated more accurately.
 The user state estimation unit may estimate the user state based on the detection value of the sensor unit of the wearable device and/or the location attribute.
 This makes it possible to estimate the user state more accurately.
 The user state may indicate a plurality of activity states of the user.
 For example, the user state indicates four levels of activity state: break time, neutral, DND (Do Not Disturb), and offline. Break time is the most relaxed activity state, neutral is a normal activity state, DND is a relatively busy activity state, and offline is the busiest activity state.
 The output control unit may have: a content control unit that reproduces content selected based on the environmental state; and/or a notification control unit that controls the number of notifications to the user based on the environmental state.
 The content control unit may reproduce content that helps the user focus or content that helps the user relax. The notification control unit may reduce or eliminate the number of notifications so that the user can focus, and may keep the number of notifications as usual while the user is relaxing.
 An information processing method according to one aspect of the present disclosure includes: estimating a user state; estimating an environmental state to be presented to the user based on the user state; and controlling output based on the environmental state.
 An information processing program according to one aspect of the present disclosure causes a processor of an information processing device to operate as: a user state estimation unit that estimates a user state; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and an output control unit that controls output based on the environmental state.
 An information processing system according to one aspect of the present disclosure includes: a wearable device; and an information processing device having a user state estimation unit that estimates a user state of a user wearing the wearable device, an environment estimation unit that estimates an environmental state to be presented to the user based on the user state, and an output control unit that controls output based on the environmental state.
FIG. 1 shows the configuration of an information processing system according to an embodiment of the present disclosure. FIG. 2 schematically shows a worn wearable device. FIG. 3 schematically shows individual differences in how the wearable device is worn. FIG. 4 schematically shows the concept of angle correction. FIG. 5 shows the operation flow of the angle correction unit. FIG. 6 schematically shows a user's movement. FIG. 7 schematically shows the concept of angle correction. FIG. 8 shows specific processing of the angle correction unit. FIG. 9 shows a specific calculation example. FIG. 10 shows the relationship between initial frames. FIG. 11 shows a method of specifying a natural front orientation. FIG. 12 is a diagram for explaining the processing of the location estimation unit. FIG. 13 shows an application example of the processing of the location estimation unit. FIG. 14 shows a recognition example of the processing of the location estimation unit. FIG. 15 shows the operation flow of the location estimation unit. FIG. 16 shows a supplementary operation flow of the location estimation unit. FIG. 17 shows the operation when routes between the same places are identified as different ways of walking. FIG. 18 shows a modification of the method by which the location estimation unit estimates a location. FIG. 19 is a flow for estimating the environmental state to be presented to the user from the context. FIG. 20 shows the operation of the user state estimation unit. FIG. 21 shows the mapping relationship between contexts and user states. FIG. 22 shows how the user state estimation unit determines the user state. FIG. 23 shows the operation of the environment estimation unit. FIG. 24 shows the operation of the content control unit of the output control unit. FIG. 25 shows the operation of the notification control unit of the output control unit. FIG. 26 shows the configuration of a content reproduction system according to the present embodiment. FIG. 27 shows an example of a GUI of a preset application. FIG. 28 shows the operation flow of a content reproduction control application. FIG. 29 shows an example of a table used to select a content providing application.
 Hereinafter, embodiments of the present disclosure will be described with reference to the drawings.
 1. Information processing system configuration
 FIG. 1 shows the configuration of an information processing system according to one embodiment of the present disclosure.
 The information processing system 10 has an information processing device 100 and a wearable device 200.
 The information processing device 100 is a terminal device used by an end user, such as a smartphone, a tablet computer, or a personal computer. The information processing device 100 is connected to a network such as the Internet.
 The wearable device 200 is a device worn on the user's head. The wearable device 200 is typically a wireless earphone (FIG. 2), but may also be wireless headphones, wired headphones, wired earphones, an HMD (Head Mounted Display) for AR (Augmented Reality) or VR (Virtual Reality), or the like. Although FIG. 2 shows an open-ear type earphone that does not completely close the ear canal, it may be, for example, a canal-type earphone that closes the ear canal, a hearing aid, a sound collector, or the like. The information processing device 100 and the wearable device 200 are communicably connected to each other by various types of short-range wireless communication such as Bluetooth (registered trademark) (specifically, BLE (Bluetooth Low Energy) GATT (Generic Attribute Profile)) or Wi-Fi (registered trademark). The wearable device 200 has a sensor unit 210. The sensor unit 210 includes an acceleration sensor 211 that detects acceleration, a gyro sensor 212 that detects angular velocity, and a compass 213 that detects an azimuth angle. The sensor unit 210 further includes a biosensor 214 such as a heart rate sensor, a blood flow sensor, or an electroencephalogram sensor. The wearable device 200 supplies the detection values of the sensor unit 210 to the information processing device 100.
 In the information processing device 100, a processor such as a CPU of a control circuit loads an information processing program recorded in a ROM into a RAM and executes it, thereby operating as a context acquisition unit 110, a PDR (Pedestrian Dead Reckoning) unit 120 (user position estimation unit), a location estimation unit 130 (location attribute estimation unit), a user state estimation unit 140, an environment estimation unit 150, and an output control unit 160.
 The context acquisition unit 110 acquires the user's context. The user's context includes location information and terminal information. Here, the context is, for example, a sensor value acquired from the sensor unit 210, the user's schedule information acquired from a calendar application, or the like. The context acquisition unit 110 has devices that acquire location information as a context, such as a GPS sensor 111 and a beacon transceiver 112. The context acquisition unit 110 further has a terminal information acquisition unit 113 that acquires terminal information as a context. As terminal information serving as a context, the terminal information acquisition unit 113 acquires screen lock information (locked, unlocked), user behavior information (running, cycling, stationary, walking, riding, etc.), location (a specific place such as home or office, or an unspecified place), calendar application information (meeting scheduled or not), time information (during work time, outside work time), phone application information (on a call), voice recognition application information (speaking), automatic DND (Do Not Disturb) settings (within or outside the time frame), manual DND settings (on, offline), and the like.
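 For illustration only, the context items listed above could be gathered into a simple structure such as the following sketch; the field names and value sets are assumptions and are not part of this disclosure.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical container for the context items described above."""
    screen_locked: bool          # screen lock information
    activity: str                # "run", "bicycle", "still", "walk", "in_vehicle", ...
    place: str                   # "home", "office", or "unknown"
    meeting_scheduled: bool      # calendar application information
    in_work_hours: bool          # time information
    on_phone: bool               # phone application information
    speaking: bool               # voice recognition application information
    auto_dnd_active: bool        # automatic DND time frame
    manual_dnd: str              # "on", "offline", or "off"
```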
 The PDR unit 120 (user position estimation unit) estimates the user position based on the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200 worn by the user. Specifically, the PDR unit 120 has an angle correction unit 121, an angle estimation unit 122, and a user position estimation unit 123. The angle correction unit 121 calculates a correction value for the user's azimuth angle based on the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200 worn by the user. The angle estimation unit 122 estimates the user's azimuth angle based on the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200 worn by the user and the correction value. The user position estimation unit 123 estimates the user position using the corrected azimuth angle. PDR (Pedestrian Dead Reckoning) is a technique for determining a position relative to a certain reference point based on detection values from a plurality of autonomously operating sensors. In this example, the PDR unit 120 estimates changes in the user position from room to room, that is, the movement route of the user position, based on the acceleration, angular velocity, and azimuth angle detected by the acceleration sensor 211, the gyro sensor 212, and the compass 213.
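 As a rough illustration of the dead-reckoning step such a PDR unit might perform, one position update per detected step could look like the following sketch; step detection and the step length are assumed to be given, and this is not the exact implementation of the disclosure.

```python
import numpy as np

def pdr_step(position, step_length, azimuth_rad):
    """Advance the estimated user position by one detected step.

    position    : current (x, y) estimate as a NumPy array
    step_length : estimated stride length in metres (assumed known)
    azimuth_rad : corrected azimuth angle of the step, in radians
    """
    dx = step_length * np.sin(azimuth_rad)   # east component
    dy = step_length * np.cos(azimuth_rad)   # north component
    return position + np.array([dx, dy])

# Accumulating steps yields the movement route used by the location estimation unit.
route = [np.zeros(2)]
for azimuth in (0.0, 0.1, 0.2):              # corrected azimuths per step (placeholder values)
    route.append(pdr_step(route[-1], step_length=0.7, azimuth_rad=azimuth))
```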
 The location estimation unit 130 (location attribute estimation unit) estimates the attribute of the place where the user is (location attribute) based on the change in the user position estimated by the PDR unit 120. In other words, it estimates the location attribute after movement based on the movement route estimated by the PDR unit 120. A location attribute is, for example, a division within a building that is finer than the building itself. For example, location attributes within one house are the living room, bedroom, toilet, kitchen, washroom, and the like. Alternatively, location attributes within one coworking space are a desk, a meeting room, and the like. However, the location attribute is not limited to this and may indicate the building itself, or both the building itself and a division within the building.
 The user state estimation unit 140 estimates the user state based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130. The user state indicates multiple levels of user activity. For example, the user state indicates four levels of activity state: break time, neutral, DND (Do Not Disturb), and offline. Break time is the most relaxed activity state, neutral is a normal activity state, DND is a relatively busy activity state, and offline is the busiest activity state. In addition to the four levels described above, an arbitrary number of levels may be set on the system, or the number of levels may be set by the user as appropriate.
 The environment estimation unit 150 estimates the environmental state to be presented to the user based on the user state estimated by the user state estimation unit 140. The environment estimation unit 150 may further estimate the environmental state to be presented to the user based on the location attribute estimated by the location estimation unit 130. The environmental state presented to the user is, for example, an environmental state in which the user can focus (concentrate) or an environmental state in which the user can relax.
 The output control unit 160 controls output based on the environmental state estimated by the environment estimation unit 150. Specifically, the output control unit 160 has a content control unit 161 and a notification control unit 162. The content control unit 161 reproduces content (music, video, etc.) selected based on the environmental state estimated by the environment estimation unit 150. For example, the content control unit 161 may notify a DSP (Digital Service Provider) of the environmental state via the network, and receive and reproduce content that the DSP has selected based on this environmental state (for example, content that helps the user focus or content that helps the user relax). The notification control unit 162 controls the number of notifications to the user based on the environmental state. For example, the notification control unit 162 may reduce or eliminate the number of notifications (such as new notifications from applications or messages) so that the user can focus, and may keep the number of notifications as usual while the user is relaxing.
 2. Operation of the angle correction unit of the PDR unit
 FIG. 2 schematically shows the worn wearable device.
 The wearable device 200 is typically a wireless earphone. The wearable device 200, which is a wireless earphone, has a speaker 221, a driver unit 222, and a sound conduit 223 connecting them. The speaker 221 is inserted into the ear canal to position the wearable device 200 with respect to the ear, and the driver unit 222 is located behind the ear. The sensor unit 210 including the acceleration sensor 211 and the gyro sensor 212 is built into the driver unit 222.
 FIG. 3 schematically shows individual differences in how the wearable device is worn.
 The angle of the driver unit 222 of the wearable device 200 with respect to the front of the face differs from user to user. Therefore, the angles of the sensor axes of the acceleration sensor 211 and the gyro sensor 212 of the sensor unit 210 built into the driver unit 222 with respect to the front of the face differ from user to user. For example, (a) shows a case where the user wears the wearable device 200 hooked shallowly on the ear, and (b) shows a case where the user wears the wearable device 200 fixed deeply to the ear. The difference between the sensor-axis angle with respect to the front of the face for the user in (a) and that for the user in (b) can be 30° or more. Therefore, the PDR unit 120 estimates, for each user, the angle of the sensor axis of the sensor unit 210 with respect to the front of the face, and uses it as a correction value to estimate the orientation (angle) of the face with high accuracy without depending on individual differences.
 FIG. 4 schematically shows the concept of angle correction.
 For the azimuth angle (Azimuth) of the wearable device 200, the relationship Azimuth = AzimuthE + AzimuthOffset holds between AzimuthE, the value updated from the sensor values acquired by the sensor unit 210, and AzimuthOffset, the difference in orientation from the front of the face when the device is worn. Here, AzimuthE is obtained from the three-dimensional posture obtained by integrating the sensor values acquired by the gyro sensor 212, which detects angular velocity. On the other hand, AzimuthOffset differs from user to user and cannot be measured simply by wearing the device, so AzimuthOffset needs to be estimated for each user.
 To estimate the posture, two coordinate systems are defined with both ears level. Coordinate system (1) is the global frame (fixed): a coordinate system consisting of a vertical Z axis extending overhead, an X axis connecting both ears with the rightward direction positive, and a Y axis orthogonal to the X and Z axes. Coordinate system (2) is the sensor frame: a coordinate system (XE, YE, ZE) fixed with respect to the sensor unit 210 of the wearable device 200. The posture difference (AzimuthOffset), which is the correction value, indicates the amount of rotation of coordinate system (2) with respect to coordinate system (1).
 FIG. 5 shows the operation flow of the angle correction unit. FIG. 6 schematically shows the user's movement. FIG. 7 schematically shows the concept of angle correction. FIG. 8 shows specific processing of the angle correction unit. FIG. 9 shows a specific calculation example.
 The user wears the wearable device 200 and, from a state of facing forward (FIG. 6(a)), moves the head downward so as to look diagonally downward in front (FIG. 6(b)) (step S101). The angle correction unit 121 calculates Pitch and Roll with respect to the global frame coordinate system (X, Y, Z) from the acceleration values when the head is moved downward (step S102). The angle correction unit 121 starts collecting angular velocity values from the gyro sensor 212. The time at this point is t0 (step S103) (process (2) in FIG. 8). The user then slowly moves the head upward so as to look up diagonally in front, without swaying left or right (FIG. 6(c)) (step S104). The angle correction unit 121 continues collecting angular velocity values from the gyro sensor 212 (step S105). When the user has raised the head as far as possible, the angle correction unit 121 stops collecting angular velocity values from the gyro sensor 212. The time at this point is t1 (step S106, YES).
 The angle correction unit 121 obtains the rotation axis [αX, αY, αZ]T from the collected angular velocity values of the gyro sensor 212. This rotation axis is referenced to the sensor axes. Next, the angle correction unit 121 defines the rotation matrix (RotMat) at t0 as RotMat at t0 = RZ(yaw) * RX(pitch) * RY(roll). This RotMat is referenced to the front of the face. RZ(·), RX(·), and RY(·) are the rotation matrices about the respective axes. Pitch and roll referenced to the front of the face are obtained from the acceleration sensor, but yaw is unknown. The angle correction unit 121 can calculate yaw from the relationship RotMat * axis = [1; 0; 0] (process (4) in FIG. 8). First, RotMat * axis is set to [rX, rY, rZ]T (step S107). If rZ falls outside a threshold (if its difference from 0 is large), the angle correction unit 121 treats this as a failure and redoes the process (step S108, NO). If rZ is within the threshold, the process proceeds (step S108, YES). The angle correction unit 121 obtains the correction value (AzimuthOffset) from rX and rY (step S109) (process (5) in FIG. 8). The angle correction unit 121 obtains the rotation matrix (RotMat) from AzimuthOffset, Pitch, and Roll (step S110). This RotMat is referenced to the face-front axes.
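 The relationship RotMat * axis = [1; 0; 0] described above can be sketched in code as follows. This is only one illustrative reading of steps S107 to S109; the sign conventions and the threshold value are assumptions and depend on the actual sensor axes.

```python
import numpy as np

def estimate_azimuth_offset(pitch, roll, nod_axis, rz_threshold=0.2):
    """Estimate the wearing-angle (yaw) offset from one calibration nod.

    pitch, roll : face-frame pitch and roll at t0, from the acceleration sensor [rad]
    nod_axis    : unit rotation axis of the head-up motion, expressed in the sensor frame
    Returns the azimuth offset in radians, or None if the nod was not clean enough.
    """
    rot_x = np.array([[1, 0, 0],
                      [0, np.cos(pitch), -np.sin(pitch)],
                      [0, np.sin(pitch),  np.cos(pitch)]])
    rot_y = np.array([[ np.cos(roll), 0, np.sin(roll)],
                      [ 0,            1, 0           ],
                      [-np.sin(roll), 0, np.cos(roll)]])

    # Apply the known pitch/roll part of RotMat = Rz(yaw) Rx(pitch) Ry(roll) to the axis.
    rx, ry, rz = rot_x @ rot_y @ np.asarray(nod_axis, dtype=float)

    # For a pure nod the rotation axis must lie close to the horizontal plane (step S108).
    if abs(rz) > rz_threshold:
        return None  # failure: ask the user to repeat the calibration motion

    # Choose yaw so that Rz(yaw) @ [rx, ry, 0] lines up with the ear-to-ear axis [1, 0, 0] (step S109).
    return float(np.arctan2(-ry, rx))
```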
 Next, a method of specifying a natural front-facing posture, rather than the face-down posture, will be described.
 FIG. 10 shows the relationship between the initial frames.
 Let the initial posture with the head lowered (Head Center Pose) be I3x3 (the identity matrix). Let Rt0 (RotMat in FIG. 8) be the posture of the sensor (Right Sensor Pose, assuming the device is worn on the right ear).
 FIG. 11 shows a method of specifying a natural front orientation.
 Once Rt0, the posture of the right sensor (Right Sensor Pose), has been obtained by the method of FIG. 9, there is no need to perform the calibration calculation again when the pitch of the face is to be changed (that is, when a slightly raised face posture is to be used as the origin): the relational expression Rt2 for the new posture can be obtained from Rt0 and the acceleration sensor values in the new posture by the method of FIG. 9.
 3. Operation of the location estimation unit
 FIG. 12 is a diagram for explaining the processing of the location estimation unit.
 The user moves about daily in the house while wearing the wearable device 200. The location estimation unit 130 stores these movement patterns and their order. The places to which a user moves within a house, and the order in which the user moves between them, are often limited to a finite number. The location estimation unit 130 identifies the place from the most recent N (for example, N = 3) movement patterns.
 In FIG. 12, (1) is the route from the living room (Living) to the bedroom (Room), (2) is the route from the bedroom back to the living room, (3) is the route from the living room to the toilet (Toilet), (4) is the route from the toilet back to the living room, (5) is the route from the living room to the kitchen (Kitchen), and (6) is the route from the kitchen back to the living room.
 For example, the user puts on the wearable device 200 and starts working in the living room. After a while, the user goes to the toilet, washes their hands in the washroom, and returns to their seat. After another while, the user moves to the kitchen, gets a drink, and returns to the living room. The movement patterns here are as follows: living room to toilet (route (3)); toilet to living room (route (4)); living room to kitchen (route (5)); kitchen to living room (route (6)).
 The location estimation unit 130 stores these four patterns and their order. The next time the user moves, the movement pattern is matched against the stored patterns. If the matching succeeds, the location estimation unit 130 can identify the place after movement; if the matching fails, it adds the pattern to the route list as a new pattern. The route list (right side of FIG. 12) is a learned list of the most recent N (for example, N = 3) movement patterns. For example, movement patterns such as "(1) living room to bedroom, (2) bedroom to living room, (5) living room to kitchen" (top row) and "(2) bedroom to living room, (5) living room to kitchen, (6) kitchen to living room" (second row from the top) are registered as sets of the most recent N (for example, N = 3) movements.
 In this way, the location estimation unit 130 holds a plurality of movement routes and can estimate the location attribute after movement (living room, bedroom, toilet, kitchen, washroom, etc.) by matching the movement route estimated by the PDR unit 120 against the held movement routes. The location estimation unit 130 may also estimate the location attribute by determining how long the user stays at the current place. By considering the stay time in addition to the movement route, the location attribute can be estimated more accurately.
 FIG. 13 shows an application example of the processing of the location estimation unit.
 Even when the start and destination points are the same, a different way of walking may fail to match, and learning is performed by adding the route to the stored patterns. For this reason, multiple patterns are learned even for movement between the same places. The coordinate system in FIG. 13 shows the transition of the user position, with the origin as the starting point and the user position plotted periodically (for example, every second) as the user walks from the origin (starting point) to another room. Axis (1) shows the movement route from the living room (origin) to the bedroom. Axis (2) shows the movement route (distance) from the bedroom (origin) to the living room. Axis (3) shows the movement route from the living room (origin) to the toilet. Axis (4) shows the movement route from the toilet (origin) to the living room.
 FIG. 14 shows a recognition example of the processing of the location estimation unit.
 The location estimation unit 130 attaches a label indicating the attribute when learning a route. This makes it possible to automatically display the attribute label when matching succeeds. Next, the operation of the location estimation unit 130 will be described in more detail.
 FIG. 15 shows the operation flow of the location estimation unit.
 The PDR unit 120 estimates the change of the user position from room to room, that is, the movement route of the user position (step S201). The location estimation unit 130 detects that the user has stopped, based on the change in the user position estimated by the PDR unit 120 (step S202, YES). The location estimation unit 130 increments the stop counter (+1) (step S203). When the number of room-to-room movements reaches N (for example, N = 3) or more (step S204, YES), the location estimation unit 130 matches the most recent N (for example, N = 3) routes against the plurality of held movement routes (step S205). If the matching succeeds (step S206, YES), the location estimation unit 130 identifies the place after movement (step S207). On the other hand, if the matching fails (step S206, NO), the location estimation unit 130 adds the routes to the route list as a new pattern (step S208).
 FIG. 16 shows a supplementary operation flow of the location estimation unit.
 It is conceivable that the user moves from home to a different indoor space (for example, a coworking space) and movement routes completely different from the plurality of held movement routes continue to be detected. In this case, matching by the location estimation unit 130 fails (step S206, NO) for a while (step S209, YES). On the other hand, once enough new movement routes have been accumulated in the route list for matching to succeed (step S208), matching succeeds (step S206, YES) and the place after movement can be identified (step S207). When matching has failed a predetermined number of times (step S209, YES), the location estimation unit 130 outputs a warning indicating that the user may be at another place not registered in the route list (step S210). This makes it possible to notify the user that the location attribute after movement will be estimated from the new movement routes.
 FIG. 17 shows the operation when routes between the same places are identified as different ways of walking.
 As described above, even when the start and destination points are the same, a different way of walking may fail to match, and learning is performed by adding the route to the stored patterns. The method is as follows. The distance between the most recent N routes and each set of N routes stored in the database is calculated by DTW (dynamic time warping) and compared with a threshold. DTW (dynamic time warping) is a technique used to measure the distance or similarity between time-series data. A different way of walking may exceed the DTW threshold, in which case the route is stored as separate data.
 FIG. 18 shows a modification of the method by which the location estimation unit estimates a location.
 The location estimation unit 130 may estimate the attribute of the place where the user is (location attribute), particularly outdoors, based on the location information acquired by the GPS sensor 111 and the beacon transceiver 112. The location estimation unit 130 may also estimate the attribute of the place where the user is (location attribute) based on the biometric information acquired by the biosensor 214. For example, if it is determined from the biosensor 214 (such as a heart rate sensor) that the user is falling asleep, the location estimation unit 130 may estimate the bedroom as the location attribute.
 4. Operation of the user state estimation unit
 FIG. 19 is a flow for estimating the environmental state to be presented to the user from the context.
 The context acquisition unit 110 acquires the user's context. The user state estimation unit 140 estimates the user state based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130. The environment estimation unit 150 estimates the environmental state to be presented to the user (focus (concentration), relaxation, etc.) based on the user state estimated by the user state estimation unit 140.
 FIG. 20 shows the operation of the user state estimation unit.
 The user state estimation unit 140 estimates the user state based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130. The user's context includes location information and terminal information. The terminal information includes screen lock information (locked, unlocked), user behavior information (running, cycling, stationary, walking, riding, etc.), location (a specific place such as home or office, or an unspecified place), calendar application information (meeting scheduled or not), time information (during work time, outside work time), phone application information (on a call), voice recognition application information (speaking), automatic DND (Do Not Disturb) settings (within or outside the time frame), manual DND settings (on, offline), and the like. The user state indicates multiple levels of user activity. For example, the user state indicates four levels of activity state: break time, neutral, DND (Do Not Disturb), and offline. Break time is the most relaxed activity state, neutral is a normal activity state, DND is a relatively busy activity state, and offline is the busiest activity state.
 FIG. 21 shows the mapping relationship between contexts and user states.
 The user state estimation unit 140 estimates the user state by mapping each context to a user state. For example, if the screen lock information as a context indicates unlocked, the user state estimation unit 140 estimates that the user state is DND; if it indicates locked, the user state is estimated to be neutral. The user state estimation unit 140 similarly estimates user states for the other contexts. The contexts are not limited to those shown in FIG. 21, and any information representing some kind of context may be used.
 FIG. 22 shows how the user state estimation unit determines the user state.
 If even one of the plurality of contexts maps to offline, the user state estimation unit 140 estimates the user state as offline. If no context maps to offline and at least one maps to DND, the user state estimation unit 140 estimates the user state as DND. If no context maps to offline, DND, or break time, the user state estimation unit 140 estimates the user state as neutral. If there is no offline or DND and break time is included, the user state estimation unit 140 estimates the user state as break time.
 5. Operation of the environment estimation unit
 FIG. 23 shows the operation of the environment estimation unit.
 The environment estimation unit 150 estimates the environmental state to be presented to the user based on the user state estimated by the user state estimation unit 140 and the location attribute estimated by the location estimation unit 130. The environmental state presented to the user is, for example, a state in which the user can focus (concentrate) or a state in which the user can relax.
 For example: (1) if the time period is work time, the user state is neutral, the behavior is stay, and the location is the desk, the environment estimation unit 150 estimates the environmental state to be presented to the user as focus; (2) if the time period is work time and the user state is break time, it estimates the environmental state as relax; (3) if the time period is outside work time and the user state is break time, it likewise estimates the environmental state as relax.
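 A minimal rule-based sketch of this step is shown below. The three rules mirror the examples above; the field names and the fallback behavior when no rule applies are assumptions, not part of the original.

```python
def estimate_environment(time_period: str, user_state: str,
                         behavior: str, place: str) -> str | None:
    """Return the environmental state ('FOCUS' or 'RELAX') to present to the user."""
    if time_period == "work" and user_state == "NEUTRAL" and behavior == "stay" and place == "desk":
        return "FOCUS"      # rule (1)
    if time_period == "work" and user_state == "BREAK_TIME":
        return "RELAX"      # rule (2)
    if time_period == "off_work" and user_state == "BREAK_TIME":
        return "RELAX"      # rule (3)
    return None             # no environmental state is proposed
```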
 6. Operation of the Output Control Unit
 FIG. 24 shows the operation of the content control unit of the output control unit.
 The content control unit 161 of the output control unit 160 reproduces content (music, video, etc.) selected based on the environmental state estimated by the environment estimation unit 150. For example, the content control unit 161 may notify a DSP (Digital Service Provider) of the environmental state via the network, and receive and reproduce the content that the DSP selects based on that environmental state (content that helps the user focus, or content that helps the user relax). For example, if the user is working and the estimated state is focus, the content control unit 161 plays music that helps the user concentrate; if the estimated state is relax, it plays relaxing music. For example, when the user is falling asleep and the estimated state is relax, the content control unit 161 plays sleep-promoting music and stops the music once the user has fallen asleep.
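 A small sketch of how the content control unit might act on an estimated state is given below; the playlist names and the player interface are hypothetical and not part of the original description.

```python
def content_for_state(state: str, falling_asleep: bool = False) -> str | None:
    """Pick a playlist label to request from the DSP for the current environmental state."""
    if falling_asleep and state == "RELAX":
        return "sleep_promoting_playlist"
    if state == "FOCUS":
        return "concentration_playlist"
    if state == "RELAX":
        return "relaxing_playlist"
    return None  # nothing to play

def on_user_asleep(player) -> None:
    player.stop()  # stop the music once the user has fallen asleep
```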
 FIG. 25 shows the operation of the notification control unit of the output control unit.
 The notification control unit 162 of the output control unit 160 controls the number of notifications delivered to the user based on the environmental state. For example, the notification control unit 162 may reduce or suppress notifications (new-arrival notifications from applications or messages, etc.) so that the user can focus, and keep the number of notifications unchanged while the user is relaxing. For example, if the user is working and the estimated state is focus, the notification control unit 162 reduces the number of notifications; if the estimated state is relax, notifications are delivered as usual.
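 As a rough sketch (the limit values are assumptions; the text only says notifications are reduced, suppressed, or left as usual), notification throttling could be expressed as follows:

```python
def notification_limit(environment_state: str, default_limit: int) -> int:
    """Return how many pending notifications may be delivered in the current period."""
    if environment_state == "FOCUS":
        return 0              # suppress (or reduce) notifications while the user is focusing
    return default_limit      # deliver as usual while relaxing or otherwise

def deliver(notifications: list[str], environment_state: str, default_limit: int = 10) -> list[str]:
    """Pass through only as many notifications as the current state allows."""
    return notifications[:notification_limit(environment_state, default_limit)]
```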
 7. Brief Summary
 There is technology that recognizes utterances and environmental sounds and selects and outputs content such as music based on the recognized sounds. Such speech and environmental-sound recognition is applicable only in environments where there is sound. Therefore, a user who does not want to make noise, or a situation in which noise is undesirable, may prevent appropriate content from being selected. Also, natural language processing requires high computational power, making it difficult to process locally.
 In contrast, according to the present embodiment, content that encourages focus (concentration) or relaxation can be output based on the user's location within the house and other elements of the user context. The output to the user can be controlled appropriately regardless of the situation, including situations in which the user does not want to make noise. For example, based on the user context, focus-friendly content can be output while the user is at the work desk during telework, and relaxing music can be played while the user is at a break spot.
 Unlike outdoors, the spaces inside a house are relatively small, and estimating a specific position there generally requires external equipment such as high-precision beacons or cameras. In contrast, according to the present embodiment, the position inside the house can be identified using only the sensor unit 210 (acceleration sensor 211, gyro sensor 212, and compass 213) mounted on the wearable device 200, without any external equipment. Specifically, by storing the patterns of movement between places and their order, the place the user has moved to can be identified from the N most recent movement patterns.
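 As a rough sketch (the sequence representation, threshold, and route labels are assumptions), matching a recently observed movement trace against stored, labeled route patterns (for example with DTW, which the claims name as one possible matching method) could look like this:

```python
# Traces are simplified to 1-D sequences (e.g. heading angle per step); this is illustrative only.

def dtw_distance(a: list[float], b: list[float]) -> float:
    """Classic dynamic time warping distance between two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def match_route(recent_trace: list[float],
                stored_routes: dict[str, list[float]],
                threshold: float) -> str | None:
    """Return the destination label of the best-matching stored route, or None if matching fails."""
    best_label, best_dist = None, float("inf")
    for label, pattern in stored_routes.items():
        dist = dtw_distance(recent_trace, pattern)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None
```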
 Telework has become commonplace, and users spend longer periods inside the house not only relaxing but also focusing on work. Users who do not want to make noise, and situations in which noise is undesirable, are therefore considered more common than they were before telework became widespread. For this reason, identifying the location inside the house, estimating the environmental state to be presented to the user, and controlling the output to the user without requiring speech, as in the present embodiment, will become increasingly valuable.
 Also, according to the present embodiment, the user state is estimated by mapping the context obtained from each piece of sensor information to a user state, so the user state can be estimated without the user speaking or making a sound. Because this mapping requires far less computation than natural language processing, it can easily be processed locally.
 8. Content Reproduction System
 FIG. 26 shows the configuration of the content reproduction system according to the present embodiment.
 The content reproduction system 20 includes the information processing device 100 and the wearable device 200.
 In the information processing device 100, a processor such as the CPU of the control circuit loads the content reproduction control application 300, the content providing application 400, and the preset application 500 recorded in ROM into RAM and executes them. Note that the content reproduction control application 300 may instead be installed in and executed by the wearable device 200.
 As described above, the wearable device 200 is a pair of wireless earphones (see FIG. 2), wireless headphones, wired headphones, wired earphones, or the like. The wearable device 200 has a sensor unit 210 and an input device 220. The sensor unit 210 includes an acceleration sensor 211, a gyro sensor 212, a compass 213, and a biosensor 214 such as a heart rate sensor, a blood flow sensor, or an electroencephalogram sensor (see FIG. 1). The wearable device 200 inputs the detection values of the sensor unit 210 to the content reproduction control application 300 and the content providing application 400. The input device 220 is a touch sensor, a physical button, a non-contact sensor, or the like, and accepts contact or non-contact operations by the user. The input device 220 is provided, for example, on the outer surface of the driver unit 222 of the wearable device 200 (see FIG. 2).
 The content providing application 400 provides content. The content providing application 400 is an application group including a plurality of different content providing applications 401 and 402. For example, the content providing applications 401 and 402 each provide content (specifically, audio content) of a different genre, such as music, environmental sounds, healing sounds, or radio programs. When there is no need to distinguish between the different content providing applications 401 and 402, they are simply referred to as the content providing application 400.
 The content reproduction control application 300 includes the context acquisition unit 110, the PDR (Pedestrian Dead Reckoning) unit 120 (user position estimation unit), the location estimation unit 130 (location attribute estimation unit), the user state estimation unit 140, the environment estimation unit 150, and the content control unit 161 of the output control unit 160 described above (see FIG. 1). The content control unit 161 selects a content providing application 400 based on the environmental state estimated by the environment estimation unit 150, or based on the different operations the user inputs to the input device 220 of the wearable device 200. The content control unit 161 generates a cue for the content providing application 400 to select content based on the environmental state, outputs the generated cue to the selected content providing application 400, causes the content providing application 400 to select content based on the cue, and has the content reproduced from the wearable device 200.
 The preset application 500 assigns, in advance, a plurality of different operations that the user inputs to the input device 220 of the wearable device 200 to a plurality of different functions related to the services provided by the content providing application 400. For example, the preset application 500 assigns these operations in advance to the selection of the different content providing applications 401 and 402; that is, a plurality of different operations input to the input device 220 of the wearable device 200 (for example, a single tap, a double tap, a triple tap, or a button press) are assigned in advance to the selection of the different content providing applications 401 and 402. The preset application 500 may be independent of the content reproduction control application 300 or may be included in it.
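 A minimal sketch of such an assignment is shown below; the gesture and function names are assumptions, and the actual assignment is configured by the user through the preset application's GUI (see FIG. 27).

```python
# Illustrative gesture-to-function presets for the left and right input devices.
PRESETS = {
    ("right", "single_tap"): "play_pause",
    ("right", "double_tap"): "next_track",
    ("right", "triple_tap"): "previous_track",
    ("left",  "double_tap"): "launch:content_app_401",
    ("left",  "triple_tap"): "launch:content_app_402",
}

def handle_gesture(side: str, gesture: str) -> str | None:
    """Resolve an input-device gesture to the function assigned by the preset application."""
    return PRESETS.get((side, gesture))
```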
 FIG. 27 shows an example of the GUI of the preset application.
 The preset application 500 has, for example, a playback control GUI 710, a volume control GUI 720, and a quick access control GUI 730. The GUIs provided by the preset application 500, and the combinations of functions and operations that can be configured, differ depending on the model of the wearable device 200.
 Using the playback control GUI 710, the user can assign the different operations input to the input devices 220 of the left and right wearable devices 200 to functions used during content playback. For example, the user can assign a single tap on the right wearable device 200 to play/pause, a double tap to playing the next track, a triple tap to playing the previous track, and a long press to launching the voice assistant. Functions other than those listed above may be assigned to each operation, and functions may also be assigned to operations by default.
 Using the volume control GUI 720, the user can assign the different operations input to the input devices 220 of the left and right wearable devices 200 to volume control functions. For example, the user can assign a single tap on the left wearable device 200 to volume up and a long press to volume down.
 Using the quick access control GUI 730, the user can assign the different operations input to the input devices 220 of the left and right wearable devices 200 to quick access functions that select and launch the different content providing applications 401 and 402. For example, the user can assign a double tap on the left wearable device 200 to launching the content providing application 401 and a triple tap to launching the content providing application 402.
 In this way, the preset application 500 can assign the different operations input to the input devices 220 of the left and right wearable devices 200 not only to playback control and volume control while a content providing application 400 is running, but also to the selection and launch of a content providing application 400.
 FIG. 28 shows the operation flow of the content reproduction control application.
 In the content reproduction control application 300, the context acquisition unit 110 acquires the user's context. The user state estimation unit 140 estimates the user state (the four activity levels: break time, neutral, DND (Do Not Disturb), and offline) based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130. Four levels are given here as an example, but there may be more or fewer, and the user may be allowed to configure an arbitrary number of user states. The environment estimation unit 150 estimates the environmental state to be presented to the user (focus (concentration), relax, etc.) based on the user state estimated by the user state estimation unit 140 (see FIG. 19). The content control unit 161 of the output control unit 160 detects, based on the environmental state estimated by the environment estimation unit 150, an appropriate timing at which content reproduction should start (step S301).
 In the content reproduction control application 300, the content control unit 161 of the output control unit 160 selects a content providing application 400. For example, the content control unit 161 selects the content providing application 400 based on the different operations the user inputs to the input device 220 of the wearable device 200: if the input operation is a double tap, it selects the content providing application 401, and if it is a triple tap, it selects the content providing application 402. Alternatively, the content control unit 161 selects the content providing application 400 based on the environmental state (the scenario described later) estimated by the environment estimation unit 150 (step S302). The content control unit 161 may also select the content providing application 400 based on learning (for example, a scenario may stop firing under the same conditions if the user repeatedly rejects it) or on user settings (for example, a content providing application 400 preset for each situation).
 FIG. 29 shows an example of a table used for selecting a content providing application.
 For example, the content control unit 161 refers to the table 600 to select a content providing application 400. The table 600 has an ID 601, a scenario 602, a user context 603, and a cue 604. The scenario 602 corresponds to the environmental state estimated by the environment estimation unit 150. The user context 603 corresponds to the user state estimated by the user state estimation unit 140 from the user's context acquired by the context acquisition unit 110. The cue 604 is the cue the content providing application 400 uses to select content. In the table 600, a selection flag 605 for the content providing application 401 and a selection flag 606 for the content providing application 402 are recorded for each of the nine records with IDs 601 Music_01 to Music_09. A record in which only the selection flag 605 is recorded means that the content providing application 401 is selected for that scenario 602 (environmental state). A record in which both selection flags 605 and 606 are recorded means that either of the content providing applications 401 and 402 is selected for that scenario 602 (environmental state) according to a further condition; for example, the content control unit 161 may learn in advance which content providing application 400 is executed most often at the current time of day or used most frequently, and select that application.
 In the content reproduction control application 300, the content control unit 161 of the output control unit 160 generates, based on the scenario 602 (environmental state), a cue 604 for the selected content providing application 400 to select content (step S303). The content control unit 161 outputs the generated cue to the selected content providing application 400, causes the content providing application 400 to select content based on the cue, and has the content reproduced from the wearable device 200 (step S304). For example, the content providing application 400 may select a plurality of content candidates based on the cue from the content reproduction control application 300 and then choose the content to reproduce from those candidates based on the detection values input from the sensor unit 210 of the wearable device 200. The content providing application 400 may also select content based on the detection values from the sensor unit 210, for example content with a fast tempo matched to the user's running speed.
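 A rough sketch of steps S302 to S304 is given below: a providing application is picked from a table keyed by scenario (environmental state), a cue is built, and the cue is handed to the selected application. The record fields, cue strings, and the application interface are assumptions for illustration only.

```python
TABLE_600 = [
    {"id": "Music_01", "scenario": "FOCUS", "cue": "focus_playlist", "apps": ["app401"]},
    {"id": "Music_02", "scenario": "RELAX", "cue": "relax_playlist", "apps": ["app401", "app402"]},
]

def select_app_and_cue(scenario: str, preferred: str | None = None) -> tuple[str, str] | None:
    """Pick the providing application and cue for the current scenario (step S302/S303)."""
    for record in TABLE_600:
        if record["scenario"] != scenario:
            continue
        apps = record["apps"]
        # When both selection flags are set, fall back to a learned or preferred application.
        app = preferred if preferred in apps else apps[0]
        return app, record["cue"]
    return None

def start_playback(scenario: str, apps: dict) -> None:
    """Hand the cue to the selected application, which chooses the concrete content (step S304)."""
    selected = select_app_and_cue(scenario)
    if selected is None:
        return
    app_name, cue = selected
    apps[app_name].play(cue)
```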
 After reproduction starts, the content control unit 161 of the content reproduction control application 300 detects, based on the environmental state, a timing at which reproduction of different content should start (step S301), selects a content providing application 400 (step S302, which may be omitted), generates a cue 604 (step S303), and has the content reproduced from the wearable device 200 (step S304). In other words, the content reproduction control application 300 holds user information that the content providing application 400 cannot know, namely the user context 603 (user state) and the scenario 602 (environmental state). The content reproduction control application 300 can therefore recognize cases in which it is desirable to change the content the content providing application 400 is currently playing. For example, a change in the user's mood can be induced by changing the content being played, triggered by the user commuting or finishing work. When the content reproduction control application 300 detects such a change in the user context 603 (user state) or the scenario 602 (environmental state), it sends the content providing application 400 a cue for changing the content being played, so that more suitable content (music, healing sounds, etc.) can be provided to the user.
 Furthermore, the content control unit 161 of the content reproduction control application 300 can generate, based on the scenario 602 (environmental state), a cue for the content providing application 400 to stop (rather than change) content reproduction (step S303), output the cue to the content providing application 400, and cause the content providing application 400 to stop reproduction based on the cue (step S304). For example, there are cases in which it is better to stop the music because of a state change such as the start of a meeting; the content reproduction control application 300 detects such states and sends a stop command to the content providing application 400.
 Also, during content reproduction, the content providing application 400 may select and reproduce content based on the detection values input from the sensor unit 210, for example content with a fast tempo matched to the user's running speed when the heart rate or acceleration reaches a predetermined value. In other words, during content reproduction, the content providing application 400 can actively select the attributes (tempo, pitch, etc.) of the content to reproduce based on the detection values from the sensor unit 210, without receiving a cue from the content control unit 161 of the content reproduction control application 300, and reproduce the selected content. In short, during content reproduction, the content providing application 400 can change the content being played on its own initiative.
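 As an illustration only (the thresholds, the step-cadence heuristic, and the track fields are assumptions), such sensor-driven tempo selection inside the providing application might look like this:

```python
def target_tempo_bpm(heart_rate: float, step_rate: float) -> float:
    """Pick a playback tempo from the wearer's heart rate and step cadence."""
    if step_rate > 140 or heart_rate > 150:
        return 170.0   # fast running: fast tempo
    if step_rate > 100 or heart_rate > 120:
        return 140.0   # jogging
    return 100.0       # walking or at rest

def pick_track(tracks: list[dict], heart_rate: float, step_rate: float) -> dict:
    """Choose the track whose tempo is closest to the target tempo (tracks must be non-empty)."""
    target = target_tempo_bpm(heart_rate, step_rate)
    return min(tracks, key=lambda t: abs(t["bpm"] - target))
```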
 9. Conclusion
 According to the content reproduction system 20 of the present embodiment, the content reproduction control application 300 selects a content providing application 400 and outputs a cue to it. The content providing application 400 therefore does not need to consider conflicts in content reproduction among the different content providing applications 401 and 402.
 Also, the content reproduction control application 300 generates a cue for the content providing application 400 to select content based on the environmental state, which is sensitive information about the user. The content providing application 400 can therefore reproduce content that reflects the environmental state without the content reproduction control application 300 sharing that sensitive environmental state with it. This improves the user experience while keeping the security risk low.
 Furthermore, the content reproduction control application 300 selects a content providing application 400, and the selected content providing application 400 reproduces the content. In addition, through the preset application 500, the content reproduction control application 300 can select the content providing application 400 based on the different operations the user inputs to the input device 220 of the wearable device 200. This makes it possible to provide a user experience that integrates the services of the different content providing applications 401 and 402 without requiring the user to select one actively.
 The present disclosure may have the following configurations.
(1)
a user state estimation unit that estimates a user state;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
an output control unit that controls output based on the environmental state;
An information processing device comprising:
(2)
The information processing device according to (1) above,
a user position estimating unit that estimates a user position based on a detection value of a sensor unit of the wearable device worn by the user;
a location attribute estimating unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user location;
further comprising
The information processing apparatus, wherein the user state estimation unit estimates the user state based on the location attribute.
(3)
The information processing device according to (2) above,
The information processing apparatus, wherein the user position estimation unit estimates the user position using PDR (Pedestrian Dead Reckoning).
(4)
The information processing device according to (2) or (3) above,
The information processing apparatus, wherein the environment estimation unit estimates the environmental state based on the location attribute.
(5)
The information processing device according to any one of (2) to (4) above,
The information processing apparatus, wherein the sensor unit of the wearable device includes at least one of an acceleration sensor, a gyro sensor, a compass, and a biosensor.
(6)
The information processing device according to any one of (3) to (5) above,
The user position estimation unit
an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user;
an angle estimation unit that estimates an azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user and the correction value;
a user position estimation unit that estimates the user position using the azimuth angle;
An information processing device.
(7)
The information processing device according to any one of (3) to (6) above,
The user position estimation unit estimates a moving route of the user position,
The information processing apparatus, wherein the location attribute estimation unit estimates the location attribute after movement based on the movement route.
(8)
The information processing device according to (7) above,
The location attribute estimation unit stores a plurality of movement routes, and estimates the location attribute after movement by matching the estimated movement route with the plurality of held movement routes.
(9)
The information processing device according to (8) above,
The information processing apparatus, wherein the location attribute estimation unit outputs a warning when matching fails a predetermined number of times.
(10)
The information processing device according to (8) or (9) above,
The information processing device, wherein the location attribute estimation unit performs the matching using DTW (dynamic time warping).
(11)
The information processing device according to any one of (1) to (10) above,
The information processing apparatus, wherein the location attribute estimation unit estimates the location attribute by determining the length of stay of the user at the location where the user is.
(12)
The information processing device according to any one of (1) to (11) above,
further comprising a context acquisition unit that acquires the context of the user;
The information processing apparatus, wherein the user state estimation unit estimates the user state based on the acquired context.
(13)
The information processing device according to (12) above,
The context includes at least one of location information of the user and terminal information of the information processing device.
(14)
The information processing device according to any one of (1) to (13) above,
Information processing apparatus, wherein the user state estimation unit estimates the user state based on the detection value of the sensor unit of the wearable device and/or the location attribute.
(15)
The information processing device according to any one of (1) to (14) above,
The information processing apparatus, wherein the user state indicates a plurality of activity states of the user.
(16)
The information processing device according to any one of (1) to (15) above,
The output control unit is
An information processing apparatus comprising: a content control unit that reproduces content selected based on the environmental state; and/or a notification control unit that controls the number of notifications to the user based on the environmental state.
(17)
Estimate the user state,
estimating an environmental state to be presented to a user based on the user state;
controlling output based on the environmental conditions;
Information processing methods.
(18)
the processor of the information processing device,
a user state estimation unit that estimates a user state;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
An information processing program operated as an output control unit that controls output based on the environmental state.
(19)
wearable devices and
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
an output control unit that controls output based on the environmental state;
an information processing device having
An information processing system comprising
(20)
the processor of the information processing device,
a user state estimation unit that estimates a user state;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
A non-transitory computer-readable recording medium recording an information processing program operated as an output control unit that controls output based on the environmental state.
 Furthermore, the present disclosure may have the following configurations.
(1)
wearable devices and
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state of the user based on the user state;
A content providing application that provides content generates a cue for selecting content based on the environmental state, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue. a content control unit that reproduces the content;
a content playback control application having
an information processing device having a control circuit that executes
A content playback system comprising:
(2)
The content reproduction system according to (1) above,
the control circuit of the information processing device executes a plurality of different content providing applications;
The content reproduction system, wherein the content control unit selects a predetermined content providing application for reproducing the content based on the environmental state.
(3)
The content reproduction system according to (1) or (2) above,
the control circuit of the information processing device executes a plurality of different content providing applications;
The wearable device has an input device,
The content reproduction system, wherein the content control unit selects a predetermined content providing application for reproducing the content based on different operations input by a user to the wearable device.
(4)
The content reproduction system according to any one of (1) to (3) above,
The content reproduction system, wherein the control circuit of the information processing device executes a preset application that assigns the plurality of different operations to selection of the plurality of different content providing applications.
(5)
The content reproduction system according to (4) above,
The content reproduction system, wherein the preset application is included in the content reproduction control application.
(6)
The content reproduction system according to any one of (1) to (5) above,
The wearable device has a sensor unit,
The content playback control application is
a user position estimation unit that estimates a user position based on a detection value input from a sensor unit of the wearable device worn by the user;
a location attribute estimating unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user location;
further having
The content reproduction system, wherein the user state estimation unit estimates the user state based on the location attribute.
(7)
The content reproduction system according to (6) above,
The content reproduction system, wherein the sensor unit of the wearable device includes at least one of an acceleration sensor, a gyro sensor, a compass, and a biosensor.
(8)
The content reproduction system according to (6) or (7) above,
The content providing application selects a plurality of content candidates based on the cue, and selects content to be played back from the plurality of candidates based on the detection value input from the sensor unit.
(9)
The content reproduction system according to any one of (6) to (8) above,
A content reproduction system wherein the content providing application selects attributes of content to be reproduced based on the detection value input from the sensor unit and reproduces the selected content during reproduction of the content.
(10)
The content reproduction system according to any one of (1) to (9) above,
The content control unit generates a cue for the content providing application to stop playing the content based on the environmental state, outputs the cue to the content providing application, and instructs the content providing application to stop the reproduction of the content based on the cue. A content reproduction system that stops the reproduction of the content.
(11)
The content reproduction system according to any one of (1) to (10) above,
The content playback control application is
further comprising a context acquisition unit that acquires the context of the user;
The content reproduction system, wherein the user state estimation unit estimates the user state based on the acquired context.
(12)
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
A content providing application that provides content generates a cue for selecting content based on the environmental state, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue. a content control unit that reproduces the content;
a content playback control application having
An information processing device comprising a control circuit for executing
(13)
The control circuit of the information processing device,
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
A content providing application that provides content generates a cue for selecting content based on the environmental state, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue. A content reproduction control application that operates as a content control unit that reproduces the content.
(14)
The control circuit of the information processing device,
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
A content providing application that provides content generates a cue for selecting content based on the environmental state, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue. A non-transitory computer-readable recording medium recording a content reproduction control application that operates as a content control unit that reproduces the content.
 Although embodiments and modifications of the present technology have been described above, the present technology is not limited to the above-described embodiments, and it goes without saying that various modifications can be made without departing from the gist of the present technology.
 10  information processing system
 100 information processing device
 110 context acquisition unit
 111 GPS sensor
 112 beacon transceiver
 113 terminal information acquisition unit
 120 PDR unit
 121 angle correction unit
 122 angle estimation unit
 123 user position estimation unit
 130 location estimation unit
 140 user state estimation unit
 150 environment estimation unit
 160 output control unit
 161 content control unit
 162 notification control unit
 200 wearable device
 210 sensor unit
 211 acceleration sensor
 212 gyro sensor
 213 compass
 214 biosensor

Claims (19)

  1.  ユーザ状態を推定するユーザ状態推定部と、
     前記ユーザ状態に基づきユーザに提示する環境状態を推定する環境推定部と、
     前記環境状態に基づき出力を制御する出力制御部と、
     を具備する情報処理装置。
    a user state estimation unit that estimates a user state;
    an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
    an output control unit that controls output based on the environmental state;
    An information processing device comprising:
  2.  請求項1に記載の情報処理装置であって、
     前記ユーザが装着したウェアラブルデバイスが有するセンサ部の検出値に基づき、ユーザ位置を推定するユーザ位置推定部と、
     前記ユーザ位置に基づき、ユーザがいる場所の属性である場所属性を推定する場所属性推定部と、
     をさらに具備し、
     前記ユーザ状態推定部は、前記場所属性に基づき、前記ユーザ状態を推定する
     情報処理装置。
    The information processing device according to claim 1,
    a user position estimating unit that estimates a user position based on a detection value of a sensor unit of the wearable device worn by the user;
    a location attribute estimating unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user location;
    further comprising
    The information processing apparatus, wherein the user state estimation unit estimates the user state based on the location attribute.
  3.  請求項2に記載の情報処理装置であって、
     前記ユーザ位置推定部は、PDR(Pedestrian Dead Reckoning)を用いて前記ユーザ位置を推定する
     情報処理装置。
    The information processing device according to claim 2,
    The information processing apparatus, wherein the user position estimation unit estimates the user position using PDR (Pedestrian Dead Reckoning).
  4.  請求項2に記載の情報処理装置であって、
     前記環境推定部は、前記場所属性に基づき、前記環境状態を推定する
     情報処理装置。
    The information processing device according to claim 2,
    The information processing apparatus, wherein the environment estimation unit estimates the environmental state based on the location attribute.
  5.  請求項2に記載の情報処理装置であって、
     前記ウェアラブルデバイスが有する前記センサ部は、加速度センサ、ジャイロセンサ、コンパス、生体センサの内、少なくとも一つを含む
     情報処理装置。
    The information processing device according to claim 2,
    The information processing apparatus, wherein the sensor unit of the wearable device includes at least one of an acceleration sensor, a gyro sensor, a compass, and a biosensor.
  6.  請求項3に記載の情報処理装置であって、
     前記ユーザ位置推定部は、
     前記ユーザが装着した前記ウェアラブルデバイスが有する前記センサ部の前記検出値に基づき、前記ユーザの方位角の補正値を算出する角度補正部と、
     前記ユーザが装着した前記ウェアラブルデバイスが有する前記センサ部の前記検出値と、前記補正値とに基づき、前記ユーザの方位角を推定する角度推定部と、
     前記方位角を利用して前記ユーザ位置を推定するユーザ位置推定部と、
     を有する
     情報処理装置。
    The information processing device according to claim 3,
    The user position estimation unit
    an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user;
    an angle estimation unit that estimates an azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user and the correction value;
    a user position estimation unit that estimates the user position using the azimuth angle;
    An information processing device.
  7.  請求項3に記載の情報処理装置であって、
     前記ユーザ位置推定部は、前記ユーザ位置の移動経路を推定し、
     前記場所属性推定部は、前記移動経路に基づき、移動後の前記場所属性を推定する
     情報処理装置。
    The information processing device according to claim 3,
    The user position estimation unit estimates a moving route of the user position,
    The information processing apparatus, wherein the location attribute estimation unit estimates the location attribute after movement based on the movement route.
  8.  請求項7に記載の情報処理装置であって、
     前記場所属性推定部は、複数の移動経路を保持し、推定された前記移動経路を保持された前記複数の移動経路とマッチングすることにより、移動後の前記場所属性を推定する
     情報処理装置。
    The information processing device according to claim 7,
    The location attribute estimation unit stores a plurality of movement routes, and estimates the location attribute after movement by matching the estimated movement route with the plurality of held movement routes.
  9.  請求項8に記載の情報処理装置であって、
     前記場所属性推定部は、マッチングが所定回数失敗すると、警告を出力する
     情報処理装置。
    The information processing device according to claim 8,
    The information processing apparatus, wherein the location attribute estimation unit outputs a warning when matching fails a predetermined number of times.
  10.  請求項8に記載の情報処理装置であって、
     前記場所属性推定部は、前記マッチングをDTW(dynamic time warping、動的時間伸縮法)を用いて行う
     情報処理装置。
    The information processing device according to claim 8,
    The information processing device, wherein the location attribute estimation unit performs the matching using DTW (dynamic time warping).
  11.  請求項1に記載の情報処理装置であって、
     前記場所属性推定部は、前記ユーザがいる場所での前記ユーザの滞在時間を判断することにより、前記場所属性を推定する
     情報処理装置。
    The information processing device according to claim 1,
    The information processing apparatus, wherein the location attribute estimation unit estimates the location attribute by determining the length of stay of the user at the location where the user is.
  12.  請求項1に記載の情報処理装置であって、
     ユーザのコンテクストを取得するコンテクスト取得部をさらに具備し、
     前記ユーザ状態推定部は、取得された前記コンテクストに基づき、前記ユーザ状態を推定する
     情報処理装置。
    The information processing device according to claim 1,
    further comprising a context acquisition unit that acquires the context of the user;
    The information processing apparatus, wherein the user state estimation unit estimates the user state based on the acquired context.
  13.  請求項12に記載の情報処理装置であって、
     前記コンテクストは、前記ユーザの位置情報と前記情報処理装置の端末情報の少なくともいずれかを含む
     情報処理装置。
    The information processing device according to claim 12,
    The context includes at least one of location information of the user and terminal information of the information processing device.
  14.  請求項1に記載の情報処理装置であって、
     前記ユーザ状態推定部は、前記ウェアラブルデバイスが有する前記センサ部の前記検出値及び/又は前記場所属性に基づき、前記ユーザ状態を推定する
     情報処理装置。
    The information processing device according to claim 1,
    Information processing apparatus, wherein the user state estimation unit estimates the user state based on the detection value of the sensor unit of the wearable device and/or the location attribute.
  15.  請求項1に記載の情報処理装置であって、
     前記ユーザ状態は、前記ユーザの複数の活動状態を示す
     情報処理装置。
    The information processing device according to claim 1,
    The information processing apparatus, wherein the user state indicates a plurality of activity states of the user.
  16.  請求項1に記載の情報処理装置であって、
     前記出力制御部は、
     前記環境状態に基づき選択されたコンテンツを再生するコンテンツ制御部、及び/又は
     前記環境状態に基づき前記ユーザへの通知の回数を制御する通知制御部
     を有する
     情報処理装置。
    The information processing device according to claim 1,
    The output control unit is
    An information processing apparatus comprising: a content control unit that reproduces content selected based on the environmental state; and/or a notification control unit that controls the number of notifications to the user based on the environmental state.
  17.  ユーザ状態を推定し、
     前記ユーザ状態に基づきユーザに提示する環境状態を推定し、
     前記環境状態に基づき出力を制御する、
     情報処理方法。
    Estimate the user state,
    estimating an environmental state to be presented to a user based on the user state;
    controlling output based on the environmental conditions;
    Information processing methods.
  18.  情報処理装置のプロセッサを、
     ユーザ状態を推定するユーザ状態推定部と、
     前記ユーザ状態に基づきユーザに提示する環境状態を推定する環境推定部と、
     前記環境状態に基づき出力を制御する出力制御部
     として動作させる情報処理プログラム。
    the processor of the information processing device,
    a user state estimation unit that estimates a user state;
    an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
    An information processing program operated as an output control unit that controls output based on the environmental state.
  19.  An information processing system comprising:
     a wearable device; and
     an information processing device including:
     a user state estimation unit that estimates a user state of a user wearing the wearable device;
     an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
     an output control unit that controls output based on the environmental state.
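
The sketches below are editorial illustrations only; they are not part of the claims or the disclosure, and every identifier, threshold, and label in them is an assumption added for readability. First, claim 10 recites performing the matching with DTW (dynamic time warping), a standard dynamic-programming distance between two time series. A minimal Python sketch of DTW-based template matching for a location attribute might look as follows:

    from math import inf

    def dtw_distance(seq_a, seq_b):
        """Plain O(len(a) * len(b)) dynamic time warping distance for 1-D feature sequences."""
        n, m = len(seq_a), len(seq_b)
        cost = [[inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(seq_a[i - 1] - seq_b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                     cost[i][j - 1],       # deletion
                                     cost[i - 1][j - 1])   # match
        return cost[n][m]

    def match_location_attribute(observed, templates, threshold=5.0):
        """Return the attribute of the closest template, or None if nothing is close enough."""
        best_attribute, best_distance = None, inf
        for attribute, template in templates.items():
            distance = dtw_distance(observed, template)
            if distance < best_distance:
                best_attribute, best_distance = attribute, distance
        if best_distance > threshold:
            return None   # repeated failures here could trigger the warning of claim 9
        return best_attribute

    # Hypothetical usage: match a recorded loudness contour against stored templates.
    templates = {"cafe": [3.0, 3.2, 3.1, 3.3], "street": [6.0, 7.5, 6.8, 7.2]}
    print(match_location_attribute([3.1, 3.2, 3.0, 3.4], templates))   # -> "cafe"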
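Claim 11 recites estimating the location attribute from the user's length of stay at the current location. A naive sketch of such a rule, again with assumed thresholds and labels, could be:

    from datetime import datetime, timedelta

    def estimate_location_attribute_by_stay(arrival: datetime, now: datetime) -> str:
        """Map the user's length of stay at the current location to a coarse attribute."""
        stay = now - arrival
        if stay < timedelta(minutes=5):
            return "passing_through"   # e.g. a street or corridor
        if stay < timedelta(hours=1):
            return "short_stay"        # e.g. a shop or cafe
        return "long_stay"             # e.g. home, office, gym

    # Hypothetical usage: the user arrived at 9:00 and is still there at 11:30.
    print(estimate_location_attribute_by_stay(datetime(2021, 3, 30, 9, 0),
                                              datetime(2021, 3, 30, 11, 30)))  # -> "long_stay"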
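Finally, claims 16 to 19 recite a chain of units: a user state estimation unit, an environment estimation unit, and an output control unit that covers content control and notification control. One hypothetical way to wire these steps together, with invented states and rules, is:

    from dataclasses import dataclass

    @dataclass
    class Output:
        content: str                  # content chosen by the content control unit
        notifications_per_hour: int   # frequency chosen by the notification control unit

    def estimate_user_state(acceleration_rms: float, location_attribute: str) -> str:
        """cf. claim 14: use sensor detection values and/or the location attribute."""
        if acceleration_rms > 2.0:
            return "exercising"
        if location_attribute == "long_stay":
            return "working"
        return "moving"

    def estimate_environment_state(user_state: str) -> str:
        """Environment state to be presented to the user, derived from the user state."""
        return {"exercising": "energetic", "working": "focused"}.get(user_state, "neutral")

    def control_output(environment_state: str) -> Output:
        """cf. claim 16: select content and throttle notifications from the environment state."""
        if environment_state == "focused":
            return Output(content="ambient_playlist", notifications_per_hour=1)
        if environment_state == "energetic":
            return Output(content="upbeat_playlist", notifications_per_hour=4)
        return Output(content="default_playlist", notifications_per_hour=2)

    # The three steps of claim 17 in sequence: user state, environment state, output control.
    user_state = estimate_user_state(acceleration_rms=0.3, location_attribute="long_stay")
    print(control_output(estimate_environment_state(user_state)))

The only point of this sketch is the data flow of claim 17: the user state is estimated first, the environment state is derived from it, and the output is controlled last.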
PCT/JP2021/021259 2021-03-30 2021-06-03 Information processing device, information processing method, information processing program, and information processing system WO2022208905A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/JP2021/043548 WO2022208999A1 (en) 2021-03-30 2021-11-29 Information processing device, information processing method, information processing program, and information processing system
PCT/JP2022/007705 WO2022209473A1 (en) 2021-03-30 2022-02-24 Information processing apparatus, information processing method, information processing program, and information processing system
PCT/JP2022/013213 WO2022210111A1 (en) 2021-03-30 2022-03-22 Information processing device, information processing method, information processing program, and information processing system
JP2023511338A JPWO2022210649A1 (en) 2021-03-30 2022-03-29
PCT/JP2022/015297 WO2022210649A1 (en) 2021-03-30 2022-03-29 Information processing device, information processing method, information processing program, and information processing system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-056342 2021-03-30
JP2021056342 2021-03-30

Publications (1)

Publication Number Publication Date
WO2022208905A1 true WO2022208905A1 (en) 2022-10-06

Family

ID=83457601

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/JP2021/021261 WO2022208906A1 (en) 2021-03-30 2021-06-03 Content reproduction system, information processing device, and content reproduction control application
PCT/JP2021/021259 WO2022208905A1 (en) 2021-03-30 2021-06-03 Information processing device, information processing method, information processing program, and information processing system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/021261 WO2022208906A1 (en) 2021-03-30 2021-06-03 Content reproduction system, information processing device, and content reproduction control application

Country Status (1)

Country Link
WO (2) WO2022208906A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011141492A (en) * 2010-01-08 2011-07-21 Nec Corp Music download system, music receiving terminal, music download method and program
JP5517761B2 (en) * 2010-06-10 2014-06-11 アルパイン株式会社 Electronic device and operation key assignment method
JP2018504719A (en) * 2014-11-02 2018-02-15 エヌゴーグル インコーポレイテッド Smart audio headphone system
JP2017041136A (en) * 2015-08-20 2017-02-23 ヤフー株式会社 Determination device, determination method, determination program, terminal device, and music piece reproduction program
JP6326573B2 (en) * 2016-11-07 2018-05-23 株式会社ネイン Autonomous assistant system with multi-function earphones

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005167343A (en) * 2003-11-28 2005-06-23 Sony Corp Message processing method and message processor
JP2005222111A (en) * 2004-02-03 2005-08-18 Yamaha Corp Portable terminal for av equipment, av equipment and server device
JP2016066389A (en) * 2014-09-22 2016-04-28 ヤマハ株式会社 Reproduction control device and program
JP2018107576A (en) * 2016-12-26 2018-07-05 ヤマハ株式会社 Reproduction control method and system
WO2020208894A1 (en) * 2019-04-12 2020-10-15 ソニー株式会社 Information processing device and information processing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OKUMA, TAKASHI: "Ego-motion tracking system for seamless combination of indoor navigation and augmented reality exhibits", IEICE TECHNICAL REPORT, vol. 112, no. 25, 2012, pages 23 - 28, XP009539912 *

Also Published As

Publication number Publication date
WO2022208906A1 (en) 2022-10-06

Similar Documents

Publication Publication Date Title
US10915291B2 (en) User-interfaces for audio-augmented-reality
US9900688B2 (en) Beamforming audio with wearable device microphones
JP7456463B2 (en) Information processing device, information processing method, and program
US11016723B2 (en) Multi-application control of augmented reality audio
JP2016218610A (en) Motion estimation device, robot, and motion estimation method
JP2023542968A (en) Hearing enhancement and wearable systems with localized feedback
US20210081163A1 (en) Spatialized augmented reality (ar) audio menu
CN114115515A (en) Method and head-mounted unit for assisting a user
WO2012032714A1 (en) User device, server, and operating conditions setting system
JP7243639B2 (en) Information processing device, information processing method and program
WO2022208905A1 (en) Information processing device, information processing method, information processing program, and information processing system
JP7056155B2 (en) Information transmission equipment, information transmission systems and programs
WO2019069529A1 (en) Information processing device, information processing method, and program
WO2022208999A1 (en) Information processing device, information processing method, information processing program, and information processing system
WO2022209000A1 (en) Content reproduction system, information processing device, and content reproduction control application
WO2022209473A1 (en) Information processing apparatus, information processing method, information processing program, and information processing system
WO2022209474A1 (en) Content reproduction system, information processing device, and content reproduction control application
WO2022210652A1 (en) Content playback system, information processing apparatus, and content playback control application
WO2022210111A1 (en) Information processing device, information processing method, information processing program, and information processing system
US20200280814A1 (en) Augmented reality audio playback control
US10820132B2 (en) Voice providing device and voice providing method
JP6146182B2 (en) Information providing apparatus, information providing system, and information providing program
JP2006338493A (en) Method, device, and program for detecting next speaker
JP7428189B2 (en) Information processing device, control method and control program
CN114710726B (en) Center positioning method and device of intelligent wearable device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21935065

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21935065

Country of ref document: EP

Kind code of ref document: A1