WO2019097674A1 - Vehicle operation support device - Google Patents
Vehicle operation support device
- Publication number
- WO2019097674A1 (application PCT/JP2017/041460, JP2017041460W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- human
- human relationship
- vehicle
- relationship
- speech
- Prior art date: 2017-11-17
Classifications
- G10L15/1822—Parsing for meaning understanding
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G10L15/08—Speech classification or search
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
- G10L25/51—Speech or voice analysis techniques specially adapted for comparison or discrimination
- B60R16/037—Electric circuit elements specially adapted for vehicles for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
- B60R16/0373—Voice control
- G01C21/3617—Destination input or retrieval using user history, behaviour, conditions or preferences, e.g. predicted or inferred from previous use or current movement
- G10L2015/088—Word spotting
Definitions
- The present invention relates to a vehicle operation support device that obtains recommendation information suited to an occupant configuration including human relationships and supports the various vehicle operations performed by the occupants.
- As prior art, there is known an on-vehicle apparatus that collects voice inside the car, identifies the seating position of each occupant, estimates which of the occupants present in the car is speaking based on the collected voice and the identified seating positions, estimates the conversation content based on the collected voice, estimates the occupant configuration based on the estimated seating positions, the estimated speakers, and the estimated conversation content, estimates the occupants' action purpose based on the estimated conversation content and the estimated occupant configuration, and determines a recommended service based on the occupant configuration and the estimated action purpose (Patent Document 1).
- In this apparatus, an individual is identified from the voiceprint pattern of the collected voice, the owner is identified from the seating position and boarding frequency of each identified individual, individuals identifiable from the collected voice are specified using conversation keywords and registered as speaker pattern data, and the occupant configuration is estimated using the registered speaker pattern data.
- The problem to be solved by the present invention is to provide a vehicle operation support device that obtains, in a short time, recommendation information suited to an occupant configuration including human relationships and supports the various vehicle operations performed by the occupants.
- The present invention solves this problem by acquiring conversation voice data of a plurality of persons in advance and identifying the speakers; analyzing the acquired conversation voice data for each identified speaker to extract predetermined keywords; specifying the wording of each speaker based on those keywords; specifying the conversation content based on the conversation voice data of the plurality of persons; quantifying the direct human relationships between the plurality of persons from the specified wording and the specified conversation content; and, when a plurality of persons board the vehicle, determining the vehicle operation support information deemed recommendable from the identified persons and the quantified human relationships.
- According to the present invention, since information on human relationships obtained before boarding is used, recommendation information suited to the occupant configuration including human relationships can be obtained in a short time, and timely support for vehicle operation can be provided.
- Brief description of the drawings: FIG. 1 is a block diagram showing one embodiment of the vehicle operation support device according to the present invention; FIG. 2A is a flowchart showing the processing sequence executed by the conversation group estimation unit of FIG. 1; FIG. 2B is a planar map for explaining the combination extraction process based on utterance position in step S22 of FIG. 2A; FIG. 2C is a time chart for explaining the combination extraction process based on utterance period in step S23 of FIG. 2A; FIG. 3A is a flowchart showing the processing sequence executed by the direct human relationship analysis unit of FIG. 1; FIG. 3B is a diagram showing part of the category dictionary used in the conversation content category analysis of step S33 of FIG. 3A.
- The vehicle operation support device 1 of the present embodiment obtains recommendation information suited to an occupant configuration including human relationships and supports the various vehicle operations performed by the occupants.
- For example, when occupants A and B of vehicle VH1 are in the human relationship of fishing friends, a destination suited to fishing is displayed as an option on, or automatically set in, the in-vehicle navigation device, and a radio program about fishing is displayed as an option on, or automatically tuned in on, the in-vehicle audio device.
- Likewise, when the occupants are in a work relationship, a destination such as a business trip site or a restaurant for a business lunch is displayed as an option or automatically set in the in-vehicle navigation device, and a radio program on economics is displayed as an option or automatically tuned in on the in-vehicle audio device.
- A human relationship means the relationship between one specific person and another, determined by experiences in current or past social life. While not particularly limited, examples given to facilitate understanding of the present invention include: family relationships such as parent and child or husband and wife; relatives such as cousins; positional relationships within organizations such as companies and schools, for example superior and subordinate, colleagues, classmates, and senior and junior members; friendships such as hobby and leisure companions, boyfriends, girlfriends, and lovers; and others. In the present embodiment, the vehicle occupant configuration is understood to include such human relationships.
- Vehicle operation is not particularly limited either, but examples of the operations according to the present invention include the driving operations of the vehicle (accelerator operation, brake operation, transmission lever operation, steering wheel operation, and the like), the operation of the navigation device, the operation of the audio device, the operation of the car air conditioner, the adjustment of the seat position, and other vehicle operations performed by the occupants, including the driver.
- The “recommendation information suited to the occupant configuration” is instruction information for the vehicle or an on-vehicle apparatus for realizing an operation that is highly probable or preferable in view of the occupants' human relationships during the vehicle operations described above.
- While not particularly limited, examples of the recommendation information when occupants A and B are in the human relationship of fishing friends include instruction information on the destination-setting operation of the in-vehicle navigation device and instruction information on the tuning operation of the audio device.
- The “support” of vehicle operation includes not only presenting options to the occupant for manual operation but also having the vehicle operation support device 1 operate automatically without manual operation by the occupant.
- When the vehicle operation support device 1 operates the vehicle automatically based on the recommendation information, the vehicle operations the occupant must perform are reduced, provided the occupant regards the recommendation favorably.
- The occupant can cancel an automatic vehicle operation by performing a different manual operation against it.
- As described above, the vehicle operation support device 1 obtains recommendation information suited to the occupant configuration including human relationships and supports the various vehicle operations performed by the occupants. The human relationships are analyzed or estimated in advance, before boarding, so that when the occupants board the vehicle, recommendation information can be obtained in a short time using those human relationships and provided to support vehicle operation.
- The vehicle operation support device 1 of the present embodiment includes a human relationship analysis unit 1A, a human relationship storage unit 1B, a support information determination unit 1C, a vehicle information learning unit 1D, and an operation tendency storage unit 1E.
- The human relationship analysis unit 1A includes a voice acquisition unit 11, a conversation group estimation unit 12, a direct human relationship analysis unit 13, and an indirect human relationship estimation unit 14; the human relationship storage unit 1B includes a human relationship database 15; the support information determination unit 1C includes a support information determination unit 16; the vehicle information learning unit 1D includes a vehicle information learning unit 17; and the operation tendency storage unit 1E includes an operation tendency database 18.
- The vehicle operation support device 1 of the present invention may, as needed, omit the vehicle information learning unit 1D and the operation tendency storage unit 1E and consist of the human relationship analysis unit 1A, the human relationship storage unit 1B, and the support information determination unit 1C.
- The vehicle operation support device 1 of the present embodiment is configured by a computer comprising hardware and software: a ROM (Read Only Memory) storing programs, a CPU (Central Processing Unit) that executes the programs stored in the ROM, and a RAM (Random Access Memory) functioning as an accessible storage device. An MPU (Micro Processing Unit), DSP (Digital Signal Processor), ASIC (Application Specific Integrated Circuit), FPGA (Field Programmable Gate Array), or the like can be used in place of or together with the CPU.
- The human relationship analysis unit 1A, the human relationship storage unit 1B, the support information determination unit 1C, the vehicle information learning unit 1D, and the operation tendency storage unit 1E realize their respective functions, described later, by software stored in the ROM.
- As a premise of the vehicle operation support device 1 according to the present embodiment, each of a plurality of persons who can become occupants owns a terminal device TD1, TD2, TD3, and so on. A smartphone, a mobile phone, a removable in-vehicle device, a vehicle remote-control key (Intelligent Key (registered trademark), etc.), a voice recognition user interface (Amazon Echo Dot (registered trademark), etc.), or the like can be used as this type of terminal TD.
- The terminal TD of the present embodiment has a computer function and includes a microphone for inputting conversation voice data; each terminal TD transmits its own ID, its current position, and the collected conversation voice data to the voice acquisition unit 11 of the human relationship analysis unit 1A of the vehicle operation support device 1 via a wireless communication network such as the Internet.
- As noted, the human relationship analysis unit 1A includes the voice acquisition unit 11, the conversation group estimation unit 12, the direct human relationship analysis unit 13, and the indirect human relationship estimation unit 14.
- The voice acquisition unit 11 transmits and receives information to and from the plurality of terminals TD described above via a wireless communication network such as the Internet; in particular, it inputs the ID, the current position, and the collected conversation voice data of each terminal TD (step S21 of FIG. 2A).
- The conversation group estimation unit 12 estimates, for the conversation voice data input to a specific terminal TD, who is talking with whom, that is, the group of people conversing, based on the ID, the current position, and the collected conversation voice data of each terminal TD input to the voice acquisition unit 11. At this time, the pre-registered voiceprint data of the owner of each terminal TD (or of another specific person; the same applies hereinafter) is collated to identify whose voice each piece of conversation voice data is.
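- A minimal sketch of this voiceprint collation step is shown below. The embedding representation, the cosine similarity measure, the 0.75 threshold, and all names are illustrative assumptions; the patent does not prescribe a particular voiceprint algorithm.

```python
import numpy as np

def identify_speaker(utterance_embedding, registered_voiceprints, threshold=0.75):
    """Return the owner ID whose pre-registered voiceprint best matches the
    utterance, or None if no voiceprint clears the similarity threshold.

    registered_voiceprints: dict mapping owner ID -> voiceprint vector,
    standing in for the voiceprints registered in the human relationship
    database 15.
    """
    best_id, best_score = None, threshold
    for owner_id, voiceprint in registered_voiceprints.items():
        # Cosine similarity between the utterance and the stored voiceprint.
        score = float(np.dot(utterance_embedding, voiceprint) / (
            np.linalg.norm(utterance_embedding) * np.linalg.norm(voiceprint)))
        if score > best_score:
            best_id, best_score = owner_id, score
    return best_id
```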
- FIG. 2B is a planar map (latitude-longitude) for explaining the combination extraction process based on utterance position in step S22 of FIG. 2A.
- The voiceprint data of the owner of each terminal TD is registered together with an ID in the human relationship database 15 of the vehicle operation support device 1, and the conversation voice data collected by the terminal TD1 is collated with this voiceprint data to identify the ID of the owner whose voice it contains.
- However, conversation voice data of persons unrelated to the conversation may also be mixed into the conversation voice data collected by the terminal TD.
- Therefore, the conversation group estimation unit 12 executes a combination extraction process based on the position information of the terminals TD and a combination extraction process based on the utterance periods, and thereby estimates the group of people actually conversing in the conversation voice data input to a terminal TD.
- The combination extraction process based on the position information of the terminals TD estimates a provisional conversation group by extracting, from the plurality of pieces of conversation voice data collected by the terminals TD at the same time, the combinations whose collection positions lie within a threshold distance of one another, based on the ID, the current position, and the collected conversation voice data of each terminal TD input to the voice acquisition unit 11 (step S22 of FIG. 2A).
- In the example of FIG. 2B, the terminal TD1 that picked up the conversation voice data is at position P, and the three terminals TD2, TD3, and TD4 located within radius r of position P are likely to form a conversation group with it because the distances are short, so they are estimated as a provisional conversation group. Conversely, the terminals located beyond radius r from position P of terminal TD1 (the four terminals indicated by triangles in the figure) are unlikely to belong to the conversation group because the distances are large, so they are removed from the provisional conversation group.
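- A minimal sketch of this position-based combination extraction, under the assumption that positions have been projected to planar coordinates in metres and with an illustrative radius, might look as follows:

```python
from math import hypot

def provisional_conversation_group(positions, source_id, radius_m=10.0):
    """Step S22 in miniature: keep the terminals whose collection position
    lies within radius_m of the terminal that picked up the voice data.

    positions: dict mapping terminal ID -> (x, y) collection position of
    conversation voice data captured at the same time.
    """
    px, py = positions[source_id]
    return [tid for tid, (x, y) in positions.items()
            if hypot(x - px, y - py) <= radius_m]

# Example corresponding to FIG. 2B: TD2-TD4 are near TD1, TD5 is far away.
positions = {"TD1": (0, 0), "TD2": (3, 2), "TD3": (-4, 1),
             "TD4": (2, -5), "TD5": (40, 70)}
print(provisional_conversation_group(positions, "TD1"))
# ['TD1', 'TD2', 'TD3', 'TD4']
```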
- The combination extraction process based on utterance period extracts, from the pieces of conversation voice data estimated to belong to the same provisional group by the position-based combination extraction process (step S22 in FIG. 2A), the conversation voice data whose overlap ratio or overlap time between utterance start and utterance end is below a predetermined threshold, and estimates these to be the conversation group actually talking (step S23 of FIG. 2A).
- FIG. 2C is a time chart for explaining the combination extraction process based on utterance period in step S23 of FIG. 2A.
- When the utterance periods of the conversation voice data of the four terminals TD1, TD2, TD3, and TD4 are indicated by solid lines, the utterance period at terminal TD2 hardly overlaps the utterance period at terminal TD1, whereas the utterance periods at terminals TD3 and TD4 overlap the utterance period at terminal TD1 at a large ratio, and the utterance periods at terminals TD3 and TD4 also overlap each other heavily. Since speakers in the same conversation take turns rather than speaking simultaneously, TD1 and TD2 are estimated to belong to the same conversation group, while TD3 and TD4 are excluded from it.
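- The utterance-period filtering can be sketched as below: members of the same conversation take turns, so terminals whose speech heavily overlaps the source terminal's speech are dropped from the provisional group. The interval representation and the 0.2 threshold are illustrative assumptions.

```python
def overlap_ratio(periods_a, periods_b):
    """Fraction of speaker A's total utterance time that overlaps B's.

    periods_a, periods_b: lists of (start, end) utterance intervals in seconds.
    """
    total_a = sum(end - start for start, end in periods_a)
    overlap = sum(max(0.0, min(ea, eb) - max(sa, sb))
                  for sa, ea in periods_a for sb, eb in periods_b)
    return overlap / total_a if total_a > 0 else 0.0

def actual_conversation_group(utterances, source_id, max_overlap=0.2):
    """Step S23 in miniature: keep provisional-group members whose speech
    barely overlaps the source terminal's speech (turn-taking behaviour)."""
    source_periods = utterances[source_id]
    return [tid for tid, periods in utterances.items()
            if tid == source_id
            or overlap_ratio(periods, source_periods) < max_overlap]
```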
- The conversation group estimation unit 12 specifies the conversation voice data of the same conversation group estimated in this way and outputs it to the direct human relationship analysis unit 13 together with the IDs of the terminals TD1 and TD2.
- The direct human relationship analysis unit 13 analyzes the acquired conversation voice data for each identified speaker, extracts predetermined keywords, specifies the wording of each speaker based on those keywords, specifies the conversation content based on the conversation voice data of the plurality of persons, and analyzes and quantifies the direct human relationships between the plurality of persons from the specified wording and the specified conversation content. This analysis is performed on the conversation voice data belonging to the same conversation group estimated by the conversation group estimation unit 12 described above (step S31 in FIG. 3A).
- Specifically, the direct human relationship analysis unit 13 of this embodiment executes a keyword extraction process (step S32 in FIG. 3A), an analysis process of the conversation content category (step S33 in FIG. 3A), an analysis process of the wording (step S34 in FIG. 3A), and a combination process (step S35 in FIG. 3A).
- The keyword extraction process extracts a plurality of pre-registered keywords (predetermined words) from the conversation voice data belonging to the same conversation group by known speech detection processing, and the analysis process of the conversation content category classifies the keywords extracted by the keyword extraction process into the categories to which they belong.
- Both the keyword extraction process and the category analysis process are performed by referring to the category dictionary stored in the human relationship database 15.
- FIG. 3B is a view showing part of the category dictionary used in the conversation content category analysis of step S33 of FIG. 3A.
- In the category dictionary, one conversation content category is associated with a plurality of keywords; for example, if the conversation voice data includes “marathon”, this indicates that a conversation classified as “sports” is taking place. In the category analysis process, the occurrence frequency of each conversation content category associated with the extracted keywords is calculated as shown in FIG. 3C.
- FIG. 3C is a graph showing an example of the analysis result obtained by the conversation content category analysis of step S33 of FIG. 3A.
- The conversation content categories with a high occurrence frequency, shown in the figure, are identified and used in the combination process of step S35 in FIG. 3A.
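- The dictionary lookup and frequency count of steps S32-S33 reduce to a few lines; the dictionary entries below are invented stand-ins for the kind shown in FIG. 3B:

```python
from collections import Counter

# Hypothetical excerpt of the category dictionary of FIG. 3B:
# each keyword maps to the conversation content category it indicates.
CATEGORY_DICTIONARY = {
    "marathon": "sports", "goal": "sports",
    "meeting": "work", "deadline": "work", "client": "work",
    "lure": "hobby", "rod": "hobby",
}

def category_frequencies(transcript_words):
    """Count how often each conversation content category's keywords
    occur in the recognised conversation (steps S32-S33)."""
    counts = Counter()
    for word in transcript_words:
        category = CATEGORY_DICTIONARY.get(word.lower())
        if category is not None:
            counts[category] += 1
    return counts

print(category_frequencies(["meeting", "deadline", "marathon", "client"]))
# Counter({'work': 3, 'sports': 1})
```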
- The wording analysis process classifies the keywords extracted by the keyword extraction process into the wording categories to which they belong. This process is performed by referring to the wording dictionary stored in the human relationship database 15.
- FIG. 3D is a diagram showing part of the wording dictionary used in the wording analysis process of step S34 of FIG. 3A.
- The wording analysis process uses keywords extracted in step S32 of FIG. 3A that differ from the keywords used in the conversation content category analysis of step S33.
- In the wording dictionary, one wording category is associated with a plurality of keywords; for example, when the conversation voice data includes a polite sentence-ending expression, this indicates that a conversation classified as “honorific or polite language” is taking place.
- Wording here means word usage and manner of speaking; as shown in FIG. 3D, examples include honorific or polite language, youth slang, dialect, ordinary language, abbreviations, and the like.
- FIG. 3E is a graph showing an example of the analysis result obtained by the wording analysis in step S34 of FIG. 3A.
- The wording categories with a high occurrence frequency, shown in the figure, are identified for each speaker and used in the combination process of step S35 of FIG. 3A.
- The combination process quantifies the human relationship between the target persons by combining the occurrence frequency values calculated by the conversation content category analysis and the wording analysis, and stores the result in the human relationship database 15 as a direct human relationship.
- Here, “human relationship” means, as described above, the relationship between one specific person and another, determined by experiences in current or past social life, such as family relationships, relatives, positional relationships within organizations such as companies and schools, friendships around hobbies and leisure, and others.
- Such quantification of the direct human relationship is performed by combining the occurrence frequency values calculated by the conversation content category analysis and the wording analysis.
- For example, when the category analysis result shows a high occurrence frequency of conversation content classified as “work”, and the wording analysis shows a high occurrence frequency of “ordinary language” for one person and a high occurrence frequency of “honorific language” for the other, the relationship is quantified using probability values, for example a 70% probability of a superior-subordinate relationship in a company organization.
- The quantified direct human relationships are accumulated in the human relationship database 15.
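- The combination process of step S35 could be sketched as a simple scoring over the two frequency profiles. The relationship labels, the scoring rule, and the normalisation below are illustrative assumptions standing in for the human relationship quantification map; the patent only states that the category and wording frequencies are combined into probability-like values.

```python
def quantify_relationship(category_freq, wording_a, wording_b):
    """Combine conversation-content and wording frequencies into
    probability-like scores over candidate relationship labels.

    category_freq: Counter of conversation content categories.
    wording_a, wording_b: Counters of wording categories per speaker.
    """
    scores = {"superior-subordinate": 1.0, "hobby friends": 1.0}

    # Work-dominated conversation raises the superior-subordinate score;
    # hobby-dominated conversation raises the friends score.
    scores["superior-subordinate"] *= 1 + category_freq.get("work", 0)
    scores["hobby friends"] *= 1 + category_freq.get("hobby", 0)

    # Asymmetric politeness (one speaker mostly honorific, the other mostly
    # ordinary) also points towards a superior-subordinate relationship.
    a_polite = wording_a.get("honorific", 0) > wording_a.get("ordinary", 0)
    b_polite = wording_b.get("honorific", 0) > wording_b.get("ordinary", 0)
    if a_polite != b_polite:
        scores["superior-subordinate"] *= 2

    # Normalise into probability-like values, e.g. a 0.7 figure comparable
    # to the 70% superior-subordinate example in the text.
    total = sum(scores.values())
    return {label: round(score / total, 2) for label, score in scores.items()}
```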
- The indirect human relationship estimation unit 14 estimates and quantifies, based on the quantified direct human relationships, the indirect human relationships between those persons stored in the human relationship database 15 whose relationships have not been analyzed.
- The direct human relationship described above is called a “direct” human relationship because it is analyzed and quantified from conversation voice data of conversations that actually took place. What the indirect human relationship estimation unit 14 estimates, by contrast, is a quantification of the human relationship between people who have never actually talked to each other, based on the quantified direct relationship data; in that sense it is called an “indirect” human relationship.
- Specifically, the indirect human relationship estimation unit 14 executes a reading process of the direct human relationships (step S41 in FIG. 4A), a statistical process of the direct human relationships (step S42 in FIG. 4A), a combination extraction process for unanalyzed persons (step S43 of FIG. 4A), and a calculation process of the indirect human relationship estimated value (step S44 of FIG. 4A), and stores the obtained indirect human relationship estimated value in the human relationship database 15 (step S45 of FIG. 4A).
- The statistical process of the direct human relationships considers the interrelation of the values quantified by the direct human relationship analysis unit 13 and stored in the human relationship database 15. Specifically, combinations of three persons whose direct human relationship values are all known are extracted from the human relationship database 15, and for each combination, two of the three relationship values are taken as given and the one remaining value is recorded. By applying this procedure statistically to the large number of combinations stored in the human relationship database 15, the conditional probability value P(V3 | V1, V2) is obtained.
- FIG. 4B is a diagram showing the human relationships used to explain the statistical processing of the direct human relationship analysis values in step S42 of FIG. 4A, and FIG. 4C is a graph showing an example of the result obtained by that statistical processing.
- Given the value V1 of the direct human relationship between persons A and B and the value V2 of the direct human relationship between persons B and C, the probability of the value V3 of the direct human relationship between persons C and A can be calculated statistically, as shown in FIG. 4C.
- FIG. 4D is a diagram showing the human relationships used to explain the calculation process of the indirect human relationship estimated value in step S44 of FIG. 4A.
- In the combination extraction process for unanalyzed persons, for each extracted pair of persons Z and X, one or more persons whose direct human relationships with both have already been quantified are extracted as presumed relay persons.
- In the example shown in FIG. 4D, person Y is extracted: the value V1 of the direct human relationship between person Y and person X is known, and the value V2 of the direct human relationship between person Y and person Z is also known.
- In the calculation process of the indirect human relationship estimated value, the values V1 and V2 of the direct human relationships between the extracted pair Z and X and the presumed relay person are referred to from the human relationship database 15. Then, given the two referenced relationship values V1 and V2, the relationship value Vn that maximizes the probability value P(V3 | V1, V2) obtained by the statistical processing in step S42 of FIG. 4A is calculated as the indirect human relationship estimated value. When a plurality of presumed relay persons are extracted, the estimate may be determined by the product of the probability values or by a majority vote among the values showing the maximum probability. The value Vn calculated as the indirect human relationship estimated value is accumulated in the human relationship database 15.
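- Putting steps S42 and S44 together, the sketch below first tallies the conditional distribution P(V3 | V1, V2) from triples of known direct relationships and then estimates an unknown relationship through presumed relay persons by taking the most probable value. Discretised relationship labels and the vote-summing rule for multiple relays are assumptions made for illustration.

```python
from collections import Counter, defaultdict

def build_statistics(triples):
    """Step S42: from triples (V1, V2, V3) of known direct relationships
    among three persons, tally the conditional distribution P(V3 | V1, V2)."""
    table = defaultdict(Counter)
    for v1, v2, v3 in triples:
        table[(v1, v2)][v3] += 1
    return table

def estimate_indirect(table, relays):
    """Step S44: estimate the unknown relationship between Z and X from one
    or more relay persons Y; each relay contributes the pair (V1, V2) of its
    known relationships with X and Z.  Votes from several relays are summed,
    a stand-in for the product / majority rule mentioned in the text."""
    votes = Counter()
    for v1, v2 in relays:
        votes.update(table.get((v1, v2), Counter()))
    # The value Vn maximising P(V3 | V1, V2), or None if nothing is known.
    return votes.most_common(1)[0][0] if votes else None

stats = build_statistics([
    ("colleague", "colleague", "colleague"),
    ("colleague", "colleague", "colleague"),
    ("colleague", "friend", "friend"),
])
print(estimate_indirect(stats, [("colleague", "colleague")]))  # colleague
```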
- The human relationship storage unit 1B includes the human relationship database 15. As described above, the human relationship database 15 stores the voiceprint data associated with the ID of the owner of each terminal TD, the category dictionary shown in FIG. 3B, the wording dictionary shown in FIG. 3D, the human relationship quantification map used in the direct human relationship quantification process by the direct human relationship analysis unit 13, the direct human relationships quantified by the direct human relationship analysis unit 13, the probability values P(V3 | V1, V2) obtained by the statistical processing, and the indirect human relationship estimated values.
- The vehicle information learning unit 1D includes the vehicle information learning unit 17, which executes an occupant information acquisition process (step S51 in FIG. 5), a human relationship reference process (step S52 in FIG. 5), a vehicle information acquisition process (step S53 in FIG. 5), and a combination process (step S54 in FIG. 5), and stores the operation tendency information based on the obtained human relationships in the operation tendency database 18 (step S55 in FIG. 5).
- The occupant information acquisition process acquires who is on board the vehicle. For example, an occupant can be identified by the terminal TD connecting to a device mounted on the vehicle, by detecting that the position information of the terminal TD and the position information of the vehicle are close to each other, or by face recognition on an image acquired from a camera mounted on the vehicle.
- The human relationship reference process obtains the values of the human relationships between the occupants acquired by the occupant information acquisition process, by referring to the human relationship database 15.
- The vehicle information acquisition process acquires the control information of the vehicle, the vehicle state, and other vehicle information, such as the destination set in the navigation device, the operation of the audio device, the operation of the car air conditioner, the current position of the vehicle, the movement locus of the vehicle, the current date and time, and the elapsed time since boarding.
- The combination process combines the vehicle information acquired by the vehicle information acquisition process with the human relationships obtained by the human relationship reference process, and stores the result in the operation tendency database 18 as operation information associated with the human relationships.
- For example, when the direct or indirect human relationship between persons A and B is V1 and the destination set in the navigation device is a specific fishing spot, the human relationship V1 and the destination are stored in the operation tendency database 18 together with their occurrence frequency. Combinations of three or more persons, such as A, B, and C, may also be stored.
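- Steps S51 to S55 can be sketched as incrementing an occurrence counter keyed by (relationship, operation); the keying scheme is an assumption about how the operation tendency database 18 might be organised.

```python
from collections import Counter

class OperationTendencyDB:
    """Minimal stand-in for the operation tendency database 18: counts how
    often each vehicle operation occurs under each human relationship."""

    def __init__(self):
        self.counts = Counter()

    def record(self, relationship, operation):
        # Steps S54-S55: associate the occupants' relationship with the
        # observed operation and accumulate the occurrence frequency.
        self.counts[(relationship, operation)] += 1

    def most_frequent(self, relationship, top_n=3):
        """Operations most frequently observed under the given relationship."""
        ranked = sorted(((op, n) for (rel, op), n in self.counts.items()
                         if rel == relationship),
                        key=lambda item: item[1], reverse=True)
        return ranked[:top_n]

db = OperationTendencyDB()
db.record("fishing friends", ("navigation", "destination: fishing spot"))
db.record("fishing friends", ("navigation", "destination: fishing spot"))
db.record("fishing friends", ("audio", "tune: fishing programme"))
print(db.most_frequent("fishing friends"))
```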
- The operation tendency storage unit 1E includes the operation tendency database 18, which stores the human relationships obtained by the vehicle information learning unit 17 in association with the operation information.
- The support information determination unit 1C includes the support information determination unit 16, which identifies the plurality of occupants who have boarded the vehicle and determines, based on the direct and indirect human relationships stored in the human relationship database 15, the vehicle operation support information deemed recommendable according to the human relationships between those occupants.
- Specifically, the support information determination unit 16 executes an occupant information acquisition process (step S61 in FIG. 6), a human relationship reference process (step S62 in FIG. 6), a vehicle information acquisition process (step S63 in FIG. 6), a reference process of the operation tendency information (step S64 in FIG. 6), and a determination and output process of the support information (step S65 in FIG. 6).
- The occupant information acquisition process is the same as the occupant information acquisition process of the vehicle information learning unit 17 (step S51 of FIG. 5), that is, a process of acquiring who is in the vehicle.
- For example, an occupant can be identified by the terminal TD connecting to a device mounted on the vehicle, by detecting that the position information of the terminal TD and that of the vehicle are close to each other, or by face recognition on an image acquired from a camera mounted on the vehicle.
- The human relationship reference process is the same as the human relationship reference process of the vehicle information learning unit 17 (step S52 in FIG. 5), that is, it obtains the values of the human relationships between the occupants acquired by the occupant information acquisition process by referring to the human relationship database 15.
- The vehicle information acquisition process is the same as the vehicle information acquisition process of the vehicle information learning unit 17 (step S53 of FIG. 5), that is, it acquires the control information of the vehicle, the vehicle state, and other vehicle information, such as the driving operations performed by the occupant (accelerator operation, brake operation, transmission lever operation, steering wheel operation, and the like), the destination set in the navigation device, the operation of the audio device, the operation of the car air conditioner, the current position of the vehicle, the movement locus of the vehicle, the current date and time, and the elapsed time since boarding.
- The determination and output process of the support information determines the vehicle operation support information deemed recommendable according to the human relationships between the plurality of occupants.
- The “recommendation information suited to the occupant configuration” means, as described above, instruction information for the vehicle or an on-vehicle apparatus for realizing an operation that is highly probable or preferable in view of the occupants' human relationships during the vehicle operations performed by the occupants. For example, when occupants A and B are in the human relationship of fishing friends, the recommendation information can include instruction information on the destination-setting operation of the in-vehicle navigation device and instruction information on the tuning operation of the audio device.
- First, on a daily basis, the human relationships between specific persons are analyzed, quantified, and accumulated in the human relationship database 15, using the terminals TD1, TD2, TD3 possessed by the plurality of persons who can become occupants and the human relationship analysis unit 1A.
- Specifically, as shown in FIG. 2A, the voice acquisition unit 11 transmits and receives information to and from the plurality of terminals TD via a wireless communication network such as the Internet, and in particular inputs the ID, the current position, and the collected conversation voice data of each terminal TD (step S21 in FIG. 2A).
- Next, in step S22, the conversation group estimation unit 12 estimates, for the conversation voice data input to a specific terminal TD, who is talking with whom, that is, the group of conversing people, based on the ID, the current position, and the collected conversation voice data of each terminal TD input to the voice acquisition unit 11. At this time, the conversation group estimation unit 12 estimates a provisional conversation group based on the position information of the terminals TD by extracting, from the plurality of pieces of conversation voice data collected by the terminals TD at the same time, the combinations whose collection positions lie within a threshold distance of one another.
- In step S23 of FIG. 2A, the conversation group estimation unit 12 extracts, from the pieces of conversation voice data estimated to belong to the same provisional group by the position-based combination extraction of step S22, the conversation voice data whose overlap ratio or overlap time of utterance periods is equal to or less than a predetermined threshold, and estimates these to be the conversation group actually conversing. Then, in step S24, the conversation group estimation unit 12 specifies the conversation voice data of the same conversation group estimated in this way and outputs it to the direct human relationship analysis unit 13 together with the IDs of the terminals TD1 and TD2.
- In steps S31 to S35 of FIG. 3A, the direct human relationship analysis unit 13 analyzes the acquired conversation voice data of the same group for each identified speaker and extracts predetermined keywords.
- The wording of each speaker is specified based on the keywords for that speaker, the conversation content is specified based on the conversation voice data of the plurality of persons, and the direct human relationships between the plurality of persons are analyzed and quantified from the specified wording and the specified conversation content. This quantified direct human relationship is stored in the human relationship database 15 in step S36.
- In parallel, the indirect human relationship estimation unit 14 executes the reading process of the direct human relationships in step S41 of FIG. 4A and the statistical process of the direct human relationships in step S42; it then executes the combination extraction process for unanalyzed persons in step S43 and the calculation process of the indirect human relationship estimated value in step S44, and in step S45 accumulates the obtained indirect human relationship estimated value in the human relationship database 15.
- The vehicle information learning unit 1D accumulates information on which vehicle operations are actually performed under which occupant configurations, including human relationships, and uses it in determining the vehicle operation support information. That is, as shown in FIG. 5, the vehicle information learning unit 17 executes the occupant information acquisition process in step S51, the reference process of the human relationships accumulated in the human relationship database 15 in step S52, the vehicle information acquisition process in step S53, and the combination process in step S54, and in step S55 stores the operation tendency information based on the obtained human relationships in the operation tendency database 18.
- The support information determination unit 1C identifies the plurality of occupants who have boarded the vehicle and determines, based on the direct and indirect human relationships stored in the human relationship database 15, the vehicle operation support information deemed recommendable according to the human relationships between those occupants.
- That is, the support information determination unit 16 executes the occupant information acquisition process in step S61 of FIG. 6, the human relationship reference process in step S62, the vehicle information acquisition process in step S63, the reference process of the operation tendency information in step S64, and the determination and output process of the support information in step S65.
- As described above, in the vehicle operation support device 1 of the present embodiment, the direct human relationships of the plurality of persons who can become occupants are accumulated in advance, and when a plurality of persons board the vehicle, the vehicle operation support information deemed recommendable according to the human relationships between the occupants is determined based on the direct and indirect human relationships stored in the human relationship database 15, so that appropriate vehicle operation support information can be provided in a short time after boarding.
- Further, in the vehicle operation support device 1 of the present embodiment, since the human relationships between people who have not actually conversed are estimated and quantified from the human relationships between people who have, gaps in the vehicle operation support information for particular occupant combinations can be avoided. Moreover, this indirect human relationship estimation achieves its accuracy by statistically processing the direct human relationship data of the people who actually conversed.
- Further, in the vehicle operation support device 1 of the present embodiment, the vehicle operations actually performed are stored in association with the human relationships at the time and reflected in the vehicle operation support information, so the support information deemed recommendable according to the human relationships can be brought closer to realistic operations.
- Further, in the vehicle operation support device 1 of the present embodiment, the same conversation group is extracted by selecting, from the plurality of pieces of conversation voice data, the speakers of the conversation voice data whose utterance positions are within a predetermined distance of one another, and the speakers of the conversation voice data whose utterance periods do not overlap for a predetermined time or more. As a result, the accuracy of conversation group identification and of the human relationship analysis can be improved.
- Further, in the vehicle operation support device 1 of the present embodiment, since terminals capable of collecting sound even when a person is not aboard a vehicle are used to detect the conversation voice data of the plurality of persons who can become occupants, conversation voice data can be picked up on a daily basis.
- In the above embodiment, the human relationship analysis unit 1A includes the indirect human relationship estimation unit 14, but this may be omitted as necessary.
- Further, the vehicle operation support device 1 described above includes the vehicle information learning unit 1D and the operation tendency storage unit 1E and determines the vehicle operation support information using the operation tendency information in steps S64 to S65 of FIG. 6; however, the vehicle information learning unit 1D and the operation tendency storage unit 1E may be omitted as needed, and the vehicle operation support information may be determined from the human relationships stored in the human relationship database 15 alone.
- The voice acquisition unit 11, the conversation group estimation unit 12, the direct human relationship analysis unit 13, and the indirect human relationship estimation unit 14 correspond to the human relationship analysis unit according to the present invention; the human relationship database 15 corresponds to the human relationship storage unit according to the present invention; the support information determination unit 16 corresponds to the support information determination unit according to the present invention; the vehicle information learning unit 17 corresponds to the vehicle information learning unit according to the present invention; and the operation tendency database 18 corresponds to the operation tendency storage unit according to the present invention.
Description
1A … Human relationship analysis unit
11 … Voice acquisition unit
12 … Conversation group estimation unit
13 … Direct human relationship analysis unit
14 … Indirect human relationship estimation unit
1B … Human relationship storage unit
15 … Human relationship database
1C … Support information determination unit
16 … Support information determination unit
1D … Vehicle information learning unit
17 … Vehicle information learning unit
1E … Operation tendency storage unit
18 … Operation tendency database
TD1, TD2, TD3 … Terminals
VH1, VH2 … Vehicles
A, B, C, X, Y, Z … Humans (occupants)
V1, V2, V3 … Analysis values of human relationships
Claims (7)
1. A vehicle operation support device comprising a human relationship analysis unit, a human relationship storage unit, and a support information determination unit, the device obtaining recommendation information suited to an occupant configuration including human relationships and supporting vehicle operations performed by occupants, wherein: the human relationship analysis unit acquires conversation voice data of a plurality of persons who can become the occupants and identifies the speakers, analyzes the acquired conversation voice data for each identified speaker to extract predetermined keywords, specifies the wording of each speaker based on the keywords for that speaker, specifies the conversation content based on the conversation voice data of the plurality of persons, and analyzes and quantifies the direct human relationships between the plurality of persons from the specified wording and the specified conversation content; the human relationship storage unit accumulates in advance the direct human relationships of the plurality of persons who can become the occupants; and the support information determination unit identifies a plurality of occupants who have boarded a vehicle and determines, based on the direct human relationships accumulated in the human relationship storage unit, vehicle operation support information deemed recommendable according to the human relationships between the plurality of occupants.
2. The vehicle operation support device according to claim 1, wherein the human relationship analysis unit estimates and quantifies, based on the quantified direct human relationships, indirect human relationships between those persons stored in the human relationship storage unit whose relationships have not been analyzed; the human relationship storage unit also accumulates the indirect human relationships; and the support information determination unit determines the vehicle operation support information deemed recommendable according to the human relationships between the plurality of occupants based on the direct human relationships and the indirect human relationships accumulated in the human relationship storage unit.
3. The vehicle operation support device according to claim 2, wherein, when estimating and quantifying indirect human relationships between unanalyzed persons based on the quantified direct human relationships, the human relationship analysis unit statistically processes the relationships among a plurality of human relationships V1, V2, V3, using an already quantified direct human relationship V1 between a first person and a second person, an already quantified direct human relationship V2 between the second person and a third person, and an already quantified direct human relationship V3 between the third person and the first person, and estimates an unanalyzed human relationship V3' from the remaining already quantified human relationships V1, V2 and the statistically processed relationships among V1, V2, V3.
4. The vehicle operation support device according to any one of claims 1 to 3, further comprising a vehicle information learning unit and an operation tendency storage unit, wherein: the vehicle information learning unit identifies a plurality of occupants who have boarded a vehicle, extracts the human relationships concerning the plurality of occupants from the human relationship storage unit, detects operation information concerning operation of the vehicle, and associates the extracted human relationships with the detected operation information; the operation tendency storage unit accumulates the associated human relationships and operation information; and the support information determination unit determines the operation information related to the human relationships accumulated in the operation tendency storage unit as the vehicle operation support information deemed recommendable.
5. The vehicle operation support device according to any one of claims 1 to 4, wherein, when acquiring conversation voice data of a plurality of persons and identifying the speakers, the human relationship analysis unit detects the utterance positions of the plurality of pieces of conversation voice data, extracts from them a group of conversation voice data whose utterance positions are within a predetermined distance, takes the speakers of this group as a conversation group, and identifies the speakers belonging to the conversation group.
6. The vehicle operation support device according to any one of claims 1 to 5, wherein, when acquiring conversation voice data of a plurality of persons and identifying the speakers, the human relationship analysis unit detects the utterance periods of the plurality of pieces of conversation voice data, extracts from them a group of conversation voice data whose utterance periods do not overlap for a predetermined time or more, estimates the speakers of this group to be a conversation group, and identifies the speakers belonging to the conversation group.
7. The vehicle operation support device according to any one of claims 1 to 6, wherein the conversation voice data of the plurality of persons who can become the occupants is detected by terminals capable of collecting sound even when the persons are not aboard a vehicle, and the detected conversation voice data is transmitted to the human relationship analysis unit.
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112020009783-7A BR112020009783A2 (pt) | 2017-11-17 | 2017-11-17 | dispositivo de assistência à operação do veículo |
JP2019554145A JP7024799B2 (ja) | 2017-11-17 | 2017-11-17 | 車両用操作支援装置 |
MX2020004484A MX2020004484A (es) | 2017-11-17 | 2017-11-17 | Dispositivo de asistencia a la operacion para vehiculos. |
CA3084696A CA3084696C (en) | 2017-11-17 | 2017-11-17 | Vehicle operation assistance device |
RU2020117547A RU2768509C2 (ru) | 2017-11-17 | 2017-11-17 | Устройство помощи в управлении транспортным средством |
US16/764,705 US20210174790A1 (en) | 2017-11-17 | 2017-11-17 | Vehicle operation assistance device |
CN201780096439.0A CN111801667B (zh) | 2017-11-17 | 2017-11-17 | 车辆用操作辅助装置和车辆用操作辅助方法 |
EP17932260.7A EP3712887B1 (en) | 2017-11-17 | 2017-11-17 | Vehicle operation assistance device |
PCT/JP2017/041460 WO2019097674A1 (ja) | 2017-11-17 | 2017-11-17 | 車両用操作支援装置 |
KR1020207014105A KR20200074168A (ko) | 2017-11-17 | 2017-11-17 | 차량용 조작 지원 장치 및 조작 지원 방법 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2017/041460 WO2019097674A1 (ja) | 2017-11-17 | 2017-11-17 | 車両用操作支援装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019097674A1 true WO2019097674A1 (ja) | 2019-05-23 |
Family
ID=66538941
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/041460 WO2019097674A1 (ja) | 2017-11-17 | 2017-11-17 | 車両用操作支援装置 |
Country Status (10)
Country | Link |
---|---|
US (1) | US20210174790A1 (ja) |
EP (1) | EP3712887B1 (ja) |
JP (1) | JP7024799B2 (ja) |
KR (1) | KR20200074168A (ja) |
CN (1) | CN111801667B (ja) |
BR (1) | BR112020009783A2 (ja) |
CA (1) | CA3084696C (ja) |
MX (1) | MX2020004484A (ja) |
RU (1) | RU2768509C2 (ja) |
WO (1) | WO2019097674A1 (ja) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102249784B1 (ko) * | 2020-01-06 | 2021-05-10 | 주식회사 파우미 | Upward nozzle for coating the inside of small containers |
CN115376522B (zh) * | 2021-05-21 | 2024-10-01 | 佛山市顺德区美的电子科技有限公司 | Voiceprint control method for an air conditioner, air conditioner, and readable storage medium |
CN115878070B (zh) * | 2023-03-01 | 2023-06-02 | 上海励驰半导体有限公司 | Vehicle-mounted audio playback method, apparatus, device, and storage medium |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2229398C1 (ru) * | 2003-09-12 | 2004-05-27 | Общество с ограниченной ответственностью "Альтоника" | Security and navigation system for a vehicle |
JP4670438B2 (ja) * | 2005-04-01 | 2011-04-13 | ソニー株式会社 | Method for providing content and a playlist thereof |
US20100042498A1 (en) * | 2008-08-15 | 2010-02-18 | Atx Group, Inc. | Criteria-Based Audio Messaging in Vehicles |
JP2010190743A (ja) * | 2009-02-18 | 2010-09-02 | Equos Research Co Ltd | Navigation system and navigation device |
JP2011081763A (ja) * | 2009-09-09 | 2011-04-21 | Sony Corp | Information processing apparatus, information processing method, and information processing program |
US20120239400A1 (en) * | 2009-11-25 | 2012-09-20 | Nrc Corporation | Speech data analysis device, speech data analysis method and speech data analysis program |
US11070661B2 (en) * | 2010-09-21 | 2021-07-20 | Cellepathy Inc. | Restricting mobile device usage |
JP5740575B2 (ja) * | 2010-09-28 | 2015-06-24 | パナソニックIpマネジメント株式会社 | Speech processing device and speech processing method |
US8775059B2 (en) * | 2011-10-26 | 2014-07-08 | Right There Ware LLC | Method and system for fleet navigation, dispatching and multi-vehicle, multi-destination routing |
US9008906B2 (en) * | 2011-11-16 | 2015-04-14 | Flextronics Ap, Llc | Occupant sharing of displayed content in vehicles |
US10169464B2 (en) * | 2012-05-14 | 2019-01-01 | Ramesh Sivarajan | System and method for a bidirectional search engine and its applications |
US20150193888A1 (en) * | 2014-01-06 | 2015-07-09 | LinkedIn Corporation | Techniques for determining relationship information |
CN105701123B (zh) * | 2014-11-27 | 2019-07-16 | 阿里巴巴集团控股有限公司 | Method and device for identifying person-vehicle relationships |
CN104933201A (zh) * | 2015-07-15 | 2015-09-23 | 蔡宏铭 | Content recommendation method and system based on companion information |
US11170451B2 (en) * | 2015-10-02 | 2021-11-09 | Not So Forgetful, LLC | Apparatus and method for providing gift recommendations and social engagement reminders, storing personal information, and facilitating gift and social engagement recommendations for calendar-based social engagements through an interconnected social network |
US10298690B2 (en) * | 2017-01-10 | 2019-05-21 | International Business Machines Corporation | Method of proactive object transferring management |
- 2017-11-17 CA CA3084696A patent/CA3084696C/en active Active
- 2017-11-17 JP JP2019554145A patent/JP7024799B2/ja active Active
- 2017-11-17 WO PCT/JP2017/041460 patent/WO2019097674A1/ja unknown
- 2017-11-17 EP EP17932260.7A patent/EP3712887B1/en active Active
- 2017-11-17 MX MX2020004484A patent/MX2020004484A/es unknown
- 2017-11-17 CN CN201780096439.0A patent/CN111801667B/zh active Active
- 2017-11-17 KR KR1020207014105A patent/KR20200074168A/ko not_active Application Discontinuation
- 2017-11-17 BR BR112020009783-7A patent/BR112020009783A2/pt not_active Application Discontinuation
- 2017-11-17 RU RU2020117547A patent/RU2768509C2/ru active
- 2017-11-17 US US16/764,705 patent/US20210174790A1/en not_active Abandoned
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006099195A (ja) * | 2004-09-28 | 2006-04-13 | Sony Corp | Viewing content providing system and viewing content providing method |
WO2007105436A1 (ja) * | 2006-02-28 | 2007-09-20 | Matsushita Electric Industrial Co., Ltd. | Wearable terminal |
JP2009232415A (ja) * | 2008-03-25 | 2009-10-08 | Denso Corp | Information providing system for automobiles |
JP2012133530A (ja) | 2010-12-21 | 2012-07-12 | Denso Corp | In-vehicle device |
JP2013182560A (ja) * | 2012-03-05 | 2013-09-12 | Nomura Research Institute Ltd | Human relationship estimation system |
WO2016121174A1 (ja) * | 2015-01-30 | 2016-08-04 | ソニー株式会社 | Information processing system and control method |
JP2017009826A (ja) * | 2015-06-23 | 2017-01-12 | トヨタ自動車株式会社 | Group state determination device and group state determination method |
Non-Patent Citations (1)
Title |
---|
See also references of EP3712887A4 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2020240730A1 (ja) * | 2019-05-29 | 2021-09-30 | 三菱電機株式会社 | Addressee estimation device, addressee estimation method, and addressee estimation program |
JP7574927B2 (ja) | 2020-12-25 | 2024-10-29 | 日本電気株式会社 | Speaker identification device, method, and program |
WO2022244178A1 (ja) * | 2021-05-20 | 2022-11-24 | 三菱電機株式会社 | Addressee estimation device, addressee estimation method, and addressee estimation program |
JPWO2022244178A1 (ja) * | 2021-05-20 | 2022-11-24 | ||
JP7309095B2 (ja) | 2021-05-20 | 2023-07-14 | 三菱電機株式会社 | Addressee estimation device, addressee estimation method, and addressee estimation program |
Also Published As
Publication number | Publication date |
---|---|
RU2020117547A (ru) | 2021-12-17 |
JP7024799B2 (ja) | 2022-02-24 |
RU2768509C2 (ru) | 2022-03-24 |
EP3712887B1 (en) | 2021-09-29 |
EP3712887A1 (en) | 2020-09-23 |
US20210174790A1 (en) | 2021-06-10 |
CA3084696C (en) | 2023-06-13 |
CA3084696A1 (en) | 2019-05-23 |
JPWO2019097674A1 (ja) | 2020-12-03 |
RU2020117547A3 (ja) | 2021-12-17 |
BR112020009783A2 (pt) | 2020-11-03 |
EP3712887A4 (en) | 2021-03-10 |
KR20200074168A (ko) | 2020-06-24 |
MX2020004484A (es) | 2020-08-03 |
CN111801667A (zh) | 2020-10-20 |
CN111801667B (zh) | 2024-04-02 |
Legal Events
Code | Title | Description
---|---|---
ENP | Entry into the national phase | Ref document number: 2019554145; Country of ref document: JP; Kind code of ref document: A
ENP | Entry into the national phase | Ref document number: 3084696; Country of ref document: CA
ENP | Entry into the national phase | Ref document number: 20207014105; Country of ref document: KR; Kind code of ref document: A
NENP | Non-entry into the national phase | Ref country code: DE
ENP | Entry into the national phase | Ref document number: 2017932260; Country of ref document: EP; Effective date: 20200617
REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112020009783
ENP | Entry into the national phase | Ref document number: 112020009783; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20200515