WO2007020808A1 - Content providing device, content providing method, content providing program, and computer readable recording medium - Google Patents


Info

Publication number
WO2007020808A1
WO2007020808A1 (PCT/JP2006/315405, JP2006315405W)
Authority
WO
WIPO (PCT)
Prior art keywords
passenger
information
content
content providing
behavior
Prior art date
Application number
PCT/JP2006/315405
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroaki Shibasaki
Tadayasu Kaneko
Matsuaki Haruki
Motoyuki Yamashita
Jun Oosugi
Mari Kitada
Original Assignee
Pioneer Corporation
Priority date
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Publication of WO2007020808A1 publication Critical patent/WO2007020808A1/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 — Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 — Geographical information databases

Definitions

  • Content providing apparatus, content providing method, content providing program, and computer-readable recording medium
  • the present invention relates to a content providing apparatus that provides content in a mobile body, a content providing method, a content providing program, and a computer-readable recording medium.
  • the present invention is not limited to the above-described content providing apparatus, content providing method, content providing program, and computer-readable recording medium.
  • a passenger of the moving body can view various types of content.
  • Various types of content include, for example, radio and television broadcasts, and music and video recorded on recording media such as CDs (Compact Disc) and DVDs (Digital Versatile Disc) mounted in the mobile body.
  • the passengers can view the content through a display device such as a display or an audio device such as a speaker.
  • each passenger's preference for the in-vehicle environment (such as the position and operating status of in-vehicle devices) is stored as profile information for each passenger in an IC (Integrated Circuit) card or ID (IDentification) card.
  • a navigation apparatus in a moving body such as a vehicle guides the moving body to a destination using the position information of the moving body, the information on the destination, and the map information.
  • the destination point is set based on the user's input, and the user operates the operation unit with a remote control or touch panel to input information on the name and position of the destination point.
  • Patent Document 1: Japanese Patent Application Laid-Open No. 2002-104105
  • Patent Document 2: Japanese Patent Application Laid-Open No. 2000-88597
  • the content providing apparatus according to the invention is a content providing apparatus that provides content in a mobile body, and includes: identifying means for identifying a passenger of the mobile body; passenger information acquiring means for acquiring information relating to the passenger identified by the identifying means (hereinafter referred to as "passenger information"); determining means for determining content to be output based on the passenger information acquired by the passenger information acquiring means; and output means for outputting the content determined by the determining means.
  • further, the content providing apparatus according to the invention is a content providing apparatus that provides content in a mobile body, and includes: behavior detecting means for detecting the behavior of a passenger; determining means for determining content to be output based on the result detected by the behavior detecting means; and output means for outputting the content determined by the determining means.
  • the content providing method according to the invention of claim 10 is a content providing method for providing content in a mobile body, and includes: a specifying step of specifying a passenger of the mobile body; a passenger information acquisition step of acquiring passenger information on the passenger specified by the specifying step; a determination step of determining content to be output based on the passenger information acquired by the passenger information acquisition step; and an output step of outputting the content determined by the determination step.
  • the content providing method according to the invention of claim 11 is a content providing method for providing content in a mobile body, and includes: a behavior detecting step of detecting the behavior of a passenger; a determination step of determining content to be output based on the detection result detected by the behavior detecting step; and an output step of outputting the content determined by the determination step.
  • a content providing program according to the invention of claim 12 causes a computer to execute the content providing method according to claim 10 or 11.
  • a computer-readable recording medium records the content providing program according to claim 12.
  • FIG. 1 is a block diagram showing an example of a functional configuration of a content providing apparatus according to the first embodiment.
  • FIG. 2 is a flowchart showing details of processing of the content providing apparatus according to the first embodiment.
  • FIG. 3 is a block diagram of an example of a hardware configuration of the navigation device according to the first embodiment.
  • FIG. 4 is an explanatory diagram of an example of the interior of the vehicle in which the navigation device according to the first embodiment is mounted.
  • FIG. 5 is a flowchart of the process using passenger information in the navigation device according to the first embodiment.
  • FIG. 6 is a flowchart of the process using behavior information in the navigation device according to the first embodiment.
  • FIG. 7 is a block diagram showing an example of a functional configuration of the content providing apparatus according to the second embodiment.
  • FIG. 8 is a flowchart showing details of processing of the content providing apparatus according to the second embodiment.
  • FIG. 9 is a flowchart of the process using the passenger information in the navigation device according to the second embodiment.
  • FIG. 10 is a flowchart of the contents of the process using the behavior information in the navigation device according to the second embodiment.
  • FIG. 1 is a block diagram illustrating an example of a functional configuration of the content providing apparatus according to the first embodiment.
  • the content providing apparatus 100 includes a passenger identification unit 101, a passenger information acquisition unit 102, a content determination unit 103, a content output unit 104, and a behavior detection unit 105.
  • the passenger identification unit 101 identifies the passenger.
  • the identification of the passenger is, for example, identifying, for each passenger, the relationship with the owner of the mobile body (owner, relative, friend, other person, etc.).
  • the passenger information acquisition unit 102 acquires the passenger information related to the passenger specified by the passenger specifying unit 101.
  • Passenger information is, for example, information including characteristics such as preferences, age, and gender related to the passenger, and the configuration such as the arrangement and number of passengers.
  • the behavior detection unit 105 detects the behavior of the passenger.
  • the passenger's behavior is information including the passenger's physical state such as sleepiness and fatigue, and may be information including the arrangement and number of passengers exhibiting a predetermined behavior.
  • the content determination unit 103 determines the content to be output based on at least one of the passenger information acquired by the passenger information acquisition unit 102 and the behavior of the passenger detected by the behavior detection unit 105.
  • the content to be output is, for example, music or video recorded in advance on a recording medium (not shown), and radio broadcast or television broadcast received by a communication unit (not shown).
  • the content output unit 104 outputs the content determined by the content determination unit 103.
  • the content is output by, for example, a display device such as a display mounted on a moving body or an acoustic device such as a speaker.
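The functional flow above (identify the passenger, acquire passenger information, detect behavior, determine content, output it) can be sketched as follows. This is a hedged illustration only: the unit numbers follow FIG. 1, but the fields, rules, and function bodies are invented for the example and are not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Passenger:
    relation: str                                    # identification result (unit 101)
    preferences: list = field(default_factory=list)  # passenger information (unit 102)
    behavior: str = "normal"                         # detected behavior state (unit 105)

def determine_content(p: Passenger) -> str:
    # Content determination unit 103: decide from passenger information
    # and/or detected behavior (illustrative rules, not the patent's).
    if p.behavior == "sleepy":
        return "uptempo_music"
    return p.preferences[0] if p.preferences else "radio_broadcast"

def output_content(content: str) -> str:
    # Content output unit 104: would route to the display or the speaker.
    return f"playing: {content}"

p = Passenger(relation="owner", preferences=["jpop"])
print(output_content(determine_content(p)))  # playing: jpop
```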
  • FIG. 2 is a flowchart showing the contents of the processing of the content providing apparatus according to the first embodiment.
  • the content providing apparatus 100 determines whether or not there is an instruction to provide content (step S201).
  • the content providing instruction may be given, for example, by a passenger operating an operation unit (not shown).
  • in step S201, the apparatus waits for an instruction to provide content; if there is an instruction (step S201: Yes), the passenger identification unit 101 identifies the passenger (step S202).
  • the identification of the passenger is, for example, the identification of the relationship between the owner of the mobile object and the passenger.
  • the occupant information acquisition unit 102 acquires occupant information about the occupant identified in step S202 (step S203).
  • Passenger information is, for example, information including characteristics such as preferences, age, and sex regarding the passenger, and configuration such as the arrangement and number of passengers.
  • the behavior detecting unit 105 detects the behavior of the passenger (step S204).
  • the passenger's behavior is information including the physical condition of the passenger such as sleepiness, fatigue, and physical condition.
  • note that the acquisition of the passenger information (step S203) and the detection of the passenger's behavior (step S204) may be performed in either order, or only one of the two steps may be performed.
  • the content determination unit 103 determines the content to be output based on at least one of the passenger information acquired in step S203 and the passenger behavior detected in step S204 (step S205). Then, the content output unit 104 outputs the content determined in step S205 (step S206), and the series of processes ends.
  • in the above, the apparatus waits for an instruction to provide content (step S201), and when there is an instruction (step S201: Yes), it identifies the passenger (step S202), acquires the passenger information (step S203), and detects the behavior (step S204). However, steps S202 to S204 may be performed before the instruction to provide content. For example, the passenger may be identified (step S202), the passenger information acquired (step S203), and the behavior detected (step S204) at the time of boarding, and the apparatus may then wait for an instruction to provide content (step S201: Yes) before determining the content (step S205).
  • as described above, content can be output based on the passenger information and the behavior of the passenger even if the user does not set the content by his or her own input operation. Therefore, content suitable for the passenger can be output efficiently.
  • Example 1 of the present invention will be described.
  • a navigation device mounted on a moving body such as a vehicle (including a four-wheeled vehicle and a two-wheeled vehicle) will be described.
  • FIG. 3 is a block diagram of an example of a hardware configuration of the navigation device according to the first embodiment.
  • a navigation device 300 is mounted on a mobile body such as a vehicle, and includes a navigation control unit 301, a user operation unit 302, a display unit 303, a position acquisition unit 304, a recording medium 305, a recording medium decoding unit 306, an audio output unit 307, a communication unit 308, a route search unit 309, a route guidance unit 310, an audio generation unit 311, a speaker 312, a passenger photographing unit 313, and an audio processing unit 314.
  • the navigation control unit 301 controls the entire navigation device 300.
  • the navigation control unit 301 can be realized, for example, by a microcomputer composed of a CPU (Central Processing Unit) that executes predetermined arithmetic processing, a ROM (Read Only Memory) that stores various control programs, and a RAM (Random Access Memory) that functions as a work area for the CPU.
  • during route guidance, the navigation control unit 301 calculates which position on the map the mobile body is traveling at, based on the information on the current position acquired by the position acquisition unit 304 and the map information obtained from the recording medium 305 via the recording medium decoding unit 306, and outputs the calculation result to the display unit 303.
  • the navigation control unit 301 performs route guidance.
  • the navigation control unit 301 exchanges information related to route guidance with the route search unit 309, the route guidance unit 310, and the voice generation unit 311, and outputs the obtained information to the display unit 303 and the audio output unit 307.
  • the navigation control unit 301 generates passenger identification information and behavior information, which will be described later, based on the passenger image or behavior of the passenger photographed by the passenger photographing unit 313. Then, in accordance with the content reproduction instruction input by the user operating the user operation unit 302, content reproduction such as audio and video is controlled based on the identification information and behavior information of the passenger.
  • the content playback control is performed by, for example, determining the content to be played back from music or video recorded on the recording medium 305, or radio or television broadcasts received by the communication unit 308, and outputting it to the display unit 303 or the audio processing unit 314.
  • the user operation unit 302 acquires information input by the user by operating operation means such as a remote control, a switch, and a touch panel, and outputs the acquired information to the navigation control unit 301.
  • Display unit 303 includes, for example, a CRT (Cathode Ray Tube), a TFT liquid crystal display, an organic EL display, a plasma display, and the like.
  • the display unit 303 can be configured by, for example, a video I/F (interface) and a video display device connected to the video I/F.
  • the video I/F is composed of, for example, a graphics controller that controls the entire display device, a buffer memory such as a VRAM (Video RAM) that temporarily stores image information that can be displayed immediately, and a control IC that controls display on the display device based on the image information output from the graphics controller.
  • the display unit 303 displays traffic information, map information, information on route guidance, video content output from the navigation control unit 301, and other various types of information.
  • the position acquisition unit 304 includes a GPS receiver and various sensors such as a vehicle speed sensor, an angular velocity sensor, and an acceleration sensor, and acquires information on the current position of the mobile body (the current position of the navigation device 300).
  • the GPS receiver receives radio waves from GPS (Global Positioning System) satellites and determines the geometric position relative to the GPS satellites.
  • the GPS receiver consists of an antenna for receiving radio waves from the GPS satellites, a tuner that demodulates the received radio waves, and an arithmetic circuit that calculates the current position based on the demodulated information.
  • the recording medium 305 can be realized by, for example, an HD (Hard Disk), a DVD (Digital Versatile Disk), a CD (Compact Disk), or a memory card. Note that the recording medium 305 may accept writing of information by the recording medium decoding unit 306 and record the written information in a nonvolatile manner.
  • map information used for route search and route guidance is recorded in the recording medium 305.
  • the map information recorded in the recording medium 305 includes background data representing features such as buildings, rivers, and the ground surface, and road shape data representing the shape of roads, and is drawn in 2D or 3D on the display screen.
  • when the navigation device 300 is guiding a route, the map information read from the recording medium 305 by the recording medium decoding unit 306 and a mark indicating the position of the mobile body acquired by the position acquisition unit 304 are displayed on the display unit 303.
  • the recording medium 305 also records registered identification information for identifying the passenger, specific behavior information for determining the behavior of the passenger, and content such as video and music.
  • the registered identification information includes, for example, information obtained by extracting feature points of a passenger image photographed by a camera or the like, such as a face pattern, an eye iris, fingerprint data, or voice data.
  • the specific behavior information includes, for example, information obtained by extracting features related to the specific behavior state such as drowsiness and fatigue, such as eyelid movement, volume level, and heart rate.
  • the content recorded in the recording medium 305 is recorded in association with the passenger information of the passenger identified based on the registered identification information, or with the specific behavior state of the passenger, and can be read out according to the result of the identification of the passenger and the determination of the behavior in the navigation control unit 301.
  • the passenger information is, for example, information including characteristics such as preference, age, and sex, and the configuration such as the arrangement of passengers and the number of passengers.
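One way to picture this association is a lookup table keyed by passenger attributes and behavior states. The sketch below is a minimal assumption-laden illustration; the keys, titles, and on-medium format are invented, since the text does not specify them.

```python
# Content on the recording medium, keyed to passenger information or to a
# specific behavior state (keys and titles are illustrative assumptions).
CONTENT_INDEX = {
    ("preference", "jpop"):  ["jpop_album_1", "jpop_album_2"],
    ("age_group", "child"):  ["kids_program"],
    ("behavior",  "sleepy"): ["uptempo_song"],
}

def read_content(kind: str, value: str) -> list:
    # Read out the content matching the identification / behavior result;
    # an empty list means nothing is associated with this key.
    return CONTENT_INDEX.get((kind, value), [])

print(read_content("behavior", "sleepy"))  # ['uptempo_song']
```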
  • map information, content, and the like are recorded on the recording medium 305.
  • map information, content, and the like may instead be recorded in a server outside the navigation device 300. In this case, the navigation device 300 acquires the map information and content from the server via the network through the communication unit 308, for example, and stores the acquired information in the RAM or the like.
  • the recording medium decoding unit 306 controls reading/writing of information on the recording medium 305.
  • the recording medium decoding unit 306 is, for example, an HDD (Hard Disk Drive).
  • the audio output unit 307 reproduces sound such as a guide sound by controlling output to the connected speaker 312. There may be one speaker 312 or a plurality of speakers.
  • the audio output unit 307 can be composed of, for example, a D/A converter that performs D/A conversion of digital audio information, an amplifier that amplifies the analog audio signal output from the D/A converter, and an A/D converter that performs A/D conversion of analog audio information.
  • the communication unit 308 obtains various types of information from the outside.
  • the communication unit 308 is, for example, an FM multiplex tuner, a VICS (registered trademark)/beacon receiver, a wireless communication device, or another communication device, and communicates with other devices via a communication medium such as a mobile phone, PHS, communication card, or wireless LAN.
  • the communication unit 308 may also be a device that can receive radio broadcast radio waves, television broadcast radio waves, or satellite broadcasts.
  • information acquired by the communication unit 308 includes traffic information such as congestion and traffic regulations distributed from road traffic information communication system centers, traffic information collected by operators in their own way, and other public data and content on the Internet.
  • the communication unit 308 may request traffic information or content from a server storing traffic information and content nationwide via the network and obtain the requested information.
  • it may be configured to receive video signals or audio signals from radio broadcasts, television broadcasts, or satellite broadcasts.
  • the route search unit 309 searches for the optimal route from the current position to the destination using the map information acquired from the recording medium 305 via the recording medium decoding unit 306, the traffic information acquired via the communication unit 308, and the like.
  • the optimal route is the route that best matches the user's request.
  • the route guidance unit 310 generates route guidance information for guiding the user to the destination based on the optimal route information searched by the route search unit 309, the position information of the mobile body acquired by the position acquisition unit 304, and the map information obtained from the recording medium 305 via the recording medium decoding unit 306.
  • the route guidance information generated at this time may be information that considers the traffic jam information received by the communication unit 308.
  • the route guidance information generated by the route guidance unit 310 is output to the display unit 303 via the navigation control unit 301.
  • the sound generation unit 311 generates information of various sounds such as a guide sound. That is, based on the route guidance information generated by the route guidance unit 310, it sets a virtual sound source corresponding to the guidance point, generates voice guidance information, and outputs it to the audio output unit 307 via the navigation control unit 301.
  • the passenger photographing unit 313 photographs a passenger.
  • the passenger may be photographed using either a moving image or a still image.
  • the passenger image and the behavior of the passenger are photographed and output to the navigation control unit 301.
  • the audio processing unit 314 reproduces audio such as music output from the navigation control unit 301 by controlling output to the connected speaker 312.
  • the sound processing unit 314 may have substantially the same configuration as the sound output unit 307.
  • the passenger identification unit 101, the passenger information acquisition unit 102, and the behavior detection unit 105, which are functional components of the content providing apparatus 100 according to the first embodiment, realize their functions by the navigation control unit 301 and the passenger photographing unit 313; the content determination unit 103, by the navigation control unit 301; and the content output unit 104, by the display unit 303 and the speaker 312.
  • FIG. 4 is an explanatory diagram of an example of the inside of the vehicle on which the navigation device according to the first embodiment is mounted.
  • the interior of the vehicle has a driver seat 411, a passenger seat 412, and a rear seat 413. Around the driver seat 411 and the passenger seat 412, a display device (display unit 303) 421a, an acoustic device (speaker 312) 422, and an information reproducing device 426a are provided.
  • the passenger seat 412 is provided with a display device 421b and an information reproducing device 426b for the passengers of the rear seat 413, and an acoustic device (not shown) is provided behind the rear seat 413.
  • each information reproducing device 426 (426a, 426b) is provided with a photographing device (passenger photographing unit 313) 423, which can photograph a passenger.
  • Each information reproducing device 426 (426a, 426b) may have a structure that can be attached to and detached from the vehicle.
  • FIG. 5 is a flowchart of the process using the passenger information in the navigation device according to the first embodiment.
  • the navigation apparatus 300 first determines whether or not a content reproduction instruction has been given (step S501).
  • the content reproduction instruction may be configured, for example, by a passenger operating the user operation unit 302.
  • in step S501, the apparatus waits for a content reproduction instruction, and if there is an instruction (step S501: Yes), the passenger photographing unit 313 photographs the passenger image (step S502). The passenger image is shot, for example, by taking a still image of the passenger's face.
  • the navigation control unit 301 generates passenger identification information from the passenger image taken in step S502 (step S503).
  • the identification information includes, for example, information obtained by extracting the feature points of the passenger's face, and is collated with the registered identification information recorded on the recording medium 305.
  • next, the navigation control unit 301 collates the registered identification information pre-registered in the recording medium 305 with the identification information generated in step S503, and determines whether or not the identification information matches (step S504). If the identification information matches (step S504: Yes), the navigation control unit 301 reads from the recording medium 305 the preference information of the passenger whose registered identification information matches the identification information (step S505).
  • the preference information is, for example, information related to the passenger's content preferences: for music, information about genres such as Japanese music and Western music, and about singers; for video, information about genres such as animation.
  • the navigation control unit 301 determines the content to be played based on the preference information read in step S505 (step S506), and outputs it to the display unit 303 or the audio processing unit 314.
  • the audio processing unit 314 performs audio processing and outputs the content to the speaker 312.
  • the display unit 303 or the speaker 312 reproduces the content (step S507), and the series of processing ends.
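Steps S503 to S506 amount to matching extracted feature points against the registered identification information and then looking up the matched passenger's preference information. The sketch below is a hedged illustration: the feature vectors, Euclidean distance, and threshold are assumptions, since the patent does not specify the matching algorithm.

```python
import math

# Registered identification information and preference information
# (names, feature vectors, and threshold are invented for illustration).
REGISTERED = {
    "passenger_a": {"features": (0.1, 0.8, 0.3), "preferences": ["western_music"]},
    "passenger_b": {"features": (0.7, 0.2, 0.5), "preferences": ["anime"]},
}

def match_passenger(features, threshold=0.2):
    # Step S504: collate the generated identification information with the
    # registered identification information; keep the closest candidate.
    best, best_dist = None, float("inf")
    for name, record in REGISTERED.items():
        d = math.dist(features, record["features"])
        if d < best_dist:
            best, best_dist = name, d
    # No candidate within the threshold corresponds to step S504: No.
    return best if best_dist <= threshold else None

name = match_passenger((0.12, 0.78, 0.31))
prefs = REGISTERED[name]["preferences"] if name else None  # step S505
print(name, prefs)  # passenger_a ['western_music']
```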
  • if the identification information does not match in step S504 (step S504: No), it is determined whether or not to register the passenger (step S508).
  • for the passenger registration, for example, a message indicating that passenger registration is required may be displayed on the display unit 303 to prompt the passenger to decide whether or not to register.
  • if the passenger is not to be registered in step S508 (step S508: No), the navigation control unit 301 outputs a content selection request to the display unit 303 (step S510) and accepts the selection of content (step S511). Then, the navigation control unit 301 outputs the content to the display unit 303 or the audio processing unit 314 based on the selected content. The audio processing unit 314 performs audio processing and outputs the content to the speaker 312. The display unit 303 or the speaker 312 reproduces the content (step S507), and the series of processing ends.
  • if the passenger is to be registered in step S508 (step S508: Yes), a message prompting the passenger to register is displayed on the display unit 303, and the registered identification information and preference information of the passenger are registered (step S509).
  • Registration of registration identification information of a passenger is performed by, for example, extracting feature points from a passenger image photographed by the passenger photographing unit 313.
  • the preference information may be registered by the user operating the user operation unit 302. Then, the process returns to step S504 and the same processing is repeated.
  • in the above, the passenger image is photographed (step S502) after waiting for the content reproduction instruction (step S501: Yes), but the passenger image may be photographed (step S502) before the content reproduction instruction is given. For example, the passenger may be photographed (step S502) at boarding, when the vehicle engine is started, when the passenger performs an operation, or at predetermined intervals during travel, and the apparatus may then wait for the content reproduction instruction.
  • in the above, the passenger image is photographed and identification information is generated based on information obtained by extracting feature points of the passenger's face; however, instead of identification information for individual passengers, the number and composition of the passengers may be generated as identification information. Specifically, for example, the number and composition of passengers can be identified: when many people are riding together, content such as live music may be played, and for a family with children, content such as children's programs may be played.
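The group-based selection just described can be pictured as a small rule table over the number and composition of passengers. The thresholds and content labels below are assumptions for illustration, not part of the patent.

```python
def content_for_group(passenger_count: int, has_children: bool) -> str:
    # Choose content from the number and composition of passengers
    # (rules invented to illustrate the paragraph above).
    if has_children:
        return "children_program"
    if passenger_count >= 4:
        return "live_music"
    return "default_playlist"

print(content_for_group(5, False))  # live_music
```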
  • in step S505, the preference information is read from the recording medium 305 as the passenger information of the passenger whose identification information matches, but the age and gender of the passenger may be read instead, and content suitable for that age and gender may be played.
  • for example, for child passengers, content such as children's programs may be played, and for female passengers, content such as cooking programs may be played.
  • the viewing history of content may also be recorded, and content that was viewed halfway during the last boarding may be replayed from where it left off.
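Resuming half-viewed content from the previous boarding only requires a per-content playback position in the viewing history. A minimal sketch follows; the store layout and field names are assumptions, since the text does not specify how the history is recorded.

```python
# Viewing history recorded at the end of the previous boarding
# (store layout and field names are illustrative assumptions).
VIEWING_HISTORY = {
    "movie_42": {"position_sec": 1310, "finished": False},
    "album_7":  {"position_sec": 0,    "finished": True},
}

def resume_position(content_id: str) -> int:
    entry = VIEWING_HISTORY.get(content_id)
    if entry and not entry["finished"]:
        return entry["position_sec"]  # continue from where viewing stopped
    return 0                          # otherwise play from the beginning

print(resume_position("movie_42"))  # 1310
```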
  • FIG. 6 is a flowchart of the contents of the process using the behavior information in the navigation device according to the first embodiment.
  • the navigation apparatus 300 determines whether or not a content reproduction instruction has been given (step S601).
  • the content reproduction instruction may be, for example, a configuration in which the passenger operates the user operation unit 302.
  • in step S601, the apparatus waits for a content reproduction instruction, and if there is an instruction (step S601: Yes), the passenger photographing unit 313 photographs the behavior of the passenger (step S602). For example, the movement of the passenger's eyeballs may be photographed.
  • the navigation control unit 301 generates the passenger's behavior information from the movement of the passenger's eyeballs photographed in step S602 (step S603).
  • the behavior information includes, for example, information obtained by extracting feature points of the passenger's eye movements, and is collated with specific behavior information, recorded in advance in the recording medium 305, that includes features of eye movements indicating states such as sleepiness and fatigue.
  • the navigation control unit 301 collates the specific behavior information registered in advance in the recording medium 305 with the behavior information generated in step S603! /, Then, it is determined whether or not the passenger is in a specific behavior state (step S604). If it is in the specific motion state (step S604: Yes), the navigation control unit 301 determines the content to be played based on the specific motion state (step S605). Specifically, for example, when the passenger shows sleepy behavior, if the passenger is a driver, the content to be played is relatively noisy and the content is determined. It may be configured to determine the content of the song that has arrived.
  • the navigation control unit 301 reads the content determined in step S 605 from the recording medium 305 and outputs it to the display unit 303 or the audio processing unit 314. Then, the audio processing unit 314 performs audio processing and outputs the content to the speaker 312. The display unit 303 or the speaker 312 reproduces the content (step S606), and the series of processing ends.
  • step S604 if the passenger is not in the specific behavior state (step S604: No), the process returns to step S602 and the same process is repeated. If the occupant is not in the specific behavior state, the occupant may be notified to that effect and be prompted to select content. In addition, it is possible to set the content in the case of not being in the specific behavior state in advance and play it back.
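The matching of steps S602–S606 can be sketched as follows. The feature names (`blink_rate`, `eyelid_droop`, `gaze_wander`) and their thresholds are assumptions made for this example; the embodiment leaves the concrete feature extraction and matching method open.

```python
# Hedged sketch of steps S602-S606: match observed eye-movement
# features against pre-registered "specific behavior" templates and
# choose content for the detected state. Feature names and thresholds
# are invented; the embodiment does not specify them.

SPECIFIC_BEHAVIORS = {
    # state: minimum feature values indicating it (illustrative)
    "sleepy":   {"blink_rate": 0.5, "eyelid_droop": 0.6},
    "fatigued": {"blink_rate": 0.4, "gaze_wander": 0.7},
}

CONTENT_FOR_STATE = {
    "sleepy":   "up-tempo music",      # lively songs for a drowsy driver
    "fatigued": "relaxing playlist",
}

def detect_state(features):
    """Return the first registered state whose every template
    feature is met or exceeded, else None (step S604)."""
    for state, template in SPECIFIC_BEHAVIORS.items():
        if all(features.get(k, 0.0) >= v for k, v in template.items()):
            return state
    return None

def content_for(features):
    """Step S605: map the detected state to content, or None."""
    state = detect_state(features)
    return CONTENT_FOR_STATE.get(state) if state else None

print(content_for({"blink_rate": 0.8, "eyelid_droop": 0.9}))  # up-tempo music
print(content_for({"blink_rate": 0.1}))                       # None
```

Returning `None` when no specific behavior state is detected corresponds to the step S604: No branch, where the apparatus falls back to repeating the observation or prompting the passenger.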
• In the above, the passenger's behavior is photographed (step S602) after waiting for a content playback instruction in step S601, but the passenger's behavior may instead be photographed (step S602) before the content playback instruction is issued. For example, the passenger's behavior may be photographed at boarding, when the engine of the vehicle is started, when the passenger operates the device, or at predetermined intervals during travel (step S602), and the apparatus may then wait for a content playback instruction.
• In step S602, the behavior information is generated by photographing the movement of the passenger's eyeballs, but the behavior information may also be generated from the opening and closing of windows, the movement of the passenger's whole body, the sound volume in the vehicle, and so on. For example, when a passenger opens a window, this may be behavior indicating that the passenger feels hot and uncomfortable, and behavior information concerning such behavior may be generated based on whole-body movement and sound volume.
• The content output in the first embodiment is performed via one or more display units 303 or speakers 312, and content reproduction may be controlled for each of them. For example, a display unit 303 may be provided for each seat of the vehicle, and content suitable for the passenger of each seat may be reproduced.
• In the first embodiment, the processing using the passenger information and the processing using the behavior information have been described as separate functions with reference to Figs. 5 and 6, but they may be combined into a single configuration.
• In the first embodiment, the passenger is photographed by the passenger photographing unit 313, such as a camera, and the passenger's identification information or behavior information is generated; however, identification information that identifies the passenger, or passenger behavior information, may instead be generated from information acquired by other sensors.
• For example, the identification information or the behavior information may be generated using a seating sensor that detects the load distribution or the total load on a seat on which a passenger is seated; information on the number and physique of the passengers can be obtained from the seating sensor.
• One or more fingerprint sensors may also be provided at predetermined positions in the vehicle. A fingerprint sensor can acquire a passenger's fingerprint information and identify the passenger.
• Further, a voice sensor such as a microphone may be provided in the vehicle. Voice information such as the passenger's volume, sound quality, and pitch can be acquired by the voice sensor, so that the passenger can be identified and the number of passengers, gender, sleepiness, and so on can be determined.
• A human body sensor for measuring the pulse or the like may also be used. For example, by using information such as the pulse, the physical condition of the passenger can be grasped and the passenger's behavior can be determined.
• A configuration may also be adopted in which personal information is acquired from a mobile phone owned by the passenger.
• In addition, information such as the number of passengers may be acquired by detecting the attachment and detachment of seat belts or a child seat.
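One way the sensor inputs listed above (seat load, seat-belt latches, voices) could be fused into a passenger-count estimate is sketched below. The occupancy threshold and the fusion rule are assumptions made for the example; the embodiment only says such sensors may be used, not how to combine them.

```python
# Illustrative fusion of seat-load, seat-belt, and voice signals into a
# passenger-count estimate. The 10 kg occupancy threshold and the
# max/min fusion rule are assumptions for this sketch.

def estimate_passenger_count(seat_loads_kg, belts_latched, voices_detected):
    """Count occupied seats by load, cross-check against belt latches,
    and let distinct voices raise the estimate (e.g. an unbelted
    rear-seat passenger)."""
    by_load = sum(1 for load in seat_loads_kg if load > 10.0)  # >10 kg = occupied
    by_belt = sum(1 for latched in belts_latched if latched)
    # A belt can be latched with no occupant (e.g. a bag on the seat),
    # so cap the belt count near the load count instead of trusting it.
    return max(by_load, min(by_belt, by_load + 1), voices_detected)

print(estimate_passenger_count([62.0, 55.0, 4.0, 0.0],
                               [True, True, False, False],
                               voices_detected=2))  # → 2
```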
• As described above, according to the first embodiment, content can be output based on the passenger information and behavior of the passenger even if the user does not set the content by his or her own input operation. Therefore, content suitable for the passenger can be output efficiently.
• In the first embodiment, the navigation device 300 photographs the passenger and generates the passenger's identification information and behavior information, and the content to be played can be determined by comparing the identification information and behavior information with the registered identification information and the specific behavior information.
• Further, since content can be played back based on the passenger's viewing history, it is possible, for example, to avoid repeating content that the passenger has recently viewed, or to resume content that the passenger watched partway through, so that complete content can be provided to passengers. Therefore, the provision of content can be optimized even if the passenger does not set the content himself or herself.
  • FIG. 7 is a block diagram of an example of a functional configuration of the content providing apparatus according to the second embodiment.
• the content providing apparatus 700 includes a passenger identification unit 701, a passenger information acquisition unit 702, a spot information setting unit 703, a spot information output unit 704, and a behavior detection unit 705.
  • the passenger identification unit 701 identifies the passenger.
• The identification of the passenger is, for example, the identification of each passenger's relationship (owner, relative, friend, other person, etc.) with the owner of the moving body.
  • the passenger information acquisition unit 702 acquires passenger information related to the passenger specified by the passenger identification unit 701.
• Passenger information is, for example, information including characteristics of the passenger such as preferences, age, and gender, as well as the arrangement and number of passengers.
  • the behavior detection unit 705 detects the behavior of the passenger.
• The passenger's behavior is, for example, information including the physical state of the passenger such as sleepiness, tiredness, and physical condition, and may include information such as the position and number of passengers exhibiting the predetermined behavior.
• The point information setting unit 703 sets point information based on at least one of the passenger information acquired by the passenger information acquisition unit 702 and the passenger's behavior detected by the behavior detection unit 705.
• The point information is, for example, information including a destination point and a stop-off point in route guidance, and is acquired by a communication unit (not shown) or set with reference to a recording medium (not shown).
• Alternatively, the point information may be set by making a request for use, such as a reservation, to the destination or stop-off facility via the communication unit (not shown), and setting the point where the request is accepted.
  • the point information output unit 704 outputs the point information set by the point information setting unit 703.
• The point information is output, for example, by a display device mounted on the moving body or by an audio device such as a speaker.
• The point information may also be output by presenting a plurality of candidates and allowing the user to select a destination point or a stop-off point.
  • FIG. 8 is a flowchart showing the contents of processing of the content providing apparatus according to the second embodiment.
  • the content providing apparatus 700 determines whether or not a route guidance instruction has been given (step S801).
• The route guidance may be instructed, for example, by the passenger from an operation unit (not shown).
• In step S801, after waiting for a route guidance instruction (step S801: Yes), the passenger identification unit 701 identifies the passenger (step S802).
  • the identification of the passenger is, for example, the identification of the relationship between the owner of the mobile object and the passenger.
  • the occupant information acquisition unit 702 acquires occupant information related to the occupant identified in step S802 (step S803).
• Passenger information is, for example, information including characteristics of the passenger such as preferences, age, and gender, as well as the arrangement and number of passengers.
  • the behavior detection unit 705 detects the behavior of the passenger (step S804).
  • the passenger's behavior is information including the physical condition of the passenger such as sleepiness, fatigue, and physical condition.
• In the flowchart of FIG. 8, the passenger information is acquired (step S803) and then the passenger's behavior is detected (step S804); however, either step may be performed first, detecting the passenger's behavior (step S804) before acquiring the passenger information (step S803).
• The point information setting unit 703 sets the point information based on at least one of the passenger information acquired in step S803 and the passenger behavior detected in step S804 (step S805).
  • the spot information output unit 704 outputs the spot information set in step S805 (step S806), and the series of processing ends.
• In FIG. 8, the acquisition of the passenger information and the detection of the behavior are performed after a route guidance instruction is given, but the trigger is not limited to a route guidance instruction; these processes may be performed simultaneously with the start of movement of the moving body, or at predetermined intervals while the moving body is moving.
• As described above, according to the second embodiment, point information can be output based on the passenger information and behavior of the passenger even if the user does not set the point information by his or her own input operation. Therefore, point information suitable for the passenger can be output efficiently and accurately.
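The second embodiment's flow (identify the passenger, then set point information from either the passenger information or a detected behavior state) can be sketched as follows. The lookup tables, point names, and the priority rule are invented placeholders, not part of the embodiment.

```python
# Minimal sketch of steps S802-S806: set point information from the
# identified passenger's stored preference or from a detected behavior
# state. All table contents and names are invented for illustration.

DESTINATIONS_BY_PREFERENCE = {
    "italian food": "Trattoria (example POI)",
    "amusement":    "Amusement park (example POI)",
}
STOPOVER_BY_STATE = {
    "sleepy": "Rest area (example POI)",
}

def set_point(passenger_info, behavior_state):
    """Step S805 sketch: a detected behavior state takes priority
    (safety); otherwise fall back to the passenger's preference."""
    if behavior_state in STOPOVER_BY_STATE:
        return ("stopover", STOPOVER_BY_STATE[behavior_state])
    pref = passenger_info.get("preference")
    if pref in DESTINATIONS_BY_PREFERENCE:
        return ("destination", DESTINATIONS_BY_PREFERENCE[pref])
    return ("none", None)  # nothing matched; ask the user to select

print(set_point({"preference": "italian food"}, None))
print(set_point({"preference": "italian food"}, "sleepy"))
```

Letting the behavior state override the preference is one design choice consistent with the text, which sets point information from "at least one" of the two inputs.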
  • Example 2 of the present invention will be described below.
• In Example 2, the content providing device of the present invention is implemented by the navigation device 300 shown in FIG. 3.
  • the interior of the vehicle equipped with the navigation device 300 is almost the same as that in FIG.
  • the map information recorded on the recording medium 305 includes destination point information and stop-off point information.
• The destination point information and the stop-off point information are associated, for example, with the passenger information determined from the passenger's registered identification information, and with the specific behavior state determined from the passenger's specific behavior information. The destination point information and the stop-off point information are then read according to the results of the passenger identification and behavior determination in the navigation control unit 301.
• Passenger information is, for example, information including characteristics such as preferences, age, and gender, as well as the arrangement and number of passengers.
  • the specific behavior state is, for example, the physical state of the passenger such as sleepiness, fatigue, or physical condition.
  • the destination point information and the stop point information include facility information such as restaurants and amusement facilities.
• The facility information includes, for example, discount information according to the number and ages of the passengers, recommendation information, and vacancy information of the facility, and can also be acquired from the facility by the communication unit 308 described later.
  • the current position acquired by the position acquisition unit 304 or the departure point designated by the user from the user operation unit 302 is set as the departure point of the route searched by the route search unit 309.
• Alternatively, the destination point may be set from the destination point information associated with the above-described passenger information of the passenger or the specific behavior state of the passenger.
  • a stop point may be set on the route searched by the route search unit 309.
• The stop-off point may be set, for example, to a point specified by the user from the user operation unit 302, or based on the stop-off point information associated with the above-described passenger information of the passenger or the specific behavior state of the passenger.
• In setting the destination point or the stop-off point, a request for use of the facility may be made via the communication unit 308 as necessary, and the point where the request is accepted may be set. Requests regarding the use of a facility include, for example, confirming whether the facility can accommodate the passengers, inquiring about vacancies, reserving a restaurant that offers a discount according to the number of passengers, or making a reservation at an amusement facility so that waiting time can be reduced; a facility whose availability has been confirmed or reserved may then be set as the destination or stop-off point.
• The passenger identification unit 701, the passenger information acquisition unit 702, and the behavior detection unit 705, which are functional components of the content providing apparatus 700 according to the second embodiment, realize their functions by the navigation control unit 301 and the passenger photographing unit 313; the point information setting unit 703 realizes its function by the route search unit 309; and the point information output unit 704 realizes its function by the display unit 303 and the speaker 312.
  • FIG. 9 is a flowchart of the process using the passenger information in the navigation device according to the second embodiment.
  • a process for setting a destination point using passenger information will be described.
  • the navigation apparatus 300 first determines whether or not a route guidance request has been made (step S901).
• The route guidance request may be made, for example, by the passenger operating the user operation unit 302.
• In step S901, after waiting for a route guidance request, when there is a request (step S901: Yes), the passenger photographing unit 313 photographs a passenger image (step S902).
• The passenger image is taken, for example, as a still image of the passenger's face.
  • the navigation control unit 301 generates passenger identification information from the passenger image captured in step S902 (step S903).
  • the identification information includes, for example, information obtained by extracting the feature points of the passenger's face, and is collated with the registered identification information recorded on the recording medium 305.
• Next, the navigation control unit 301 collates the registered identification information pre-registered on the recording medium 305 with the identification information generated in step S903, and determines whether or not the identification information matches (step S904). If the identification information matches (step S904: Yes), the navigation control unit 301 reads from the recording medium 305 the destination point information associated with the passenger information of the passenger whose registered identification information matches the identification information (step S905).
• The destination point information associated with the passenger information relates, for example, to the passenger's preferences; for example, destination point information such as a restaurant preferred by the passenger may be read.
• Alternatively, information indicating that the passenger has stopped at a destination point in the past may be managed in association with the destination point information, and such destination point information may be excluded so that other destination point information is read.
• The determination of having "stopped in the past" can be made, for example, when that destination point was set for the vehicle in the past and the vehicle arrived at it.
• Also, the passenger may set his or her own evaluation for destination points visited in the past, and destination point information with a high evaluation may be read. The evaluation may be set, for example, by having the passenger input an original score for each destination point as the passenger perceived it, and treating destination points with a high score as highly evaluated destination points.
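The history- and evaluation-based reading described above can be illustrated with a small sketch. The record fields (`visited`, `score`) are assumptions standing in for whatever the recording medium 305 actually stores.

```python
# Sketch of the two reading variants described above: skip destination
# points already visited, or prefer points the passenger has scored
# highly. Field names are invented for the example.

def pick_destination(candidates, prefer_rated=False):
    """candidates: list of dicts with 'name', 'visited' (bool), and
    'score' (the passenger's own rating, 0 if unset)."""
    if prefer_rated:
        rated = [c for c in candidates if c["score"] > 0]
        if rated:
            # Highly evaluated destination points are read first.
            return max(rated, key=lambda c: c["score"])["name"]
    # Otherwise exclude points visited in the past.
    fresh = [c for c in candidates if not c["visited"]]
    return fresh[0]["name"] if fresh else None

places = [
    {"name": "Cafe A", "visited": True,  "score": 5},
    {"name": "Cafe B", "visited": False, "score": 0},
    {"name": "Cafe C", "visited": True,  "score": 2},
]
print(pick_destination(places))                     # Cafe B (not yet visited)
print(pick_destination(places, prefer_rated=True))  # Cafe A (highest own score)
```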
  • the route search unit 309 sets a destination point based on the destination point information read in step S905 (step S906).
• Then, the route guidance unit 310 generates route guidance information for guiding the user to the destination point set in step S906 and outputs it to the navigation control unit 301. The navigation control unit 301 then controls the navigation device 300 based on the route guidance information, guides the route (step S907), and the series of processes ends.
• If the identification information does not match in step S904 (step S904: No), it is determined whether or not to register the passenger (step S908).
• For the passenger registration, for example, a message indicating that registration of the passenger is required may be displayed on the display unit 303, prompting the passenger to decide whether or not to register.
• If the passenger is not to be registered in step S908 (step S908: No), the navigation control unit 301 outputs a destination point selection request to the display unit 303 (step S910), and the selection of a destination point is accepted from the passenger (step S911).
  • the route search unit 309 sets a destination point based on the selection of the destination point in step S911 (step S906).
  • the route guidance unit 310 generates route guidance information for guiding the user to the destination set in step S906 and outputs the route guidance information to the navigation control unit 301.
  • the navigation control unit 301 controls the navigation device 300 based on the route guidance information, guides the route (step S907), and ends the series of processes.
• If the passenger is to be registered in step S908 (step S908: Yes), the registered identification information of the passenger is registered, for example, by displaying a prompt on the display unit 303 (step S909). The registered identification information of the passenger is registered, for example, by extracting feature points from a passenger image photographed by the passenger photographing unit 313. The passenger information may also be registered by the user operating the user operation unit 302. The process then returns to step S904 and the same processing is repeated.
• In FIG. 9, the passenger image is photographed (step S902) after waiting for a route guidance request, when there is a request (step S901: Yes); however, the passenger image may be taken before route guidance is requested (step S902). For example, the passenger image may be taken at boarding, when the engine of the vehicle is started, or when the passenger operates the device (step S902), and the apparatus may then wait for a route guidance request.
• In FIG. 9, the passenger image is taken and the identification information is generated based on information obtained by extracting feature points of the passenger's face; however, instead of identification information of the individual passenger, the number and composition of the passengers may be generated as the identification information. Specifically, for example, the number and composition of passengers may be identified so that riding with many people and riding as a family with children can be distinguished.
• The destination point information associated with the passenger information relates, for example, to the passenger's preferences, but it may also be associated with age, gender, arrangement, number of passengers, history, and so on. More specifically, a restaurant for the elderly if the passenger is an elderly person, a facility that the passenger has visited in the past, a shop that the passenger has rated highly in the past, a discount shop according to the number of passengers, a shop with recommended services, a family-oriented facility for families, or an amusement facility for groups of friends may be associated with the destination point information.
• In setting the destination point in step S906, a request for use of the facility may be made via the communication unit 308 as necessary, and the point where the request is accepted may be set. More specifically, depending on the number of passengers, a reservation may be made at a restaurant or an amusement facility, and that facility may be set as the destination point.
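The reservation-then-set behavior described here can be sketched as follows. `request_use` is a stand-in for the unspecified exchange over the communication unit 308, and the acceptance rule (enough free seats for the party) is an assumption made for the example.

```python
# Hedged sketch of the facility-use request described above: send a
# use request (e.g. a reservation for the party size) and set the
# destination only at the point where the request is accepted.
# `request_use` stands in for the unspecified facility-side protocol.

def request_use(facility, party_size):
    """Stand-in for the communication-unit exchange: the facility
    accepts if it has enough free seats (illustrative rule)."""
    return facility["free_seats"] >= party_size

def set_destination_with_reservation(facilities, party_size):
    for facility in facilities:
        if request_use(facility, party_size):
            return facility["name"]  # the point where the request was accepted
    return None                      # no facility accepted; fall back to the user

facilities = [
    {"name": "Restaurant X", "free_seats": 2},
    {"name": "Restaurant Y", "free_seats": 6},
]
print(set_destination_with_reservation(facilities, party_size=4))  # Restaurant Y
```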
  • FIG. 10 is a flowchart of the contents of the process using the behavior information in the navigation device according to the second embodiment.
• Here, the process of setting a stop-off point using the behavior information will be described.
  • the navigation apparatus 300 determines whether or not there is a request for route guidance (step S1001).
• The route guidance request may be made, for example, by the passenger operating the user operation unit 302.
• In step S1001, the apparatus waits for a route guidance request, and when there is a request (step S1001: Yes), the passenger photographing unit 313 photographs the behavior of the passenger (step S1002). For example, the movement of the passenger's eyeballs may be photographed.
• Next, the navigation control unit 301 generates the passenger's behavior information from the movement of the passenger's eyeballs photographed in step S1002 (step S1003).
• The behavior information includes, for example, information obtained by extracting feature points of the movement of the passenger's eyeballs, and is collated against specific behavior information, including characteristics of eye movements such as sleepiness and fatigue, recorded in advance on the recording medium 305.
• Next, the navigation control unit 301 compares the specific behavior information registered in advance on the recording medium 305 with the behavior information generated in step S1003, and determines whether or not the passenger is in a specific behavior state (step S1004). If the passenger is in a specific behavior state (step S1004: Yes), the navigation control unit 301 reads the stop-off point information associated with that specific behavior state (step S1005). Specifically, for example, when the passenger shows sleepy behavior and the passenger is the driver, a point where the passenger can take a break may be read from the recording medium 305 as the stop-off point information.
• Next, the route search unit 309 sets a stop-off point based on the stop-off point information read in step S1005 (step S1006). The route guidance unit 310 then generates route guidance information for guiding the user to the stop-off point set in step S1006 and outputs it to the navigation control unit 301. The navigation control unit 301 controls the navigation device 300 based on the route guidance information, guides the route (step S1007), and the series of processes ends.
• In step S1004, if the passenger is not in a specific behavior state (step S1004: No), the process returns to step S1002 and the same processing is repeated. If the passenger is not in a specific behavior state, the passenger may be notified of this and prompted to set a stop-off point. Alternatively, when no specific behavior state is detected, a break point may be set in advance as a stop-off point at predetermined intervals and the route guided accordingly.
• In FIG. 10, the behavior of the passenger is photographed (step S1002) after waiting for a route guidance request (step S1001: Yes); however, the passenger's behavior may be photographed before route guidance is requested (step S1002). For example, the passenger's behavior may be photographed at boarding, when the vehicle engine is started, when the passenger operates the device, or at predetermined intervals during travel (step S1002), and the apparatus may then wait for a route guidance request. Alternatively, the behavior of the passengers may be photographed at predetermined intervals so that, when behavior such as fatigue or sleepiness is detected, the passenger can be promptly guided to a resting point.
• In step S1002, the behavior information is generated by photographing the movement of the passenger's eyeballs, but the behavior information may also be generated from the opening and closing of windows, the movement of the passenger's whole body, the sound volume in the vehicle, and so on. For example, when a passenger opens a window, this may be behavior indicating that the passenger feels hot and uncomfortable, and behavior information concerning such behavior may be generated based on whole-body movement and sound volume.
• In setting the stop-off point in step S1006, a request for use of the facility may be made via the communication unit 308 as necessary, and the point where the request is accepted may be set. More specifically, depending on the number of passengers, a reservation may be made at a restaurant for taking a break, and that restaurant may be set as the stop-off point.
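The periodic-monitoring variant mentioned above (photograph behavior at predetermined intervals and guide promptly to a resting point when fatigue or sleepiness is detected) can be sketched as follows; the sample data, state labels, and rest-point name are invented for the example.

```python
# Sketch of interval-based behavior monitoring: sample the driver's
# behavior state at predetermined intervals and, on the first detection
# of a "specific behavior state" such as drowsiness, set a rest point
# as the stop-off point. Names and data are illustrative only.

def monitor(samples, rest_point="Nearest rest area (example)"):
    """samples: behavior states observed at predetermined intervals
    (None = no specific behavior detected). Returns (interval index,
    stop-off point) for the first detection, or None if the trip
    ends without any specific behavior state."""
    for i, state in enumerate(samples):
        if state in ("sleepy", "fatigued"):
            return (i, rest_point)  # guide to a break promptly
    return None

print(monitor([None, None, "sleepy", None]))  # detected at the third sample
```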
• In Example 2, the processing using the passenger information and the processing using the behavior information have been described as separate functions with reference to Figs. 9 and 10, but they may be combined into a single configuration.
• In Example 2, the passenger is photographed by the passenger photographing unit 313, such as a camera, and the passenger's identification information or behavior information is generated; however, identification information that identifies the passenger, or passenger behavior information, may instead be generated from information acquired by other sensors.
• The identification information or the behavior information may be generated using, for example, a seating sensor that detects the load distribution or the total load on a seat on which a passenger is seated.
  • One or more fingerprint sensors may be provided at predetermined positions in the vehicle. The fingerprint sensor can obtain the passenger's fingerprint information and identify the passenger.
• Further, a voice sensor such as a microphone may be provided in the vehicle. Voice information such as the passenger's volume, sound quality, and pitch can be acquired by the voice sensor, so that the passenger can be identified and the number of passengers, gender, sleepiness, and so on can be determined.
• As described above, according to Example 2, point information can be output based on the passenger information and behavior of the passenger even if the user does not set the point information by his or her own input operation. Therefore, point information suitable for the passenger can be output efficiently and accurately.
  • the passenger is photographed to generate the identification information and behavior information of the passenger.
• Since the destination point or stop-off point can be set by comparing the identification information and behavior information with the registered identification information and the specific behavior information, a point optimal for the passenger's situation can be set without the passenger setting it himself or herself.
• Further, since the stop-off point is set in accordance with the passenger's behavior, if the driver is drowsy, guiding the driver to a place where a break can be taken can help relieve drowsiness and lead to safe driving.
• Further, since the destination point or stop-off point can be set based on the history of the passenger's destination points or stop-off points, it is possible, for example, to prevent the passenger from being repeatedly guided to the most recent point; conversely, the passenger can be guided to a point that the passenger has rated highly. Therefore, the point setting can be optimized even if the passenger does not set the destination point or the stop-off point himself or herself.
• The content providing method described in the first and second embodiments can be realized by executing a program prepared in advance on a computer such as a personal computer or a workstation.
• This program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, an MO, or a DVD, and is executed by being read from the recording medium by the computer.
• The program may also be a transmission medium that can be distributed via a network such as the Internet.

Abstract

A content providing device (100) for providing, in a mobile body, contents. An occupant specifying section (101) specifies an occupant of the mobile body, and an occupant information acquisition section (102) acquires information on the occupant (hereinafter referred to as “occupant information”) specified by the occupant specifying section (101). Then a content determination section (103) determines, based on the occupant information acquired by the occupant information acquisition section (102), contents to be outputted, and a content output section (104) outputs the contents determined by the content determination section (103).

Description

Specification
Content providing apparatus, content providing method, content providing program, and computer-readable recording medium
Technical field
[0001] The present invention relates to a content providing apparatus that provides content in a mobile body, a content providing method, a content providing program, and a computer-readable recording medium. However, the present invention is not limited to the above-described content providing apparatus, content providing method, content providing program, and computer-readable recording medium.
Background art
[0002] 従来より、車両などの移動体において、移動体の搭乗者は、様々な種類のコンテン ッを視聴できる。様々な種類のコンテンツは、たとえば、ラジオ放送やテレビ放送、 C D (Compact Disk)や DVD (Digital Versatile Disk)などの記録媒体に記録さ れた音楽や映像などで、搭乗者は、移動体に搭載されたディスプレイなどの表示装 置やスピーカなどの音響装置を介してコンテンツを視 ¾できる。  Conventionally, in a moving body such as a vehicle, a passenger of the moving body can view various types of content. Various types of content are, for example, radio and television broadcasts, music and video recorded on recording media such as CD (Compact Disk) and DVD (Digital Versatile Disk), and passengers are mounted on mobile objects. The content can be viewed through a display device such as a display or an audio device such as a speaker.
[0003] 一方、コンテンツの視聴に関して、各搭乗者における車内環境 (車載機器の位置や 動作状況など)の好みを、各搭乗者のプロファイル情報として、 IC (Integrated Cir cuit)カードを用いた ID (IDentification)カードに記憶させる。そして、 IDカードか ら各搭乗者のプロファイル情報を読み取って、車内環境の調整をおこなう提案がされ ている(たとえば、下記特許文献 1参照。 ) o  [0003] On the other hand, with regard to content viewing, each passenger's preference for the in-vehicle environment (such as the position and operating status of the in-vehicle device) is used as profile information for each passenger, using an IC (Integrated Circuit) card ID ( IDentification) Store in the card. A proposal has been made to adjust the in-vehicle environment by reading the profile information of each passenger from the ID card (see, for example, Patent Document 1 below).
[0004] Conventionally, a navigation apparatus in a mobile body such as a vehicle guides the mobile body to a destination using position information of the mobile body, destination information, and map information. The destination is set based on input from the user, who operates an operation unit such as a remote control or a touch panel to enter information on the name and position of the destination.
[0005] In recent years, a proposal has been made for a configuration in which, when a destination is input, genres are displayed as input items for destination-related information and the user selects the destination by traversing multiple hierarchical levels; the input items selected up to the final decision are learned so that input items accurately matching the user's needs can be displayed (see, for example, Patent Document 2 below).
[0006] Patent Document 1: Japanese Patent Application Laid-Open No. 2002-104105
Patent Document 2: Japanese Patent Application Laid-Open No. 2000-88597
Disclosure of the Invention
Problems to Be Solved by the Invention
[0007] With the technique described in Patent Document 1 above, however, each passenger must create and carry an ID card storing profile information and must take measures against loss or damage of the ID card, which increases the burden on the passenger. Furthermore, because the in-vehicle environment is adjusted using the ID card, the in-vehicle environment cannot be set according to a passenger's preferences unless that passenger carries the ID card.
[0008] With the technique described in Patent Document 2 above, input of the destination by the user is indispensable, and for destinations that have not been learned, the destination must be entered by traversing multiple hierarchical levels. Furthermore, when there are passengers other than the user, the destination must be entered while taking into account the destinations desired by those passengers.
Means for Solving the Problems
[0009] To solve the above problems and achieve the object, a content providing apparatus according to the invention of claim 1 is a content providing apparatus that provides content in a mobile body, comprising: identifying means for identifying a passenger of the mobile body; passenger information acquiring means for acquiring information on the passenger identified by the identifying means (hereinafter, "passenger information"); determining means for determining content to output based on the passenger information acquired by the passenger information acquiring means; and output means for outputting the content determined by the determining means.
[0010] A content providing apparatus according to the invention of claim 8 is a content providing apparatus that provides content in a mobile body, comprising: behavior detecting means for detecting behavior of a passenger; determining means for determining content to output based on the result detected by the behavior detecting means; and output means for outputting the content determined by the determining means.
[0011] A content providing method according to the invention of claim 10 is a content providing method for providing content in a mobile body, including: an identifying step of identifying a passenger of the mobile body; a passenger information acquiring step of acquiring passenger information on the passenger identified in the identifying step; a determining step of determining content to output based on the passenger information acquired in the passenger information acquiring step; and an output step of outputting the content determined in the determining step.
[0012] A content providing method according to the invention of claim 11 is a content providing method for providing content in a mobile body, including: a behavior detecting step of detecting behavior of a passenger; a determining step of determining content to output based on the detection result of the behavior detecting step; and an output step of outputting the content determined in the determining step.
[0013] A content providing program according to the invention of claim 12 causes a computer to execute the content providing method according to claim 10 or 11.
[0014] A computer-readable recording medium according to the invention of claim 13 records the content providing program according to claim 12.
Brief Description of the Drawings
[0015] [FIG. 1] FIG. 1 is a block diagram showing an example of a functional configuration of a content providing apparatus according to a first embodiment.
[FIG. 2] FIG. 2 is a flowchart showing the processing of the content providing apparatus according to the first embodiment.
[FIG. 3] FIG. 3 is a block diagram showing an example of a hardware configuration of a navigation apparatus according to a first example.
[FIG. 4] FIG. 4 is an explanatory diagram showing an example of the interior of a vehicle in which the navigation apparatus according to the first example is mounted.
[FIG. 5] FIG. 5 is a flowchart showing processing using passenger information in the navigation apparatus according to the first example.
[FIG. 6] FIG. 6 is a flowchart showing processing using behavior information in the navigation apparatus according to the first example.
[FIG. 7] FIG. 7 is a block diagram showing an example of a functional configuration of a content providing apparatus according to a second embodiment.
[FIG. 8] FIG. 8 is a flowchart showing the processing of the content providing apparatus according to the second embodiment.
[FIG. 9] FIG. 9 is a flowchart showing processing using passenger information in the navigation apparatus according to a second example.
[FIG. 10] FIG. 10 is a flowchart showing processing using behavior information in the navigation apparatus according to the second example.
Explanation of Reference Numerals
100 Content providing apparatus
101 Passenger identification unit
102 Passenger information acquisition unit
103 Content determination unit
104 Content output unit
105 Behavior detection unit
700 Content providing apparatus
701 Passenger identification unit
702 Passenger information acquisition unit
703 Point information setting unit
704 Point information output unit
705 Behavior detection unit
BEST MODE FOR CARRYING OUT THE INVENTION
[0017] Preferred embodiments of a content providing apparatus, a content providing method, a content providing program, and a computer-readable recording medium according to the present invention will be described in detail below with reference to the accompanying drawings.

[0018] (Embodiment 1)
(Functional configuration of the content providing apparatus)
The functional configuration of the content providing apparatus according to the first embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing an example of the functional configuration of the content providing apparatus according to the first embodiment.
[0019] In FIG. 1, a content providing apparatus 100 includes a passenger identification unit 101, a passenger information acquisition unit 102, a content determination unit 103, a content output unit 104, and a behavior detection unit 105.
[0020] The passenger identification unit 101 identifies passengers. Identification of a passenger is, for example, identification of each passenger's relationship to the owner of the mobile body (the owner, a relative, a friend, a stranger, and so on).
[0021] The passenger information acquisition unit 102 acquires passenger information on the passengers identified by the passenger identification unit 101. Passenger information is, for example, information including characteristics of a passenger such as preferences, age, and gender, and the composition of the passengers such as their seating arrangement and number.
[0022] The behavior detection unit 105 detects the behavior of passengers. Passenger behavior is, for example, information including the physical state of a passenger such as drowsiness, fatigue, and physical condition, and may also include information such as the seating positions and number of passengers exhibiting a given behavior.
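As an illustration of the kind of judgment such a behavior detection unit might make, the following is a minimal sketch of flagging drowsiness from a series of eyelid-openness measurements. The measure and both thresholds are assumptions for illustration only; the description above does not fix any concrete detection algorithm or values.

```python
# Hypothetical sketch: deciding "drowsy" from per-frame eyelid openness,
# in the spirit of the behavior detection unit 105 described above.

def detect_drowsiness(eyelid_openness, closed_threshold=0.3, ratio_threshold=0.5):
    """Return True if the eyes were mostly closed over the sampled frames.

    eyelid_openness: per-frame values in [0, 1], where 1.0 = fully open.
    Both thresholds are hypothetical tuning parameters.
    """
    if not eyelid_openness:
        return False
    closed_frames = sum(1 for v in eyelid_openness if v < closed_threshold)
    return closed_frames / len(eyelid_openness) >= ratio_threshold

# Mostly-open eyes -> not drowsy; mostly-closed eyes -> drowsy.
print(detect_drowsiness([0.9, 0.8, 0.2, 0.9]))  # False
print(detect_drowsiness([0.1, 0.2, 0.1, 0.8]))  # True
```

In practice, such a unit would take its measurements from images captured in the vehicle, but the measurement pipeline is outside the scope of this sketch.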
[0023] The content determination unit 103 determines the content to output based on at least one of the passenger information acquired by the passenger information acquisition unit 102 and the passenger behavior detected by the behavior detection unit 105. The content to output is, for example, music or video recorded in advance on a recording medium (not shown), or a radio or television broadcast received by a communication unit (not shown).
[0024] The content output unit 104 outputs the content determined by the content determination unit 103. The content is output, for example, through a display device such as a display mounted on the mobile body, or through an audio device such as a speaker.
[0025] (Processing of the content providing apparatus)
Next, the processing of the content providing apparatus according to the first embodiment will be described with reference to FIG. 2. FIG. 2 is a flowchart showing the processing of the content providing apparatus according to the first embodiment. In the flowchart of FIG. 2, the content providing apparatus 100 first determines whether an instruction to provide content has been given (step S201). The content providing instruction may, for example, be given by a passenger through an operation unit (not shown).
[0026] In step S201, the apparatus waits for a content providing instruction; when an instruction is given (step S201: Yes), the passenger identification unit 101 identifies the passengers (step S202). Identification of a passenger is, for example, identification of the relationship between the owner of the mobile body and the passenger.
[0027] Next, the passenger information acquisition unit 102 acquires passenger information on the passengers identified in step S202 (step S203). Passenger information is, for example, information including characteristics of a passenger such as preferences, age, and gender, and the composition of the passengers such as their seating arrangement and number.
[0028] The behavior detection unit 105 then detects the behavior of the passengers (step S204). Passenger behavior is, for example, information including the physical state of a passenger such as drowsiness, fatigue, and physical condition. In this flowchart, passenger information is acquired (step S203) and then passenger behavior is detected (step S204); however, the order may be reversed, detecting the behavior (step S204) before acquiring the passenger information (step S203), or a configuration in which either step is omitted may be adopted.
[0029] Next, the content determination unit 103 determines the content to output based on at least one of the passenger information acquired in step S203 and the passenger behavior detected in step S204 (step S205). The content output unit 104 then outputs the content determined in step S205 (step S206), and the series of processing ends.
[0030] In the above flowchart, the apparatus waits for a content providing instruction and, when the instruction is given (step S201: Yes), identifies the passengers (step S202), acquires passenger information (step S203), and detects behavior (step S204). Alternatively, steps S202 to S204 described above may be performed before a content providing instruction is given. For example, the passengers may be identified (step S202), passenger information acquired (step S203), and behavior detected (step S204) in advance at boarding, after which the apparatus waits for a content providing instruction and, when the instruction is given (step S201: Yes), determines the content (step S205).
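The flow of steps S201 through S206 can be sketched as follows. The data structures and the simple genre-matching rule are assumptions made for illustration; the description leaves the concrete decision logic of step S205 open.

```python
# Hypothetical sketch of the flow in FIG. 2: identify passengers (S202),
# acquire passenger information (S203), detect behavior (S204), decide
# the content (S205), and output it (S206, modeled as returning a title).

PROFILES = {  # assumed passenger information store (S203)
    "owner": {"favorite_genre": "rock"},
    "child": {"favorite_genre": "anime songs"},
}

LIBRARY = [  # assumed content library with genre tags
    {"title": "Highway Star", "genre": "rock"},
    {"title": "Lullaby", "genre": "quiet"},
]

def decide_content(passengers, behaviors):
    # S205: behavior takes priority; a drowsy passenger gets quiet content,
    # otherwise the first passenger's preferred genre is used.
    if any(b == "drowsy" for b in behaviors.values()):
        wanted = "quiet"
    else:
        wanted = PROFILES[passengers[0]]["favorite_genre"]
    for item in LIBRARY:
        if item["genre"] == wanted:
            return item["title"]
    return None

def provide_content(passengers, behaviors):
    # S202-S204 are assumed already done; S206 outputs the decided title.
    return decide_content(passengers, behaviors)

print(provide_content(["owner"], {"owner": "alert"}))   # Highway Star
print(provide_content(["owner"], {"owner": "drowsy"}))  # Lullaby
```

The priority given to behavior over stored preferences is one possible design choice; the embodiment only requires that at least one of the two inputs be used.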
[0031] As described above, according to the content providing apparatus, content providing method, content providing program, and computer-readable recording medium of the first embodiment, content can be output based on the passenger information and behavior of the passengers without the user setting the content through input operations. Content suited to the passengers can therefore be output efficiently.
Example 1
[0032] Example 1 of the present invention will be described below. In Example 1, an example in which the content providing apparatus of the present invention is implemented by a navigation apparatus mounted on a mobile body such as a vehicle (including four-wheeled and two-wheeled vehicles) will be described.
[0033] (Hardware configuration of the navigation apparatus)
The hardware configuration of the navigation apparatus according to Example 1 will be described with reference to FIG. 3. FIG. 3 is a block diagram showing an example of the hardware configuration of the navigation apparatus according to Example 1.
[0034] In FIG. 3, a navigation apparatus 300 is mounted on a mobile body such as a vehicle and comprises a navigation control unit 301, a user operation unit 302, a display unit 303, a position acquisition unit 304, a recording medium 305, a recording medium decoding unit 306, an audio output unit 307, a communication unit 308, a route search unit 309, a route guidance unit 310, an audio generation unit 311, a speaker 312, a passenger imaging unit 313, and an audio processing unit 314.
[0035] The navigation control unit 301 controls the entire navigation apparatus 300. The navigation control unit 301 can be realized, for example, by a microcomputer comprising a CPU (Central Processing Unit) that executes predetermined arithmetic processing, a ROM (Read Only Memory) that stores various control programs, and a RAM (Random Access Memory) that functions as a work area for the CPU.
[0036] During route guidance, the navigation control unit 301 calculates the position on the map at which the vehicle is traveling based on information on the current position acquired by the position acquisition unit 304 and map information obtained from the recording medium 305 via the recording medium decoding unit 306, and outputs the calculation result to the display unit 303. Also during route guidance, the navigation control unit 301 exchanges route guidance information with the route search unit 309, the route guidance unit 310, and the audio generation unit 311, and outputs the resulting information to the display unit 303 and the audio output unit 307.
[0037] The navigation control unit 301 also generates passenger identification information and behavior information, described later, based on the passenger images or passenger behavior captured by the passenger imaging unit 313. Then, in accordance with a content playback instruction input by the user through the user operation unit 302, the navigation control unit 301 controls the playback of content such as audio and video based on the passenger identification information and behavior information. Content playback is controlled, for example, by determining the content to play from music or video recorded on the recording medium 305, or from radio or television broadcasts received by the communication unit 308, and outputting it to the display unit 303 or the audio processing unit 314.
[0038] The user operation unit 302 acquires information input by the user through operation means such as a remote control, switches, or a touch panel, and outputs the acquired information to the navigation control unit 301.
[0039] The display unit 303 includes, for example, a CRT (Cathode Ray Tube), a TFT liquid crystal display, an organic EL display, or a plasma display. Specifically, the display unit 303 can be configured, for example, by a video I/F (interface) and a video display device connected to the video I/F. The video I/F comprises, for example, a graphics controller that controls the entire display device, a buffer memory such as a VRAM (Video RAM) that temporarily stores image information that can be displayed immediately, and a control IC that controls the display of the display device based on image information output from the graphics controller. The display unit 303 displays traffic information, map information, route guidance information, video content output from the navigation control unit 301, and other various information.
[0040] The position acquisition unit 304 comprises a GPS receiver and various sensors such as a vehicle speed sensor, an angular velocity sensor, and an acceleration sensor, and acquires information on the current position of the mobile body (the current position of the navigation apparatus 300). The GPS receiver receives radio waves from GPS satellites and determines the geometric position relative to the GPS satellites. GPS is an abbreviation for Global Positioning System, a system that accurately determines a position on the ground by receiving radio waves from four or more satellites. The GPS receiver comprises an antenna for receiving radio waves from GPS satellites, a tuner that demodulates the received radio waves, and an arithmetic circuit that calculates the current position based on the demodulated information.
[0041] Various control programs and various information are recorded on the recording medium 305 in a computer-readable state. The recording medium 305 can be realized, for example, by an HD (Hard Disk), a DVD (Digital Versatile Disk), a CD (Compact Disk), or a memory card. The recording medium 305 may accept writing of information by the recording medium decoding unit 306 and record the written information in a non-volatile manner.
[0042] Map information used for route search and route guidance is also recorded on the recording medium 305. The map information recorded on the recording medium 305 includes background data representing features such as buildings, rivers, and the ground surface, and road shape data representing the shape of roads, and is rendered in two or three dimensions on the display screen of the display unit 303. When the navigation apparatus 300 is performing route guidance, the map information read from the recording medium 305 by the recording medium decoding unit 306 and a mark indicating the position of the mobile body acquired by the position acquisition unit 304 are displayed on the display unit 303.
[0043] The recording medium 305 also records pre-registered registration identification information for identifying passengers, specific behavior information for judging passenger behavior, and content such as video and music. The registration identification information includes, for example, information extracted from feature points of a passenger image captured by a camera or the like, such as facial patterns, eye irises, fingerprint data, and voice data. The specific behavior information includes, for example, information extracted from features of specific behavioral states such as drowsiness and fatigue, such as eyelid movement, voice volume, and heart rate.
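One way such registration identification information could be used to identify a passenger is a nearest-neighbor comparison of feature vectors. The description does not fix any matching algorithm, so the feature vectors, the Euclidean-distance metric, and the threshold below are all illustrative assumptions.

```python
import math

# Hypothetical sketch: matching a feature vector extracted from a captured
# passenger image against pre-registered identification information of the
# kind described above (e.g. facial feature points).

REGISTERED = {  # assumed registered feature vectors
    "owner": [0.10, 0.80, 0.30],
    "friend": [0.70, 0.20, 0.90],
}

def identify_passenger(features, threshold=0.5):
    """Return the registered name closest to `features`, or None when no
    registered entry is within `threshold` (Euclidean distance)."""
    best_name, best_dist = None, float("inf")
    for name, ref in REGISTERED.items():
        dist = math.dist(features, ref)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(identify_passenger([0.12, 0.78, 0.31]))  # owner
print(identify_passenger([0.99, 0.99, 0.01]))  # None (no registered match)
```

An unmatched passenger could then be treated as a "stranger" for the purpose of the owner-relationship identification described earlier, though that mapping is likewise an assumption.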
[0044] The content recorded on the recording medium 305 is recorded in association with, for example, the passenger information of passengers identified based on the registration identification information, or with specific behavioral states of passengers judged based on the specific behavior information, and can be read out according to the results of passenger identification and behavior judgment in the navigation control unit 301. Passenger information is, for example, information including characteristics such as preferences, age, and gender, and the composition of the passengers such as their seating arrangement and number.

[0045] In Example 1, the map information, content, and so on are recorded on the recording medium 305, but the present invention is not limited to this. The map information, content, and so on may be recorded on a server or the like outside the navigation apparatus 300. In that case, the navigation apparatus 300 acquires the map information and content from the server via a network through the communication unit 308, for example. The acquired information is stored in a RAM or the like.
[0046] The recording medium decoding unit 306 controls the reading and writing of information on the recording medium 305. For example, when an HD is used as the recording medium 305, the recording medium decoding unit 306 is an HDD (Hard Disk Drive).
[0047] The audio output unit 307 reproduces sounds such as guidance sounds by controlling output to the connected speaker 312. There may be one speaker 312 or a plurality of speakers. The audio output unit 307 can be configured, for example, from a D/A converter that performs D/A conversion of digital audio information, an amplifier that amplifies the analog audio signal output from the D/A converter, and an A/D converter that performs A/D conversion of analog audio information.
[0048] The communication unit 308 acquires various information from the outside. For example, the communication unit 308 communicates with other communication devices via an FM multiplex tuner, a VICS (registered trademark)/beacon receiver, wireless communication equipment and other communication equipment, or via communication media such as a mobile phone, PHS, communication card, or wireless LAN. Alternatively, it may be equipment capable of communication via radio broadcast waves, television broadcast waves, or satellite broadcasting.
[0049] The information acquired by the communication unit 308 includes traffic information such as congestion and traffic regulations distributed from a road traffic information communication system center, traffic information acquired by operators using their own methods, and other public data and content on the Internet. The communication unit 308 may, for example, request traffic information or content via a network from a server storing nationwide traffic information and content, and acquire the requested information. It may also be configured to receive video or audio signals from radio broadcast waves, television broadcast waves, satellite broadcasting, and the like.
[0050] The route search unit 309 searches for an optimal route from a departure point to a destination using the map information acquired from the recording medium 305 via the recording medium decoding unit 306, the traffic information acquired via the communication unit 308, and the like. Here, the optimal route is the route that best matches the user's request.
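The optimal-route search can be illustrated with a standard shortest-path computation over a road graph whose edge costs are travel times, possibly adjusted using traffic information. The description does not mandate a particular algorithm; Dijkstra's algorithm and the tiny network below are assumptions for illustration.

```python
import heapq

# Hypothetical sketch: Dijkstra's algorithm over a small road graph.
# Edge weights are assumed travel times (minutes), e.g. already inflated
# on links reported as congested by the traffic information.

ROADS = {  # assumed road network: node -> [(neighbor, minutes)]
    "start": [("a", 5), ("b", 2)],
    "a": [("goal", 4)],
    "b": [("a", 1), ("goal", 10)],
    "goal": [],
}

def search_route(origin, destination, graph=ROADS):
    """Return (total_minutes, node_list) for the cheapest route, or None."""
    queue = [(0, origin, [origin])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph[node]:
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

print(search_route("start", "goal"))  # (7, ['start', 'b', 'a', 'goal'])
```

With "best matches the user's request" left open in the description, the edge-weight function is where other criteria (distance, tolls, congestion avoidance) would be encoded.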
[0051] 経路誘導部 310は、経路探索部 309によって探索された最適経路情報、位置取得 部 304によって取得された移動体の位置情報、記録媒体 305から記録媒体デコード 部 306を経由して得られた地図情報に基づいて、ユーザを目的地点まで誘導するた めの経路誘導情報の生成をおこなう。このとき生成される経路誘導情報は、通信部 3 08によって受信した渋滞情報を考慮したものであってもよい。経路誘導部 310で生 成された経路誘導情報は、ナビゲーシヨン制御部 301を介して表示部 303へ出力さ れる。  [0051] The route guidance unit 310 is obtained from the optimal route information searched by the route search unit 309, the position information of the moving body acquired by the position acquisition unit 304, and the recording medium 305 via the recording medium decoding unit 306. Based on the obtained map information, route guidance information for guiding the user to the destination point is generated. The route guidance information generated at this time may be information that considers the traffic jam information received by the communication unit 308. The route guidance information generated by the route guidance unit 310 is output to the display unit 303 via the navigation control unit 301.
[0052] 音声生成部 311は、案内音などの各種音声の情報を生成する。すなわち、経路誘 導部 310で生成された経路誘導情報に基づいて、案内ポイントに対応した仮想音源 の設定と音声ガイダンス情報の生成をおこな 、、これをナビゲーシヨン制御部 301を 介して音声出力部 307へ出力する。  [0052] The sound generation unit 311 generates information of various sounds such as a guide sound. That is, based on the route guidance information generated by the route guidance unit 310, the virtual sound source corresponding to the guidance point is set and the voice guidance information is generated, and this is output as voice via the navigation control unit 301. Output to part 307.
[0053] 搭乗者撮影部 313は、搭乗者を撮影する。搭乗者の撮影は、動画あるいは静止画 のどちらでもよぐたとえば、搭乗者像や搭乗者の挙動を撮影して、ナビゲーシヨン制 御部 301へ出力する。  [0053] The passenger photographing unit 313 photographs a passenger. The passenger may be photographed using either a moving image or a still image. For example, the passenger image and the behavior of the passenger are photographed and output to the navigation control unit 301.
[0054] 音声処理部 314は、接続されたスピーカ 312への出力を制御することによって、ナ ピゲーシヨン制御部 301から出力される音楽などの音声を再生する。スピーカ 312は 、 1つであってもよいし、複数であってもよい。この音声処理部 314は、たとえば、音声 出力部 307とほぼ同様の構成であってもよ 、。  The audio processing unit 314 reproduces audio such as music output from the navigation control unit 301 by controlling output to the connected speaker 312. There may be one speaker 312 or a plurality of speakers 312. For example, the sound processing unit 314 may have substantially the same configuration as the sound output unit 307.
[0055] なお、実施の形態 1にかかるコンテンツ提供装置 100の機能的構成である搭乗者 特定部 101、搭乗者情報取得部 102および挙動検出部 105はナビゲーシヨン制御 部 301および搭乗者撮影部 313によって、コンテンツ決定部 103はナビゲーシヨン制 御部 301によって、コンテンツ出力部 104は表示部 303やスピーカ 312によって、そ れぞれその機能を実現する。  It should be noted that the passenger identification unit 101, the passenger information acquisition unit 102, and the behavior detection unit 105, which are functional configurations of the content providing apparatus 100 according to the first embodiment, are a navigation control unit 301 and a passenger photographing unit 313. Accordingly, the function is realized by the content determination unit 103 by the navigation control unit 301, and the content output unit 104 by the display unit 303 and the speaker 312.
[0056] つぎに、図 4を用いて、本実施例 1にかかるナビゲーシヨン装置 300が搭載された 車両内部について説明する。図 4は、本実施例 1にかかるナビゲーシヨン装置が搭載 された車両内部の一例を示す説明図である。 [0057] 図 4において、車両内部は、運転席シート 411と、助手席シート 412と、後部座席シ ート 413と、を有しており、運転席シート 411と助手席シート 412との周囲には、表示 装置 (表示部 303) 421aおよび音響装置 (スピーカ 312) 422ならびに情報再生機器 426aが設けられている。また、助手席シート 412には、後部座席シート 413の搭乗者 に向けて、表示装置 421bおよび情報再生機器 426bが設けられており、後部座席シ ート 413の後方には図示しない音響装置が設けられている。 Next, the inside of the vehicle in which the navigation device 300 according to the first embodiment is mounted will be described with reference to FIG. FIG. 4 is an explanatory diagram of an example of the inside of the vehicle on which the navigation device according to the first embodiment is mounted. In FIG. 4, the interior of the vehicle has a driver seat 411, a passenger seat 412, and a rear seat 413, around the driver seat 411 and the passenger seat 412. Are provided with a display device (display unit 303) 421a, an acoustic device (speaker 312) 422, and an information reproducing device 426a. The passenger seat 412 is provided with a display device 421b and an information reproducing device 426b for the passenger of the rear seat 413, and an acoustic device (not shown) is provided behind the rear seat 413. It has been.
[0058] 車両の天井部 414および各情報再生機器 426 (426a, 426b)には、撮影装置 (搭 乗者撮影部 313) 423が設けられており、搭乗者を撮影できる。なお、各情報再生機 器 426 (426a, 426b)は、車両に対して着脱可能な構造を備えていてもよい。  [0058] The vehicle's ceiling 414 and each information reproducing device 426 (426a, 426b) are provided with a photographing device (passenger photographing unit 313) 423, which can photograph a passenger. Each information reproducing device 426 (426a, 426b) may have a structure that can be attached to and detached from the vehicle.
[0059] (ナビゲーシヨン装置 300の処理の内容)  [0059] (Contents of processing of navigation device 300)
つぎに、図 5、図 6を用いて、本実施例 1にかかるナビゲーシヨン装置 300の処理の 内容について説明する。図 5は、本実施例 1にかかるナビゲーシヨン装置において搭 乗者情報を用いた処理の内容を示すフローチャートである。本図では、搭乗者情報 として、搭乗者の好みである嗜好情報を利用した場合について説明する。図 5のフロ 一チャートにおいて、まず、ナビゲーシヨン装置 300は、コンテンツ再生の指示があつ たカゝ否かを判断する (ステップ S501)。コンテンツ再生の指示は、たとえば、搭乗者 がユーザ操作部 302を操作しておこなう構成でもよい。  Next, the contents of the processing of the navigation device 300 according to the first embodiment will be described with reference to FIGS. FIG. 5 is a flowchart of the process using the passenger information in the navigation device according to the first embodiment. In this figure, a case where preference information which is a passenger's preference is used as the passenger information will be described. In the flowchart of FIG. 5, the navigation apparatus 300 first determines whether or not a content reproduction instruction has been given (step S501). The content reproduction instruction may be configured, for example, by a passenger operating the user operation unit 302.
[0060] ステップ S501において、コンテンツ再生の指示があるのを待って、指示があった場 合 (ステップ S501: Yes)は、つぎに、搭乗者撮影部 313は、搭乗者像を撮影する (ス テツプ S502)。搭乗者像の撮影は、たとえば、搭乗者の顔について静止画を撮影す る。  [0060] In step S501, waiting for the content playback instruction, and if there is an instruction (step S501: Yes), the passenger photographing unit 313 then photographs the passenger image (step S501: Yes). Step S502). The passenger image is shot, for example, by taking a still image of the passenger's face.
[0061] そして、ナビゲーシヨン制御部 301は、ステップ S502において撮影した搭乗者像か ら、搭乗者の識別情報を生成する (ステップ S503)。識別情報は、たとえば、搭乗者 の顔の特徴点を抽出した情報を含むもので、あら力じめ記録媒体 305に記録された 登録識別情報と照合をおこなうものである。  [0061] Then, the navigation control unit 301 generates passenger identification information from the passenger image taken in step S502 (step S503). The identification information includes, for example, information obtained by extracting the feature points of the passenger's face, and is collated with the registered identification information recorded on the recording medium 305.
[0062] つづいて、ナビゲーシヨン制御部 301は、記録媒体 305にあら力じめ登録されてい る登録識別情報と、ステップ S503において生成された識別情報との照合をおこない 、識別情報が一致する力否かを判断する (ステップ S504)。そして、識別情報が一致 する場合 (ステップ S 504 : Yes)は、ナビゲーシヨン制御部 301は、記録媒体 305から 、識別情報と一致した登録識別情報の搭乗者の嗜好情報を読み込む (ステップ S50 5)。嗜好情報は、たとえば、搭乗者のコンテンツの好みに関する情報であり、音楽で あれば邦楽や洋楽や歌手、ある 、は映像であればアニメや-ユースなどのジャンル を含む情報である。 [0062] Subsequently, the navigation control unit 301 collates the registered identification information pre-registered in the recording medium 305 with the identification information generated in step S503, and the identification information matches. It is determined whether or not (step S504). And the identification information matches If so (Step S504: Yes), the navigation control unit 301 reads from the recording medium 305 the passenger preference information of the registered identification information that matches the identification information (Step S505). The preference information is, for example, information related to the passenger's content preference, and if it is music, it includes information about genres such as Japanese music, Western music, and singer, and if it is video, such as animation and use.
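The collation of steps S503–S504 — comparing identification information generated from facial feature points against the registered identification information on the recording medium 305 — can be sketched as a nearest-neighbor match over feature vectors. The function name, the use of cosine similarity, and the threshold are illustrative assumptions; the patent does not specify a matching algorithm.

```python
import math

def identify_passenger(feature_vec, registered, threshold=0.9):
    """Return the ID of the registered passenger whose stored feature
    vector best matches feature_vec (cosine similarity), or None when
    no registered vector clears the threshold (step S504: No)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    best_id, best_sim = None, threshold
    for pid, reg_vec in registered.items():
        sim = cosine(feature_vec, reg_vec)
        if sim >= best_sim:
            best_id, best_sim = pid, sim
    return best_id
```

A `None` result corresponds to the unregistered-passenger branch, where the apparatus offers registration (step S508).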
[0063] つぎに、ナビゲーシヨン制御部 301は、ステップ S505において読み込んだ嗜好情 報に基づいて、再生するコンテンツを決定し (ステップ S506)、表示部 303あるいは 音声処理部 314へ出力する。  Next, the navigation control unit 301 determines the content to be played based on the preference information read in step S 505 (step S 506), and outputs it to the display unit 303 or the audio processing unit 314.
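The content decision of step S506 — selecting content from the passenger's preference (genre) information read in step S505 — might look like the following minimal sketch. The data shapes (`preferences` as an ordered list of preferred genres, `library` as a list of title/genre records) are assumptions for illustration, not the patent's actual data format.

```python
def choose_content(preferences, library):
    """Pick the first library item whose genre matches the passenger's
    preferred genres, scanning genres in preference order.
    Falls back to the first library item when nothing matches."""
    for genre in preferences:
        for item in library:
            if item["genre"] == genre:
                return item["title"]
    return library[0]["title"] if library else None
```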
[0064] そして、音声処理部 314は音声処理を施して、スピーカ 312へコンテンツを出力し、 表示部 303あるいはスピーカ 312は、コンテンツを再生し (ステップ S507)、一連の処 理を終了する。 [0064] Then, the audio processing unit 314 performs audio processing and outputs the content to the speaker 312. The display unit 303 or the speaker 312 reproduces the content (step S507), and the series of processing ends.
[0065] また、ステップ S504において、識別情報が一致しない場合 (ステップ S504 : No)は 、搭乗者の登録をおこなうか否かを判断する (ステップ S508)。搭乗者の登録は、た とえば、表示部 303に搭乗者の登録を要求する旨を表示して、搭乗者に登録をおこ なうか否かの判断を促す構成としてもよ!ヽ。  In step S504, if the identification information does not match (step S504: No), it is determined whether or not to register a passenger (step S508). For example, the passenger registration may be configured to display a message indicating that the passenger registration is required on the display unit 303 and to prompt the passenger to determine whether or not to register!
[0066] ステップ S508において、搭乗者の登録をおこなわない場合 (ステップ S 508 : No) は、ナビゲーシヨン制御部 301は、表示部 303へコンテンツ選択要求を出力し (ステ ップ S510)、搭乗者力もコンテンツの選択を受け付ける (ステップ S511)。そして、ナ ピゲーシヨン制御部 301は、コンテンツ選択に基づいてコンテンツを表示部 303ある いは音声処理部 314に出力する。そして、音声処理部 314は音声処理を施して、ス ピー力 312へコンテンツを出力し、表示部 303あるいはスピーカ 312は、コンテンツを 再生し (ステップ S507)、一連の処理を終了する。  [0066] If the passenger is not registered in step S508 (step S508: No), the navigation control unit 301 outputs a content selection request to the display unit 303 (step S510). Also, the selection of content is accepted (step S511). Then, the navigation control unit 301 outputs the content to the display unit 303 or the audio processing unit 314 based on the content selection. Then, the audio processing unit 314 performs audio processing and outputs the content to the speech power 312. The display unit 303 or the speaker 312 reproduces the content (step S507), and the series of processing ends.
[0067] また、ステップS508において、搭乗者の登録をおこなう場合(ステップS508:Yes)は、表示部303に搭乗者の登録を促す旨を表示するなどして、搭乗者の登録識別情報、嗜好情報を登録する(ステップS509)。搭乗者の登録識別情報の登録は、たとえば、搭乗者撮影部313により撮影した搭乗者像から特徴点を抽出しておこなう。また、嗜好情報の登録は、ユーザがユーザ操作部302を操作することによっておこなう構成としてもよい。そして、ステップS504にもどり、同様の処理を繰り返す。  [0067] If, on the other hand, the passenger is to be registered in step S508 (step S508: Yes), the display unit 303 displays a message prompting the passenger to register, and the passenger's registration identification information and preference information are registered (step S509). The registration identification information is registered, for example, by extracting feature points from a passenger image photographed by the passenger photographing unit 313. The preference information may be registered by the user operating the user operation unit 302. The process then returns to step S504 and the same processing is repeated.  [0068] なお、図5の説明では、コンテンツ再生の指示があるのを待って、指示があった場合(ステップS501:Yes)に、搭乗者像を撮影する(ステップS502)構成としているが、コンテンツ再生の指示がある前に、搭乗者像を撮影する(ステップS502)構成としてもよい。たとえば、あらかじめ、搭乗時や車両のエンジンスタート時や搭乗者の操作時および走行中における所定の間隔ごとに搭乗者像を撮影して(ステップS502)、コンテンツ再生の指示を待ってもよい。  [0068] Note that in the description of FIG. 5, the passenger image is photographed (step S502) after waiting for a content reproduction instruction (step S501: Yes), but the passenger image may instead be photographed (step S502) before the content reproduction instruction is given. For example, passenger images may be photographed in advance at boarding, at engine start, upon passenger operation, and at predetermined intervals while traveling (step S502), and the apparatus may then wait for a content reproduction instruction.
[0069] また、図 5の説明では、搭乗者像を撮影して、搭乗者の顔の特徴点を抽出した情報 により識別情報を生成しているが、搭乗者個人の識別情報ではなぐ搭乗者の人数 や構成を識別情報として生成する構成でもよい。具体的には、たとえば、搭乗者の人 数や構成を識別して、大人数での乗車であれば、にぎやかな音楽などのコンテンツ 再生や、子供連れの家族であれば、子供向けの番組などのコンテンツ再生をおこな う構成としてもよい。  [0069] In the description of FIG. 5, the passenger image is taken and the identification information is generated based on the information obtained by extracting the feature points of the passenger's face, but the passenger is not the identification information of the individual passenger. The configuration may be such that the number of people and the configuration are generated as identification information. Specifically, for example, the number and composition of passengers can be identified. If you are riding with a large number of people, content such as live music is played, and if you are a family with children, programs for children, etc. The content may be played back.
[0070] また、図 5の説明では、ステップ S505において、識別情報が一致した搭乗者の搭 乗者情報として、嗜好情報を記録媒体 305から読み込む構成としているが、搭乗者 の年齢や性別を読み込んで、年齢や性別に適したコンテンツを再生する構成として もよい。具体的には、たとえば、子供の年齢の搭乗者であれば、子供向けの番組など のコンテンツ再生や、女性のみの搭乗者であれば、料理番組などのコンテンツ再生 をおこなう構成としてもよ 、。  In the description of FIG. 5, in step S 505, preference information is read from the recording medium 305 as passenger information of the passenger whose identification information matches, but the age and gender of the passenger are read. Thus, it may be configured to play content suitable for age and gender. Specifically, for example, a child-aged passenger can play content such as a program for children, and a female-only passenger can play content such as a cooking program.
[0071] また、図 5の説明では、記録媒体 305に記録されたコンテンツに、搭乗者の視聴履 歴を関連づけて記録してもよぐ搭乗者の嗜好情報を読み込む代わりに、視聴履歴 に応じて、前回搭乗時に途中まで視聴したコンテンツを、続きから再生する構成とし てもよい。 Further, in the explanation of FIG. 5, instead of reading the passenger's preference information that may be recorded in association with the passenger's viewing history in the content recorded in the recording medium 305, the content is recorded according to the viewing history. Thus, the content that was viewed halfway during the last boarding may be replayed from the continuation.
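The resume-from-viewing-history behavior described in [0071] can be sketched as a lookup of the playback position recorded at the end of the previous trip, restarting from the beginning any content that was already watched to (nearly) the end. The 5% "finished" margin and the field names are illustrative assumptions, not values from the patent.

```python
def resume_position(history, content_id, duration_sec):
    """Return the playback offset (seconds) to resume content_id from.
    history: content ID -> position saved when the passenger last
    left the vehicle. Content watched to within 5% of its end is
    treated as finished and restarts at 0."""
    pos = history.get(content_id, 0)
    return 0 if pos >= duration_sec * 0.95 else pos
```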
[0072] つづいて、搭乗者撮影部 313により搭乗者の挙動を撮影した場合について説明す る。図 6は、本実施例 1にかかるナビゲーシヨン装置において挙動情報を用いた処理 の内容を示すフローチャートである。図 6のフローチャートにおいて、まず、ナビゲー シヨン装置 300は、コンテンツ再生の指示があった力否かを判断する(ステップ S601 )。コンテンツ再生の指示は、たとえば、搭乗者がユーザ操作部 302を操作しておこ なう構成でもよい。 [0073] ステップ S601において、コンテンツ再生の指示があるのを待って、指示があった場 合 (ステップ S601 :Yes)は、つぎに、搭乗者撮影部 313は、搭乗者の挙動を撮影す る (ステップ 602)。搭乗者の挙動の撮影は、たとえば、搭乗者の眼球の動きを撮影し てもよい。 [0072] Next, a case where a passenger's behavior is photographed by the passenger photographing unit 313 will be described. FIG. 6 is a flowchart of the contents of the process using the behavior information in the navigation device according to the first embodiment. In the flowchart of FIG. 6, first, the navigation apparatus 300 determines whether or not a content reproduction instruction has been given (step S601). The content reproduction instruction may be, for example, a configuration in which the passenger operates the user operation unit 302. [0073] In step S601, waiting for the content playback instruction, and if there is an instruction (step S601: Yes), next, the passenger photographing unit 313 photographs the behavior of the passenger. (Step 602). For example, the movement of the occupant's eyeball may be photographed.
[0074] そして、ナビゲーシヨン制御部 301は、ステップ S602において撮影した搭乗者の眼 球の動きから、搭乗者の挙動情報を生成する (ステップ S603)。挙動情報は、たとえ ば、搭乗者の眼球の動きの特徴点を抽出した情報を含むもので、あらかじめ記録媒 体 305に記録された眠気や疲れなどにおける眼球の動きの特徴を含む特定挙動情 報と照合をおこなうものである。  [0074] Then, the navigation control unit 301 generates the passenger's behavior information from the movement of the passenger's eye shot in step S602 (step S603). The behavior information includes, for example, information obtained by extracting feature points of the passenger's eye movements, and specific behavior information including characteristics of eye movements such as sleepiness and fatigue recorded in the recording medium 305 in advance. Are to be verified.
[0075] つづいて、ナビゲーシヨン制御部301は、記録媒体305にあらかじめ登録されている特定挙動情報と、ステップS603において生成された挙動情報との照合をおこない、搭乗者が特定挙動状態であるか否かを判断する(ステップS604)。そして、特定挙動状態である場合(ステップS604:Yes)は、ナビゲーシヨン制御部301は、特定挙動状態に基づいて再生するコンテンツを決定する(ステップS605)。具体的には、たとえば、搭乗者が眠い挙動を示した場合、搭乗者が運転者であれば、再生するコンテンツを比較的騒がしいコンテンツに決定し、搭乗者が子供であれば落ち着いた曲のコンテンツに決定する構成でもよい。  [0075] Subsequently, the navigation control unit 301 collates the specific behavior information registered in advance in the recording medium 305 with the behavior information generated in step S603, and determines whether or not the passenger is in a specific behavior state (step S604). If the passenger is in a specific behavior state (step S604: Yes), the navigation control unit 301 determines the content to be reproduced based on that state (step S605). Specifically, for example, when the passenger shows drowsy behavior, the content to be reproduced may be set to relatively lively content if the passenger is the driver, or to a calm piece of music if the passenger is a child.
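The drowsiness judgment of step S604 — matching observed eye movement against pre-registered specific behavior information — could be approximated by an eye-closure-ratio test over a window of camera frames, in the spirit of PERCLOS-style measures. The per-frame sampling format and the threshold are illustrative assumptions, not values given in the patent.

```python
def is_drowsy(eye_open_samples, closed_fraction_threshold=0.3):
    """Classify drowsiness from a window of per-frame eye-openness
    samples taken by the passenger photographing unit
    (1 = eye open, 0 = eye closed). The passenger is treated as
    drowsy when the fraction of closed frames exceeds the threshold."""
    if not eye_open_samples:
        return False
    closed = sum(1 for s in eye_open_samples if s == 0)
    return closed / len(eye_open_samples) > closed_fraction_threshold
```

A `True` result would trigger the content decision of step S605 (e.g. lively music for a drowsy driver).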
[0076] つぎに、ナビゲーシヨン制御部 301は、ステップ S605において決定されたコンテン ッを、記録媒体 305から読み込み、表示部 303あるいは音声処理部 314に出力する 。そして、音声処理部 314は音声処理を施して、スピーカ 312へコンテンツを出力し、 表示部 303あるいはスピーカ 312は、コンテンツを再生し (ステップ S606)、一連の処 理を終了する。  Next, the navigation control unit 301 reads the content determined in step S 605 from the recording medium 305 and outputs it to the display unit 303 or the audio processing unit 314. Then, the audio processing unit 314 performs audio processing and outputs the content to the speaker 312. The display unit 303 or the speaker 312 reproduces the content (step S606), and the series of processing ends.
[0077] また、ステップ S604にお 、て、搭乗者が特定挙動状態でな 、場合 (ステップ S604 : No)は、ステップ S602へ戻り、同様の処理を繰り返す。なお、搭乗者が特定挙動状 態でなければ、その旨を搭乗者に報知して、コンテンツ選択を促す構成としてもよい 。また、特定挙動状態でない場合のコンテンツをあらかじめ設定しておき、再生する 構成としてもよい。  In step S604, if the passenger is not in the specific behavior state (step S604: No), the process returns to step S602 and the same process is repeated. If the occupant is not in the specific behavior state, the occupant may be notified to that effect and be prompted to select content. In addition, it is possible to set the content in the case of not being in the specific behavior state in advance and play it back.
[0078] なお、図 6の説明では、コンテンツ再生の指示があるのを待って、指示があった場 合 (ステップ S601: Yes)に、搭乗者の挙動を撮影する (ステップ S602)構成としてい る力 コンテンツ再生の指示がある前に、搭乗者の挙動を撮影する (ステップ S602) 構成としてもよい。たとえば、あら力じめ、搭乗時や車両のエンジンスタート時や搭乗 者の操作時および走行中における所定の間隔ごとに搭乗者の挙動を撮影して (ステ ップ S602)、コンテンツ再生の指示を待ってもよい。 In the description of FIG. 6, after an instruction for content reproduction is given, In this case (step S601: Yes), the force that is configured to capture the passenger's behavior (step S602) may be configured to capture the passenger's behavior (step S602) before the content playback instruction is issued. For example, brute force, take a picture of the passenger's behavior at predetermined intervals during boarding, when the engine of the vehicle is started, when the passenger operates, and during travel (step S602), and a content playback instruction is issued. You can wait.
[0079] また、図6の説明では、ステップS602において搭乗者の眼球の動きを撮影して挙動情報を生成しているが、窓の開け閉めや搭乗者の体全体の動きや車内の音量などからも挙動情報を生成してもよい。たとえば、搭乗者が窓を開けた場合は、暑がっていたり気分がわるくなったりする挙動とし、体全体の動きや音量で騒いでいる挙動に関する挙動情報を生成してもよい。  [0079] In the description of FIG. 6, behavior information is generated in step S602 by photographing the movement of the passenger's eyeballs, but behavior information may also be generated from the opening and closing of windows, the movement of the passenger's whole body, the sound volume inside the vehicle, and the like. For example, when a passenger opens a window, this may be treated as behavior indicating that the passenger feels hot or unwell, and behavior information indicating boisterousness may be generated from whole-body movement and sound volume.
[0080] なお、本実施例1におけるコンテンツの出力は、1つ以上の表示部303あるいはスピーカ312を介しておこなう構成であるが、それぞれについて、コンテンツ再生を制御してもよい。たとえば、車両の各シートごとに表示部303を設けて、各シートの搭乗者ごとに適したコンテンツを再生してもよい。  [0080] Content output in the first embodiment is performed via one or more display units 303 or speakers 312, and content reproduction may be controlled individually for each of them. For example, a display unit 303 may be provided for each seat of the vehicle so that content suited to the passenger in each seat is reproduced.
[0081] また、本実施例1では、図5および図6を用いて、搭乗者情報を用いた処理と挙動情報を用いた処理について、それぞれ説明したが、それぞれの機能をあわせて処理する構成としてもよい。  [0081] In the first embodiment, the processing using passenger information and the processing using behavior information were described separately with reference to FIGS. 5 and 6, but the two functions may also be combined into a single process.
[0082] また、本実施例 1では、カメラなどの搭乗者撮影部 313から搭乗者を撮影して、搭 乗者の識別情報ある 、は挙動情報を生成する構成として 、るが、搭乗者を撮影する 代わりに、その他センサーによって取得する情報から、搭乗者を識別する識別情報 あるいは搭乗者の挙動情報を生成する構成としてもょ 、。  Further, in the first embodiment, the passenger is photographed from the passenger photographing unit 313 such as a camera, and the passenger identification information or the behavior information is generated. Instead of taking a picture, it may be configured to generate identification information that identifies the passenger or passenger behavior information from information acquired by other sensors.
[0083] 識別情報あるいは挙動情報の生成は、たとえば、搭乗者が着座したシートにおける 荷重分布や全荷重を検知する着座センサーを用いてもよい。着座センサーにより、 搭乗者の人数や体格に関する情報などが取得できる。また、車内の所定の位置に 1 つ以上の指紋センサーを設けてもよい。指紋センサーにより、搭乗者の指紋情報を 取得して、搭乗者の識別ができる。また、車内にマイクなど音声センサーを設けてもよ い。音声センサーにより、搭乗者の音量や音質や音程などの音声情報を取得して、 搭乗者の識別ができ、また、人数や性別や眠気などが判断できる。また、脈拍などを 計測する人体センサーを用いてもよい。たとえば、脈拍などの情報を利用することで 、搭乗者の身体状況を把握するでき、搭乗者の挙動を判断することができる。 [0083] The identification information or the behavior information may be generated using, for example, a seating sensor that detects a load distribution or a total load on a seat on which a passenger is seated. Information on the number and physique of the passengers can be obtained by the seating sensor. One or more fingerprint sensors may be provided at predetermined positions in the vehicle. The fingerprint sensor can acquire the passenger's fingerprint information and identify the passenger. A voice sensor such as a microphone may be provided in the car. Voice information such as the volume, sound quality, and pitch of the passenger can be acquired by the voice sensor, so that the passenger can be identified, and the number, gender, sleepiness, etc. can be determined. Also, pulse A human body sensor for measurement may be used. For example, by using information such as the pulse, the physical condition of the passenger can be grasped, and the behavior of the passenger can be determined.
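The seat-sensor idea in [0083] — inferring the number and build of passengers from per-seat load readings — can be sketched as simple thresholding of seat weights. The thresholds below are illustrative guesses, not values given in the patent.

```python
def passengers_from_seat_loads(loads_kg, min_occupied_kg=20.0, child_max_kg=35.0):
    """Estimate occupancy from per-seat load-sensor readings (kg).
    Returns (passenger_count, child_count): seats carrying at least
    min_occupied_kg count as occupied, and occupied seats at or below
    child_max_kg are counted as children."""
    occupied = [w for w in loads_kg if w >= min_occupied_kg]
    children = [w for w in occupied if w <= child_max_kg]
    return len(occupied), len(children)
```

Such an estimate could feed the passenger-composition content selection described for [0069] (e.g. children's programs when children are detected).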
[0084] また、搭乗者の所有する携帯電話から個人の情報を取得する構成としてもよ!、。あ るいは、シートベルトやチャイルドシートの脱着を検知して、人数などの情報を取得し てもよい。  [0084] In addition, a configuration may be adopted in which personal information is acquired from a mobile phone owned by the passenger! Alternatively, information such as the number of people may be acquired by detecting the attachment / detachment of a seat belt or a child seat.
[0085] 以上説明したように、本実施の形態1にかかるコンテンツ提供装置、コンテンツ提供方法、コンテンツ提供プログラムおよびコンピュータに読み取り可能な記録媒体によれば、ユーザは、自らの入力操作によりコンテンツを設定しなくても、搭乗者の搭乗者情報および挙動に基づいて、コンテンツの出力ができる。したがって、効率的に搭乗者に適したコンテンツの出力を図ることができる。  [0085] As described above, according to the content providing apparatus, content providing method, content providing program, and computer-readable recording medium of the first embodiment, content can be output based on the passenger information and behavior of the passenger, without the user setting the content through his or her own input operation. Therefore, content suited to the passenger can be output efficiently.
[0086] また、以上説明したように、本実施例 1にかかるナビゲーシヨン装置 300は、搭乗者 を撮影して搭乗者の識別情報と挙動情報を生成する。そして、識別情報、挙動情報 と登録識別情報、特定挙動情報を比較して再生するコンテンツを決定できるため、搭 乗者が自ら設定することなぐ搭乗者の状況に最適なコンテンツ提供を図ることがで きる。  [0086] As described above, the navigation device 300 according to the first embodiment captures the passenger and generates identification information and behavior information of the passenger. The content to be played can be determined by comparing the identification information, behavior information with the registered identification information, and the specific behavior information. wear.
[0087] また、搭乗者の撮影をする代わりに、着座センサーや指紋センサーや音声センサーなど、様々な手段で取得する情報を利用することもできるため、汎用性の向上を図ることができる。  [0087] In addition, instead of photographing the passenger, information obtained by various means such as a seating sensor, a fingerprint sensor, or a voice sensor can also be used, so that versatility can be improved.
[0088] また、搭乗者の挙動にあわせたコンテンツを再生するため、運転手が眠気におそわれた場合に騒がしい音楽をかければ、眠気の解消につながり、不都合のある挙動の矯正を図ることができる。  [0088] In addition, since content matched to the passenger's behavior is reproduced, playing lively music when the driver becomes drowsy helps dispel the drowsiness, so that an undesirable behavior can be corrected.
[0089] また、搭乗者の視聴履歴に基づ!/ヽてコンテンツを再生できるため、たとえば、搭乗 者が最近視聴したコンテンツを繰り返しを防止したり、搭乗者が途中まで視聴したコ ンテンッの続きを再生して、搭乗者に完結したコンテンツを提供できる。したがって、 搭乗者が自らコンテンツを設定しなくても、コンテンツ提供の最適化を図ることができ る。  [0089] In addition, since content can be played back based on the passenger's viewing history, for example, the content that the passenger has recently viewed can be prevented from being repeated, or the content that the passenger has watched halfway can be continued. Can be used to provide complete content to passengers. Therefore, even if the passenger does not set the content himself, the content provision can be optimized.
[0090] (実施の形態 2)  [0090] (Embodiment 2)
つぎに、本実施の形態 2について説明する。以下に添付図面を参照して、この発明 にかかるコンテンツ提供装置、コンテンツ提供方法、コンテンツ提供プログラムおよび コンピュータに読み取り可能な記録媒体の好適な実施の形態 2を詳細に説明する。 本実施の形態 2では、図 1で説明したコンテンツについて、地点情報を提供あるいは 設定する場合について説明する。 Next, the second embodiment will be described. The present invention will be described below with reference to the accompanying drawings. A preferred embodiment 2 of the content providing apparatus, the content providing method, the content providing program, and the computer-readable recording medium will be described in detail. In the second embodiment, a case where point information is provided or set for the content described in FIG. 1 will be described.
[0091] (コンテンツ提供装置の機能的構成) [0091] (Functional configuration of content providing apparatus)
まず、図 7を用いて、本実施の形態 2にかかるコンテンツ提供装置の機能的構成に ついて説明する。図 7は、本実施の形態 2にかかるコンテンツ提供装置の機能的構 成の一例を示すブロック図である。  First, the functional configuration of the content providing apparatus according to the second embodiment will be described with reference to FIG. FIG. 7 is a block diagram of an example of a functional configuration of the content providing apparatus according to the second embodiment.
[0092] 図 7において、コンテンツ提供装置 700は、搭乗者特定部 701と、搭乗者情報取得 部 702と、地点情報設定部 703と、地点情報出力部 704と、挙動検出部 705と、を含 み構成されている。 In FIG. 7, the content providing apparatus 700 includes a passenger identification unit 701, a passenger information acquisition unit 702, a spot information setting unit 703, a spot information output unit 704, and a behavior detection unit 705. Only configured.
[0093] 搭乗者特定部 701は、搭乗者を特定する。搭乗者の特定は、たとえば、各搭乗者 における、移動体の所有者との関係 (本人、親類、友人、他人など)の特定である。  The passenger identification unit 701 identifies the passenger. The identification of the passenger is, for example, the identification of the relationship (owner, relative, friend, other person, etc.) with the owner of the moving body in each passenger.
[0094] 搭乗者情報取得部 702は、搭乗者特定部 701によって特定された搭乗者に関する 搭乗者情報を取得する。搭乗者情報は、たとえば、搭乗者に関する嗜好、年齢、性 別などの特徴や、搭乗者の配置や人員などの構成を含む情報である。  The passenger information acquisition unit 702 acquires passenger information related to the passenger specified by the passenger identification unit 701. Passenger information is, for example, information including characteristics such as preferences, age, and sex regarding the passenger, and the configuration of the passenger layout and personnel.
[0095] 挙動検出部 705は、搭乗者の挙動を検出する。搭乗者の挙動は、たとえば、眠気 や疲れや体調など搭乗者の身体的状況を含む情報であり、所定の挙動を示す搭乗 者の配置や人員などを含む情報でもよ 、。  [0095] The behavior detection unit 705 detects the behavior of the passenger. The passenger's behavior is information including the physical condition of the passenger such as sleepiness, tiredness, and physical condition, for example, and may include information such as the layout of the passenger exhibiting the predetermined behavior and personnel.
[0096] 地点情報設定部 703は、搭乗者情報取得部 702によって取得された搭乗者情報と 、挙動検出部 705によって検出された搭乗者の挙動の少なくともどちらかに基づいて 、地点情報を設定する。地点情報は、たとえば、経路誘導における目的地点や立ち 寄り地点を含む情報で、図示しない通信部によって取得、あるいは図示しない記録 媒体を参照して設定する。また、地点情報の設定は、目的地点や立ち寄り地点となる 施設に対して、図示しない通信部を介して、予約などの利用に関する要求をおこな Vヽ、要求が受け入れられた地点を設定する構成でもよ!/ヽ。  The point information setting unit 703 sets the point information based on at least one of the passenger information acquired by the passenger information acquisition unit 702 and the behavior of the passenger detected by the behavior detection unit 705. . The point information is, for example, information including a destination point and a stop point in route guidance, which is acquired by a communication unit (not shown) or set with reference to a recording medium (not shown). In addition, the point information is set to a destination or a stop-off facility via a communication unit (not shown), making a request for use such as reservation V ヽ, and the point where the request is accepted is set But!
[0097] 地点情報出力部 704は、地点情報設定部 703によって設定された地点情報を出 力する。地点情報の出力は、たとえば、移動体に搭載されたディスプレイなどの表示 装置やスピーカなどの音響装置などによりおこなう。また、地点情報の出力は、複数 の候補を出力して、ユーザにより目的地点や立ち寄り地点を選択できる構成としても よい。 The point information output unit 704 outputs the point information set by the point information setting unit 703. The location information is output, for example, on a display mounted on a moving object. This is performed by an audio device such as a device or a speaker. The point information may be output by outputting a plurality of candidates and allowing the user to select a destination point or a stop-off point.
[0098] (コンテンツ提供装置の処理の内容)  [0098] (Contents of processing of content providing apparatus)
Next, the processing performed by the content providing apparatus 700 according to the second embodiment will be described with reference to FIG. 8. FIG. 8 is a flowchart showing the processing performed by the content providing apparatus according to the second embodiment. In the flowchart of FIG. 8, the content providing apparatus 700 first determines whether a route guidance instruction has been given (step S801). The route guidance instruction may be given, for example, by the passenger through an operation unit (not shown).

[0099] In step S801, the apparatus waits for a route guidance instruction. When an instruction is given (step S801: Yes), the passenger identification unit 701 identifies the passenger (step S802). Identifying the passenger means, for example, identifying the relationship between the owner of the mobile object and the passenger.

[0100] Next, the passenger information acquisition unit 702 acquires passenger information on the passenger identified in step S802 (step S803). The passenger information is, for example, information including characteristics of the passenger such as preferences, age, and sex, as well as the seating arrangement and number of passengers.

[0101] The behavior detection unit 705 then detects the behavior of the passenger (step S804). The passenger's behavior is information including the passenger's physical condition, such as drowsiness, fatigue, or general health. Although in this flowchart the passenger information is acquired (step S803) before the passenger's behavior is detected (step S804), the behavior may instead be detected (step S804) before the passenger information is acquired (step S803), or either step may be omitted.

[0102] Next, the point information setting unit 703 sets point information based on at least one of the passenger information acquired in step S803 and the passenger behavior detected in step S804 (step S805).

[0103] The point information output unit 704 then outputs the point information set in step S805 (step S806), and the series of processes ends. Although in this flowchart processing such as the acquisition of passenger information and the detection of behavior is performed after a route guidance instruction is given, the trigger is not limited to a route guidance instruction. For example, the processing may be performed when the mobile object starts moving, or at predetermined intervals while the mobile object is moving.

[0104] As described above, according to the second embodiment, point information can be output based on the passenger information and behavior of the passenger without the user setting the point information through an input operation. Point information suited to the passenger can therefore be output efficiently and accurately.
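For illustration only, the flow of steps S802 to S806 can be sketched in Python as follows. The class, method names, and sample data are assumptions introduced here for clarity and do not appear in the disclosure, which does not specify any particular implementation.

```python
# Illustrative sketch of the FIG. 8 flow (steps S802-S806).
# All names and data are hypothetical placeholders.

class ContentProvidingDevice:
    """Minimal stand-in for content providing apparatus 700."""

    def __init__(self, passenger_db, point_db):
        self.passenger_db = passenger_db  # passenger -> passenger information
        self.point_db = point_db          # (preference, behavior) -> point info

    def identify_passenger(self):                 # step S802
        return "owner"                            # e.g. owner/passenger relation

    def acquire_passenger_info(self, passenger):  # step S803
        return self.passenger_db[passenger]       # preferences, age, sex, ...

    def detect_behavior(self, passenger):         # step S804
        return "drowsy"                           # e.g. drowsiness or fatigue

    def set_point_info(self, info, behavior):     # step S805
        return self.point_db[(info["preference"], behavior)]

    def provide(self):                            # steps S802-S806
        passenger = self.identify_passenger()
        info = self.acquire_passenger_info(passenger)
        behavior = self.detect_behavior(passenger)
        return self.set_point_info(info, behavior)  # step S806: output

device = ContentProvidingDevice(
    passenger_db={"owner": {"preference": "coffee"}},
    point_db={("coffee", "drowsy"): "roadside cafe with parking"},
)
print(device.provide())  # → roadside cafe with parking
```

The sketch only fixes the order of the steps; as paragraph [0101] notes, steps S803 and S804 may be swapped or one of them omitted.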
Example 2

[0105] Example 2 of the present invention will be described below. Example 2 describes one case in which the content providing apparatus of the present invention is implemented by the navigation apparatus 300 shown in FIG. 3. Descriptions of the parts of the hardware configuration of the navigation apparatus 300 that are the same as in FIG. 3 are omitted. The interior of the vehicle in which the navigation apparatus 300 is mounted is substantially the same as in FIG. 4, and its description is therefore also omitted.
[0106] (Hardware configuration of the navigation apparatus 300)

In FIG. 3, the map information recorded on the recording medium 305 includes destination point information, stop-off point information, and the like. The destination point information and stop-off point information are associated with, for example, the passenger information of a passenger identified based on the passenger's registered identification information, or with a specific behavior state of the passenger determined based on the passenger's specific behavior information. The destination point information and stop-off point information are read out according to the results of the passenger identification and behavior determination performed by the navigation control unit 301. The passenger information is, for example, information including characteristics such as preferences, age, and sex, as well as the seating arrangement and number of passengers. The specific behavior state is, for example, a physical state of the passenger such as drowsiness, fatigue, or illness.

[0107] The destination point information and stop-off point information also include facility information on restaurants, amusement facilities, and the like. The facility information includes, for example, discount information according to the number or ages of the passengers, recommendations, and seat availability at the facility, and may be acquired from the facility through the communication unit 308 described later.

[0108] The departure point of a route searched for by the route search unit 309 is set to the current position acquired by the position acquisition unit 304, a departure point designated by the user through the user operation unit 302, or the like. The destination point may be a point designated by the user or a facility found by searching the map data by genre or the like; alternatively, the destination point may be set from the destination point information associated with the passenger information or the specific behavior state of the passenger described above.
[0109] Further, a stop-off point may be set on the route searched for by the route search unit 309. The stop-off point may be, for example, a point designated by the user through the user operation unit 302, or a point set from the stop-off point information associated with the passenger information or the specific behavior state of the passenger described above.

[0110] When a destination point or stop-off point is set, a request concerning the use of a facility may be made through the communication unit 308 as necessary, and a point for which the request has been accepted may be set. The request concerning the use of a facility may, for example, inquire about seat availability to confirm that the facility can accommodate the passengers, reserve a restaurant offering a discount based on the number of passengers, or reserve an amusement facility so as to reduce waiting time; a facility that has been confirmed or reserved in this way may then be set as the destination point or stop-off point.

[0111] Among the functional components of the content providing apparatus 700 according to the second embodiment, the passenger identification unit 701, the passenger information acquisition unit 702, and the behavior detection unit 705 are implemented by the navigation control unit 301 and the passenger photographing unit 313; the point information setting unit 703 is implemented by the route search unit 309; and the point information output unit 704 is implemented by the display unit 303 and the speaker 312.
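The accept-or-skip logic of paragraph [0110] can be illustrated with a small sketch. The facility-side interface is entirely hypothetical; the disclosure only requires that a request be sent through the communication unit 308 and that the point be set when the request is accepted.

```python
# Hypothetical sketch of paragraph [0110]: set a point only if the
# facility accepts a request for the current party size.

def try_set_point(facility, party_size, request_facility):
    # request_facility stands in for an inquiry or reservation sent via
    # the communication unit (e.g. a seat-availability check).
    accepted = request_facility(facility, party_size)
    return facility if accepted else None

def demo_request(facility, party_size):
    # Placeholder facility: pretend it accepts parties of up to four.
    return party_size <= 4

print(try_set_point("family restaurant", 3, demo_request))  # → family restaurant
print(try_set_point("family restaurant", 6, demo_request))  # → None
```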
[0112] (Processing performed by the navigation apparatus 300)

Next, the processing performed by the navigation apparatus 300 according to Example 2 will be described with reference to FIGS. 9 and 10. FIG. 9 is a flowchart showing processing using passenger information in the navigation apparatus according to Example 2. This flowchart describes processing that sets a destination point using passenger information. In the flowchart of FIG. 9, the navigation apparatus 300 first determines whether a route guidance request has been made (step S901). The route guidance request may be made, for example, by the passenger operating the user operation unit 302.

[0113] In step S901, the apparatus waits for a route guidance request. When a request is made (step S901: Yes), the passenger photographing unit 313 photographs an image of the passenger (step S902). For example, a still image of the passenger's face is photographed.

[0114] The navigation control unit 301 then generates identification information on the passenger from the passenger image photographed in step S902 (step S903). The identification information includes, for example, information obtained by extracting feature points of the passenger's face, and is compared with registered identification information recorded in advance on the recording medium 305.
[0115] Next, the navigation control unit 301 compares the registered identification information recorded in advance on the recording medium 305 with the identification information generated in step S903, and determines whether the two match (step S904). If they match (step S904: Yes), the navigation control unit 301 reads, from the recording medium 305, the destination point information associated with the passenger information of the passenger whose registered identification information matched the generated identification information (step S905). The destination point information associated with the passenger information relates, for example, to the passenger's preferences, and destination point information such as restaurants the passenger likes may be read. Alternatively, information indicating that the passenger has visited a point in the past may be managed in association with the destination point information, and that destination point information may be excluded so that other destination point information is read. Whether the passenger has visited a point in the past can be determined, for example, from the point having been set as a destination point of the vehicle in the past and the vehicle having actually arrived there. Furthermore, the passenger may assign his or her own rating to destination point information for points visited in the past, and destination point information with a high rating may be read. The rating may be assigned, for example, by having the passenger enter a score reflecting his or her impression of a destination point, with high-scoring points treated as highly rated destination points.
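The selection rules of paragraph [0115] — excluding previously visited points and preferring highly rated ones — can be sketched as follows. The record fields (`visited`, `rating`) and the rating threshold are illustrative assumptions; the disclosure does not prescribe a data layout.

```python
# Hypothetical sketch of step S905: filtering the destination point
# information associated with the identified passenger.

def read_destination_info(records, exclude_visited=True, min_rating=None):
    candidates = records
    if exclude_visited:
        # A point counts as "visited" if it was set as a destination in
        # the past and the vehicle actually arrived there.
        candidates = [r for r in candidates if not r.get("visited", False)]
    if min_rating is not None:
        # Keep only points the passenger rated highly.
        candidates = [r for r in candidates if r.get("rating", 0) >= min_rating]
    return candidates

records = [
    {"name": "restaurant A", "visited": True, "rating": 5},
    {"name": "restaurant B", "visited": False, "rating": 2},
    {"name": "restaurant C", "visited": False, "rating": 4},
]
print([r["name"] for r in read_destination_info(records, min_rating=3)])
# → ['restaurant C']
```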
[0116] The route search unit 309 then sets a destination point based on the destination point information read in step S905 (step S906).

[0117] The route guidance unit 310 then generates route guidance information for guiding the user to the destination point set in step S906 and outputs it to the navigation control unit 301. The navigation control unit 301 controls the navigation apparatus 300 based on the route guidance information to guide the user along the route (step S907), and the series of processes ends.
[0118] If the identification information does not match in step S904 (step S904: No), it is determined whether to register the passenger (step S908). For example, a message requesting registration may be displayed on the display unit 303 to prompt the passenger to decide whether to register.

[0119] If the passenger is not registered in step S908 (step S908: No), the navigation control unit 301 outputs a destination point selection request to the display unit 303 (step S910) and accepts a selection of a destination point from the passenger (step S911). The route search unit 309 then sets the destination point based on the selection made in step S911 (step S906). The route guidance unit 310 generates route guidance information for guiding the user to the destination point set in step S906 and outputs it to the navigation control unit 301. The navigation control unit 301 controls the navigation apparatus 300 based on the route guidance information to guide the user along the route (step S907), and the series of processes ends.

[0120] If the passenger is to be registered in step S908 (step S908: Yes), a message prompting registration is displayed on the display unit 303 or the like, and the passenger's registered identification information is registered (step S909). The registered identification information is registered by, for example, extracting feature points from a passenger image photographed by the passenger photographing unit 313. The passenger information may be registered at the same time by the user operating the user operation unit 302. The process then returns to step S904, and the same processing is repeated.
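The match-or-register branch of FIG. 9 (steps S903, S904, S908, S909) can be sketched as a simple lookup table. The feature representation is a placeholder string; the disclosure only requires that extracted facial feature points be comparable with registered identification information.

```python
# Illustrative sketch of the identification branch of FIG. 9.
# "registered" stands in for the registered identification information
# and associated passenger information on the recording medium 305.

registered = {}  # identification features -> passenger information

def match_or_register(features, passenger_info=None):
    # Step S904: compare generated features with registered entries.
    if features in registered:
        return registered[features]          # S904: Yes -> read passenger info
    if passenger_info is not None:           # S908: Yes -> register (S909)
        registered[features] = passenger_info
        return registered[features]
    return None                              # S908: No -> manual selection (S910)

# First trip: unknown face, passenger chooses to register.
match_or_register("face-features-1", {"preference": "sushi"})
# Later trip: the same features now match.
print(match_or_register("face-features-1"))  # → {'preference': 'sushi'}
```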
[0121] In the description of FIG. 9, the passenger image is photographed (step S902) after a route guidance request is made (step S901: Yes), but the passenger image may instead be photographed (step S902) before a route guidance request is made. For example, the passenger image may be photographed in advance (step S902) at boarding, when the vehicle engine is started, or when the passenger performs an operation, after which the apparatus waits for a route guidance request.

[0122] Also, in the description of FIG. 9, the passenger image is photographed and the identification information is generated from information obtained by extracting feature points of the passenger's face; however, instead of identification information on an individual passenger, the number or composition of the passengers may be generated as the identification information. Specifically, the number and composition of the passengers may be identified so that, for example, boarding by a large group or boarding by a family with children can be recognized.

[0123] In step S905, the destination point information associated with the passenger information relates, for example, to the passenger's preferences, but it may also be associated with age, sex, seating arrangement, number of passengers, history, and the like. More specifically, the destination point information may be associated with restaurants for elderly people if the passenger is elderly, facilities the passenger has visited in the past, shops the passenger has found favorable in the past, shops offering discounts or recommended services according to the number of passengers, facilities for families if the passengers are a family with children, or amusement facilities if the passengers are a group of friends.

[0124] The setting of the destination point in step S906 may also be configured so that a request concerning the use of a facility is made through the communication unit 308 as necessary and a point for which the request has been accepted is set. More specifically, a restaurant or amusement facility may be reserved according to the number of passengers before the destination point is set.
[0125] Next, a case in which the passenger photographing unit 313 photographs the behavior of the passenger will be described. FIG. 10 is a flowchart showing processing using behavior information in the navigation apparatus according to Example 2. This flowchart describes processing that sets a stop-off point using behavior information. In the flowchart of FIG. 10, the navigation apparatus 300 first determines whether a route guidance request has been made (step S1001). The route guidance request may be made, for example, by the passenger operating the user operation unit 302.

[0126] In step S1001, the apparatus waits for a route guidance request. When a request is made (step S1001: Yes), the passenger photographing unit 313 photographs the behavior of the passenger (step S1002). For example, the movement of the passenger's eyes may be photographed.

[0127] The navigation control unit 301 then generates behavior information on the passenger from the eye movement photographed in step S1002 (step S1003). The behavior information includes, for example, information obtained by extracting feature points of the passenger's eye movement, and is compared with specific behavior information recorded in advance on the recording medium 305, which includes characteristics of eye movement associated with drowsiness, fatigue, and the like.
[0128] Next, the navigation control unit 301 compares the specific behavior information registered in advance on the recording medium 305 with the behavior information generated in step S1003, and determines whether the passenger is in a specific behavior state (step S1004). If the passenger is in a specific behavior state (step S1004: Yes), the navigation control unit 301 reads the stop-off point information associated with that specific behavior state (step S1005). Specifically, for example, if the passenger shows drowsy behavior and the passenger is the driver, points where a rest can be taken may be read from the recording medium 305 as the stop-off point information.

[0129] Next, the route search unit 309 sets a stop-off point based on the stop-off point information read in step S1005 (step S1006). The route guidance unit 310 then generates route guidance information for guiding the user to the stop-off point set in step S1006 and outputs it to the navigation control unit 301. The navigation control unit 301 controls the navigation apparatus 300 based on the route guidance information to guide the user along the route (step S1007), and the series of processes ends.

[0130] If the passenger is not in a specific behavior state in step S1004 (step S1004: No), the process returns to step S1002, and the same processing is repeated. If the passenger is not in a specific behavior state, the passenger may be notified of this and prompted to set a stop-off point. Alternatively, when no specific behavior state is detected, rest points may be set in advance as stop-off points at predetermined time intervals, and the route guided accordingly.
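The loop of FIG. 10 (steps S1002 to S1006) amounts to sampling behavior until a specific behavior state is recognized and then reading the associated stop-off point information. The following sketch illustrates this; the behavior labels and the mapping to stop-off points are assumptions made here for clarity.

```python
# Hypothetical sketch of the FIG. 10 loop (steps S1002-S1006).
# The table stands in for the specific behavior information and the
# associated stop-off point information on the recording medium 305.

SPECIFIC_BEHAVIOR_TO_STOP = {
    "drowsy": "nearest rest area",
    "tired": "nearest cafe",
}

def choose_stop_off_point(behavior_samples):
    for behavior in behavior_samples:                   # S1002/S1003
        if behavior in SPECIFIC_BEHAVIOR_TO_STOP:       # S1004: match?
            return SPECIFIC_BEHAVIOR_TO_STOP[behavior]  # S1005/S1006
    return None  # no specific behavior state observed

print(choose_stop_off_point(["alert", "alert", "drowsy"]))
# → nearest rest area
```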
[0131] In the description of FIG. 10, the behavior of the passenger is photographed (step S1002) after a route guidance request is made (step S1001: Yes), but the behavior may instead be photographed (step S1002) before a route guidance request is made. For example, the passenger's behavior may be photographed in advance (step S1002) at boarding, when the vehicle engine is started, when the passenger performs an operation, or at predetermined intervals during travel, after which the apparatus waits for a route guidance request. Alternatively, the passenger's behavior may be photographed at predetermined intervals, and when behavior such as fatigue or drowsiness is detected, the passenger may be promptly guided to a rest point.

[0132] Also, in the description of FIG. 10, the behavior information is generated by photographing the movement of the passenger's eyes in step S1002, but behavior information may also be generated from the opening and closing of windows, the movement of the passenger's whole body, the sound volume in the vehicle, and the like. For example, a passenger opening a window may be treated as behavior indicating feeling hot or unwell, and behavior information indicating noisiness may be generated from whole-body movement and sound volume.

[0133] The setting of the stop-off point in step S1006 may also be configured so that a request concerning the use of a facility is made through the communication unit 308 as necessary and a point for which the request has been accepted is set. More specifically, a restaurant at which to take a rest may be reserved according to the number of passengers before the stop-off point is set.
[0134] In Example 2, the processing using passenger information and the processing using behavior information have been described separately with reference to FIGS. 9 and 10, but the two functions may be combined into a single process.

[0135] Also, in Example 2, the passenger is photographed by the passenger photographing unit 313, such as a camera, to generate the passenger's identification information or behavior information; however, instead of photographing the passenger, the identification information identifying the passenger or the behavior information on the passenger may be generated from information acquired by other sensors.

[0136] The identification information or behavior information may be generated using, for example, a seating sensor that detects the load distribution or total load on the seat in which the passenger sits. The seating sensor can acquire information on the number and physiques of the passengers. One or more fingerprint sensors may also be provided at predetermined positions in the vehicle; the fingerprint sensors can acquire the passengers' fingerprint information to identify the passengers. A sound sensor such as a microphone may also be provided in the vehicle; the sound sensor can acquire audio information such as the passengers' volume, voice quality, and pitch, from which the passengers can be identified and the number of passengers, their sex, drowsiness, and the like can be determined.

[0137] Personal information may also be acquired from a mobile phone owned by the passenger. Alternatively, information such as the number of passengers may be acquired by detecting the fastening and unfastening of seat belts or child seats.
[0138] As described above, according to the second embodiment, point information can be output based on the passenger information and behavior of the passenger without the user setting the point information through an input operation. Point information suited to the passenger can therefore be output efficiently and accurately.

[0139] Further, according to Example 2, the passenger is photographed to generate the passenger's identification information and behavior information. Since the identification information and behavior information can be compared with the registered identification information and specific behavior information to set a destination point or stop-off point, a point optimal for the passenger's situation can be set without the passenger setting it personally.

[0140] Further, when a destination point or stop-off point is set, a facility can be reserved according to the number of passengers, so that wasteful waiting time after arrival at the destination point or stop-off point can be reduced. In addition, since discount information, recommendations, seat availability at facilities, and the like can be acquired according to the number and ages of the passengers, a destination point or stop-off point optimal for the passengers can be set.
[0141] Further, since information acquired by various means such as seating sensors, fingerprint sensors, and sound sensors can be used instead of photographing the passenger, versatility can be improved.

[0142] Further, since a stop-off point is set according to the behavior of the passenger, guiding a driver overcome by drowsiness to a point where a rest can be taken helps relieve the drowsiness and promotes safe driving.

[0143] Further, since a destination point or stop-off point can be set based on the passenger's history of destination points or stop-off points, the passenger can, for example, be prevented from being repeatedly guided to recently visited points, or be guided to points the passenger has found favorable. Point setting can therefore be optimized without the passenger setting the destination point or stop-off point personally.

[0144] The content providing methods described in the first and second embodiments can be realized by executing a program prepared in advance on a computer such as a personal computer or workstation. The program is recorded on a computer-readable recording medium such as a hard disk, flexible disk, CD-ROM, MO, or DVD, and is executed by being read from the recording medium by the computer. The program may also be a transmission medium that can be distributed via a network such as the Internet.

Claims

[1] A content providing apparatus that provides content in a mobile object, the apparatus comprising:
specifying means for specifying a passenger of the mobile object;
passenger information acquiring means for acquiring information on the passenger specified by the specifying means (hereinafter "passenger information");
determining means for determining content to be output, based on the passenger information acquired by the passenger information acquiring means; and
output means for outputting the content determined by the determining means.
[2] 前記コンテンツは地点に関する情報 (以下「地点情報」 、う)であり、  [2] The content is information about the location (hereinafter “location information”).
前記決定手段は、前記搭乗者情報に基づいて、出力する地点情報を決定し、 前記出力手段は、前記決定手段によって決定された地点情報を出力することを特 徴とする請求項 1に記載のコンテンツ提供装置。  The said determination means determines the spot information to output based on the said passenger information, The said output means outputs the spot information determined by the said determination means, The feature of Claim 1 characterized by the above-mentioned. Content providing device.
[3] 前記コンテンツは地点に関する情報 (以下「地点情報」 、う)であり、 [3] The content is information about the location (hereinafter “location information”).
前記決定手段は、前記搭乗者情報に基づいて、地点情報を設定する設定手段を 備え、  The determining means comprises setting means for setting point information based on the passenger information,
前記出力手段は、前記設定手段によって設定された地点情報を出力することを特 徴とする請求項 1に記載のコンテンツ提供装置。  The content providing apparatus according to claim 1, wherein the output unit outputs the spot information set by the setting unit.
[4] 前記搭乗者情報は、前記搭乗者の嗜好を含む情報であることを特徴とする請求項4. The occupant information is information including a preference of the occupant.
1〜3のいずれか一つに記載のコンテンツ提供装置。 The content providing apparatus according to any one of 1 to 3.
[5] 前記搭乗者情報は、前記搭乗者のコンテンツの視聴履歴を含む情報であることを 特徴とする請求項 1〜3のいずれか一つに記載のコンテンツ提供装置。 5. The content providing device according to any one of claims 1 to 3, wherein the passenger information is information including a viewing history of the content of the passenger.
[6] 前記搭乗者情報は、前記搭乗者の人員を含む情報であることを特徴とする請求項6. The passenger information is information including personnel of the passenger.
1〜3のいずれか一つに記載のコンテンツ提供装置。 The content providing apparatus according to any one of 1 to 3.
[7] 前記地点情報は、前記搭乗者が利用する施設の情報を含むものであり、 [7] The point information includes information on facilities used by the passenger,
前記設定手段は、前記施設の利用に関する要求をおこなうことを特徴とする請求項 The said setting means makes the request | requirement regarding utilization of the said facility.
3に記載のコンテンッ提供装置。 4. The content providing device according to 3.
[8] 移動体においてコンテンツを提供するコンテンツ提供装置において、 前記搭乗者の挙動を検出する挙動検出手段と、 [8] In a content providing apparatus that provides content on a mobile object, Behavior detecting means for detecting the behavior of the passenger;
前記挙動検出手段によって検出された検出結果に基づいて、出力するコンテンツ を決定する決定手段と、  Determining means for determining content to be output based on the detection result detected by the behavior detecting means;
前記決定手段によって決定されたコンテンツを出力する出力手段と、  Output means for outputting the content determined by the determination means;
を備えることを特徴とするコンテンツ提供装置。  A content providing apparatus comprising:
[9] 前記コンテンツは地点に関する情報 (以下「地点情報」 、う)であり、  [9] The content is information about the location (hereinafter “location information”).
前記決定手段は、前記検出結果に基づいて、出力する地点情報を設定する設定 手段を備え、  The determining means includes setting means for setting point information to be output based on the detection result,
前記出力手段は、前記設定手段によって設定された地点情報を出力することを特 徴とする請求項 8に記載のコンテンッ提供装置。  9. The content providing apparatus according to claim 8, wherein the output unit outputs the spot information set by the setting unit.
[10] 移動体においてコンテンツを提供するコンテンツ提供方法において、 [10] In a content providing method for providing content on a mobile object,
前記移動体の搭乗者を特定する特定工程と、  A specifying step of specifying a passenger of the moving body;
前記特定工程によって特定された搭乗者情報を取得する搭乗者情報取得工程と、 前記搭乗者情報取得工程によって取得された搭乗者情報に基づ 、て、出力するコ ンテンッを決定する決定工程と、  A passenger information acquiring step for acquiring the passenger information specified by the specifying step; a determining step for determining content to be output based on the passenger information acquired by the passenger information acquiring step;
前記決定工程によって決定されたコンテンツを出力する出力工程と、  An output step of outputting the content determined by the determination step;
を含むことを特徴とするコンテンツ提供方法。  A content providing method comprising:
[11] 移動体においてコンテンツを提供するコンテンツ提供方法において、 [11] In a content providing method for providing content on a mobile object,
前記搭乗者の挙動を検出する挙動検出工程と、  A behavior detecting step for detecting the behavior of the passenger;
前記挙動検出工程によって検出された検出結果に基づいて、出力するコンテンツ を決定する決定工程と、  A determination step of determining content to be output based on the detection result detected by the behavior detection step;
前記決定工程によって決定されたコンテンツを出力する出力工程と、  An output step of outputting the content determined by the determination step;
を含むことを特徴とするコンテンツ提供方法。  A content providing method comprising:
[12] 請求項 10または 11に記載のコンテンツ提供方法をコンピュータに実行させることを 特徴とするコンテンッ提供プログラム。 [12] A content providing program that causes a computer to execute the content providing method according to claim 10 or 11.
[13] 請求項 12に記載のコンテンツ提供プログラムを記録したことを特徴とするコンビユー タに読み取り可能な記録媒体。 [13] A computer-readable recording medium in which the content providing program according to claim 12 is recorded.
PCT/JP2006/315405 2005-08-19 2006-08-03 Content providing device, content providing method, content providing program, and computer readable recording medium WO2007020808A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2005238718 2005-08-19
JP2005238717 2005-08-19
JP2005-238718 2005-08-19
JP2005-238717 2005-08-19

Publications (1)

Publication Number Publication Date
WO2007020808A1 true WO2007020808A1 (en) 2007-02-22

Family

ID=37757471

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/315405 WO2007020808A1 (en) 2005-08-19 2006-08-03 Content providing device, content providing method, content providing program, and computer readable recording medium

Country Status (1)

Country Link
WO (1) WO2007020808A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015195014A (en) * 2014-03-28 2015-11-05 Panasonic Intellectual Property Corporation of America Information presentation method
JP2019164477A (en) * 2018-03-19 2019-09-26 Honda Motor Co., Ltd. Information provision system, information provision method and program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003306091A (en) * 2002-04-15 2003-10-28 Nissan Motor Co Ltd Driver determining device
JP2004037292A (en) * 2002-07-04 2004-02-05 Sony Corp Navigation apparatus, service apparatus of navigation apparatus and service provision method by navigation apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003306091A (en) * 2002-04-15 2003-10-28 Nissan Motor Co Ltd Driver determining device
JP2004037292A (en) * 2002-07-04 2004-02-05 Sony Corp Navigation apparatus, service apparatus of navigation apparatus and service provision method by navigation apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015195014A (en) * 2014-03-28 2015-11-05 Panasonic Intellectual Property Corporation of America Information presentation method
JP2019164477A (en) * 2018-03-19 2019-09-26 Honda Motor Co., Ltd. Information provision system, information provision method and program
CN110287386A (en) * 2018-03-19 2019-09-27 本田技研工业株式会社 Information providing system, information providing method and medium

Similar Documents

Publication Publication Date Title
JP4533897B2 (en) PROCESS CONTROL DEVICE, ITS PROGRAM, AND RECORDING MEDIUM CONTAINING THE PROGRAM
US10332495B1 (en) In vehicle karaoke
EP1267590A2 (en) Contents presenting system and method
WO2007032278A1 (en) Path search apparatus, path search method, path search program, and computer readable recording medium
JP2007108134A (en) Content data regeneration device
JP3752159B2 (en) Information recording device
JP2024041746A (en) information processing equipment
JP4903505B2 (en) Navigation apparatus and method, navigation program, and storage medium.
US20220299334A1 (en) Recommendation information providing method and recommendation information providing system
JP2019132928A (en) Music providing device for vehicle
WO2007023900A1 (en) Content providing device, content providing method, content providing program, and computer readable recording medium
WO2007020808A1 (en) Content providing device, content providing method, content providing program, and computer readable recording medium
WO2006095688A1 (en) Information reproduction device, information reproduction method, information reproduction program, and computer-readable recording medium
JP6785889B2 (en) Service provider
WO2007043464A1 (en) Output control device, output control method, output control program, and computer-readable recording medium
JP6884605B2 (en) Judgment device
JP6642401B2 (en) Information provision system
WO2007108337A1 (en) Content reproduction device, content reproduction method, content reproduction program, and computer-readable recording medium
CN113450177A (en) Information providing system, information providing apparatus, control method thereof, server, and recording medium
JP2022054821A (en) Video editing device
JP2008305239A (en) Communication device and program
US11413530B2 (en) Information processing apparatus, information processing system, non-transitory computer readable medium, and vehicle
JP4741933B2 (en) Information providing apparatus, information providing method, information providing program, and computer-readable recording medium
JP2001304896A (en) Vehicular navigation device
JP2019163984A (en) Information provider and method for controlling the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06782265

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP