WO2007023900A1 - Content providing device, content providing method, content providing program, and computer readable recording medium - Google Patents


Info

Publication number
WO2007023900A1
Authority
WO
WIPO (PCT)
Prior art keywords
passenger
content
information
content providing
output
Prior art date
Application number
PCT/JP2006/316614
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroaki Shibasaki
Original Assignee
Pioneer Corporation
Priority date
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Publication of WO2007023900A1 publication Critical patent/WO2007023900A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/023 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • B60R16/0231 Circuits relating to the driving or the functioning of the vehicle

Definitions

  • Content providing apparatus, content providing method, content providing program, and computer-readable recording medium
  • The present invention relates to a content providing apparatus that provides content in a mobile body, a content providing method, a content providing program, and a computer-readable recording medium.
  • However, the present invention is not limited to the above-described content providing apparatus, content providing method, content providing program, and computer-readable recording medium.
  • Conventionally, in a mobile body such as a vehicle, a passenger of the mobile body can view various contents via a display device such as a display and an acoustic device such as a speaker mounted on the mobile body.
  • The various types of content include, for example, radio and television broadcasts and music and video recorded on recording media such as CD (Compact Disk) and DVD (Digital Versatile Disk), and the passenger adjusts the volume and the display screen as appropriate to view the content.
  • Meanwhile, with regard to content viewing, it has been proposed to store each passenger's preferences concerning the in-vehicle environment (such as the positions and operating states of in-vehicle equipment) as profile information on an ID (IDentification) card using an IC (Integrated Circuit) card, and to adjust the in-vehicle environment by reading each passenger's profile information from the ID card.
  • Patent Document 1: Japanese Patent Application Laid-Open No. 2002-104105
  • However, with the above-described conventional technology, each passenger needs to create and carry an ID card storing profile information and to take measures to prevent loss of or damage to the ID card, which increases the burden on the passenger.
  • Another problem is that, because the in-vehicle environment is adjusted using the ID card, the in-vehicle environment cannot be set according to the preferences of passengers who do not carry an ID card.
  • The content providing apparatus according to the invention of claim 1 is a content providing apparatus that provides content in a mobile body, comprising: identifying means for identifying a passenger of the mobile body; passenger information acquiring means for acquiring information relating to the passenger identified by the identifying means (hereinafter referred to as "passenger information"); output means for outputting the content; and control means for controlling the output means so as to output the content based on the passenger information acquired by the passenger information acquiring means.
  • The content providing apparatus according to the invention of claim 4 is a content providing apparatus that provides content in a mobile body, comprising: behavior detecting means for detecting the behavior of the passenger; output means for outputting the content; and control means for controlling the output means so as to output the content based on a result detected by the behavior detecting means.
  • The content providing method according to the invention of claim 6 is a content providing method for providing content in a mobile body, including: an identifying step of identifying a passenger of the mobile body; a passenger information acquiring step of acquiring information relating to the passenger identified in the identifying step (hereinafter referred to as "passenger information"); and a control step of controlling the output of the content based on the passenger information acquired in the passenger information acquiring step.
  • The content providing method according to the invention of claim 7 is a content providing method for providing content in a mobile body, including: a behavior detecting step of detecting the behavior of the passenger; and a control step of controlling the output of the content based on a result detected in the behavior detecting step.
  • The content providing program according to the invention of claim 8 causes a computer to execute the content providing method according to claim 6 or 7.
  • The computer-readable recording medium according to the invention of claim 9 records the content providing program according to claim 8.
  • FIG. 1 is a block diagram showing an example of a functional configuration of a content providing apparatus according to the present embodiment.
  • FIG. 2 is a flowchart showing the contents of processing of the content providing apparatus according to the present embodiment.
  • FIG. 3 is a block diagram showing an example of a hardware configuration of a navigation device according to the present example.
  • FIG. 4 is an explanatory diagram showing an example of the interior of a vehicle equipped with a navigation device according to the present example.
  • FIG. 5 is a flowchart showing the contents of processing using passenger information in the navigation device according to the present example.
  • FIG. 6 is a flowchart showing the contents of processing using behavior information in the navigation device according to the present example.
  • FIG. 1 is a block diagram showing an example of the functional configuration of the content providing apparatus according to the present embodiment.
  • In FIG. 1, the content providing apparatus 100 includes a passenger identification unit 101, a passenger information acquisition unit 102, an output control unit 103, a content output unit 104, and a behavior detection unit 105.
  • The passenger identification unit 101 identifies a passenger. The identification of a passenger is, for example, the identification of each passenger's relationship to the owner of the mobile body (the owner himself or herself, a relative, a friend, another person, and so on).
  • The passenger information acquisition unit 102 acquires passenger information relating to the passenger identified by the passenger identification unit 101. The passenger information is, for example, information including characteristics of the passenger such as preferences, age, and gender, and the configuration of the passengers such as their seating arrangement and number.
  • The behavior detection unit 105 detects the behavior of the passenger. The passenger's behavior is information including the passenger's physical state such as sleepiness, fatigue, and physical condition, and may also be information including the arrangement and number of passengers exhibiting a predetermined behavior.
  • The output control unit 103 controls the output of the content based on at least one of the passenger information acquired by the passenger information acquisition unit 102 and the behavior of the passenger detected by the behavior detection unit 105. The output control, for example, controls audio and display: the volume and sound quality of the audio, or the size of subtitles and the brightness of the display. It may also be configured to switch the audio and display output on and off.
  • The content output unit 104 outputs the content according to the control of the output control unit 103. The content is output, for example, through a display device such as a display or an acoustic device such as a speaker mounted on the mobile body. A minimal sketch of this configuration is given below.
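  • As an illustration only (not part of the original disclosure), the following Python sketch models the roles of the units in FIG. 1; all class, field, and function names, and the example rules, are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PassengerInfo:
    """Passenger information acquired by the passenger information acquisition unit 102."""
    relationship: str = "other"          # e.g. "owner", "relative", "friend", "other"
    age: Optional[int] = None
    gender: Optional[str] = None
    preferences: dict = field(default_factory=dict)

@dataclass
class OutputForm:
    """Output form decided by the output control unit 103."""
    volume: float = 0.5        # 0.0 (mute) .. 1.0 (max)
    subtitle_scale: float = 1.0
    brightness: float = 1.0
    audio_on: bool = True
    display_on: bool = True

def decide_output_form(info: Optional[PassengerInfo], behavior: Optional[str]) -> OutputForm:
    """Decide an output form from passenger information and detected behavior (illustrative rules only)."""
    form = OutputForm()
    if info is not None and info.age is not None and info.age >= 65:
        form.volume = 0.3          # quieter, milder output for elderly passengers
        form.subtitle_scale = 1.5  # larger subtitles
    if behavior == "sleepy":
        form.volume = 0.2
        form.brightness = 0.4      # dim the display for a drowsy passenger
    return form
```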
  • FIG. 2 is a flowchart showing the contents of the processing of the content providing apparatus according to the present embodiment.
  • In the flowchart of FIG. 2, first, the content providing apparatus 100 determines whether or not there is an instruction to provide content (step S201). The content provision instruction may, for example, be configured such that the passenger designates the type of content and the like from an operation unit (not shown).
  • In step S201, the apparatus waits for a content provision instruction; when an instruction is given (step S201: Yes), the passenger identification unit 101 identifies the passenger (step S202). The identification of the passenger is, for example, the identification of the relationship between the owner of the mobile body and the passenger.
  • Next, the passenger information acquisition unit 102 acquires passenger information relating to the passenger identified in step S202 (step S203). The passenger information is, for example, information including characteristics of the passenger such as preferences, age, and gender, and the configuration of the passengers such as their seating arrangement and number.
  • The behavior detection unit 105 also detects the behavior of the passenger (step S204). The passenger's behavior is information including the passenger's physical state such as sleepiness, fatigue, and physical condition.
  • In this flowchart, the passenger information is acquired (step S203) and then the passenger's behavior is detected (step S204); however, the order may be reversed, with the behavior detected (step S204) before the passenger information is acquired (step S203), or one of the two steps may be omitted.
  • Subsequently, the content output unit 104 outputs the content according to the control of the output control unit 103 (step S205). The output control is performed, for example, by the output control unit 103 based on at least one of the passenger information acquired in step S203 and the behavior of the passenger detected in step S204. Then, the series of processing ends.
  • In the above flowchart, the apparatus waits for a content provision instruction and, when an instruction is given (step S201: Yes), identifies the passenger (step S202), acquires the passenger information (step S203), and detects the behavior (step S204); however, steps S202 to S204 may be performed before the content provision instruction is given. For example, the passenger may be identified (step S202), the passenger information acquired (step S203), and the behavior detected (step S204) in advance at the time of boarding, and the apparatus may then wait for a content provision instruction and, when the instruction is given (step S201: Yes), control and output the content (step S205). A sketch of this flow follows.
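  • A minimal Python sketch of the FIG. 2 flow, assuming the hypothetical helpers and the decide_output_form function sketched above (none of these names come from the patent):

```python
def provide_content(device) -> None:
    """One pass through the FIG. 2 flow; `device` is a hypothetical facade over the apparatus."""
    if not device.has_provision_instruction():       # step S201
        return
    passenger = device.identify_passenger()          # step S202
    info = device.acquire_passenger_info(passenger)  # step S203 (order with S204 may be swapped)
    behavior = device.detect_behavior(passenger)     # step S204 (either step may be omitted)
    form = decide_output_form(info, behavior)        # output control by the output control unit 103
    device.output_content(form)                      # step S205
```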
  • As described above, according to the content providing apparatus, content providing method, content providing program, and computer-readable recording medium of the present embodiment, the output can be controlled based on the passenger information and the behavior of the passenger without the user having to control the content output through his or her own input operations. Therefore, comfortable content can be provided to passengers efficiently.
  • FIG. 3 is a block diagram showing an example of a hardware configuration of the navigation device according to the present example.
  • In FIG. 3, a navigation device 300 is mounted on a mobile body such as a vehicle, and includes a navigation control unit 301, a user operation unit 302, a display unit 303, a position acquisition unit 304, a recording medium 305, a recording medium decoding unit 306, an audio output unit 307, a communication unit 308, a route search unit 309, a route guidance unit 310, an audio generation unit 311, a speaker 312, a passenger photographing unit 313, and an audio processing unit 314.
  • The navigation control unit 301 controls the entire navigation device 300. The navigation control unit 301 can be realized by, for example, a microcomputer composed of a CPU (Central Processing Unit) that executes predetermined arithmetic processing, a ROM (Read Only Memory) that stores various control programs, and a RAM (Random Access Memory) that functions as a work area for the CPU.
  • During route guidance, the navigation control unit 301 calculates, based on the information on the current position acquired by the position acquisition unit 304 and the map information obtained from the recording medium 305 via the recording medium decoding unit 306, which position on the map the vehicle is traveling at, and outputs the calculation result to the display unit 303. Also during route guidance, the navigation control unit 301 exchanges information related to route guidance with the route search unit 309, the route guidance unit 310, and the audio generation unit 311, and outputs the resulting information to the display unit 303 and the audio output unit 307.
  • The navigation control unit 301 also generates identification information and behavior information of the passenger, described later, based on the image or behavior of the passenger photographed by the passenger photographing unit 313. Then, in accordance with a content reproduction instruction input by the user operating the user operation unit 302, it controls the output of content such as audio and video based on the identification information and behavior information of the passenger.
  • The content output control is, for example, control of the volume level, the sound quality setting (the balance between high and low frequencies), the size of subtitles, the brightness of the display unit 303, or switching the output on and off when reproduction of music or video recorded on the recording medium 305, or of a radio or television broadcast received by the communication unit 308, is instructed. If there are a plurality of display units 303 and speakers 312, each of them may be controlled individually, as sketched below.
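  • As a rough illustration (not from the patent), the per-device control mentioned above could look like the following, reusing the OutputForm sketch from earlier; the setter methods on the display and speaker objects are assumptions:

```python
def apply_output_form(displays, speakers, form) -> None:
    """Apply one decided output form to every connected display unit 303 and speaker 312."""
    for display in displays:
        display.set_power(form.display_on)
        display.set_brightness(form.brightness)
        display.set_subtitle_scale(form.subtitle_scale)
    for speaker in speakers:
        speaker.set_power(form.audio_on)
        speaker.set_volume(form.volume)  # sound-quality (high/low balance) control could be added similarly
```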
  • The user operation unit 302 acquires information input by the user through operation means such as a remote controller, switches, or a touch panel, and outputs the acquired information to the navigation control unit 301.
  • The display unit 303 includes, for example, a CRT (Cathode Ray Tube), a TFT liquid crystal display, an organic EL display, or a plasma display. The display unit 303 can be configured by, for example, a video I/F (interface) and a video display device connected to the video I/F. The video I/F is composed of, for example, a graphics controller that controls the entire display device, a buffer memory such as a VRAM (Video RAM) that temporarily stores image information that can be displayed immediately, and a control IC that controls the display of the display device based on the image information output from the graphics controller.
  • The display unit 303 displays traffic information, map information, information on route guidance, content relating to video output from the navigation control unit 301, and other various information in accordance with the output control of the navigation control unit 301.
  • The position acquisition unit 304 is composed of a GPS receiver and various sensors such as a vehicle speed sensor, an angular velocity sensor, and an acceleration sensor, and acquires information on the current position of the mobile body (the current position of the navigation device 300).
  • The GPS receiver receives radio waves from GPS satellites and obtains the geometric position relative to the GPS satellites. GPS is an abbreviation for Global Positioning System, a system that accurately determines a position on the ground by receiving radio waves from four or more satellites. The GPS receiver is composed of an antenna for receiving the radio waves from the GPS satellites, a tuner that demodulates the received radio waves, and an arithmetic circuit that calculates the current position based on the demodulated information.
  • In the recording medium 305, various control programs and various types of information are recorded in a computer-readable state. The recording medium 305 can be realized by, for example, an HD (Hard Disk), a DVD (Digital Versatile Disk), a CD (Compact Disk), or a memory card. The recording medium 305 may accept writing of information by the recording medium decoding unit 306 and record the written information in a nonvolatile manner.
  • Map information used for route search and route guidance is recorded in the recording medium 305. The map information recorded in the recording medium 305 includes background data representing features such as buildings, rivers, and the ground surface, and road shape data representing the shape of roads, and is drawn in 2D or 3D on the display screen.
  • While the navigation device 300 is guiding a route, the map information read from the recording medium 305 by the recording medium decoding unit 306 and a mark indicating the position of the mobile body acquired by the position acquisition unit 304 are displayed on the display unit 303.
  • The recording medium 305 also records registered identification information for identifying a passenger, specific behavior information for determining the behavior of a passenger, output form information for determining the content output, and content such as video and music.
  • the registered identification information includes, for example, information obtained by extracting feature points of a passenger image taken with a camera or the like, such as a face pattern, eye iris, fingerprint data, or voice data.
  • the specific behavior information includes, for example, information obtained by extracting features related to the specific behavior state such as drowsiness and fatigue, such as eyelid movement, volume level, and heart rate.
  • The output form information recorded in the recording medium 305 is information relating to the output of content. It is associated with the passenger information of a passenger identified based on the registered identification information, or with a specific behavior state of a passenger determined based on the specific behavior information, and may be configured so that the navigation control unit 301 reads it when performing output control in accordance with the result of identifying the passenger or determining the passenger's behavior.
  • Passenger information is information including features such as preferences, age, sex, etc., and the arrangement of passengers and number of passengers.
  • The relationship between the passenger information or specific behavior state of a passenger and the output form information may, for example, be a setting that lowers the volume and makes the sound quality mild for elderly or female passengers, a setting that enlarges the subtitles displayed on the display unit 303, a setting with a generally low volume and mild sound quality, or a setting that lowers the volume and reduces the luminance of the display unit 303.
  • a preset output format of each passenger may be used.
  • the passenger's viewing history may be recorded to match the previous viewing setting.
  • the output form information may have a structure in which the relationship between the output form and the age and sex is recorded in advance, or a structure that can be registered by the user.
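  • Purely as an illustration (not part of the patent) of how such output form information might be looked up, using the OutputForm sketch from earlier; the storage fields and the function below are assumptions:

```python
def read_output_form(storage, passenger_id=None, behavior_state=None, viewing_history=None):
    """Return an OutputForm for a recognized passenger, a specific behavior state,
    or the passenger's previous viewing setting, falling back to a default."""
    if behavior_state is not None and behavior_state in storage.behavior_forms:
        return storage.behavior_forms[behavior_state]    # e.g. "sleepy" -> low volume, dim screen
    if passenger_id is not None and passenger_id in storage.registered_forms:
        return storage.registered_forms[passenger_id]    # preset output form per registered passenger
    if viewing_history:
        return viewing_history[-1]                       # match the previous viewing setting
    return OutputForm()                                  # default output form
```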
  • In the present example, the map information, the content, and the like are recorded on the recording medium 305, but the configuration is not limited to this. The map information, the content, and the like may be recorded on a server outside the navigation device 300. In that case, the navigation device 300 acquires the map information and the content from the server via a network, for example, through the communication unit 308, and the acquired information is stored in the RAM or the like.
  • The recording medium decoding unit 306 controls the reading and writing of information from and to the recording medium 305, and can be realized by, for example, an HDD (Hard Disk Drive).
  • The audio output unit 307 reproduces audio such as guidance sounds by controlling output to the connected speaker 312. There may be one speaker 312 or a plurality of speakers. The audio output unit 307 can be composed of, for example, a D/A converter that performs D/A conversion of digital audio information, an amplifier that amplifies the analog audio signal output from the D/A converter, and an A/D converter that performs A/D conversion of analog audio information.
  • the communication unit 308 obtains various types of information from the outside.
  • The communication unit 308 is, for example, an FM multiplex tuner, a VICS (registered trademark)/beacon receiver, a wireless communication device, or another communication device, and communicates with other communication devices via communication media such as a mobile phone, a PHS, a communication card, or a wireless LAN. Alternatively, it may be a device capable of receiving radio broadcast waves, television broadcast waves, or satellite broadcasts.
  • The information acquired by the communication unit 308 includes traffic information such as congestion and traffic regulations distributed by a road traffic information communication system center, traffic information gathered independently by operators, and other public data and content on the Internet.
  • the communication unit 308 may request traffic information or content from a server storing traffic information and content nationwide via the network and obtain the requested information.
  • it may be configured to receive video signals or audio signals from radio broadcasts, television broadcasts, or satellite broadcasts.
  • The route search unit 309 searches for the optimal route using the map information acquired from the recording medium 305 via the recording medium decoding unit 306, the traffic information acquired via the communication unit 308, and the like.
  • the optimal route is the route that best matches the user's request.
  • the route guidance unit 310 is obtained from the optimum route information searched by the route search unit 309, the position information of the moving body acquired by the position acquisition unit 304, and the recording medium 305 via the recording medium decoding unit 306. Based on the obtained map information, route guidance information for guiding the user to the destination point is generated.
  • the route guidance information generated at this time may be information that considers the traffic jam information received by the communication unit 308.
  • the route guidance information generated by the route guidance unit 310 is output to the display unit 303 via the navigation control unit 301.
  • The audio generation unit 311 generates information of various sounds such as guidance sounds. That is, based on the route guidance information generated by the route guidance unit 310, it sets a virtual sound source corresponding to a guidance point, generates voice guidance information, and outputs it to the audio output unit 307 via the navigation control unit 301.
  • The passenger photographing unit 313 photographs a passenger. The photographing may capture video or still images; for example, an image of the passenger or the behavior of the passenger is photographed and output to the navigation control unit 301.
  • the sound processing unit 314 reproduces sound such as music by controlling the output to the connected speaker 312 according to the control related to the output of the navigation control unit 301.
  • the sound processing unit 314 may have substantially the same configuration as the sound output unit 307.
  • The passenger identification unit 101, the passenger information acquisition unit 102, and the behavior detection unit 105, which are functional components of the content providing apparatus 100 according to the embodiment, are realized by the navigation control unit 301 and the passenger photographing unit 313; the output control unit 103 is realized by the navigation control unit 301; and the content output unit 104 is realized by the display unit 303 and the speaker 312.
  • FIG. 4 is an explanatory diagram showing an example of the interior of a vehicle equipped with the navigation device according to the present example.
  • The interior of the vehicle has a driver seat 411, a passenger seat 412, and a rear seat 413. Around the driver seat 411 and the passenger seat 412, a display device (display unit 303), an acoustic device (speaker 312), and an information reproducing device 426a are provided.
  • The passenger seat 412 is provided with a display device 421b and an information reproducing device 426b for the passenger in the rear seat 413, and an acoustic device (not shown) is provided behind the rear seat 413.
  • Each information reproducing device 426 (426a, 426b) is provided with a photographing device (passenger photographing unit 313) 423 capable of photographing a passenger. Each information reproducing device 426 (426a, 426b) may be configured so that it can be attached to and detached from the vehicle.
  • FIG. 5 is a flowchart showing the contents of the processing using passenger information in the navigation device according to the present example. Here, a case where preference information representing the passenger's preferences is used as the passenger information will be described.
  • In the flowchart of FIG. 5, first, the navigation device 300 determines whether or not a content reproduction instruction has been given (step S501). The content reproduction instruction may, for example, be given by a passenger operating the user operation unit 302.
  • In step S501, the device waits for a content reproduction instruction; when an instruction is given (step S501: Yes), the passenger photographing unit 313 photographs an image of the passenger (step S502). The passenger image is photographed, for example, by taking a still image of the passenger's face.
  • the navigation control unit 301 generates passenger identification information from the passenger image captured in step S502 (step S503).
  • the identification information includes, for example, information obtained by extracting the feature points of the passenger's face, and is collated with the registered identification information recorded on the recording medium 305.
  • The navigation control unit 301 collates the registered identification information registered in advance on the recording medium 305 with the identification information generated in step S503, and determines whether or not the identification information matches (step S504).
  • If the identification information matches (step S504: Yes), the navigation control unit 301 reads from the recording medium 305 the output form information associated with the registered identification information that matches the identification information (step S505).
  • the output form information is, for example, information related to the output of content suitable for the passenger, and is information including volume and sound quality for audio or subtitles and brightness for video.
  • The navigation control unit 301 determines the output form of the content based on the output form information read in step S505 (step S506), and outputs it to the display unit 303 or the audio processing unit 314. The output form is, for example, the volume level, the sound quality, the size of subtitles, or the brightness of the display.
  • the output form is determined based on, for example, the age, gender, number of passengers, personal preferences and viewing history of the passengers.
  • the audio processing unit 314 performs audio processing and outputs the content to the speaker 312.
  • The display unit 303 or the speaker 312 then reproduces the content based on the output form determined in step S506 (step S507), and the series of processing ends.
  • If the identification information does not match in step S504 (step S504: No), it is determined whether or not to register the passenger (step S508). For the passenger registration, a message indicating that registration of the passenger is required may be displayed on the display unit 303 to prompt the passenger to decide whether or not to register.
  • If the passenger is not to be registered in step S508 (step S508: No), the navigation control unit 301 outputs an output form selection request to the display unit 303 (step S510) and accepts a selection of the output form from the passenger (step S511). The navigation control unit 301 then controls the content output to the display unit 303 or the audio processing unit 314 based on the selected output form, the audio processing unit 314 performs audio processing and outputs the content to the speaker 312, the display unit 303 or the speaker 312 reproduces the content (step S507), and the series of processing ends.
  • If the passenger is to be registered in step S508 (step S508: Yes), a message prompting registration of the passenger is displayed on the display unit 303, and the registered identification information of the passenger is registered (step S509). The registered identification information of the passenger may be registered, for example, by extracting feature points from the passenger image photographed by the passenger photographing unit 313, or by the user operating the user operation unit 302 to register the age, gender, and the like. The process then returns to step S504 and the same processing is repeated.
  • In the above description, the passenger image is photographed (step S502) after waiting for a content reproduction instruction (step S501: Yes); however, the passenger image may be photographed before the content reproduction instruction is given (step S502). For example, the passenger may be photographed (step S502) at the time of boarding, when the vehicle engine is started, when the passenger performs an operation, or at predetermined intervals during travel, and the device may then wait for a content reproduction instruction.
  • In addition, in the above description, the identification information is generated from information obtained by photographing the passenger image and extracting feature points of the passenger's face; however, instead of identification information for each individual passenger, the number and composition of the passengers may be generated as the identification information. Specifically, for example, by identifying the number and composition of the passengers, lively music or similar content may be reproduced when many people are riding together, or content such as children's programs may be reproduced when a family with children is riding. The FIG. 5 flow as a whole is sketched below, after which a case where the passenger's behavior is photographed by the passenger photographing unit 313 will be described.
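  • A rough Python sketch of the FIG. 5 flow, under the same assumptions as the earlier snippets (the `nav` facade and its methods are hypothetical):

```python
def handle_playback_request(nav):
    """FIG. 5: identify the passenger from a photograph and choose an output form."""
    features = nav.extract_face_features(nav.photograph_passenger())  # steps S502-S503
    registered = nav.storage.match_registered(features)               # step S504
    if registered is not None:
        form = nav.storage.output_form_for(registered)                # step S505
    elif nav.ask_whether_to_register():                               # step S508
        registered = nav.storage.register(features)                   # step S509, then back to S504
        form = nav.storage.output_form_for(registered)
    else:
        form = nav.request_output_form_selection()                    # steps S510-S511
    nav.play_content(form)                                            # steps S506-S507
```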
  • FIG. 6 is a flowchart showing the contents of the processing using behavior information in the navigation device according to the present example.
  • In the flowchart of FIG. 6, first, the navigation device 300 determines whether or not a content reproduction instruction has been given (step S601). The content reproduction instruction may, for example, be given by a passenger operating the user operation unit 302.
  • In step S601, the device waits for a content reproduction instruction; when an instruction is given (step S601: Yes), the passenger photographing unit 313 photographs the behavior of the passenger (step S602). For example, the movement of the passenger's eyeballs may be photographed.
  • The navigation control unit 301 generates behavior information of the passenger from the eyeball movement photographed in step S602 (step S603). The behavior information includes, for example, information obtained by extracting feature points of the passenger's eyeball movement, and is collated with specific behavior information, recorded in advance on the recording medium 305, that includes characteristics of eyeball movement corresponding to states such as sleepiness and fatigue.
  • The navigation control unit 301 collates the specific behavior information registered in advance on the recording medium 305 with the behavior information generated in step S603, and determines whether or not the passenger is in a specific behavior state (step S604). If the passenger is in a specific behavior state (step S604: Yes), the navigation control unit 301 reads the output form information associated with that specific behavior state from the recording medium 305 (step S605).
  • The output form information is, for example, information relating to the output of content suited to the passenger's behavior; if the passenger shows drowsy behavior, it includes lowering the volume and softening the sound quality, reducing the brightness of the video, or switching the display unit 303 and the speaker 312 on or off.
  • The navigation control unit 301 determines the output form of the content based on the output form information read in step S605 (step S606). Specifically, for example, when the passenger shows drowsy behavior, the volume of the content to be reproduced may be set high if the passenger is the driver and low if the passenger is a child.
  • The navigation control unit 301 then controls the content output to the display unit 303 or the audio processing unit 314 based on the output form determined in step S606. The audio processing unit 314 performs audio processing and outputs the content to the speaker 312, the display unit 303 or the speaker 312 reproduces the content (step S607), and the series of processing ends.
  • If the passenger is not in a specific behavior state in step S604 (step S604: No), the process returns to step S602 and the same processing is repeated. Alternatively, a content output form for the case where the passenger is not in a specific behavior state may be set in advance and used for reproduction.
  • In FIG. 6, the device waits for a content reproduction instruction and, when the instruction is given (step S601: Yes), photographs the behavior of the passenger (step S602); however, the behavior of the passenger may be photographed before the content reproduction instruction is given (step S602). For example, the behavior of the passenger may be photographed (step S602) at the time of boarding, when the vehicle engine is started, when the passenger performs an operation, or at predetermined intervals during travel, and the device may then wait for a content reproduction instruction.
  • In step S603, the behavior information is generated by photographing the movement of the passenger's eyeballs; however, behavior information may also be generated from the opening and closing of windows, the movement of the passenger's whole body, the sound volume inside the vehicle, and the like. For example, when a passenger opens a window, it may be assumed that the passenger feels hot and uncomfortable. A rough sketch of such behavior-based control follows.
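  • The following Python fragment, using the same hypothetical helpers as before, illustrates one way the FIG. 6 behavior-based control could be organized (the roles and thresholds are assumptions, not taken from the patent):

```python
def adjust_for_behavior(nav, passenger_role: str):
    """FIG. 6: derive an output form from the passenger's observed behavior, or None if no specific state."""
    observation = nav.photograph_behavior()                  # step S602 (eyeball movement, windows, body, volume...)
    state = nav.classify_behavior(observation)               # steps S603-S604, e.g. "sleepy", "tired", or None
    if state is None:
        return None                                          # not in a specific behavior state
    form = nav.storage.output_form_for_state(state)          # step S605
    if state == "sleepy":                                    # step S606: role-dependent adjustment (illustrative)
        form.volume = 0.9 if passenger_role == "driver" else 0.1
    return form
```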
  • The content output in the present embodiment is performed through one or more display units 303 and speakers 312, and each of them may be controlled individually. For example, a display unit 303 may be provided for each seat of the vehicle, and content suited to the passenger in each seat may be reproduced.
  • The processing using the passenger information and the processing using the behavior information, described with reference to FIGS. 5 and 6 respectively, may also be combined.
  • In the present embodiment, the passenger is photographed by the passenger photographing unit 313 such as a camera, and the identification information or behavior information of the passenger is generated; however, instead of photographing the passenger, the identification information for identifying the passenger or the behavior information of the passenger may be generated from information acquired by other sensors.
  • the identification information or the behavior information may be generated by using, for example, a seating sensor that detects a load distribution and a total load on a seat on which a passenger is seated. Information on the number and physique of the passengers can be obtained by the seating sensor.
  • One or more fingerprint sensors may be provided at predetermined positions in the vehicle. The fingerprint sensor can acquire the passenger's fingerprint information and identify the passenger.
  • a voice sensor such as a microphone may be provided in the car. Voice information such as the volume, sound quality, and pitch of the passenger can be acquired by the voice sensor, so that the passenger can be identified, and the number, gender, sleepiness, etc. can be determined.
  • a human body sensor that measures a pulse or the like may also be used. For example, by using information such as the pulse, the physical condition of the passenger can be grasped, and the behavior of the passenger can be determined.
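  • As a loose illustration of combining such sensor readings (every parameter name and threshold below is an assumption for this sketch, not something specified in the patent):

```python
def infer_from_sensors(seat_loads=None, fingerprint_id=None, voice_volume=None, pulse=None) -> dict:
    """Derive identification and behavior hints from non-camera sensors; every input is optional."""
    hints = {}
    if seat_loads is not None:
        # seating sensor: the load distribution gives the number (and roughly the physique) of passengers
        hints["num_passengers"] = sum(1 for load_kg in seat_loads if load_kg > 20.0)
    if fingerprint_id is not None:
        hints["passenger_id"] = fingerprint_id   # ID already resolved by a fingerprint sensor/matcher
    if voice_volume is not None and voice_volume > 0.8:
        hints["behavior"] = "lively"             # a loud cabin taken as a liveliness cue
    if pulse is not None and pulse < 55:
        hints["behavior"] = "sleepy"             # a low pulse treated as a drowsiness cue
    return hints
```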
  • As described above, with the navigation device 300 according to the present example, the output can be controlled based on the passenger information and the behavior of the passenger even if the user does not control the content output through his or her own input operations. Therefore, comfortable content can be provided to passengers efficiently.
  • In addition, the navigation device 300 photographs the passenger and generates identification information and behavior information of the passenger. Since the identification information and behavior information can be collated with the registered identification information and the specific behavior information to determine the output form of the content, comfortable content can be provided without the passenger making the settings himself or herself.
  • Furthermore, since content can be output based on the passenger's viewing history, it is possible, for example, to avoid content that the passenger has viewed recently or to resume content that the passenger has viewed partway through, so that comfortable content can be provided to the passenger. Therefore, content provision can be optimized even if the passenger does not make the settings himself or herself.
  • the content providing method described in the present embodiment can be realized by executing a program prepared in advance on a computer such as a personal computer or a workstation.
  • This program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, an MO, or a DVD, and is executed by being read by the computer.
  • the program may be a transmission medium that can be distributed via a network such as the Internet.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Navigation (AREA)

Abstract

A content providing device (100) that, in a mobile body, provides contents. In the device, an occupant specifying section (101) specifies an occupant in the mobile body, and an occupant information acquisition section (102) acquires information on the occupant (hereinafter referred to as “occupant information”) specified by the occupant specifying section (101). Further, a content output section (104) outputs contents, and an output control section (103) controls the content output section (104) to output contents based on the occupant information acquired by the occupant information acquisition section (102).

Description

Specification
Content providing apparatus, content providing method, content providing program, and computer-readable recording medium
Technical Field
[0001] The present invention relates to a content providing apparatus that provides content in a mobile body, a content providing method, a content providing program, and a computer-readable recording medium. However, the present invention is not limited to the above-described content providing apparatus, content providing method, content providing program, and computer-readable recording medium.
Background Art
[0002] Conventionally, in a mobile body such as a vehicle, a passenger of the mobile body can view various contents via a display device such as a display and an acoustic device such as a speaker mounted on the mobile body. The various types of content include, for example, radio and television broadcasts and music and video recorded on recording media such as CD (Compact Disk) and DVD (Digital Versatile Disk), and the passenger adjusts the volume and the display screen as appropriate to view the content.
[0003] Meanwhile, with regard to content viewing, it has been proposed to store each passenger's preferences concerning the in-vehicle environment (such as the positions and operating states of in-vehicle equipment) as profile information on an ID (IDentification) card using an IC (Integrated Circuit) card, and to adjust the in-vehicle environment by reading each passenger's profile information from the ID card (see, for example, Patent Document 1 below).
[0004] Patent Document 1: Japanese Patent Application Laid-Open No. 2002-104105
Disclosure of the Invention
Problems to be Solved by the Invention
[0005] However, with the above-described conventional technology, each passenger needs to create and carry an ID card storing profile information and to take measures to prevent loss of or damage to the ID card, which increases the burden on the passenger. Another problem is that, because the in-vehicle environment is adjusted using the ID card, the in-vehicle environment cannot be set according to the preferences of passengers who do not carry an ID card.
Means for Solving the Problems
[0006] In order to solve the above-described problems and achieve the object, the content providing apparatus according to the invention of claim 1 is a content providing apparatus that provides content in a mobile body, comprising: identifying means for identifying a passenger of the mobile body; passenger information acquiring means for acquiring information relating to the passenger identified by the identifying means (hereinafter referred to as "passenger information"); output means for outputting the content; and control means for controlling the output means so as to output the content based on the passenger information acquired by the passenger information acquiring means.
[0007] The content providing apparatus according to the invention of claim 4 is a content providing apparatus that provides content in a mobile body, comprising: behavior detecting means for detecting the behavior of the passenger; output means for outputting the content; and control means for controlling the output means so as to output the content based on a result detected by the behavior detecting means.
[0008] The content providing method according to the invention of claim 6 is a content providing method for providing content in a mobile body, including: an identifying step of identifying a passenger of the mobile body; a passenger information acquiring step of acquiring information relating to the passenger identified in the identifying step (hereinafter referred to as "passenger information"); and a control step of controlling the output of the content based on the passenger information acquired in the passenger information acquiring step.
[0009] The content providing method according to the invention of claim 7 is a content providing method for providing content in a mobile body, including: a behavior detecting step of detecting the behavior of the passenger; and a control step of controlling the output of the content based on a result detected in the behavior detecting step.
[0010] The content providing program according to the invention of claim 8 causes a computer to execute the content providing method according to claim 6 or 7.
[0011] The computer-readable recording medium according to the invention of claim 9 records the content providing program according to claim 8.
Brief Description of the Drawings
[0012] FIG. 1 is a block diagram showing an example of a functional configuration of a content providing apparatus according to the present embodiment.
FIG. 2 is a flowchart showing the contents of processing of the content providing apparatus according to the present embodiment.
FIG. 3 is a block diagram showing an example of a hardware configuration of a navigation device according to the present example.
FIG. 4 is an explanatory diagram showing an example of the interior of a vehicle equipped with a navigation device according to the present example.
FIG. 5 is a flowchart showing the contents of processing using passenger information in the navigation device according to the present example.
FIG. 6 is a flowchart showing the contents of processing using behavior information in the navigation device according to the present example.
Explanation of Reference Numerals
[0013] 100 Content providing apparatus
101 Passenger identification unit
102 Passenger information acquisition unit
103 Output control unit
104 Content output unit
105 Behavior detection unit
Best Mode for Carrying Out the Invention
[0014] Preferred embodiments of a content providing apparatus, a content providing method, a content providing program, and a computer-readable recording medium according to the present invention will be described in detail below with reference to the accompanying drawings.
[0015] (Embodiment)
(Functional Configuration of the Content Providing Apparatus) The functional configuration of the content providing apparatus according to the present embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing an example of the functional configuration of the content providing apparatus according to the present embodiment.
[0016] In FIG. 1, the content providing apparatus 100 includes a passenger identification unit 101, a passenger information acquisition unit 102, an output control unit 103, a content output unit 104, and a behavior detection unit 105.
[0017] The passenger identification unit 101 identifies a passenger. The identification of a passenger is, for example, the identification of each passenger's relationship to the owner of the mobile body (the owner himself or herself, a relative, a friend, another person, and so on).
[0018] The passenger information acquisition unit 102 acquires passenger information relating to the passenger identified by the passenger identification unit 101. The passenger information is, for example, information including characteristics of the passenger such as preferences, age, and gender, and the configuration of the passengers such as their seating arrangement and number.
[0019] The behavior detection unit 105 detects the behavior of the passenger. The passenger's behavior is, for example, information including the passenger's physical state such as sleepiness, fatigue, and physical condition, and may also be information including the arrangement and number of passengers exhibiting a predetermined behavior.
[0020] The output control unit 103 controls the output of the content based on at least one of the passenger information acquired by the passenger information acquisition unit 102 and the behavior of the passenger detected by the behavior detection unit 105. The output control, for example, controls audio and display: the volume and sound quality of the audio, or the size of subtitles and the brightness of the display. It may also be configured to switch the audio and display output on and off.
[0021] The content output unit 104 outputs the content according to the control of the output control unit 103. The content is output, for example, through a display device such as a display or an acoustic device such as a speaker mounted on the mobile body.
[0022] (Processing of the Content Providing Apparatus)
Next, the contents of the processing of the content providing apparatus according to the present embodiment will be described with reference to FIG. 2. FIG. 2 is a flowchart showing the contents of the processing of the content providing apparatus according to the present embodiment. In the flowchart of FIG. 2, first, the content providing apparatus 100 determines whether or not there is an instruction to provide content (step S201). The content provision instruction may, for example, be configured such that the passenger designates the type of content and the like from an operation unit (not shown).
[0023] In step S201, the apparatus waits for a content provision instruction; when an instruction is given (step S201: Yes), the passenger identification unit 101 identifies the passenger (step S202). The identification of the passenger is, for example, the identification of the relationship between the owner of the mobile body and the passenger.
[0024] Next, the passenger information acquisition unit 102 acquires passenger information relating to the passenger identified in step S202 (step S203). The passenger information is, for example, information including characteristics of the passenger such as preferences, age, and gender, and the configuration of the passengers such as their seating arrangement and number.
[0025] The behavior detection unit 105 also detects the behavior of the passenger (step S204). The passenger's behavior is, for example, information including the passenger's physical state such as sleepiness, fatigue, and physical condition. In this flowchart, the passenger information is acquired (step S203) and then the passenger's behavior is detected (step S204); however, the order may be reversed, with the behavior detected (step S204) before the passenger information is acquired (step S203), or one of the two steps may be omitted.
[0026] Subsequently, the content output unit 104 outputs the content according to the control of the output control unit 103 (step S205). The output control is performed, for example, by the output control unit 103 based on at least one of the passenger information acquired in step S203 and the behavior of the passenger detected in step S204. Then, the series of processing ends.
[0027] In the above flowchart, the apparatus waits for a content provision instruction and, when an instruction is given (step S201: Yes), identifies the passenger (step S202), acquires the passenger information (step S203), and detects the behavior (step S204); however, steps S202 to S204 may be performed before the content provision instruction is given. For example, the passenger may be identified (step S202), the passenger information acquired (step S203), and the behavior detected (step S204) in advance at the time of boarding, and the apparatus may then wait for a content provision instruction and, when the instruction is given (step S201: Yes), control and output the content (step S205).
[0028] As described above, according to the content providing apparatus, content providing method, content providing program, and computer-readable recording medium of the present embodiment, the output can be controlled based on the passenger information and the behavior of the passenger without the user having to control the content output through his or her own input operations. Therefore, comfortable content can be provided to passengers efficiently.
Example
[0029] An example of the present invention is described below. In this example, the content providing apparatus of the present invention is implemented as a navigation device mounted on a mobile body such as a vehicle (including four-wheeled and two-wheeled vehicles).
[0030] (Hardware configuration of the navigation device)
The hardware configuration of the navigation device according to this example is described with reference to Fig. 3. Fig. 3 is a block diagram showing an example of the hardware configuration of the navigation device according to this example.
[0031] In Fig. 3, a navigation device 300 is mounted on a mobile body such as a vehicle and comprises a navigation control unit 301, a user operation unit 302, a display unit 303, a position acquisition unit 304, a recording medium 305, a recording medium decoding unit 306, an audio output unit 307, a communication unit 308, a route search unit 309, a route guidance unit 310, a voice generation unit 311, a speaker 312, a passenger photographing unit 313, and an audio processing unit 314.
[0032] The navigation control unit 301 controls the entire navigation device 300. The navigation control unit 301 can be realized, for example, by a microcomputer comprising a CPU (Central Processing Unit) that executes predetermined arithmetic processing, a ROM (Read Only Memory) that stores various control programs, and a RAM (Random Access Memory) that functions as a work area for the CPU.
[0033] During route guidance, the navigation control unit 301 calculates which position on the map the vehicle is traveling at, based on the current position information acquired by the position acquisition unit 304 and the map information obtained from the recording medium 305 via the recording medium decoding unit 306, and outputs the calculation result to the display unit 303. During route guidance, the navigation control unit 301 also exchanges route guidance information with the route search unit 309, the route guidance unit 310, and the voice generation unit 311, and outputs the resulting information to the display unit 303 and the audio output unit 307.
[0034] The navigation control unit 301 also generates the passenger identification information and behavior information described later, based on the passenger image or passenger behavior photographed by the passenger photographing unit 313. Then, in accordance with a content playback instruction entered by the user through the user operation unit 302, it controls the output of content such as audio and video based on the passenger identification information and behavior information. The content output control includes, for example, when playback of music or video recorded on the recording medium 305, or of a radio or television broadcast received by the communication unit 308, is instructed, controlling the volume level, the sound quality settings (balance between high and low frequencies), the subtitle size, the brightness of the display unit 303, or switching the output on and off. If there are multiple display units 303 or speakers 312, each may be controlled individually.
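As an illustration of the per-device control mentioned above, the following sketch applies one set of output parameters to several displays and speakers. The parameter names and value ranges are assumptions chosen for the example and are not specified by the embodiment.

```python
# Sketch of applying one output form to multiple displays/speakers (assumed API).
from dataclasses import dataclass

@dataclass
class OutputForm:
    volume: int = 20              # arbitrary 0-40 scale
    tone_balance: int = 0         # <0 = milder (less treble), >0 = brighter
    subtitle_scale: float = 1.0
    brightness: float = 1.0
    enabled: bool = True

class Display:
    def __init__(self, name: str):
        self.name = name
    def apply(self, form: OutputForm) -> None:
        # set subtitle size, brightness, and on/off for this screen
        print(f"{self.name} display: subtitles x{form.subtitle_scale}, "
              f"brightness {form.brightness}, on={form.enabled}")

class Speaker:
    def __init__(self, name: str):
        self.name = name
    def apply(self, form: OutputForm) -> None:
        # set volume, tone balance, and on/off for this speaker
        print(f"{self.name} speaker: volume {form.volume}, "
              f"balance {form.tone_balance}, on={form.enabled}")

def apply_output_form(form: OutputForm, displays, speakers) -> None:
    for device in list(displays) + list(speakers):
        device.apply(form)

apply_output_form(OutputForm(volume=12, subtitle_scale=1.5),
                  displays=[Display("front"), Display("rear")],
                  speakers=[Speaker("front"), Speaker("rear")])
```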
[0035] The user operation unit 302 acquires information entered by the user through operation means such as a remote controller, switches, or a touch panel, and outputs it to the navigation control unit 301.
[0036] The display unit 303 includes, for example, a CRT (Cathode Ray Tube), a TFT liquid crystal display, an organic EL display, or a plasma display. Specifically, the display unit 303 can be configured, for example, by a video I/F (interface) and a video display device connected to the video I/F. The video I/F is composed, for example, of a graphic controller that controls the entire display device, a buffer memory such as a VRAM (Video RAM) that temporarily stores image information ready for immediate display, and a control IC that controls the display of the display device based on the image information output from the graphic controller. Under the output control of the navigation control unit 301, the display unit 303 displays traffic information, map information, route guidance information, video content output from the navigation control unit 301, and other various information.
[0037] The position acquisition unit 304 is composed of a GPS receiver and various sensors such as a vehicle speed sensor, an angular velocity sensor, and an acceleration sensor, and acquires information on the current position of the mobile body (the current position of the navigation device 300). The GPS receiver receives radio waves from GPS satellites and determines the geometric position relative to the GPS satellites. GPS is an abbreviation for Global Positioning System, a system that accurately determines a position on the ground by receiving radio waves from four or more satellites. The GPS receiver is composed of an antenna for receiving radio waves from GPS satellites, a tuner that demodulates the received radio waves, and an arithmetic circuit that calculates the current position based on the demodulated information.
[0038] Various control programs and various information are recorded on the recording medium 305 in a computer-readable state. The recording medium 305 can be realized, for example, by an HD (Hard Disk), a DVD (Digital Versatile Disk), a CD (Compact Disk), or a memory card. The recording medium 305 may also accept writing of information by the recording medium decoding unit 306 and record the written information in a non-volatile manner.
[0039] Map information used for route search and route guidance is also recorded on the recording medium 305. The map information recorded on the recording medium 305 includes background data representing features such as buildings, rivers, and the ground surface, and road shape data representing the shape of roads, and is drawn two-dimensionally or three-dimensionally on the display screen of the display unit 303. While the navigation device 300 is performing route guidance, the map information read from the recording medium 305 by the recording medium decoding unit 306 and a mark indicating the position of the mobile body acquired by the position acquisition unit 304 are displayed on the display unit 303.
[0040] The recording medium 305 also stores, registered in advance, registered identification information for identifying passengers, specific behavior information for judging the behavior of passengers, output form information for determining the content output, and content such as video and music. The registered identification information includes, for example, information obtained by extracting feature points from a passenger image photographed with a camera or the like, such as a face pattern, the iris of the eyes, fingerprint data, or voice data. The specific behavior information includes, for example, information obtained by extracting features of specific behavior states such as drowsiness and fatigue, such as eyelid movement, voice volume, and heart rate.
[0041] The output form information recorded on the recording medium 305 is information concerning the output of content, and is associated with, for example, the passenger information of a passenger identified based on the registered identification information, or a specific behavior state of a passenger judged based on the specific behavior information. The navigation control unit 301 may read this information when controlling the output, according to the results of its passenger identification and behavior judgment. The passenger information is information including characteristics such as preferences, age, and sex, and the composition of the passengers such as their seating arrangement and number.
[0042] As for the association between the passenger information or specific behavior state and the output form information, for example, the volume may be set low and the sound quality mild for elderly or female passengers, and the subtitles displayed on the display unit 303 may be enlarged for elderly passengers. For multiple passengers, a generally applicable setting with low volume and mild sound quality may be used; when the preferences of multiple passengers differ, a generally applicable setting or the preference of the majority may be applied. For a passenger who has become drowsy, the volume may be lowered and the brightness of the display unit 303 reduced. Alternatively, a preferred output form set in advance for each passenger may be used, or the passenger's viewing history may be recorded and the previous viewing settings applied. The output form information may be configured so that the relationship between the output form and attributes such as age and sex is recorded in advance, or so that it can be registered by the user.
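These associations can be pictured as a simple rule table. The sketch below encodes a few of them; the thresholds (for example, treating 65 and over as elderly) and the field names are illustrative assumptions rather than values taken from the embodiment.

```python
# Sketch of mapping passenger attributes / behavior states to an output form.
def decide_output_form(passengers, drowsy=False, previous_settings=None):
    """passengers: list of dicts such as {"age": 70, "sex": "F"}."""
    form = {"volume": 20, "sound": "normal", "subtitle_scale": 1.0, "brightness": 1.0}
    if any(p.get("age", 0) >= 65 for p in passengers):      # elderly passenger present
        form.update(volume=12, sound="mild", subtitle_scale=1.5)
    if any(p.get("sex") == "F" for p in passengers):        # female passenger present
        form["volume"] = min(form["volume"], 14)
        form["sound"] = "mild"
    if len(passengers) > 1:                                  # several passengers: general setting
        form["volume"] = min(form["volume"], 16)
    if drowsy:                                               # drowsy passenger: quieter, dimmer
        form["volume"] = min(form["volume"], 8)
        form["brightness"] = 0.5
    if previous_settings:                                    # reuse the last session's settings
        form.update(previous_settings)
    return form

print(decide_output_form([{"age": 72, "sex": "M"}, {"age": 35, "sex": "F"}]))
```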
[0043] In this example, the map information, content, and the like are recorded on the recording medium 305, but this is not a limitation. The map information, content, and the like may instead be recorded on a server or the like outside the navigation device 300. In that case, the navigation device 300 acquires the map information and content from the server over a network, for example through the communication unit 308, and the acquired information is stored in the RAM or the like.
[0044] The recording medium decoding unit 306 controls reading and writing of information on the recording medium 305. For example, when an HD is used as the recording medium 305, the recording medium decoding unit 306 is an HDD (Hard Disk Drive).
[0045] The audio output unit 307 reproduces sounds such as guidance sounds by controlling the output to the connected speaker 312. There may be one speaker 312 or a plurality of speakers. The audio output unit 307 can be composed, for example, of a D/A converter that performs D/A conversion of digital audio information, an amplifier that amplifies the analog audio signal output from the D/A converter, and an A/D converter that performs A/D conversion of analog audio information.
[0046] The communication unit 308 acquires various information from the outside. For example, the communication unit 308 communicates with other communication devices via an FM multiplex tuner, a VICS (registered trademark)/beacon receiver, wireless communication equipment, or other communication devices, or via communication media such as a mobile phone, a PHS, a communication card, and a wireless LAN. Alternatively, it may be a device capable of communicating via radio broadcast waves, television broadcast waves, or satellite broadcasting.
[0047] The information acquired by the communication unit 308 includes traffic information such as congestion and traffic regulations distributed by the road traffic information communication system center, traffic information acquired by operators using their own methods, and other public data and content on the Internet. The communication unit 308 may, for example, request traffic information or content over a network from a server storing nationwide traffic information and content, and acquire the requested information. It may also be configured to receive video or audio signals from radio broadcast waves, television broadcast waves, satellite broadcasting, or the like.
[0048] The route search unit 309 searches for an optimum route from a departure point to a destination point using the map information acquired from the recording medium 305 via the recording medium decoding unit 306, the traffic information acquired via the communication unit 308, and so on. Here, the optimum route is the route that best matches the user's request.
[0049] The route guidance unit 310 generates route guidance information for guiding the user to the destination point based on the optimum route information found by the route search unit 309, the position information of the mobile body acquired by the position acquisition unit 304, and the map information obtained from the recording medium 305 via the recording medium decoding unit 306. The route guidance information generated at this time may take into account the congestion information received by the communication unit 308. The route guidance information generated by the route guidance unit 310 is output to the display unit 303 via the navigation control unit 301.
[0050] The voice generation unit 311 generates information of various sounds such as guidance sounds. That is, based on the route guidance information generated by the route guidance unit 310, it sets a virtual sound source corresponding to the guidance point and generates voice guidance information, and outputs these to the audio output unit 307 via the navigation control unit 301.
[0051] The passenger photographing unit 313 photographs the passenger. The photographing may produce either moving images or still images; for example, it photographs the passenger image or the passenger's behavior and outputs the result to the navigation control unit 301.
[0052] The audio processing unit 314 reproduces audio such as music by controlling the output to the connected speaker 312 under the output control of the navigation control unit 301. There may be one speaker 312 or a plurality of speakers. The audio processing unit 314 may have, for example, substantially the same configuration as the audio output unit 307.
[0053] The passenger identification unit 101, passenger information acquisition unit 102, and behavior detection unit 105, which are functional components of the content providing apparatus 100 according to the embodiment, are realized by the navigation control unit 301 and the passenger photographing unit 313; the output control unit 103 is realized by the navigation control unit 301; and the content output unit 104 is realized by the display unit 303 and the speaker 312.
[0054] Next, the interior of a vehicle equipped with the navigation device 300 according to this example is described with reference to Fig. 4. Fig. 4 is an explanatory diagram showing an example of the interior of a vehicle equipped with the navigation device according to this example.
[0055] In Fig. 4, the vehicle interior has a driver's seat 411, a front passenger seat 412, and a rear seat 413. A display device (display unit 303) 421a, an acoustic device (speaker 312) 422, and an information playback device 426a are provided around the driver's seat 411 and the front passenger seat 412. The front passenger seat 412 is provided with a display device 421b and an information playback device 426b facing the passengers in the rear seat 413, and an acoustic device (not shown) is provided behind the rear seat 413.
[0056] Photographing devices (passenger photographing units 313) 423 are provided on the ceiling 414 of the vehicle and on each information playback device 426 (426a, 426b), so that the passengers can be photographed. Each information playback device 426 (426a, 426b) may have a structure that can be attached to and detached from the vehicle.
[0057] (Processing performed by the navigation device 300)
Next, the processing performed by the navigation device 300 according to this example is described with reference to Figs. 5 and 6. Fig. 5 is a flowchart showing the processing using passenger information in the navigation device according to this example; here, the case where preference information representing the passenger's tastes is used as the passenger information is described. In the flowchart of Fig. 5, the navigation device 300 first determines whether a content playback instruction has been given (step S501). The content playback instruction may be given, for example, by the passenger operating the user operation unit 302.
[0058] In step S501, the apparatus waits for a content playback instruction. When an instruction is received (step S501: Yes), the passenger photographing unit 313 photographs the passenger image (step S502). For example, a still image of the passenger's face is taken.
[0059] Then, the navigation control unit 301 generates identification information of the passenger from the passenger image photographed in step S502 (step S503). The identification information includes, for example, information obtained by extracting feature points of the passenger's face, and is compared with the registered identification information recorded in advance on the recording medium 305.
[0060] Subsequently, the navigation control unit 301 compares the registered identification information registered in advance on the recording medium 305 with the identification information generated in step S503, and determines whether the identification information matches (step S504). If the identification information matches (step S504: Yes), the navigation control unit 301 reads from the recording medium 305 the output form information associated with the matching registered identification information (step S505). The output form information is, for example, information concerning the output of content suited to the passenger, such as volume and sound quality for audio, or subtitle size and brightness for video.
[0061] Next, the navigation control unit 301 determines the output form of the content based on the output form information read in step S505 (step S506), and outputs it to the display unit 303 or the audio processing unit 314. The output form is, for example, the volume level, the sound quality, the subtitle size, or the display brightness, and is determined based on, for example, the passengers' age, sex, and number, and the individual passenger's preferences and viewing history.
[0062] The audio processing unit 314 then performs audio processing and outputs the content to the speaker 312, and the display unit 303 or the speaker 312 reproduces the content based on the output form determined in step S506 (step S507). The series of processing then ends.
[0063] If the identification information does not match in step S504 (step S504: No), it is determined whether to register the passenger (step S508). For example, a message requesting registration may be displayed on the display unit 303 to prompt the passenger to decide whether to register.
[0064] If the passenger is not to be registered in step S508 (step S508: No), the navigation control unit 301 outputs an output form selection request to the display unit 303 (step S510) and accepts the selection of an output form from the passenger (step S511). The navigation control unit 301 then controls the output of the content sent to the display unit 303 or the audio processing unit 314 based on the selected output form. The audio processing unit 314 performs audio processing and outputs the content to the speaker 312, and the display unit 303 or the speaker 312 reproduces the content (step S507). The series of processing then ends.
[0065] If the passenger is to be registered in step S508 (step S508: Yes), the registered identification information of the passenger is registered, for example by displaying a message on the display unit 303 prompting the passenger to register (step S509). The registration may be performed, for example, by extracting feature points from the passenger image photographed by the passenger photographing unit 313, or by the user operating the user operation unit 302 to register the age, sex, and so on. The processing then returns to step S504 and the same processing is repeated.
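The identification branch of Fig. 5 (steps S502 to S511) can be outlined as follows. The capture, feature-extraction, matching, and registration callbacks are placeholders; the embodiment only requires that captured facial features be compared with the registered identification information.

```python
# Sketch of the Fig. 5 flow; all callback names are hypothetical.
def reproduce_with_identification(capture_face,      # S502: () -> image
                                  extract_features,  # S503: image -> feature vector
                                  registered,        # dict: key -> output form info (read in S505)
                                  match,             # S504: (features, registered) -> key or None
                                  ask_registration,  # S508: () -> bool
                                  register,          # S509: (features) -> None (adds to `registered`)
                                  ask_output_form,   # S510/S511: () -> output form chosen by passenger
                                  play):             # S506/S507: (output form) -> None
    features = extract_features(capture_face())      # S502, S503
    while True:
        key = match(features, registered)             # S504: compare with registered identification info
        if key is not None:
            form = registered[key]                    # S505: read the associated output form info
            break
        if ask_registration():                        # S508: Yes
            register(features)                        # S509, then return to S504
            continue
        form = ask_output_form()                      # S508: No -> S510, S511
        break
    play(form)                                        # S506, S507: decide the output form and reproduce
```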
[0066] In the description of Fig. 5, the passenger image is photographed (step S502) after waiting for the content playback instruction (step S501: Yes), but the passenger image may instead be photographed (step S502) before the content playback instruction is given. For example, the passenger image may be photographed in advance at boarding, when the vehicle engine is started, when the passenger operates the device, and at predetermined intervals while traveling (step S502), and the apparatus may then wait for the content playback instruction.
[0067] In the description of Fig. 5, the identification information is generated from information obtained by photographing the passenger image and extracting feature points of the passenger's face; however, instead of identification information for each individual passenger, the number and composition of the passengers may be generated as the identification information. Specifically, for example, the number and composition of the passengers may be identified so that content such as lively music is played when many people are on board, and content such as programs for children is played when the passengers are a family with children.
[0068] Next, the case where the passenger photographing unit 313 photographs the behavior of the passenger is described. Fig. 6 is a flowchart showing the processing using behavior information in the navigation device according to this example. In the flowchart of Fig. 6, the navigation device 300 first determines whether a content playback instruction has been given (step S601). The content playback instruction may be given, for example, by the passenger operating the user operation unit 302.
[0069] In step S601, the apparatus waits for a content playback instruction. When an instruction is received (step S601: Yes), the passenger photographing unit 313 photographs the behavior of the passenger (step S602). For example, the movement of the passenger's eyeballs may be photographed.
[0070] Then, the navigation control unit 301 generates behavior information of the passenger from the eyeball movement photographed in step S602 (step S603). The behavior information includes, for example, information obtained by extracting feature points of the passenger's eyeball movement, and is compared with the specific behavior information recorded in advance on the recording medium 305, which includes features of eyeball movement associated with drowsiness, fatigue, and the like.
[0071] Subsequently, the navigation control unit 301 compares the specific behavior information registered in advance on the recording medium 305 with the behavior information generated in step S603, and determines whether the passenger is in a specific behavior state (step S604). If the passenger is in a specific behavior state (step S604: Yes), the navigation control unit 301 reads from the recording medium 305 the output form information associated with that specific behavior state (step S605). The output form information is, for example, information concerning the output of content suited to the passenger's behavior; if the passenger shows drowsy behavior, it may include lowering the volume or sound quality, reducing the brightness of the video, or switching the display unit 303 or the speaker 312 on or off.
[0072] Then, the navigation control unit 301 determines the output form of the content based on the output form information read in step S605 (step S606). Specifically, for example, when a passenger shows drowsy behavior, the volume of the content to be played may be set high if the passenger is the driver, and low if the passenger is a child.
[0073] Next, the navigation control unit 301 controls the content output in the display unit 303 or the audio processing unit 314 based on the output form determined in step S606. The audio processing unit 314 performs audio processing and outputs the content to the speaker 312, and the display unit 303 or the speaker 312 reproduces the content (step S607). The series of processing then ends.
[0074] If the passenger is not in a specific behavior state in step S604 (step S604: No), the processing returns to step S602 and the same processing is repeated. Alternatively, an output form for the case where the passenger is not in a specific behavior state may be set in advance and the content played back accordingly.
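The behavior branch of Fig. 6 (steps S602 to S607) can be outlined in the same way. The drowsiness test below stands in for matching the captured eye-movement features against the specific behavior information, and the volume values are illustrative assumptions only.

```python
# Sketch of the Fig. 6 flow; callback names and values are hypothetical.
def reproduce_with_behavior(capture_eyes,        # S602: () -> eye-movement features
                            is_specific_state,   # S604: features -> bool (e.g. drowsy)
                            output_form_for,     # S605: state name -> dict of settings
                            is_driver: bool,
                            play,                # S607: dict -> None
                            default_form=None):
    features = capture_eyes()                    # S602, S603: photograph and extract behavior info
    if is_specific_state(features):              # S604: Yes
        form = dict(output_form_for("drowsy"))   # S605: read the associated output form info
        form["volume"] = 35 if is_driver else 5  # S606: louder for a drowsy driver, quieter for a child
        play(form)                               # S607: reproduce under the decided form
    elif default_form is not None:               # variant in [0074]: preset form when no specific state
        play(default_form)
    # otherwise the flowchart returns to S602 and photographs the behavior again
```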
[0075] In the description of Fig. 6, the passenger's behavior is photographed (step S602) after waiting for the content playback instruction (step S601: Yes), but the behavior may instead be photographed (step S602) before the content playback instruction is given. For example, the passenger's behavior may be photographed in advance at boarding, when the vehicle engine is started, when the passenger operates the device, and at predetermined intervals while traveling (step S602), and the apparatus may then wait for the content playback instruction.
[0076] In the description of Fig. 6, the behavior information is generated in step S603 by photographing the movement of the passenger's eyeballs, but the behavior information may also be generated from the opening and closing of windows, the movement of the passenger's whole body, the sound level inside the vehicle, and so on. For example, a passenger opening a window may be treated as behavior indicating that the passenger is hot or feeling unwell, and behavior information indicating noisiness may be generated from whole-body movement or sound level.
[0077] The content output in this example is performed via one or more display units 303 or speakers 312, and the content output may be controlled for each of them individually. For example, a display unit 303 may be provided for each seat of the vehicle, and content suited to the passenger in each seat may be played back.
[0078] In this example, the processing using passenger information and the processing using behavior information have been described separately with reference to Figs. 5 and 6, but the two functions may be combined into a single process.
[0079] Also, in this example, the passenger is photographed by the passenger photographing unit 313 such as a camera to generate the passenger's identification information or behavior information; however, instead of photographing the passenger, the identification information identifying the passenger or the behavior information of the passenger may be generated from information acquired by other sensors.
[0080] The identification information or behavior information may be generated, for example, using a seating sensor that detects the load distribution and total load on the seat in which the passenger sits; the seating sensor can provide information such as the number and build of the passengers. One or more fingerprint sensors may also be provided at predetermined positions in the vehicle; fingerprint information acquired by a fingerprint sensor can be used to identify the passenger. A sound sensor such as a microphone may also be provided in the vehicle; voice information such as the passenger's volume, voice quality, and pitch can be acquired by the sound sensor to identify the passenger and to judge the number of passengers, their sex, drowsiness, and so on. A human body sensor that measures the pulse or the like may also be used; by using information such as the pulse, the physical condition of the passenger can be grasped and the passenger's behavior can be judged.
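A rough sketch of how such sensor readings might be turned into identification and behavior hints is shown below; the thresholds and field names are assumptions made for the example, not values from the embodiment.

```python
# Sketch of deriving hints from seating, fingerprint, sound, and pulse sensors.
def infer_from_sensors(seat_loads_kg, fingerprint_match=None,
                       mic_level_db=None, pulse_bpm=None):
    hints = {}
    # seating sensor: load per seat -> number (and rough build) of passengers
    hints["num_passengers"] = sum(1 for load in seat_loads_kg if load > 20)
    if fingerprint_match is not None:        # fingerprint sensor -> identity
        hints["passenger_id"] = fingerprint_match
    if mic_level_db is not None:             # sound sensor -> liveliness / quietness cues
        hints["lively"] = mic_level_db > 70
    if pulse_bpm is not None:                # body sensor (pulse) -> physical state
        hints["possibly_drowsy"] = pulse_bpm < 55
    return hints

print(infer_from_sensors(seat_loads_kg=[68, 0, 31, 0],
                         fingerprint_match="owner", mic_level_db=64, pulse_bpm=52))
```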
[0081] Personal information may also be acquired from a mobile phone owned by the passenger. Alternatively, information such as the number of passengers may be acquired by detecting the fastening and unfastening of seat belts or the attachment and detachment of a child seat.
[0082] As described above, according to the content providing apparatus, content providing method, content providing program, and computer-readable recording medium of this embodiment, the output of content can be controlled based on the passenger information and behavior of the passenger, without the user controlling the content output through input operations. Comfortable content can therefore be provided to the passenger efficiently.
[0083] As also described above, the navigation device 300 according to this example photographs the passenger and generates identification information and behavior information of the passenger. Since the identification information and behavior information can be compared with the registered identification information and the specific behavior information to determine the output form of the content, comfortable content can be provided to the passenger without the passenger making any settings.
[0084] Furthermore, since information acquired by various means such as a seating sensor, a fingerprint sensor, or a sound sensor can be used instead of photographing the passenger, versatility can be improved.
[0085] In addition, since content matched to the passenger's behavior is played back, playing loud music when the driver is overcome by drowsiness helps dispel the drowsiness, so that undesirable behavior can be corrected.
[0086] Moreover, since content can be output based on the passenger's viewing history, it is possible, for example, to avoid repeating content that the passenger has recently viewed, or to resume content that the passenger viewed only partway through, so that complete content can be provided to the passenger. Content provision can therefore be optimized without the passenger setting the content himself or herself.
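One possible way to use a viewing history in this manner is sketched below; the record format is an assumption, since the embodiment does not specify how the history is stored.

```python
# Sketch of history-based selection: resume unfinished items, skip recent repeats.
def pick_from_history(candidates, history):
    """history: list of records like {"title": str, "finished": bool, "position_s": int}."""
    unfinished = {h["title"]: h["position_s"] for h in history if not h["finished"]}
    recently_finished = {h["title"] for h in history if h["finished"]}
    for title in candidates:
        if title in unfinished:
            return title, unfinished[title]   # resume where the passenger stopped
    for title in candidates:
        if title not in recently_finished:
            return title, 0                   # avoid repeating recently viewed content
    return candidates[0], 0                   # nothing new: fall back to the first candidate

print(pick_from_history(["movie_a", "movie_b"],
                        [{"title": "movie_a", "finished": True, "position_s": 0},
                         {"title": "movie_b", "finished": False, "position_s": 1800}]))
```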
[0087] The content providing method described in this embodiment can be realized by executing a program prepared in advance on a computer such as a personal computer or a workstation. The program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, an MO, or a DVD, and is executed by being read from the recording medium by the computer. The program may also be a transmission medium that can be distributed via a network such as the Internet.

Claims

[1] A content providing apparatus for providing content in a mobile body, comprising:
specifying means for specifying a passenger of the mobile body;
passenger information acquiring means for acquiring information concerning the passenger specified by the specifying means (hereinafter "passenger information");
output means for outputting the content; and
control means for controlling the output means to output the content based on the passenger information acquired by the passenger information acquiring means.
[2] The content providing apparatus according to claim 1, wherein the passenger information includes preferences of the passenger.
[3] The content providing apparatus according to claim 1, wherein the passenger information includes a content viewing history of the passenger.
[4] A content providing apparatus for providing content in a mobile body, comprising:
behavior detecting means for detecting behavior of a passenger;
output means for outputting the content; and
control means for controlling the output means to output the content based on a result detected by the behavior detecting means.
[5] The content providing apparatus according to any one of claims 1 to 4, wherein the control means controls at least one of audio output and display output of the content.
[6] A content providing method for providing content in a mobile body, comprising:
a specifying step of specifying a passenger of the mobile body;
a passenger information acquiring step of acquiring information concerning the passenger specified in the specifying step (hereinafter "passenger information"); and
a control step of controlling output of the content based on the passenger information acquired in the passenger information acquiring step.
[7] A content providing method for providing content in a mobile body, comprising:
a behavior detecting step of detecting behavior of a passenger; and
a control step of controlling output of the content based on a result detected in the behavior detecting step.
[8] A content providing program causing a computer to execute the content providing method according to claim 6 or 7.
[9] A computer-readable recording medium on which the content providing program according to claim 8 is recorded.
PCT/JP2006/316614 2005-08-24 2006-08-24 Content providing device, content providing method, content providing program, and computer readable recording medium WO2007023900A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-242638 2005-08-24
JP2005242638 2005-08-24

Publications (1)

Publication Number Publication Date
WO2007023900A1 true WO2007023900A1 (en) 2007-03-01

Family

ID=37771643

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/316614 WO2007023900A1 (en) 2005-08-24 2006-08-24 Content providing device, content providing method, content providing program, and computer readable recording medium

Country Status (1)

Country Link
WO (1) WO2007023900A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003306091A (en) * 2002-04-15 2003-10-28 Nissan Motor Co Ltd Driver determining device
JP2004037292A (en) * 2002-07-04 2004-02-05 Sony Corp Navigation apparatus, service apparatus of navigation apparatus and service provision method by navigation apparatus

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009208592A (en) * 2008-03-04 2009-09-17 Pioneer Electronic Corp System for setting apparatus mounted on moving body, and server device for system for setting apparatus mounted on moving body
JP2019119445A (en) * 2018-01-04 2019-07-22 ハーマン インターナショナル インダストリーズ インコーポレイテッド Moodroof for augmented media experience in vehicle cabin
US11958345B2 (en) 2018-01-04 2024-04-16 Harman International Industries, Incorporated Augmented media experience in a vehicle cabin
JP7475812B2 (en) 2018-01-04 2024-04-30 ハーマン インターナショナル インダストリーズ インコーポレイテッド Mood roof for an enhanced media experience in the vehicle cabin
CN110287386A (en) * 2018-03-19 2019-09-27 本田技研工业株式会社 Information providing system, information providing method and medium
US20210155250A1 (en) * 2019-11-22 2021-05-27 Mobile Drive Technology Co.,Ltd. Human-computer interaction method, vehicle-mounted device and readable storage medium

Similar Documents

Publication Publication Date Title
JP4533897B2 (en) PROCESS CONTROL DEVICE, ITS PROGRAM, AND RECORDING MEDIUM CONTAINING THE PROGRAM
JP4516111B2 (en) Image editing apparatus, image editing method, image editing program, and computer-readable recording medium
CN108327667A (en) Vehicle audio control method and device
WO2020011203A1 (en) In vehicle karaoke
US7106184B2 (en) Rear entertainment system and control method thereof
WO2007046269A1 (en) Information presenting apparatus, information presenting method, information presenting program, and computer readable recording medium
WO2007032278A1 (en) Path search apparatus, path search method, path search program, and computer readable recording medium
WO2007023900A1 (en) Content providing device, content providing method, content providing program, and computer readable recording medium
JP2024041746A (en) Information processing device
WO2006095688A1 (en) Information reproduction device, information reproduction method, information reproduction program, and computer-readable recording medium
JPH07286854A (en) Electronic map device
JP2008083184A (en) Driving evaluation system
JP2010134507A (en) Reproduction device
WO2007043464A1 (en) Output control device, output control method, output control program, and computer-readable recording medium
WO2007108337A1 (en) Content reproduction device, content reproduction method, content reproduction program, and computer-readable recording medium
WO2007020808A1 (en) Content providing device, content providing method, content providing program, and computer readable recording medium
JP2005333226A (en) Video conference system for vehicle
JP2008252589A (en) Sound volume controller, sound volume control method, sound volume control program, and recording medium
JP2000155893A (en) Information announcing device, navigation device, on- vehicle information processor and automobile
JP2009223187A (en) Display content controller, display content control method and display content control method program
JP2006190206A (en) Processor, and its method, its program and its program recording medium
JP3883619B2 (en) Navigation apparatus and method
JP2006189977A (en) Device, method and program for image editing, and medium for computer-readable recording medium
JP2000213951A (en) Car navigation system
JP7386076B2 (en) On-vehicle device and response output control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 06796727
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP