WO2017179201A1 - Vehicle-mounted information processing device and vehicle-mounted information processing method - Google Patents
- Publication number
- WO2017179201A1 (PCT/JP2016/062143)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- operator
- vehicle
- authentication
- detection information
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0968—Systems involving transmission of navigation instructions to the vehicle
- G08G1/0969—Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map
Definitions
- This invention relates to a technique for controlling an operation input to an in-vehicle device.
- Conventionally, a navigation device or audio device mounted on a vehicle accepts input operations from the driver seated in the driver's seat, as well as from a passenger seated in the front passenger seat or a rear seat, via an input operation device such as a touch panel or a hardware switch arranged between the driver's seat and the passenger seat of the vehicle.
- Conventionally, a technique that restricts predetermined input operations while the vehicle is traveling has been used so that an operator's input operation does not interfere with driving.
- The input operation device described in Patent Literature 1 can detect, with its touch panel, the shape of the operator's hand touching the display, and can determine from the detected shape whether the operator is seated in the driver's seat.
- When the operator is seated in the driver's seat and the vehicle is traveling, it is determined that the operator's operation hinders traveling of the vehicle, and the operation is not accepted.
- The present invention has been made to solve the above-described problems, and an object thereof is to reduce the processing load for identifying the operator who performed an input operation.
- An in-vehicle information processing apparatus includes: a detection information acquisition unit that acquires detection information indicating that an input operation or an authentication operation by an operator has been detected; a vehicle information acquisition unit that acquires vehicle information indicating the traveling state of the vehicle; a control unit that, according to the detection information acquired by the detection information acquisition unit and the vehicle information acquired by the vehicle information acquisition unit, controls the output of information requesting input of an authentication operation or the output of the detection information of the input operation; and an authentication processing unit that collates the authentication operation input in response to the output information requesting input of an authentication operation.
- FIG. 1 is a block diagram illustrating the configuration of an in-vehicle information processing apparatus according to Embodiment 1.
- FIG. 2 is a diagram illustrating a hardware configuration example of the in-vehicle information processing apparatus according to Embodiment 1.
- FIG. 3 is a diagram illustrating a configuration example of the authentication database of the in-vehicle information processing apparatus according to Embodiment 1.
- FIG. 4 is a flowchart illustrating the operation of the in-vehicle information processing apparatus according to Embodiment 1.
- FIG. 5 is a flowchart illustrating the authentication processing of the in-vehicle information processing apparatus according to Embodiment 1.
- FIGS. 6A, 6B, and 6D are diagrams illustrating examples of display control by the control unit of the in-vehicle information processing apparatus according to Embodiment 1, and FIG. 6C is a diagram illustrating an example of the information stored in the authentication database of the in-vehicle information processing apparatus according to Embodiment 1.
- FIG. 7 is a diagram illustrating an example in which the control unit of the in-vehicle information processing apparatus according to Embodiment 1 performs initialization.
- FIG. 8 is a block diagram illustrating the configuration of an in-vehicle information processing apparatus according to Embodiment 2.
- FIG. 10 is a block diagram illustrating the configuration of an in-vehicle information processing apparatus according to Embodiment 3, and a diagram illustrating a configuration example of the driver
- FIG. 1 is a block diagram showing the configuration of the in-vehicle information processing apparatus 100 according to the first embodiment.
- the in-vehicle information processing apparatus 100 includes a detection information acquisition unit 101, a vehicle information acquisition unit 102, a control unit 103, an authentication processing unit 104, and an authentication database 105.
- The in-vehicle information processing apparatus 100 is connected to a touch panel 200, an in-vehicle device 300, a display device 400, and a speaker 500.
- the detection information acquisition unit 101 acquires detection information from the touch panel 200.
- The touch panel 200 detects the proximity or contact of a part of the operator's body, such as a finger (hereinafter referred to as an object).
- the touch panel 200 outputs detection information.
- the touch panel 200 is configured by applying a capacitance method capable of detecting the proximity or contact of an object, or a resistive film method capable of detecting contact of an object.
- The touch panel 200 detects both an input operation, in which the operator brings an object close to or into contact with the touch panel 200 to perform an operation input, and an authentication operation, in which the operator brings an object close to or into contact with the touch panel 200 to perform an authentication input.
- the detection information acquisition unit 101 may be configured to acquire the detection information from a touch pad or the like.
- the vehicle information acquisition unit 102 acquires information indicating the traveling state of the vehicle, such as vehicle speed or parking brake state information, via an in-vehicle network (not shown).
- When detection information of an input operation is input from the detection information acquisition unit 101, the control unit 103 performs processing according to the information indicating the traveling state of the vehicle acquired by the vehicle information acquisition unit 102. When the control unit 103 determines from that information that the vehicle is stopped or parked, it outputs the input operation detection information input from the detection information acquisition unit 101 to the in-vehicle device 300 as control information. Further, when the vehicle is traveling and the operator has not been authenticated, the control unit 103 outputs control information requesting the operator to input an authentication operation to the display device 400 or the speaker 500.
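- The branching described above can be summarized as a small decision function. The following Python sketch is illustrative only and is not part of the patent; the function and parameter names are assumptions.

```python
def handle_input_detection(detection, vehicle_is_traveling, operator_authenticated):
    """Illustrative dispatch logic of the control unit 103.

    Returns ("in_vehicle_device", detection) when the detection information
    is forwarded to the in-vehicle device 300 as control information, or
    ("request_authentication", None) when the display device 400 / speaker
    500 should prompt the operator for an authentication operation.
    """
    if not vehicle_is_traveling:
        # Stopped or parked: forward the input operation unconditionally.
        return ("in_vehicle_device", detection)
    if operator_authenticated:
        # Traveling, but the operator is already authenticated.
        return ("in_vehicle_device", detection)
    # Traveling and not authenticated: request the authentication operation.
    return ("request_authentication", None)
```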
- The authentication operation requested by the control unit 103 is a preset operation such as "open the right hand and hold it up."
- The control unit 103 analyzes the detection information acquired in response to the authentication operation input request, extracts the feature amount of the object at the time of the authentication operation, and outputs it to the authentication processing unit 104. In addition, when the vehicle is traveling and the operator has already been authenticated, the control unit 103 outputs the input operation detection information input from the detection information acquisition unit 101 to the in-vehicle device 300 as control information.
- The feature amount of the object at the time of the authentication operation extracted by the control unit 103 is, for example, when the operator holds an open hand over the panel as the authentication operation, the positions of the vertices of the operator's fingers, or a combination of the vertices of each finger and the contour shapes of the operator's hand and fingers. The above-described feature amounts are examples; any information that can identify the authentication operation of the operator can be used as a feature amount. More detailed control contents of the control unit 103 will be described later.
- The authentication processing unit 104 collates the feature amount of the object at the time of the authentication operation extracted by the control unit 103 with the feature amounts stored in the authentication database 105, and identifies whether the operator who performed the authentication operation is the driver or a passenger other than the driver.
- The authentication processing unit 104 outputs the identification result to the control unit 103. Specifically, when the extracted feature amount of the object at the time of the authentication operation matches any one of the feature amounts stored in the authentication database 105, the authentication processing unit 104 identifies the operator as a passenger other than the driver.
- When no match is found, the authentication processing unit 104 identifies the operator as the driver.
- When the control unit 103 has extracted the contour shapes of the operator's hand and fingers as the feature amount of the object during the authentication operation, the authentication processing unit 104 collates the extracted contour shapes with the hand and finger contour shape data stored in the authentication database 105 to determine whether the operator is a passenger other than the driver.
- The feature amount of the object at the time of the authentication operation need not completely coincide with a feature amount stored in the authentication database 105; if the degree of coincidence is equal to or greater than a preset threshold value, the feature amounts may be regarded as matching.
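- The threshold-based matching described above can be sketched as follows. This is an illustrative Python sketch; the similarity measure is caller-supplied and the names and threshold value are assumptions, not part of the patent.

```python
def matches_any(feature, stored_features, similarity, threshold=0.8):
    """True if the degree of coincidence with any stored feature amount is
    at least `threshold`; exact equality is not required."""
    return any(similarity(feature, stored) >= threshold
               for stored in stored_features)

def identify_operator(feature, stored_features, similarity, threshold=0.8):
    """A match with any feature amount in the authentication database means
    the operator is a passenger other than the driver; otherwise the
    operator is identified as the driver."""
    if matches_any(feature, stored_features, similarity, threshold):
        return "passenger"
    return "driver"
```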
- The authentication database 105 stores feature amounts of objects that can be identified as authentication operations by passengers other than the driver. For example, for a right-hand-drive vehicle, the authentication database 105 stores feature amounts of the shape estimated to be extracted when the passenger in the front passenger seat holds an open right hand close to or in contact with the touch panel 200.
- the in-vehicle device 300 is a navigation device, an audio device, or the like mounted on the vehicle.
- the in-vehicle device 300 controls the device itself based on information indicating the position or range where the input operation indicated by the detection information input from the control unit 103 is detected.
- The display device 400 is configured by, for example, a liquid crystal display or an organic EL (electroluminescence) display, and displays information for notifying the driver and passengers based on control information input from the in-vehicle information processing apparatus 100 and the in-vehicle device 300. Specifically, based on the control information input from the control unit 103, it displays a screen prompting input of an authentication operation, a screen notifying that the authentication processing has been completed, and the like. When the in-vehicle device 300 is a navigation device, the display device 400 also displays information such as a map, a departure place, a destination, and a guidance route. Note that the display device 400 and the touch panel 200 may be integrated, and an input to the touch panel 200 may be received as an operation for selecting information displayed on the display device 400.
- Speaker 500 outputs information for notifying the driver and passengers based on control information input from in-vehicle information processing apparatus 100 and in-vehicle apparatus 300. Specifically, based on the control information input from the control unit 103, a voice prompting input of an authentication operation, a voice notifying that the authentication process has been completed, and the like are output.
- FIG. 2 is a diagram illustrating a hardware configuration example of the in-vehicle information processing apparatus 100 according to the first embodiment.
- The detection information acquisition unit 101, the vehicle information acquisition unit 102, the control unit 103, and the authentication processing unit 104 in the in-vehicle information processing apparatus 100 are realized by a processing circuit. That is, the in-vehicle information processing apparatus 100 includes a processing circuit for extracting the feature amount of the object from the authentication operation detection information input in response to the authentication operation request, and for identifying whether the operator is the driver or a passenger other than the driver.
- The processing circuit may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
- The functions of the detection information acquisition unit 101, the vehicle information acquisition unit 102, the control unit 103, and the authentication processing unit 104 may each be realized by a separate processing circuit, or the functions of the units may be collectively realized by a single processing circuit.
- The processing circuit may also be a CPU (Central Processing Unit). In that case, the processing circuit is the CPU 110 that executes programs stored in the memory 120 shown in FIG. 2, and the functions of the detection information acquisition unit 101, the vehicle information acquisition unit 102, the control unit 103, and the authentication processing unit 104 are realized by software, firmware, or a combination of software and firmware. The software or firmware is described as programs and stored in the memory 120, and the CPU 110 implements the function of each unit by reading and executing the programs stored in the memory 120. In other words, these programs cause a computer to execute the procedures or methods of the detection information acquisition unit 101, the vehicle information acquisition unit 102, the control unit 103, and the authentication processing unit 104.
- the CPU 110 is, for example, a central processing unit, a processing unit, an arithmetic unit, a processor, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).
- the memory 120 may be a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), and an EEPROM (Electrically EPROM). Further, it may be a magnetic disk such as a hard disk or a flexible disk, or an optical disk such as a mini disk, CD (Compact Disc), or DVD (Digital Versatile Disc).
- The control unit 103 refers to the traveling state of the vehicle acquired from the vehicle information acquisition unit 102, and determines that the vehicle is stopped or parked when the vehicle speed is 0 or the parking brake is ON.
- the control unit 103 outputs information indicating the position or range where the input operation is detected, described in the detection information acquired by the detection information acquisition unit 101, to the in-vehicle device 300.
- the in-vehicle device 300 identifies the operation of the operator based on the input information indicating the position or range, and executes a process corresponding to the identified operation.
- The control unit 103 refers to the traveling state of the vehicle acquired from the vehicle information acquisition unit 102, and determines that the vehicle is traveling when a state at or above a preset vehicle speed continues for a predetermined time.
- the preset vehicle speed is, for example, 5 km / h.
- the predetermined time is, for example, 3 seconds.
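- The two traveling-state judgments above can be sketched as follows. This is an illustrative Python sketch, not part of the patent; the class and method names are assumptions, and timestamps are passed explicitly for clarity. The 5 km/h and 3 second values come from the example above.

```python
class TravelStateMonitor:
    """Judges 'traveling' when the vehicle speed has stayed at or above a
    preset threshold for a predetermined time (5 km/h and 3 seconds in the
    example above)."""
    SPEED_THRESHOLD_KMH = 5.0
    DURATION_S = 3.0

    def __init__(self):
        self._above_since = None  # time at which speed first met the threshold

    def update(self, speed_kmh, now_s):
        """Feed one (speed, time) sample; return True once judged traveling."""
        if speed_kmh >= self.SPEED_THRESHOLD_KMH:
            if self._above_since is None:
                self._above_since = now_s
            return now_s - self._above_since >= self.DURATION_S
        self._above_since = None  # dropped below threshold: reset the timer
        return False

def is_stopped_or_parked(speed_kmh, parking_brake_on):
    """Stopped/parked judgment described earlier: speed 0 or parking brake ON."""
    return speed_kmh == 0.0 or parking_brake_on
```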
- the control unit 103 outputs control information for requesting the operator to input an authentication operation to at least one of the display device 400 and the speaker 500.
- The authentication operation requested from the operator is, for example, "open the right hand and hold it up" at a position away from the touch panel 200 by a predetermined distance d, or at a predetermined position on the touch panel 200.
- The distance d is a distance at which the touch panel 200 can detect the proximity of an object, and is a value satisfying a condition such as 0 cm < d ≤ 5 cm.
- the control unit 103 analyzes the detection information of the authentication operation acquired by the detection information acquisition unit 101, and extracts the feature amount of the object during the authentication operation. The control unit 103 outputs the extracted feature amount to the authentication processing unit 104.
- The control unit 103 refers to the operator identification result input from the authentication processing unit 104 and operates as follows. When the identification result indicates that the operator is the driver, the control unit 103 does not output subsequent input operation detection information input from the detection information acquisition unit 101 to the in-vehicle device 300; that is, the control unit 103 does not accept the operator's operations via the touch panel 200.
- When the identification result indicates that the operator is a passenger other than the driver, the control unit 103 outputs subsequent input operation detection information input from the detection information acquisition unit 101 to the in-vehicle device 300; that is, the control unit 103 accepts the operator's operations via the touch panel 200.
- FIG. 3 is a diagram illustrating an example of an authentication pattern stored in the authentication database 105 of the in-vehicle information processing apparatus 100 according to the first embodiment.
- The authentication database 105 stores, as authentication patterns, a plurality of operator states that can be identified as authentication operations by passengers other than the driver. Although not shown, a feature amount of the object is stored in association with each authentication pattern.
- The example of FIG. 3 shows that the following are stored: an authentication pattern in which the passenger in the front passenger seat of a right-hand-drive vehicle performs an authentication operation with the right hand open; an authentication pattern in which the passenger in the front passenger seat of a left-hand-drive vehicle performs an authentication operation with the left hand open; an authentication pattern in which the passenger in the left rear seat of a right-hand-drive vehicle performs an authentication operation with the right hand open; and an authentication pattern in which the passenger in the right rear seat of a left-hand-drive vehicle performs an authentication operation with the left hand open.
- The authentication patterns are not limited to the example shown in FIG. 3; various authentication patterns can be stored as long as the operator can be identified as a passenger other than the driver.
- In step ST1, initialization of the in-vehicle information processing apparatus 100 includes, for example, processing for resetting the authentication state of the operator to unauthenticated when an authenticated operator is set in the control unit 103.
- In step ST2, the detection information acquisition unit 101 determines whether detection information of an input operation, indicating that an object has approached or touched the touch panel 200, has been acquired. When the detection information is not acquired (step ST2; NO), the determination process of step ST2 is repeated.
- When the input operation detection information is acquired (step ST2; YES), the detection information acquisition unit 101 outputs the acquired detection information to the control unit 103.
- The control unit 103 refers to the information indicating the traveling state of the vehicle, which is input from the vehicle information acquisition unit 102 constantly or at predetermined intervals, and determines whether the vehicle is traveling (step ST3).
- When the vehicle is not traveling (step ST3; NO), the control unit 103 proceeds to the process of step ST5 described later.
- When the vehicle is traveling (step ST3; YES), the control unit 103 determines whether the operator has been authenticated (step ST4).
- When the operator has been authenticated (step ST4; YES), the control unit 103 outputs the input operation detection information to the in-vehicle device 300 as control information (step ST5).
- the detection information acquisition unit 101 determines whether or not the input operation detection information is acquired again within a predetermined time (step ST6).
- When the input operation detection information is acquired within the predetermined time (step ST6; YES), the process returns to step ST3.
- When the input operation detection information is not acquired within the predetermined time (step ST6; NO), the process returns to step ST1. When the operator has not been authenticated (step ST4; NO), the control unit 103 performs the authentication process (step ST7) and returns to the process of step ST2.
- The determination in step ST6 of whether the input operation detection information has been acquired again within a predetermined time is made, for example, based on whether a predetermined time (for example, 1 second) has elapsed since the detection information acquisition unit 101 last acquired detection information of object proximity from the touch panel 200, or based on whether a predetermined time (for example, 3 seconds) has elapsed since the detection information acquisition unit 101 last acquired detection information of object contact from the touch panel 200.
- In step ST6, instead of determining whether the input operation detection information has been acquired again within a predetermined time, it may be determined whether a series of operation tasks has been completed. In this case, the process returns to step ST1 when it is determined that the series of operation tasks has been completed, and returns to step ST2 when it is determined that it has not. For example, in a task of searching for facilities by genre in a navigation device, the determination is made based on whether the "genre search" button has been pressed, whether a genre has been selected, or whether the "search" button has been pressed.
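- The flow of steps ST1 through ST7 can be sketched as a scripted loop. This is an illustrative Python sketch, not part of the patent; the event encoding and the helper callables are assumptions.

```python
def run_flow(events, is_traveling, run_authentication):
    """Drive the ST1-ST7 flow over a scripted list of events.

    Each event is "input" for input operation detection information acquired
    within the predetermined time (ST2/ST6; YES) or "timeout" when none
    arrives in time (ST6; NO). `is_traveling()` stands in for the vehicle
    information; `run_authentication()` stands in for the ST11-ST19
    subroutine and returns whether the operator was authenticated.
    Returns a log of the steps taken.
    """
    log = []
    authenticated = False  # ST1: initialization
    for event in events:
        if event == "timeout":
            authenticated = False  # ST6; NO -> ST1: re-initialize
            log.append("ST1")
            continue
        # "input": detection information acquired (ST2; YES)
        if not is_traveling() or authenticated:  # ST3 / ST4
            log.append("ST5")  # forward detection info to in-vehicle device 300
        else:
            authenticated = run_authentication()  # ST7, then back to ST2
            log.append("ST7")
    return log
```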
- the control unit 103 outputs control information requesting the operator to input an authentication operation to at least one of the display device 400 and the speaker 500 (step ST11).
- the detection information acquisition unit 101 determines whether or not an authentication operation has been input in response to an input request for the authentication operation displayed or output based on the control information output in step ST11 (step ST12).
- When the authentication operation is input, the detection information acquisition unit 101 outputs the detection information to the control unit 103, and the control unit 103 analyzes the input detection information, detects the shape of the object, and extracts the feature amount of the detected shape (step ST13).
- the authentication processing unit 104 collates the feature amount extracted in step ST13 with the feature amount associated with each authentication pattern stored in the authentication database 105, and identifies the operator (step ST14).
- the control unit 103 refers to the identification result of the operator and determines whether or not the operator is a passenger other than the driver (step ST15).
- When the operator is not a passenger other than the driver (step ST15; NO), the process proceeds to step ST17.
- When the operator is a passenger other than the driver (step ST15; YES), the control unit 103 authenticates the operator (step ST16) and ends the authentication process (step ST17). Thereafter, the flow returns to the process of step ST2 in the flowchart of FIG. 4.
- When authenticating the operator, the control unit 103 may output control information to the display device 400 or the speaker 500 so as to perform at least one of a display and a sound output notifying that the operator has been authenticated.
- When the authentication operation is not input (step ST12; NO), the control unit 103 determines whether a predetermined time has elapsed since the authentication operation input request was output (step ST18). When the predetermined time has elapsed (step ST18; YES), the process proceeds to step ST17 and the authentication process is terminated.
- When the predetermined time has not elapsed (step ST18; NO), the control unit 103 refers to the information indicating the traveling state of the vehicle input from the vehicle information acquisition unit 102 and determines whether the vehicle is traveling (step ST19).
- When the vehicle is traveling (step ST19; YES), the process returns to step ST12, and the detection information acquisition unit 101 waits for an authentication operation to be input.
- When the vehicle is not traveling (step ST19; NO), the process proceeds to step ST17, and the authentication process is terminated.
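- Steps ST11 through ST19 can likewise be sketched in Python. All callables here are illustrative stand-ins introduced for this sketch; none of the names come from the patent.

```python
def authentication_process(prompt, next_detection, extract_feature,
                           identify, is_traveling, deadline_expired):
    """Illustrative sketch of the ST11-ST19 authentication flow.

    `prompt()` issues the ST11 request; `next_detection()` returns
    authentication-operation detection information or None (ST12);
    `extract_feature` / `identify` cover ST13-ST14; `deadline_expired()`
    is the ST18 timeout check; `is_traveling()` is the ST19 check.
    Returns True when the operator was authenticated (ST16).
    """
    prompt()  # ST11: request input of the authentication operation
    while True:
        detection = next_detection()  # ST12
        if detection is not None:
            feature = extract_feature(detection)  # ST13
            operator = identify(feature)          # ST14
            # ST15: passenger other than the driver? (ST16 or ST17)
            return operator == "passenger"
        if deadline_expired():  # ST18; YES -> ST17: end without authenticating
            return False
        if not is_traveling():  # ST19; NO -> ST17
            return False
        # ST19; YES: continue waiting for the authentication operation
```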
- FIG. 6A, 6B, and 6D are diagrams illustrating an example of display control by the control unit 103 of the in-vehicle information processing apparatus 100 according to Embodiment 1.
- FIG. 6C is a diagram illustrating an example of information stored in the authentication database 105 of the in-vehicle information processing apparatus 100 according to Embodiment 1.
- FIGS. 6A and 6B show display examples based on the control information output from the control unit 103 to the display device 400 in step ST11 of the flowchart of FIG. 5 described above.
- a comment 402 that prompts the operator to perform an authentication operation is displayed on the screen 401 of the display device 400. Further, an image 403 showing the shape of the hand that guides the authentication operation is displayed on the screen 401 of the display device 400. The operator performs an authentication operation according to the comment 402 or the image 403.
- The hand shape guiding the authentication operation may also be displayed on the screen 401 of the display device 400 with its display position taken into account, like the image 403a.
- The control unit 103 outputs control information for displaying the image 403a on the screen 401a, which is the screen farther from the position of the driver's seat of the vehicle.
- The control unit 103 also controls the display of a comment designating the position where the authentication operation is to be performed, such as "Please hold your right hand over the position of the hand image," as the comment 402a prompting the operator to perform the authentication operation.
- The above-described example divides the screen 401 vertically into two regions, the screens 401a and 401b, but this is only an example. The control unit 103 may divide the screen into three or four regions and output control information that sets the position for performing the authentication operation accordingly.
- The authentication processing unit 104 collates the extracted feature amount with the feature amounts stored in the authentication database 105, taking into account whether the feature amount of the object extracted by the control unit 103 during the authentication operation exists in the designated area (the screen 401a in FIG. 6B). For example, as shown in FIG. 6C, when the image 403a is displayed, areas 405a, 405b, 405c, 405d, and 405e centered on the vertices 404a, 404b, 404c, 404d, and 404e of each finger of the image 403a are set and stored in the authentication database 105.
- The authentication processing unit 104 determines whether the positions of the vertices of the operator's fingers, extracted as feature amounts by the control unit 103, are located within the areas 405a, 405b, 405c, 405d, and 405e stored in the authentication database 105.
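- The area check of FIG. 6C can be sketched as a point-in-circle test. This is an illustrative Python sketch, not part of the patent; the coordinate representation and radii are assumptions.

```python
import math

def fingertips_in_areas(extracted_vertices, stored_areas):
    """Check whether every extracted fingertip vertex (x, y) lies inside its
    corresponding stored circular area (cx, cy, radius), i.e. the areas
    405a-405e centered on the guide image's finger vertices 404a-404e."""
    if len(extracted_vertices) != len(stored_areas):
        return False
    return all(math.hypot(x - cx, y - cy) <= r
               for (x, y), (cx, cy, r) in zip(extracted_vertices, stored_areas))
```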
- FIG. 6D shows a display example based on the control information output from the control unit 103 to the display device 400 in step ST16 of the flowchart of FIG. 5 described above.
- On the screen, a comment 406 is displayed indicating that the operator has been authenticated and that input operations for operating the in-vehicle device 300 are accepted.
- FIGS. 6A, 6B, and 6D illustrate examples in which the comment 402, the comment 402a, or the comment 406 is displayed on the screen 401 of the display device 400.
- Alternatively, the control unit 103 may output to the speaker 500 control information for outputting a voice prompting the authentication operation or a voice indicating that the operator has been authenticated.
- Based on the control information output from the control unit 103, the speaker 500 outputs, for example, "Authentication will be performed; please hold up your right hand," "Authentication will be performed; please open your right hand and hold it over the position of the image," or "Authenticated; operation is now possible."
- the control unit 103 may be configured to shift to a process for requesting an authentication operation of the operator, triggered by the detection information acquisition unit 101 acquiring the detection information from the hardware switch.
- the detection information acquisition unit 101 acquires, as detection information, pressing information indicating that a switch has been pressed from a hardware switch (not shown).
- based on the detection information, the control unit 103 generates control information requesting input of an authentication operation on the touch panel 200, and outputs the control information to at least one of the display device 400 and the speaker 500.
- the operator inputs an authentication operation to the touch panel 200.
- the detection information acquisition unit 101 acquires detection information from the touch panel 200, and an operator authentication process is performed in the control unit 103 and the authentication processing unit 104.
- as described above, the in-vehicle information processing apparatus according to Embodiment 1 includes the control unit 103, which controls the output of information requesting input of an authentication operation or the output of detection information in accordance with the detection information acquired by the detection information acquisition unit 101 and the vehicle information acquired by the vehicle information acquisition unit 102, and the authentication processing unit 104, which collates the input authentication operation against the output information requesting input of the authentication operation. Therefore, the processing load for verifying the authentication operation can be suppressed.
- the control unit 103 is configured to output the information requesting input of an authentication operation when the vehicle is traveling and the operator who performed the input operation has not been authenticated, and to extract the feature amount of the operator from the detection information of the authentication operation input in response to that request. Therefore, it is not necessary to perform authentication processing for the operator every time the operator performs an input operation, and the load of the processing for verifying the authentication operation can be suppressed.
- the control unit 103 may be configured to execute initialization to return an authenticated operator to an unauthenticated state.
- FIG. 8 is a block diagram showing the configuration of the in-vehicle information processing apparatus 100 according to the second embodiment.
- the detection information acquisition unit 101 of the in-vehicle information processing apparatus 100 according to the second embodiment acquires detection information from the infrared camera 201 and a hardware switch (hereinafter referred to as H / W switch) 202 in addition to the touch panel 200.
- the infrared camera 201 is installed on, for example, the upper part of the touch panel 200, the upper part of the in-vehicle device 300 fitted in the dashboard, or the upper part of the display device 400.
- the infrared camera 201 is configured to be able to capture a wide area so that it can image the region in which the operator performs the input operation and the authentication operation.
- for this purpose, a plurality of infrared cameras 201 may be arranged, or an infrared camera 201 with a wide angle of view may be applied.
- the H / W switch 202 is a switch provided for operating the in-vehicle device 300. When the switch is pressed, information indicating that the switch is pressed is output as detection information.
- a configuration will be described in which the detection information acquisition unit 101 uses detection information acquired from any one of the touch panel 200, the infrared camera 201, and the H/W switch 202 as a trigger, and the control unit 103 performs authentication processing using a captured image of the infrared camera 201.
- the explanation is divided into three cases: (2-1) the case where detection information acquired from the touch panel 200 is used as the trigger, (2-2) the case where detection information acquired from a captured image of the infrared camera 201 is used as the trigger, and (2-3) the case where detection information acquired from the H/W switch 202 is used as the trigger.
- the detection information acquisition unit 101 is connected to the touch panel 200 and the infrared camera 201.
- the detection information acquisition unit 101 acquires detection information indicating that the proximity or contact of an object has been detected from the touch panel 200.
- when the control unit 103 determines that the vehicle is traveling and the operator has not been authenticated, the control unit 103 outputs control information requesting that an authentication operation be input to the infrared camera 201 to at least one of the display device 400 and the speaker 500.
- the detection information acquisition unit 101 is connected to the infrared camera 201.
- when the detection information acquisition unit 101 refers to a captured image of the infrared camera 201 and detects a captured image indicating that an object is close to or in contact with the touch panel 200 or the H/W switch 202, it acquires the captured image as detection information.
- when the control unit 103 determines that the vehicle is traveling and the operator has not been authenticated, the control unit 103 outputs control information requesting that an authentication operation be input to the infrared camera 201 to at least one of the display device 400 and the speaker 500.
- the detection information acquisition unit 101 stores an area, set in advance, of the captured image of the infrared camera 201 in which an object performing an input operation appears. When a luminance equal to or higher than a predetermined value is detected in the captured image of that area for a predetermined time (for example, 1 second) or longer, the detection information acquisition unit 101 detects that the object is close to or in contact with the touch panel 200 or the H/W switch 202, and acquires the captured image as detection information.
- the brightness of the captured image is indicated in 255 levels using values from “0” to “254”, for example.
- the detection information acquisition unit 101 determines that an object performing an input operation has been captured when the luminance value in the preset region of the captured image is, for example, "150" or more; this luminance value is associated in advance with the proximity or contact state of the object.
- the detection information acquisition unit 101 stores an area of the captured image of the infrared camera 201 corresponding to the arrangement position of the touch panel 200 or the H/W switch 202, and determines whether or not a luminance equal to or higher than the predetermined value is detected in the captured image of that area for the predetermined time or longer.
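The luminance-and-duration determination described in (2-2) can be sketched as follows. This is an illustrative sketch only: the luminance threshold "150" and the 1-second hold time come from the description, while the camera frame rate, image size, and region coordinates are assumptions introduced for the example:

```python
# Sketch of proximity detection from the infrared camera image: an object is
# judged close to or in contact with the touch panel 200 / H/W switch 202 when
# a preset region of the image stays at or above a luminance threshold for a
# predetermined time.

LUMINANCE_THRESHOLD = 150   # luminance value "150" from the description
HOLD_TIME_S = 1.0           # predetermined time (1 second) from the description
FRAME_RATE = 10             # assumed camera frame rate (frames per second)

def region_is_bright(frame, region):
    """True if every pixel of the preset region meets the luminance threshold."""
    (x0, y0), (x1, y1) = region
    pixels = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return min(pixels) >= LUMINANCE_THRESHOLD

def detect_proximity(frames, region):
    """Return True once the region stays bright for HOLD_TIME_S or longer."""
    needed = int(HOLD_TIME_S * FRAME_RATE)
    streak = 0
    for frame in frames:
        streak = streak + 1 if region_is_bright(frame, region) else 0
        if streak >= needed:
            return True
    return False

# Hypothetical 4x4 frames; the watched region is the top-left 2x2 block.
dark = [[40] * 4 for _ in range(4)]
bright = [[200] * 4 for _ in range(4)]
frames = [dark] * 3 + [bright] * 12   # bright for ~1.2 s at the assumed 10 fps
print(detect_proximity(frames, ((0, 0), (2, 2))))  # prints True
```

The same streak logic, inverted, would serve the end-of-operation determination described later (bright portion disappearing for a predetermined time).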
- the detection information acquisition unit 101 is connected to the infrared camera 201 and the H / W switch 202.
- the detection information acquisition unit 101 acquires detection information indicating that the switch has been pressed from the H / W switch 202.
- when the control unit 103 determines that the vehicle is traveling and the operator has not been authenticated, the control unit 103 outputs control information requesting that an authentication operation be input to the infrared camera 201 to at least one of the display device 400 and the speaker 500.
- applicable authentication operations input to the infrared camera 201 include, in addition to an operation from which the shape of the operator's hand can be recognized, an operation from which the arrival direction of the operator's hand and arm can be recognized, and an operation from which the feature amount of the operator's face can be recognized.
- the detection information acquisition unit 101 acquires, as detection information, a captured image of the authentication operation input in response to the authentication operation input request displayed or output in (2-1) to (2-3) above.
- the control unit 103 analyzes the captured image, which is the detection information acquired by the detection information acquisition unit 101, performs at least one of the authentications based on, for example, the shape of the operator's hand, the shape of the operator's hand together with the arrival direction of the arm, or the operator's face, and extracts the corresponding feature amount.
- by acquiring a captured image from the infrared camera 201 as detection information, it is possible to recognize the arrival direction of the arm and the face, in addition to detecting the shape of the hand.
- the control unit 103 analyzes the captured image, and extracts the feature amount of the region whose luminance value is equal to or greater than a predetermined value.
- when the operator points the palm toward the infrared camera 201, the palm comes close to the infrared camera 201. Therefore, when the luminance of the captured image is expressed in 255 levels from 0 to 254, the imaging region of the palm shows a high luminance value of, for example, 200 or more. Accordingly, the control unit 103 extracts a region having a luminance value of, for example, 200 or more, and extracts, as feature amounts from this region, the contour shape of the hand and fingers or the positions of the vertices of the respective fingers.
- when extracting a feature amount from the arrival direction of the operator's arm, the control unit 103 analyzes the captured image, identifies a region having a luminance value equal to or greater than a predetermined value, and extracts the inclination of the identified region as the feature amount. When the operator points the palm toward the infrared camera 201 and the infrared camera 201 images the operator's palm and arm, the operator's palm is closest to the infrared camera 201, and the operator's arm is slightly farther from the infrared camera 201 than the palm.
- the control unit 103 identifies the area where the luminance value is, for example, 200 or more as the palm imaging area, and extracts the above-described feature amount. Further, the control unit 103 identifies an area where the luminance value is, for example, 150 to 199 as an arm imaging area, and extracts a slope of a rectangular area that approximates the identified imaging area as a feature amount.
- the inclination of the rectangular area is, for example, an inclination with respect to the vertical axis or the horizontal axis of the captured image.
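The luminance-band segmentation and arm-inclination extraction described above can be sketched as follows. The luminance bands (200 or more for the palm, 150 to 199 for the arm) come from the description; the endpoint-based tilt estimate and the image values are assumptions introduced for the example:

```python
# Sketch of palm/arm segmentation by luminance band, and extraction of the
# arm's arrival direction as an inclination relative to the vertical image axis.
import math

def segment_by_luminance(image):
    """Split pixel coordinates into palm (>=200) and arm (150-199) regions."""
    palm, arm = [], []
    for y, row in enumerate(image):
        for x, lum in enumerate(row):
            if lum >= 200:
                palm.append((x, y))
            elif 150 <= lum <= 199:
                arm.append((x, y))
    return palm, arm

def arm_inclination_deg(arm_pixels):
    """Inclination of the arm region relative to the vertical image axis,
    estimated from the displacement between the region's top and bottom points."""
    top = min(arm_pixels, key=lambda p: p[1])
    bottom = max(arm_pixels, key=lambda p: p[1])
    return math.degrees(math.atan2(bottom[0] - top[0], bottom[1] - top[1]))

# Hypothetical 5x5 image: a bright palm blob above a diagonal arm streak.
img = [[0] * 5 for _ in range(5)]
img[0][2] = img[1][2] = 230                        # palm pixels (>= 200)
img[2][2], img[3][3], img[4][4] = 170, 170, 170    # arm pixels (150-199)
palm, arm = segment_by_luminance(img)
print(len(palm), round(arm_inclination_deg(arm)))  # prints "2 45"
```

A production implementation would fit a rectangle to the arm region rather than using two endpoints, but the extracted feature (a tilt angle against an image axis) is the same.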
- when extracting feature amounts from the operator's face, the control unit 103 analyzes the captured image and extracts the eyes, the nose, the shape of the jaw, the inclination of the central axis of the face, and the like as feature amounts.
- the authentication processing unit 104 collates the feature amount extracted by the control unit 103 with the feature amount associated with each authentication pattern stored in the authentication database 105, and identifies the operator.
- the authentication database 105 stores hand shapes, arm arrival directions, and the like as feature amounts associated with each authentication pattern.
- the authentication database 105 defines a passenger of a vehicle as an authentication pattern, and stores a feature amount associated with each passenger's face in association with the passenger's authentication pattern.
- the authentication processing unit 104 collates the extracted feature amount of the hand shape with the feature amounts stored in the authentication database 105, and determines whether the operator is a passenger other than the driver.
- alternatively, the authentication processing unit 104 collates the feature amounts of the operator's hand shape and arm arrival direction with the feature amounts stored in the authentication database 105, and determines whether the operator is a passenger other than the driver.
- alternatively, the authentication processing unit 104 collates the feature amount obtained as a result of authenticating the operator's face with the feature amounts stored in the authentication database 105, and specifies whether the operator is a passenger other than the driver.
- since the authentication processing unit 104 performs verification using the result of authenticating the arrival direction of the hand and arm or the face in addition to the feature amount of the object, the accuracy of identifying the operator can be improved.
- when the inclination of the central axis of the face extracted as a feature amount by the control unit 103 is larger than a predetermined value, the authentication processing unit 104 determines that the driver may be impersonating a passenger by inputting the authentication operation in an unnatural posture, and identifies the operator as not being a passenger.
- the authentication processing unit 104 can improve the accuracy in identifying the operator by considering the inclination of the central axis of the operator's face when identifying whether the operator is a passenger.
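The face-axis check described above can be sketched as follows; the 20-degree limit is an assumption introduced for the example, since the description says only "a predetermined value":

```python
# Sketch of the impersonation check: a face match only counts as a passenger
# identification when the inclination of the face's central axis is within a
# predetermined limit; a large tilt suggests the driver leaning over to input
# the authentication operation in an unnatural posture.

MAX_FACE_AXIS_TILT_DEG = 20.0  # assumed value for "a predetermined value"

def accept_as_passenger(face_matches_database, face_axis_tilt_deg):
    """A database match is accepted only for a near-upright face axis."""
    return face_matches_database and abs(face_axis_tilt_deg) <= MAX_FACE_AXIS_TILT_DEG

print(accept_as_passenger(True, 5.0))    # prints True  (upright posture)
print(accept_as_passenger(True, 35.0))   # prints False (possible impersonation)
```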
- the operation of the in-vehicle information processing apparatus 100 according to Embodiment 2 will be described.
- the operation of the in-vehicle information processing apparatus 100 according to Embodiment 2 is the same as that of the in-vehicle information processing apparatus 100 according to Embodiment 1 shown in FIGS.
- a case where the detection information of the infrared camera 201 is applied to determine whether or not the input operation detection information has been acquired again within the predetermined time indicated in step ST6 of the flowchart of FIG. 4 will be described.
- in this case, the detection information acquisition unit 101 analyzes the captured image captured by the infrared camera 201, and makes the determination based on whether or not the portion of a preset image area whose luminance is greater than the predetermined value has disappeared and a predetermined time has elapsed.
- the preset area in the captured image is the area in which an object appears when the operator performs an input operation or an authentication operation, or the area in which the touch panel 200 or the H/W switch 202 is arranged.
- thus, the detection information acquisition unit 101 can determine that the operator has moved his or her hand or the like away from the touch panel 200 or the H/W switch 202 and is not performing an operation.
- the predetermined luminance value used for determination by the detection information acquisition unit 101 is a value set based on the correlation between the luminance of the captured image and the distance between the infrared camera 201 and the object to be imaged. For example, it is assumed that the object and the infrared camera 201 are separated by a predetermined distance when the operator is performing an operation.
- in the detection information acquisition unit 101, a predetermined luminance value (for example, a luminance value of "150") corresponding to the predetermined distance is set in advance.
- in step ST6, instead of determining whether or not the input operation detection information has been acquired again within the predetermined time, it may be determined whether or not a series of operation tasks has been completed.
- as described above, according to Embodiment 2, the detection information acquisition unit 101 acquires, as detection information, information indicating the proximity or contact of the operator's object detected by at least one of the touch panel 200, the H/W switch 202, and the analysis of the captured image of the infrared camera 201, and the control unit 103 determines the input of the authentication operation from the analysis result of the captured image of the infrared camera 201.
- further, since the authentication processing unit 104 is configured to acquire, from the control unit 103, a captured image of the operator performing the authentication operation and to extract the inclination of the central axis of the operator's face from the captured image, the accuracy in identifying the operator can be further improved.
- FIG. 9 is a block diagram showing the configuration of the in-vehicle information processing apparatus 100a according to the third embodiment.
- the in-vehicle information processing device 100a includes a driver database 106 and an authentication processing unit 104a in place of the authentication database 105 and the authentication processing unit 104 of the in-vehicle information processing device 100 according to the first embodiment shown in FIG. 1.
- the same or corresponding parts as those of the in-vehicle information processing apparatus 100 according to the first embodiment are denoted by the same reference numerals as those used in the first embodiment, and description thereof is omitted or simplified.
- the driver database 106 stores an authentication pattern that can be identified as the driver's authentication operation, and a feature amount of the driver corresponding to each authentication pattern.
- FIG. 10 is a diagram illustrating an example of the authentication patterns stored in the driver database 106 of the in-vehicle information processing apparatus 100a according to the third embodiment.
- as shown in FIG. 10, the authentication patterns include a pattern in which the driver of a right-hand-drive vehicle performs the authentication operation with the left hand open, and a pattern in which the driver of a left-hand-drive vehicle performs the authentication operation with the right hand open.
- the authentication patterns are not limited to the examples shown in FIG. 10, and various patterns can be applied as long as the operation pattern allows the operator to be identified as the driver or as a passenger other than the driver.
- the authentication processing unit 104a collates the feature amount extracted by the control unit 103 with the feature amounts associated with the authentication patterns stored in the driver database 106, and identifies whether or not the operator is the driver. Specifically, the authentication processing unit 104a specifies that the operator is the driver when the extracted feature amount matches a feature amount in the driver database 106. On the other hand, when the extracted feature amount does not match any feature amount in the driver database 106, the authentication processing unit 104a specifies that the operator is a passenger other than the driver.
- the authentication processing unit 104a identifies the operator by collating the feature amount extracted in step ST13 with the feature amounts associated with the authentication patterns stored in the driver database 106.
- the other processes are the same as those in the flowchart shown in FIG.
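The collation logic of Embodiment 3 can be sketched as follows. This is an illustrative sketch only: the driver database 106 stores feature amounts of driver authentication patterns (the left-hand/right-hand patterns of FIG. 10), and the string-valued "feature amounts" here are a simplification introduced for the example:

```python
# Sketch of Embodiment 3's identification: a match against a stored driver
# pattern identifies the operator as the driver; any non-match identifies the
# operator as a passenger other than the driver.

DRIVER_DATABASE = {
    # hypothetical encoding of the FIG. 10 patterns
    "right_hand_drive": "left_hand_open",
    "left_hand_drive": "right_hand_open",
}

def identify_operator(extracted_feature, steering_position):
    """Return 'driver' on a driver-database match, otherwise 'passenger'."""
    if DRIVER_DATABASE.get(steering_position) == extracted_feature:
        return "driver"
    return "passenger"

print(identify_operator("left_hand_open", "right_hand_drive"))   # prints "driver"
print(identify_operator("right_hand_open", "right_hand_drive"))  # prints "passenger"
```

Note the inversion relative to Embodiment 1: there, a match against the authentication database 105 identified a passenger, whereas here a match against the driver database 106 identifies the driver.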
- as described above, the in-vehicle information processing apparatus according to Embodiment 3 includes the control unit 103, which controls the output of information requesting input of an authentication operation or the output of detection information in accordance with the detection information acquired by the detection information acquisition unit 101 and the vehicle information acquired by the vehicle information acquisition unit 102, and the authentication processing unit 104a, which collates the input authentication operation against the output information requesting input of the authentication operation. Therefore, the processing load for verifying the authentication operation can be suppressed.
- the control unit 103 is configured to output the information requesting input of an authentication operation when the vehicle is traveling and the operator who performed the input operation has not been authenticated, and to extract the feature amount of the operator from the detection information of the authentication operation input in response to that request. Therefore, it is not necessary to perform authentication processing for the operator every time the operator performs an input operation, and the load of the processing for verifying the authentication operation can be suppressed.
- in Embodiment 3 described above, the driver database 106 and the authentication processing unit 104a are provided in place of the authentication database 105 and the authentication processing unit 104 of the in-vehicle information processing apparatus 100 of the first embodiment.
- alternatively, the in-vehicle information processing apparatus 100 of the second embodiment may similarly be provided with a driver database 106 and an authentication processing unit 104a.
- in the above description, the case where feature amounts associated in advance with each authentication pattern are stored in the authentication database 105 or the driver database 106 has been described as an example.
- alternatively, the driver and the passengers may determine the shape of the authentication operation themselves and register the feature amount of that shape in the authentication database 105 or the driver database 106.
- passengers include passengers seated in the passenger seat and passengers seated in the rear seat.
- within the scope of the present invention, the embodiments can be freely combined, and any component of each embodiment can be modified or omitted.
- since the in-vehicle information processing apparatus according to the present invention does not need to identify the operator every time the operator performs an operation, it is suitable, when applied to an apparatus that determines and controls whether or not to accept an input operation to an in-vehicle apparatus, for suppressing the processing load of authenticating the operator.
Abstract
This vehicle-mounted information processing device is provided with: a detection information acquisition unit (101) which acquires detection information indicating detection of an input operation and an authentication operation performed by an operator; a vehicle information acquisition unit (102) which acquires vehicle information indicating the traveling state of a vehicle; a control unit (103) which, depending on the acquired detection information and vehicle information, controls either output of information for requesting input of an authentication operation, or output of information about detection of an input operation; and an authentication processing unit (104) which verifies an input authentication operation against the output information for requesting input of an authentication operation.
Description
This invention relates to a technique for controlling an operation input to an in-vehicle device.
A navigation device, an audio device, and the like mounted on a vehicle accept input operations, via an input operation device such as a touch panel or hardware switches arranged between the driver's seat and the passenger seat of the vehicle, from the driver seated in the driver's seat or from passengers seated in the passenger seat and the rear seat. Conventionally, a technique has been used that restricts predetermined input operations while the vehicle is in a traveling state, so that input operations by an operator do not interfere with the traveling of the vehicle.
For example, the input operation device described in Patent Literature 1 detects the shape with which an operator touches a display with a touch panel, and when it determines from the detected shape that the operator who operated the display with the touch panel may be seated in the driver's seat and the vehicle is traveling, it determines that the operator's operation would interfere with the traveling of the vehicle and performs processing so as not to accept the operation.
According to the input operation device described in Patent Literature 1, each time the operator touches the display with a touch panel, the touched shape must be detected and it must be determined from the detected shape whether or not the operator who operated the display with the touch panel is seated in the passenger seat; thus, there is a problem that the processing load for identifying the operator is heavy.
The present invention has been made to solve the above-described problems, and an object thereof is to reduce a processing load for specifying an operator who has performed an input operation.
An in-vehicle information processing apparatus according to the present invention includes: a detection information acquisition unit that acquires detection information indicating that an input operation and an authentication operation of an operator have been detected; a vehicle information acquisition unit that acquires vehicle information indicating the traveling state of the vehicle; a control unit that controls, in accordance with the detection information acquired by the detection information acquisition unit and the vehicle information acquired by the vehicle information acquisition unit, the output of information requesting input of an authentication operation or the output of the detection information of the input operation; and an authentication processing unit that collates the input authentication operation against the output information requesting input of the authentication operation.
According to this invention, it is possible to reduce the frequency of performing the process for specifying the operator who performed the input operation, and to reduce the processing load for specifying the operator.
Hereinafter, in order to explain the present invention in more detail, modes for carrying out the present invention will be described with reference to the accompanying drawings.
Embodiment 1.
FIG. 1 is a block diagram showing the configuration of the in-vehicle information processing apparatus 100 according to the first embodiment.
The in-vehicle information processing apparatus 100 includes a detection information acquisition unit 101, a vehicle information acquisition unit 102, a control unit 103, an authentication processing unit 104, and an authentication database 105. As shown in FIG. 1, the in-vehicle information processing apparatus 100 is connected to a touch panel 200, an in-vehicle apparatus 300, a display apparatus 400, and a speaker 500.
The detection information acquisition unit 101 acquires detection information from the touch panel 200. When the touch panel 200 detects the proximity or contact of the operator's body, finger, or the like (hereinafter referred to as an object), the touch panel 200 outputs detection information. The touch panel 200 is configured using a capacitance method capable of detecting the proximity or contact of an object, a resistive film method capable of detecting the contact of an object, or the like. The touch panel 200 detects an input operation in which the operator brings an object close to or into contact with the touch panel 200 to perform an operation input, and an authentication operation in which the operator brings an object close to or into contact with the touch panel 200 to perform an authentication input. Coordinate values are assigned in advance to the region of the touch panel 200 in which the operator's input operation or authentication operation is detected, and information indicating the position or range at which the input operation or authentication operation was detected is output as detection information.
In the example of FIG. 1, the detection information acquisition unit 101 acquires the detection information from the touch panel 200; however, it may be configured to acquire the detection information from a touch pad or the like.
The vehicle information acquisition unit 102 acquires information indicating the traveling state of the vehicle, such as vehicle speed or parking brake state information, via an in-vehicle network (not shown).
When detection information of an input operation is input from the detection information acquisition unit 101, the control unit 103 performs processing according to the information indicating the traveling state of the vehicle acquired by the vehicle information acquisition unit 102. When the control unit 103 determines from the information indicating the traveling state of the vehicle that the vehicle is stopped or parked, it outputs the input operation detection information input from the detection information acquisition unit 101 to the in-vehicle device 300 as control information. When the vehicle is traveling and the operator has not been authenticated, the control unit 103 outputs control information for requesting the operator to input an authentication operation to the display device 400 or the speaker 500. The authentication operation requested by the control unit 103 is a preset operation such as "opening the right hand and holding it up".
The control unit 103 analyzes the detection information of the input action acquired in response to the request for input of the authentication operation, extracts the feature amount of the object at the time of the authentication operation, and outputs it to the authentication processing unit 104. When the vehicle is traveling and the operator has already been authenticated, the control unit 103 outputs the detection information of the input operation input from the detection information acquisition unit 101 to the in-vehicle device 300 as control information.

Here, the feature amount of the object at the time of the authentication operation extracted by the control unit 103 is, for example, when the operator holds an open hand over the panel as the authentication operation, the contour shape of the operator's hand and fingers, the positions of the tips of each of the operator's fingers, or a combination of the fingertip positions and the contour shape of the operator's hand and fingers. Note that the feature amounts described above are merely examples; any information that can identify the authentication operation of the operator can be used as a feature amount.

More detailed control performed by the control unit 103 will be described later.
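As an illustration of the feature amounts described above, the following sketch (not the patent's actual algorithm; the contour format and the fingertip heuristic are assumptions made for this example) extracts fingertip vertices from an ordered hand contour as local maxima of the distance from the contour's centroid:

```python
# Illustrative sketch only: extracting simple feature amounts from the
# contour of a hand detected near the touch panel. The contour is assumed to
# be an ordered list of (x, y) points; fingertip candidates are the points
# that are locally farthest from the centroid.

def centroid(contour):
    n = len(contour)
    return (sum(x for x, _ in contour) / n, sum(y for _, y in contour) / n)

def extract_features(contour):
    cx, cy = centroid(contour)
    dist = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in contour]
    n = len(contour)
    # A point is a fingertip candidate if it is farther from the centroid
    # than both of its neighbors along the (circular) contour.
    tips = [contour[i] for i in range(n)
            if dist[i] > dist[(i - 1) % n] and dist[i] > dist[(i + 1) % n]]
    return {"contour": contour, "fingertips": tips}

# A toy "hand": a diamond-like outline whose top vertex protrudes like a
# fingertip.
toy_contour = [(0, 3), (1, 1), (2, 0), (1, -1),
               (0, -2), (-1, -1), (-2, 0), (-1, 1)]
features = extract_features(toy_contour)
```

Either the contour itself, the fingertip positions, or their combination could then serve as the feature amount, mirroring the alternatives listed above.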
The authentication processing unit 104 collates the feature amount of the object at the time of the authentication operation extracted by the control unit 103 against the feature amounts stored in the authentication database 105, and identifies whether the operator who performed the authentication operation is the driver or a passenger other than the driver. The authentication processing unit 104 outputs the result of identifying the operator who performed the authentication operation to the control unit 103.

Specifically, when the extracted feature amount of the object at the time of the authentication operation matches any of the feature amounts stored in the authentication database 105, the authentication processing unit 104 identifies the operator as a passenger other than the driver. Conversely, when the extracted feature amount of the object at the time of the authentication operation matches none of the feature amounts stored in the authentication database 105, the authentication processing unit 104 identifies the operator as the driver.

When the control unit 103 has extracted the contour shape of the operator's hand and fingers as the feature amount of the object at the time of the authentication operation, the authentication processing unit 104 matches the extracted contour shape of the hand and fingers against the hand and finger contour shape data stored in the authentication database 105, and identifies whether the operator is a passenger other than the driver.

Note that in the collation of feature amounts, the feature amount of the object at the time of the authentication operation need not completely coincide with one of the feature amounts stored in the authentication database 105; the feature amounts may be regarded as matching if the degree of coincidence is equal to or greater than a preset threshold.
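The threshold-based collation can be sketched as follows (the function names and the similarity measure are assumptions for this example, not taken from the patent):

```python
# Illustrative sketch of threshold-based feature collation: a match requires
# only a degree of coincidence at or above a preset threshold, not perfect
# equality.

def coincidence(a, b):
    """Degree of coincidence of two equal-length feature vectors, in [0, 1]."""
    diffs = [abs(x - y) for x, y in zip(a, b)]
    return 1.0 - sum(diffs) / len(diffs)

def identify_operator(extracted, stored_features, threshold=0.9):
    """Identify the operator as a passenger if any stored feature amount
    matches; otherwise identify the operator as the driver."""
    for stored in stored_features:
        if coincidence(extracted, stored) >= threshold:
            return "passenger"
    return "driver"

# Hypothetical stored feature amounts (e.g. normalized contour descriptors).
stored = [[0.2, 0.8, 0.5], [0.9, 0.1, 0.4]]
result_close = identify_operator([0.21, 0.79, 0.52], stored)  # nearly matches
result_far = identify_operator([0.6, 0.6, 0.6], stored)       # matches nothing
```

The key design point mirrored here is the asymmetry described above: any sufficiently close match means "passenger", and only a complete failure to match means "driver".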
The authentication database 105 stores feature amounts of objects that can be identified as authentication operations performed by passengers other than the driver. For example, in the case of a right-hand-drive vehicle, the authentication database 105 stores the shape feature amount expected to be extracted from a right hand in proximity to or approaching the touch panel 200 when the passenger in the front passenger seat holds an open right hand over the touch panel 200.
The in-vehicle device 300 is, for example, a navigation device or an audio device mounted on the vehicle. The in-vehicle device 300 controls itself based on the information, indicated in the detection information input from the control unit 103, on the position or range where the input operation was detected.
The display device 400 is composed of, for example, a liquid crystal display or an organic EL (electroluminescence) display, and displays information for notifying the driver and passengers based on control information input from the in-vehicle information processing apparatus 100 and the in-vehicle device 300. Specifically, based on control information input from the control unit 103, it displays a screen prompting input of an authentication operation, a screen notifying that the authentication process has been completed, and the like.

Further, when the in-vehicle device 300 is a navigation device, the display device 400 displays information such as a map, a departure point, a destination, and a guidance route.

Note that the display device 400 and the touch panel 200 may be integrated so that an input to the touch panel 200 is received as an operation for selecting information displayed on the display device 400.
The speaker 500 outputs, as audio, information for notifying the driver and passengers based on control information input from the in-vehicle information processing apparatus 100 and the in-vehicle device 300. Specifically, based on control information input from the control unit 103, it outputs audio prompting input of an authentication operation, audio notifying that the authentication process has been completed, and the like.
Next, a hardware configuration example of the in-vehicle information processing apparatus 100 will be described.

FIG. 2 is a diagram illustrating a hardware configuration example of the in-vehicle information processing apparatus 100 according to Embodiment 1.

The detection information acquisition unit 101, the vehicle information acquisition unit 102, the control unit 103, and the authentication processing unit 104 in the in-vehicle information processing apparatus 100 are implemented by a processing circuit. That is, the in-vehicle information processing apparatus 100 includes a processing circuit that extracts feature points of an object from the detection information of an authentication operation input in response to a request for the authentication operation, and identifies whether the operator is a passenger other than the driver.
When the processing circuit is dedicated hardware, the processing circuit corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof. The functions of the detection information acquisition unit 101, the vehicle information acquisition unit 102, the control unit 103, and the authentication processing unit 104 may each be implemented by a separate processing circuit, or the functions of the units may be implemented collectively by a single processing circuit.
When the processing circuit is a CPU (Central Processing Unit), the processing circuit is the CPU 110 that executes programs stored in the memory 120 shown in FIG. 2. The functions of the detection information acquisition unit 101, the vehicle information acquisition unit 102, the control unit 103, and the authentication processing unit 104 are implemented by software, firmware, or a combination of software and firmware. The software or firmware is described as programs and stored in the memory 120. The CPU 110 implements the functions of the detection information acquisition unit 101, the vehicle information acquisition unit 102, the control unit 103, and the authentication processing unit 104 by reading and executing the programs stored in the memory 120. That is, the in-vehicle information processing apparatus 100 includes the memory 120 for storing programs that, when executed by the CPU 110, result in the execution of the steps shown in FIGS. 4 and 5 described later. These programs can also be said to cause a computer to execute the procedures or methods of the detection information acquisition unit 101, the vehicle information acquisition unit 102, the control unit 103, and the authentication processing unit 104.
Here, the CPU 110 may be, for example, a central processing unit, a processing device, an arithmetic device, a processor, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).

The memory 120 may be a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), or an EEPROM (Electrically EPROM); a magnetic disk such as a hard disk or a flexible disk; or an optical disc such as a MiniDisc, a CD (Compact Disc), or a DVD (Digital Versatile Disc).
Next, the control performed by the control unit 103 will be described in more detail.

In the following, the description is divided into five cases according to the traveling state of the vehicle, the authentication state of the operator, and the result of identifying the operator.

(1-1) When the vehicle is stopped or parked
The control unit 103 refers to the traveling state of the vehicle acquired from the vehicle information acquisition unit 102, and determines that the vehicle is stopped or parked when the vehicle speed is "0" or the parking brake is ON.

The control unit 103 outputs, to the in-vehicle device 300, the information described in the detection information acquired by the detection information acquisition unit 101 that indicates the position or range where the input operation was detected.

The in-vehicle device 300 identifies the operator's operation based on the input information indicating the position or range, and executes the process corresponding to the identified operation.
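The stop/park determination in (1-1) can be sketched as follows (the argument format is an assumption; the text only states the conditions "vehicle speed is 0" or "parking brake is ON"):

```python
# Minimal sketch of the (1-1) determination. The vehicle is treated as
# stopped or parked when either condition from the text holds.

def is_stopped_or_parked(speed_kmh, parking_brake_on):
    return speed_kmh == 0 or parking_brake_on

stopped = is_stopped_or_parked(0, False)   # speed is 0
parked = is_stopped_or_parked(0, True)     # parking brake ON
moving = is_stopped_or_parked(30, False)   # neither condition holds
```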
(1-2) When the vehicle is traveling and the operator has not been authenticated
The control unit 103 refers to the traveling state of the vehicle acquired from the vehicle information acquisition unit 102, and determines that the vehicle is traveling when a state at or above a preset vehicle speed has continued for a predetermined time. Here, the preset vehicle speed is, for example, 5 km/h, and the predetermined time is, for example, 3 seconds. Further, when the operator has not been authenticated, the control unit 103 outputs control information requesting the operator to input an authentication operation to at least one of the display device 400 and the speaker 500.

Here, the authentication operation requested of the operator is, for example, to "hold an open right hand" at a position a predetermined distance d away from the touch panel 200, or at a predetermined position on the touch panel 200. The distance d is a distance at which the touch panel 200 can detect the proximity of an object, for example a value satisfying a condition such as 0 cm < d < 5 cm.
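The sustained-speed test in (1-2) can be sketched as follows (the (time, speed) sample format is an assumption made for this example):

```python
# Sketch of the (1-2) traveling determination: the vehicle is judged to be
# traveling only when a state at or above a preset speed (e.g. 5 km/h) has
# continued for a predetermined time (e.g. 3 seconds).

def is_traveling(samples, min_speed_kmh=5.0, min_duration_s=3.0):
    """samples: chronological list of (time_s, speed_kmh) pairs."""
    run_start = None
    for t, speed in samples:
        if speed >= min_speed_kmh:
            if run_start is None:
                run_start = t          # start of a fast-enough stretch
            if t - run_start >= min_duration_s:
                return True
        else:
            run_start = None           # speed dropped: reset the stretch
    return False

sustained = is_traveling([(0, 6), (1, 7), (2, 6), (3, 8)])    # 3 s above 5 km/h
interrupted = is_traveling([(0, 6), (1, 2), (2, 6), (3, 7)])  # dip resets timer
```

Requiring the speed to be sustained, rather than sampling it once, avoids flipping into the traveling state during a momentary speed spike.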
(1-3) When the vehicle is traveling and authentication of the operator is in progress
The state in which authentication of the operator is in progress is the state entered after the control information requesting input of the authentication operation described in (1-2) above has been output.

The control unit 103 analyzes the detection information of the authentication operation acquired by the detection information acquisition unit 101, and extracts the feature amount of the object at the time of the authentication operation. The control unit 103 outputs the extracted feature amount to the authentication processing unit 104.
(1-4) When the vehicle is traveling, the authentication process has finished, and the operator is identified as the driver
The control unit 103 refers to the operator identification result input from the authentication processing unit 104, and when it determines that the operator is the driver, it does not output the detection information of subsequent input operations input from the detection information acquisition unit 101 to the in-vehicle device 300. That is, the control unit 103 does not accept operations by the operator via the touch panel 200.
(1-5) When the vehicle is traveling, the authentication process has finished, and the operator is identified as a passenger other than the driver
The control unit 103 refers to the operator identification result input from the authentication processing unit 104, and when it determines that the operator is a passenger other than the driver, it outputs the detection information of subsequent input operations input from the detection information acquisition unit 101 to the in-vehicle device 300. That is, the control unit 103 accepts operations by the operator via the touch panel 200.
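The five cases (1-1) through (1-5) reduce to a small gating rule on whether an input operation is forwarded; the following sketch summarizes it (the state names are illustrative, not terms from the patent):

```python
# Sketch summarizing cases (1-1) to (1-5): whether the control unit 103
# forwards the detection information of an input operation to the in-vehicle
# device 300.

def forwards_input(traveling, auth_state):
    """auth_state: 'unauthenticated', 'in_progress', 'driver' or 'passenger'."""
    if not traveling:                 # (1-1) stopped or parked: always accept
        return True
    if auth_state == "passenger":     # (1-5) authenticated as a passenger
        return True
    # (1-2) unauthenticated, (1-3) authentication in progress,
    # (1-4) identified as the driver: the operation is not accepted.
    return False

accepted_parked = forwards_input(False, "unauthenticated")
accepted_passenger = forwards_input(True, "passenger")
rejected_driver = forwards_input(True, "driver")
```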
FIG. 3 is a diagram illustrating an example of the authentication patterns stored in the authentication database 105 of the in-vehicle information processing apparatus 100 according to Embodiment 1.

The authentication database 105 stores, as authentication patterns, a plurality of operator states that can be identified as authentication operations performed by passengers other than the driver. Although not illustrated, a feature amount of the object is stored in association with each authentication pattern.

The example of FIG. 3 shows stored authentication patterns such as: an authentication operation in which the front passenger of a right-hand-drive vehicle opens the right hand; an authentication operation in which the front passenger of a left-hand-drive vehicle opens the left hand; an authentication operation in which a passenger in the left rear seat of a right-hand-drive vehicle opens the right hand; and an authentication operation in which a passenger in the right rear seat of a left-hand-drive vehicle opens the left hand. Note that the authentication patterns are not limited to the examples shown in FIG. 3; any operator state that can be identified as that of a passenger other than the driver can be stored.
Next, the operation of the in-vehicle information processing apparatus 100 will be described.

FIGS. 4 and 5 are flowcharts showing the operation of the in-vehicle information processing apparatus 100 according to Embodiment 1.

First, the overall operation of the in-vehicle information processing apparatus 100 will be described with reference to the flowchart of FIG. 4.

When the in-vehicle information processing apparatus 100 is started, initialization of the in-vehicle information processing apparatus 100 is executed (step ST1). Here, the initialization of the in-vehicle information processing apparatus 100 includes, for example, a process of resetting the operator's authentication state to unauthenticated when an authenticated operator has been set in the control unit 103. Next, the detection information acquisition unit 101 determines whether it has acquired detection information of an input operation indicating that an object has approached or touched the touch panel 200 (step ST2). When the detection information of the input operation has not been acquired (step ST2; NO), the determination process of step ST2 is repeated.
On the other hand, when the detection information of the input operation has been acquired (step ST2; YES), the detection information acquisition unit 101 outputs the acquired detection information to the control unit 103. When the detection information of the input operation is input, the control unit 103 refers to the information indicating the traveling state of the vehicle, which is input from the vehicle information acquisition unit 102 constantly or at predetermined intervals, and determines whether the vehicle is traveling (step ST3). When the vehicle is not traveling (step ST3; NO), the control unit 103 proceeds to the process of step ST5 described later. On the other hand, when the vehicle is traveling (step ST3; YES), the control unit 103 determines whether the operator has been authenticated (step ST4).
When the operator has been authenticated (step ST4; YES), the control unit 103 outputs the detection information of the input operation to the in-vehicle device 300 as control information (step ST5). The detection information acquisition unit 101 determines whether it has again acquired detection information of an input operation within a predetermined time (step ST6). When detection information of an input operation has been acquired within the predetermined time (step ST6; YES), the process returns to step ST3. On the other hand, when no detection information of an input operation has been acquired within the predetermined time (step ST6; NO), the process returns to step ST1.

When the operator has not been authenticated (step ST4; NO), the control unit 103 performs the authentication process (step ST7) and returns to the process of step ST2.
In the determination in step ST6 of whether detection information of an input operation has again been acquired within a predetermined time, the determination is made, for example, based on whether a predetermined time (for example, 1 second) has elapsed since the detection information acquisition unit 101 stopped acquiring detection information indicating the proximity of an object from the touch panel 200. Alternatively, for example, the determination is made based on whether a predetermined time (for example, 3 seconds) has elapsed since the detection information acquisition unit 101 stopped acquiring detection information indicating the contact of an object from the touch panel 200. Note that the specific values of the predetermined times given above are examples and can be set as appropriate.
Furthermore, in step ST6, instead of determining whether detection information of an input operation has again been acquired within a predetermined time, it may be determined whether a task consisting of a series of operations has been completed. In that case, the process is configured to return to step ST1 when it is determined that the task of the series of operations has been completed, and to return to step ST2 when it is determined that the task has not been completed.

Whether the task of a series of operations has been completed is determined, for example in a task of searching for facilities by genre in a navigation device, based on whether a "genre search" button, a "genre selection" button, a "search" button, or the like has been pressed after the series of operations has been performed.
Next, the details of the operation of the authentication process will be described with reference to the flowchart of FIG. 5.

The control unit 103 outputs control information requesting the operator to input an authentication operation to at least one of the display device 400 and the speaker 500 (step ST11). The detection information acquisition unit 101 determines whether an authentication operation has been input in response to the input request displayed or output as audio based on the control information output in step ST11 (step ST12). When an authentication operation has been input (step ST12; YES), the detection information acquisition unit 101 outputs the detection information to the control unit 103, and the control unit 103 analyzes the input detection information, detects the shape of the object, and extracts the feature amount of the detected shape (step ST13). The authentication processing unit 104 collates the feature amount extracted in step ST13 against the feature amounts associated with the authentication patterns stored in the authentication database 105, and identifies the operator (step ST14). The control unit 103 refers to the identification result of the operator and determines whether the operator is a passenger other than the driver (step ST15).
制御部103は操作者に対して認証動作の入力を要求する制御情報を表示装置400またはスピーカ500の少なくとも一方に出力する(ステップST11)。検知情報取得部101は、ステップST11で出力された制御情報に基づいて表示または音声出力された認証動作の入力要求に対して、認証動作が入力されたか否か判定を行う(ステップST12)。認証動作が入力された場合(ステップST12;YES)、検知情報取得部101は検知情報を制御部103に出力し、制御部103は入力された検知情報の解析を行い、物体の形状を検出し、検出した形状の特徴量を抽出する(ステップST13)。認証処理部104は、ステップST13で抽出された特徴量と、認証データベース105に格納された各認証パターンに対応付けられた特徴量との照合を行い、操作者の特定を行う(ステップST14)。制御部103は、操作者の特定結果を参照し、操作者が運転者以外の同乗者であるか否か判定を行う(ステップST15)。 Next, the details of the operation of the authentication process will be described with reference to the flowchart of FIG.
The
If the operator is not a passenger other than the driver (step ST15; NO), the process proceeds to step ST17. On the other hand, if the operator is a passenger other than the driver (step ST15; YES), the control unit 103 authenticates the operator (step ST16) and ends the authentication process (step ST17). Thereafter, the flow returns to step ST2 of the flowchart of FIG. 4. In step ST16, the control unit 103 may also output control information to the display device 400 or the speaker 500 so as to perform at least one of a display and a sound output notifying that the operator has been authenticated.
On the other hand, when no authentication operation is input (step ST12; NO), the control unit 103 determines whether a predetermined time has elapsed since the authentication operation input request was output (step ST18). If the predetermined time has elapsed (step ST18; YES), the process proceeds to step ST17 and the authentication process ends.
If the predetermined time has not elapsed (step ST18; NO), the control unit 103 refers to the information indicating the traveling state of the vehicle input from the vehicle information acquisition unit 102 and determines whether the vehicle is traveling (step ST19). If the vehicle is traveling (step ST19; YES), the process returns to step ST12, and the detection information acquisition unit 101 waits for an authentication operation to be input. If the vehicle is not traveling (step ST19; NO), the process proceeds to step ST17 and the authentication process ends.
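The flow of FIG. 5 described above can be sketched in Python as follows. All class and method names are hypothetical; the patent defines only the functional units (control unit 103, detection information acquisition unit 101, authentication processing unit 104), so a `controller` object stands in for them here, and the timeout and polling interval are assumed values.

```python
import time

def authentication_process(controller, timeout_s=10.0, poll_s=0.05):
    """Illustrative sketch of steps ST11-ST19 of FIG. 5 (names are assumptions)."""
    controller.request_input()                              # ST11: request the gesture
    start = time.monotonic()
    while True:
        detection = controller.poll_authentication_input()  # ST12
        if detection is not None:
            features = controller.extract_features(detection)   # ST13
            operator = controller.identify_operator(features)   # ST14
            if operator == "passenger":                         # ST15; YES
                controller.notify_authenticated()               # ST16
                return True                                     # -> ST17
            return False                                        # ST15; NO -> ST17
        if time.monotonic() - start >= timeout_s:               # ST18; YES
            return False                                        # -> ST17
        if not controller.is_vehicle_moving():                  # ST19; NO
            return False                                        # -> ST17
        time.sleep(poll_s)                                      # ST19; YES -> back to ST12

class FakeController:
    """Minimal stand-in for the units of FIG. 1, used only to exercise the flow."""
    def __init__(self, pending_inputs):
        self.pending_inputs = list(pending_inputs)
        self.authenticated_notice = False
    def request_input(self):
        pass                                   # would drive display 400 / speaker 500
    def poll_authentication_input(self):
        return self.pending_inputs.pop(0) if self.pending_inputs else None
    def extract_features(self, detection):
        return detection                       # analysis collapsed to identity
    def identify_operator(self, features):
        return features                        # database lookup collapsed to identity
    def is_vehicle_moving(self):
        return True
    def notify_authenticated(self):
        self.authenticated_notice = True
```

With this stand-in, an input identified as a passenger authenticates and triggers the ST16 notification; a driver's input, or no input before the timeout, ends the process unauthenticated.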
Next, the information displayed on the display device 400 based on the control information output by the control unit 103 of the in-vehicle information processing apparatus 100 will be described.

FIGS. 6A, 6B, and 6D are diagrams illustrating examples of display control by the control unit 103 of the in-vehicle information processing apparatus 100 according to Embodiment 1. FIG. 6C is a diagram illustrating an example of information stored in the authentication database 105 of the in-vehicle information processing apparatus 100 according to Embodiment 1.

FIGS. 6A and 6B show display examples based on the control information output from the control unit 103 to the display device 400 in step ST5 of the flowchart of FIG. 4 described above.

A comment 402 prompting the operator to perform an authentication operation is displayed on the screen 401 of the display device 400. In addition, an image 403 showing the shape of a hand guiding the authentication operation is displayed on the screen 401. The operator performs the authentication operation in accordance with the comment 402 or the image 403.
As shown in FIG. 6B, the hand shape guiding the authentication operation may also be displayed on the screen 401 of the display device 400 as an image 403a whose display position is chosen deliberately. When the display area of the screen 401 is vertically divided into two screens 401a and 401b, the control unit 103 outputs control information for displaying the image 403a on the screen 401a, which is the screen farther from the position of the driver's seat of the vehicle. The control unit 103 also performs control to display, as the comment 402a prompting the operator to perform the authentication operation, a comment designating the position at which the authentication operation is to be performed, such as "Hold your right hand over the position of the hand image."

Note that dividing the screen 401 vertically into the two screens 401a and 401b described above is only an example. When the screen 401 is divided vertically into three, the control unit 103 may output control information that sets the position for the authentication operation using three areas, and when the screen 401 is divided into two vertically and two horizontally, the control unit 103 may output control information that sets the position for the authentication operation using four areas.
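The region-selection behaviour described above (displaying the guide image on the screen region farthest from the driver's seat) can be sketched as follows. The rectangle encoding and the driver-side coordinate are assumptions for illustration; the patent specifies only the behaviour.

```python
def pick_auth_region(regions, driver_x):
    """Return the region whose horizontal centre is farthest from the driver's seat.

    `regions` is a list of (x, y, width, height) rectangles in screen
    coordinates; `driver_x` is the screen x-coordinate nearest the driver.
    (Hypothetical helper; names and formats are not from the patent.)
    """
    def centre_x(region):
        x, _, width, _ = region
        return x + width / 2.0
    return max(regions, key=lambda r: abs(centre_x(r) - driver_x))

# Screen 401 split vertically into 401a (left half) and 401b (right half);
# assuming a right-hand-drive vehicle, the driver sits at the right edge.
halves = [(0, 0, 400, 480), (400, 0, 400, 480)]
assert pick_auth_region(halves, driver_x=800) == (0, 0, 400, 480)  # picks 401a
```

The same helper covers the three-way and four-way divisions mentioned above: it simply receives three or four rectangles instead of two.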
When the operator inputs an authentication operation in response to the image 403a shown in FIG. 6B, the authentication processing unit 104 collates the feature amounts against those stored in the authentication database 105 while also considering whether the feature amounts of the object extracted by the control unit 103 during the authentication operation lie within the designated area (the screen 401a in FIG. 6B). For example, as shown in FIG. 6C, when the image 403a is displayed, areas 405a, 405b, 405c, 405d, and 405e centered on the fingertip vertices 404a, 404b, 404c, 404d, and 404e of the image 403a are set and stored in the authentication database 105 in advance. When the positions of the operator's fingertip vertices extracted as feature amounts by the control unit 103 lie within the areas 405a, 405b, 405c, 405d, and 405e stored in the authentication database 105, the authentication processing unit 104 determines that the extracted feature amounts of the object at the time of the authentication operation match the feature amounts stored in the authentication database 105.
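The FIG. 6C matching rule — each detected fingertip vertex must fall within the stored area around the corresponding guide-image vertex — can be sketched as follows. The areas 405a-405e are assumed here to be circles of a common radius, and all coordinate values are illustrative; the patent does not specify the area shape or size.

```python
import math

def fingertips_match(detected_tips, stored_centres, radius):
    """Return True when every detected fingertip vertex lies inside the
    circular area (centre, radius) stored for that finger in the
    authentication database. Shapes and values are assumptions."""
    if len(detected_tips) != len(stored_centres):
        return False
    return all(
        math.hypot(dx - cx, dy - cy) <= radius
        for (dx, dy), (cx, cy) in zip(detected_tips, stored_centres)
    )

# Centres 404a-404e of the guide image's five fingertips, with areas
# 405a-405e of radius 20 px around them (values assumed for illustration).
centres = [(50, 40), (80, 20), (110, 10), (140, 20), (170, 60)]
assert fingertips_match(
    [(55, 45), (78, 25), (112, 8), (138, 22), (168, 58)], centres, 20)
assert not fingertips_match(
    [(55, 45), (78, 25), (112, 8), (138, 22), (250, 58)], centres, 20)
```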
FIG. 6D shows a display example based on the control information output from the control unit 103 to the display device 400 in step ST16 of the flowchart of FIG. 5 described above.

On the screen 401 of the display device 400, a comment 406 is displayed indicating that the operator has been authenticated and that input operations for operating the in-vehicle device 300 will be accepted.
Although FIGS. 6A, 6B, and 6D show examples in which the comment 402, the comment 402a, or the comment 406 is displayed on the screen 401 of the display device 400, the control unit 103 may instead output to the speaker 500 control information that causes a voice prompting the authentication operation, or a voice indicating that the operator has been authenticated, to be output. Based on the control information output from the control unit 103, the speaker 500 outputs voice guidance such as "Authentication will be performed; please open and hold up your right hand," "Authentication will be performed; please open your right hand and hold it over the position of the hand image," or "You have been authenticated; operation is now possible."
The above description shows a configuration in which the acquisition of detection information from the touch panel 200 or a touch pad by the detection information acquisition unit 101 triggers the control unit 103 to move to the process of requesting the operator's authentication operation; however, the configuration may instead be such that the acquisition of detection information from a hardware switch triggers that process. Specifically, in the block diagram shown in FIG. 1, the detection information acquisition unit 101 acquires, as detection information, pressing information indicating that a hardware switch (not shown) has been pressed. Based on this detection information, the control unit 103 generates control information requesting that an authentication operation be input on the touch panel 200, and outputs it to at least one of the display device 400 and the speaker 500. The operator inputs the authentication operation on the touch panel 200. The detection information acquisition unit 101 acquires the detection information from the touch panel 200, and the operator authentication process is performed in the control unit 103 and the authentication processing unit 104.
As described above, according to the first embodiment, the apparatus includes the control unit 103, which controls the output of information requesting the input of an authentication operation, or the output of detection information, in accordance with the detection information acquired by the detection information acquisition unit 101 and the vehicle information acquired by the vehicle information acquisition unit 102, and the authentication processing unit 104, which collates the input authentication operation against the output information requesting the input of the authentication operation. The load of the process of collating authentication operations can therefore be suppressed.
Further, according to the first embodiment, the control unit 103 controls the output of the information requesting the input of an authentication operation when the vehicle is traveling and the operator who performed the input operation has not been authenticated, and extracts the operator's feature amounts from the detection information of the authentication operation input in response to that request. It is therefore unnecessary to perform the operator authentication process every time the operator performs an input operation, and the load of the process of collating authentication operations can be suppressed.
Further, in the first embodiment described above, the configuration may be such that, in a state where the operator has been authenticated and input operations can be accepted, the control unit 103 executes an initialization that returns the authenticated operator to the unauthenticated state when the detection information acquisition unit 101 acquires detection information detected in a plurality of areas on the touch panel 200, for example areas A and B shown in FIG. 7.
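A minimal sketch of this initialization rule follows, assuming a named-rectangle layout for the touch-panel areas and a simple point-in-rectangle test (the patent names only areas A and B in FIG. 7 and leaves the geometry unspecified):

```python
def should_reset_authentication(touched_points, regions):
    """Return True when touches are detected in two or more of the predefined
    touch-panel regions, in which case the authenticated operator is returned
    to the unauthenticated state. Region layout is an assumption."""
    def region_of(point):
        px, py = point
        for name, (x, y, width, height) in regions.items():
            if x <= px < x + width and y <= py < y + height:
                return name
        return None

    touched_regions = {r for r in map(region_of, touched_points) if r is not None}
    return len(touched_regions) >= 2

regions = {"A": (0, 0, 400, 480), "B": (400, 0, 400, 480)}
assert should_reset_authentication([(100, 100), (500, 200)], regions)      # A and B
assert not should_reset_authentication([(100, 100), (150, 120)], regions)  # A only
```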
Embodiment 2.
The first embodiment described above showed a configuration in which the authentication process is performed based on an authentication operation input via the touch panel 200. This second embodiment shows a configuration in which the authentication process is performed using an image of the authentication operation captured by an infrared camera.

FIG. 8 is a block diagram showing the configuration of the in-vehicle information processing apparatus 100 according to the second embodiment.

The detection information acquisition unit 101 of the in-vehicle information processing apparatus 100 according to the second embodiment acquires detection information not only from the touch panel 200 but also from an infrared camera 201 and a hardware switch (hereinafter referred to as the H/W switch) 202. In FIG. 8, the components of the in-vehicle information processing apparatus 100 are the same as those of the in-vehicle information processing apparatus 100 according to the first embodiment; they are therefore given the same reference numerals as those used in the first embodiment, and their description is omitted or simplified.
The infrared camera 201 is installed, for example, above the touch panel 200, above the in-vehicle device 300 fitted into the dashboard, or above the display device 400. The infrared camera 201 is configured to be able to image a wide area so that the areas in which the operator performs input operations and authentication operations can be captured. Specifically, a plurality of infrared cameras 201 are arranged, or a wide-angle infrared camera 201 is used.
The H/W switch 202 is a switch provided for operating the in-vehicle device 300. When the switch is pressed, information indicating that it has been pressed is output as detection information.
Next, configurations in which detection information acquired by the detection information acquisition unit 101 from the touch panel 200, the infrared camera 201, or the H/W switch 202 triggers the control unit 103 to perform the authentication process using the captured image of the infrared camera 201 will be described.

The following description is divided into three cases: the case in which detection information acquired from the touch panel 200 is the trigger, the case in which detection information acquired from the captured image of the infrared camera 201 is the trigger, and the case in which detection information acquired from the H/W switch 202 is the trigger.

(2-1) Using detection information acquired from the touch panel 200 as the trigger
The detection information acquisition unit 101 is connected to the touch panel 200 and the infrared camera 201. The detection information acquisition unit 101 acquires, from the touch panel 200, detection information indicating that the proximity or contact of an object has been detected. When the control unit 103 then determines that the vehicle is traveling and the operator has not been authenticated, the control unit 103 outputs control information requesting that an authentication operation be input toward the infrared camera 201 to at least one of the display device 400 and the speaker 500.
(2-2) Using detection information acquired from the captured image of the infrared camera 201 as the trigger
The detection information acquisition unit 101 is connected to the infrared camera 201. The detection information acquisition unit 101 refers to the captured images of the infrared camera 201, and when it detects a captured image indicating that an object has approached or touched the touch panel 200 or the H/W switch 202, it acquires that captured image as detection information. When the control unit 103 then determines that the vehicle is traveling and the operator has not been authenticated, the control unit 103 outputs control information requesting that an authentication operation be input toward the infrared camera 201 to at least one of the display device 400 and the speaker 500.

In this case, the detection information acquisition unit 101 stores in advance the area of the infrared camera 201's captured image in which an object performing an input operation will appear. When a portion of the captured image of that area at or above a predetermined luminance is detected for a predetermined time (for example, one second) or longer, the detection information acquisition unit 101 detects that an object is approaching or touching the touch panel 200 or the H/W switch 202, and acquires the captured image as detection information.

The luminance of the captured image is expressed, for example, in 255 levels using values from "0" to "254". The detection information acquisition unit 101 associates luminance values with the proximity or contact state of an object in advance; for example, it determines that an object performing an input operation has been imaged when the luminance value in the preset area of the captured image is "150" or higher.

When the infrared camera 201 is arranged at a position where it can image the touch panel 200 or the H/W switch 202, the detection information acquisition unit 101 stores the area of the captured image corresponding to the arrangement position of the touch panel 200 or the H/W switch 202, and determines whether a luminance at or above the predetermined value has been detected in the captured image of that area for the predetermined time or longer.
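The luminance-based trigger of (2-2) can be sketched as follows. The 150 luminance threshold and the one-second duration follow the examples given above; the frame format (2-D lists of luminance values), the frame rate, and the region encoding are assumptions for illustration.

```python
def object_detected(frames, region, threshold=150, min_frames=30):
    """Return True when some pixel inside `region` (x0, y0, x1, y1) stays at
    or above `threshold` (0-254 luminance scale) for `min_frames` consecutive
    frames, e.g. 30 frames ~ 1 second at an assumed 30 fps. This models the
    proximity/contact trigger of (2-2); format details are assumptions."""
    x0, y0, x1, y1 = region
    consecutive = 0
    for frame in frames:                      # frame: 2-D list of luminance values
        bright = any(
            frame[y][x] >= threshold
            for y in range(y0, y1)
            for x in range(x0, x1)
        )
        consecutive = consecutive + 1 if bright else 0
        if consecutive >= min_frames:
            return True
    return False

dark = [[0] * 8 for _ in range(8)]
lit = [[0] * 8 for _ in range(8)]
lit[3][3] = 200                               # a hand close to the camera
assert object_detected([lit] * 30, (2, 2, 6, 6), min_frames=30)
assert not object_detected([dark] * 30 + [lit] * 10, (2, 2, 6, 6), min_frames=30)
```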
(2-3) Using detection information acquired from the H/W switch 202 as the trigger
The detection information acquisition unit 101 is connected to the infrared camera 201 and the H/W switch 202. The detection information acquisition unit 101 acquires, from the H/W switch 202, detection information indicating that the pressing of the switch has been detected. When the control unit 103 then determines that the vehicle is traveling and the operator has not been authenticated, the control unit 103 outputs control information requesting that an authentication operation be input toward the infrared camera 201 to at least one of the display device 400 and the speaker 500.
In (2-1) to (2-3) described above, the authentication operation input toward the infrared camera 201 may be, in addition to an operation from which the shape of the operator's hand can be recognized, an operation from which the arrival direction of the operator's hand and arm can be recognized, an operation from which the feature amounts of the operator's face can be recognized, and so on.
The detection information acquisition unit 101 acquires, as detection information, a captured image of the authentication operation input in response to the input request displayed or output by voice in (2-1) to (2-3) above. The control unit 103 analyzes the captured image acquired as detection information by the detection information acquisition unit 101, recognizes at least one of, for example, the shape of the operator's hand, the shape of the operator's hand together with the arrival direction of the arm, or the operator's face, and extracts feature amounts. By acquiring captured images from the infrared camera 201 as detection information, the arrival direction of the arm and the face can be recognized in addition to the shape of the hand.
When extracting feature amounts from the shape of the operator's hand, the control unit 103 analyzes the captured image and extracts the feature amounts of the region whose luminance value is at or above a predetermined value. When the operator turns a palm toward the infrared camera 201, the palm comes close to the camera; when the luminance of the captured image is expressed in 255 levels from 0 to 254, the imaged region of the palm therefore shows a high value, for example a luminance value of 200 or higher. The control unit 103 accordingly extracts the region whose luminance value is, for example, 200 or higher, and from this region extracts the contour shapes of the hand and fingers, or the positions of the fingertip vertices, as feature amounts.
When extracting feature amounts from the arrival direction of the operator's arm, the control unit 103 analyzes the captured image, identifies the regions whose luminance values are at or above predetermined values, and extracts the inclinations of the identified regions as feature amounts. When the operator turns a palm toward the infrared camera 201 and the camera images the operator's palm and arm, the palm is closest to the camera and the arm is slightly farther from it than the palm. When the luminance of the captured image is expressed in 255 levels from 0 to 254, the imaged region of the palm shows, for example, luminance values of 200 or higher, and the imaged region of the arm shows, for example, luminance values of 150 to 199. The control unit 103 accordingly identifies the region with luminance values of, for example, 200 or higher as the imaged region of the palm and performs the feature extraction described above. The control unit 103 further identifies the region with luminance values of, for example, 150 to 199 as the imaged region of the arm, and extracts, as a feature amount, the inclination of a rectangular region approximating the identified region. The inclination of the rectangular region is, for example, its inclination with respect to the vertical or horizontal axis of the captured image.
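The arm-inclination feature can be sketched as follows. Rather than explicitly fitting the approximating rectangle, this sketch computes the principal axis of the pixels in the assumed arm luminance band, which yields the same inclination; the principal-axis approach is a swapped-in technique, not the patent's own wording, and the band limits are the example values above.

```python
import math

def arm_inclination_deg(frame, lo=150, hi=199):
    """Collect pixels in the assumed arm luminance band [lo, hi], fit the
    principal axis of that pixel cloud, and return its inclination in degrees
    from the image's horizontal axis. Frame format is an assumption."""
    pts = [(x, y) for y, row in enumerate(frame)
           for x, v in enumerate(row) if lo <= v <= hi]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)  # principal-axis angle
    return math.degrees(angle)

# A diagonal "arm" of band-luminance pixels on a dark frame tilts at 45 degrees:
frame = [[0] * 8 for _ in range(8)]
for i in range(8):
    frame[i][i] = 170
assert abs(arm_inclination_deg(frame) - 45.0) < 1e-6
```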
When extracting feature amounts from a captured image of the face, the control unit 103 analyzes the captured image and extracts, as feature amounts, the shapes of the eyes, nose, and chin, the inclination of the central axis of the face, and the like.
The authentication processing unit 104 collates the feature amounts extracted by the control unit 103 with the feature amounts associated with each authentication pattern stored in the authentication database 105, and identifies the operator.

In this case, the authentication database 105 stores, as the feature amounts associated with each authentication pattern, hand shapes, arm arrival directions, and the like. In addition, the authentication database 105 defines the passengers of the vehicle as authentication patterns, and stores, in association with each passenger's authentication pattern, the feature amounts obtained when that passenger's face was photographed.
As in the first embodiment, the authentication processing unit 104 collates the extracted feature amounts of the hand shape with the feature amounts stored in the authentication database 105, and determines whether the operator is a passenger other than the driver.

Alternatively, the authentication processing unit 104 collates the feature amounts of the operator's hand shape and of the arrival direction of the arm with the feature amounts stored in the authentication database 105, checks both whether the operator is a passenger other than the driver and whether the operator's arm has arrived from somewhere other than the driver's seat, and thereby determines whether the operator is a passenger other than the driver.

Alternatively, the authentication processing unit 104 collates the feature amounts resulting from recognizing the operator's face with the feature amounts stored in the authentication database 105, and determines whether the operator is a passenger other than the driver. By performing collation that uses the arrival direction of the hand and arm or the result of recognizing the face in addition to the feature amounts of the object, the authentication processing unit 104 can improve the accuracy with which the operator is identified.
または、認証処理部104は、操作者の手の形状および腕の到来方向の特徴量と、認証データベース105に格納された特徴量との照合を行い、操作者が運転者以外の同乗者であるか、さらに操作者の腕が運転席から到来していないかを特定し、操作者が運転者以外の同乗者であるか特定を行う。
または、認証処理部104は、操作者の顔を認証した結果の特徴量と、認証データベース105に格納された特徴量との照合を行い、操作者が運転者以外の同乗者であるか特定を行う。認証処理部104は、同乗者であるかの特定において、物体の特徴量に加えて、手および腕の到来方向または顔を認証した結果を用いた照合を行うことにより、操作者を特定する際の精度を向上させることができる。 As in the first embodiment, the
Alternatively, the
Alternatively, the
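As a non-limiting illustration, the collation against the authentication database described above can be sketched as a nearest-pattern match over feature vectors. The patent does not specify the feature representation, the pattern names, or a matching criterion; the vectors, names, and distance threshold below are assumptions made only for this sketch.

```python
import math

# Hypothetical stored authentication patterns and hand-shape feature
# vectors; the real database contents are not specified by the patent.
AUTH_DATABASE = {
    "passenger_open_hand": [0.82, 0.31, 0.55],
    "passenger_peace_sign": [0.40, 0.77, 0.12],
}
MATCH_THRESHOLD = 0.2  # assumed tolerance for a successful match


def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def identify_operator(extracted_features):
    """Return the name of the matching authentication pattern, or None
    when no stored pattern is close enough (operator not identified
    as a passenger)."""
    best_name, best_dist = None, float("inf")
    for name, stored in AUTH_DATABASE.items():
        d = euclidean(extracted_features, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= MATCH_THRESHOLD else None
```

A feature vector extracted by the control unit 103 would be passed to `identify_operator`; a `None` result corresponds to the operator not being identified as a passenger.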
Further, when the inclination of the central axis extracted by the control unit 103 as a feature amount is larger than a predetermined value, the authentication processing unit 104 determines that the driver may be impersonating a passenger by performing the authentication operation in an unnatural posture, and identifies the operator as not being a passenger. By taking the inclination of the central axis of the operator's face into consideration when determining whether the operator is a passenger, the authentication processing unit 104 can improve the accuracy of identifying the operator.
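The tilt check above can be sketched as a simple guard in front of the feature-based decision. The tilt threshold is only "a predetermined value" in the text, so the 20-degree figure and the function interface below are illustrative assumptions.

```python
MAX_FACE_AXIS_TILT_DEG = 20.0  # assumed value; the patent only says "a predetermined value"


def is_passenger(features_match, face_axis_tilt_deg):
    """Combine the feature-collation result with the face-axis tilt check.

    A large tilt is treated as a driver leaning over in an unnatural
    posture to impersonate a passenger, so the operator is identified
    as not being a passenger regardless of the feature match.
    """
    if face_axis_tilt_deg > MAX_FACE_AXIS_TILT_DEG:
        return False
    return features_match
```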
Next, the operation of the in-vehicle information processing apparatus 100 according to Embodiment 2 will be described.
Since the operation of the in-vehicle information processing apparatus 100 according to Embodiment 2 is the same as that of the in-vehicle information processing apparatus 100 of Embodiment 1 shown in FIG. 4 and FIG. 5, its description is omitted.
A case will be described in which the detection information of the infrared camera 201 is applied to the determination, shown in step ST6 of the flowchart of FIG. 4, of whether the detection information of an input operation has been acquired again within the predetermined time.
In the determination in step ST6 of whether the detection information of an input operation has been acquired again within the predetermined time, the detection information acquisition unit 101, for example, analyzes the image captured by the infrared camera 201 and makes the determination based on whether every portion whose luminance is larger than a predetermined value has disappeared from a preset region of the captured image and a predetermined time has elapsed. Here, the preset region of the captured image is a region in which an object is expected to appear when the operator performs an input operation or an authentication operation, or a region in which the touch panel 200 or the H/W switch 202 is arranged. When no portion of the captured image within this region has a luminance larger than the predetermined value and a predetermined time (for example, 5 seconds) has elapsed, the detection information acquisition unit 101 can determine that the operator's hand or the like has moved away from the touch panel 200 or the H/W switch 202 and that the operator is not performing any operation.
Here, the predetermined luminance value used by the detection information acquisition unit 101 for this determination is set based on the correlation between the luminance of the captured image and the distance between the infrared camera 201 and the imaged object. For example, suppose that the object is separated from the infrared camera 201 by a predetermined distance while the operator is performing an operation. In this case, the luminance corresponding to this predetermined separation distance (for example, a luminance value of 150) is set in the detection information acquisition unit 101 as the predetermined luminance value.
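The luminance-plus-timeout determination above can be sketched as follows. The threshold of 150 and the 5-second timeout come from the examples in the text; the class interface and the injectable clock are assumptions for illustration.

```python
import time

LUMINANCE_THRESHOLD = 150  # example value given in the text
IDLE_TIMEOUT_SEC = 5.0     # example "predetermined time" given in the text


class IdleDetector:
    """Declare the operator idle once no pixel in the watched image
    region exceeds the luminance threshold for IDLE_TIMEOUT_SEC."""

    def __init__(self, now=time.monotonic):
        self._now = now              # injectable clock for testing
        self._last_active = now()

    def update(self, region_pixels):
        """region_pixels: luminance values of the preset image region.

        Returns True when the region has been dark (hand away from the
        touch panel or H/W switch) for at least IDLE_TIMEOUT_SEC.
        """
        if any(p > LUMINANCE_THRESHOLD for p in region_pixels):
            self._last_active = self._now()
            return False  # bright pixel: hand still near the panel/switch
        return self._now() - self._last_active >= IDLE_TIMEOUT_SEC
```

`update` would be called once per analyzed frame; a `True` result corresponds to step ST6 concluding that no new input operation arrived within the predetermined time.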
Similarly to Embodiment 1, in step ST6, instead of determining whether the detection information of an input operation has been acquired again within the predetermined time, it may be determined whether a series of operation tasks has been completed.
As described above, according to Embodiment 2, the detection information acquisition unit 101 acquires, as detection information, information indicating the proximity or contact of the operator's object detected by at least one of the touch panel 200, the H/W switch 202, and the analysis result of the image captured by the infrared camera 201, and the control unit 103, in response to the information requesting input of an authentication operation, extracts at least one of the feature amount of the shape of the operator's hand, the feature amounts of the shape of the operator's hand and the arrival direction of the arm, and the feature amount of the operator's face from the detection information of the authentication operation detected from the analysis result of the image captured by the infrared camera 201. With this configuration, the processing load for identifying the operator can be reduced, and the accuracy of identifying the operator can be improved.
Further, according to Embodiment 2, the authentication processing unit 104 acquires from the control unit 103 a captured image of the operator performing the authentication operation, and the inclination of the central axis of the operator's face is extracted from the captured image, so that the accuracy of identifying the operator can be further improved.
Embodiment 3.
In Embodiment 3, a configuration is described in which the authentication processing unit uses feature amounts indicating that the operator is the driver to determine whether the operator is a passenger other than the driver.
FIG. 9 is a block diagram showing the configuration of an in-vehicle information processing apparatus 100a according to Embodiment 3.
The in-vehicle information processing apparatus 100a includes a driver database 106 and an authentication processing unit 104a in place of the authentication database 105 and the authentication processing unit 104 of the in-vehicle information processing apparatus 100 of Embodiment 1 shown in FIG. 1. In the following, parts identical or corresponding to the components of the in-vehicle information processing apparatus 100 according to Embodiment 1 are given the same reference numerals as those used in Embodiment 1, and their description is omitted or simplified.
The driver database 106 stores authentication patterns by which a motion can be identified as the driver's authentication operation, and driver feature amounts corresponding to each authentication pattern.
FIG. 10 is a diagram showing an example of the authentication patterns stored in the driver database 106 of the in-vehicle information processing apparatus 100a according to Embodiment 3.
In the example of FIG. 10, the stored authentication patterns are a pattern in which the driver of a right-hand-drive vehicle performs an authentication operation with the left hand open, and a pattern in which the driver of a left-hand-drive vehicle performs an authentication operation with the right hand open. Note that the authentication patterns are not limited to the example shown in FIG. 10, and various patterns can be applied as long as the motion pattern makes it possible to distinguish the driver from a passenger other than the driver.
The authentication processing unit 104a collates the feature amount extracted by the control unit 103 with the feature amounts associated with the authentication patterns stored in the driver database 106, and identifies the operator based on whether the operator is the driver. Specifically, when the extracted feature amount matches a feature amount in the driver database 106, the authentication processing unit 104a identifies the operator as the driver. On the other hand, when the extracted feature amount does not match any feature amount in the driver database 106, the authentication processing unit 104a identifies the operator as a passenger other than the driver.
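The driver-database collation above can be sketched as follows. FIG. 10 only names the patterns; the feature vectors, the per-component tolerance, and the function interface below are assumptions made for this sketch.

```python
# Hypothetical feature vectors for the driver authentication patterns
# of FIG. 10; the actual feature representation is not specified.
DRIVER_DATABASE = {
    "right_hand_drive_left_hand_open": [0.9, 0.1],
    "left_hand_drive_right_hand_open": [0.1, 0.9],
}
TOLERANCE = 0.15  # assumed per-component match tolerance


def classify_operator(extracted):
    """Return 'driver' when the extracted features match any stored
    driver pattern, otherwise 'passenger' (a passenger other than
    the driver), mirroring the match / no-match rule above."""
    for stored in DRIVER_DATABASE.values():
        if all(abs(a - b) <= TOLERANCE for a, b in zip(extracted, stored)):
            return "driver"
    return "passenger"
```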
Since an example hardware configuration of the in-vehicle information processing apparatus 100a of Embodiment 3 is the same as that of Embodiment 1, its description is omitted.
In the in-vehicle information processing apparatus 100a of Embodiment 3, in step ST14 of the flowchart shown in FIG. 5, the authentication processing unit 104a identifies the operator by collating the feature amount extracted in step ST13 with the feature amounts associated with the authentication patterns stored in the driver database 106. The other processing is the same as in the flowchart shown in FIG. 5, and its description is omitted.
As described above, according to Embodiment 3, the apparatus includes the control unit 103, which controls the output of information requesting input of an authentication operation or the output of detection information according to the detection information acquired by the detection information acquisition unit 101 and the vehicle information acquired by the vehicle information acquisition unit 102, and the authentication processing unit 104a, which collates an input authentication operation against the output information requesting input of the authentication operation. With this configuration, the processing load of collating authentication operations can be suppressed.
Further, according to Embodiment 3, the control unit 103 controls the output of information requesting input of an authentication operation when the vehicle is traveling and the operator who performed an input operation has not been authenticated, and extracts the operator's feature amount from the detection information of the authentication operation input in response to the output information requesting input of the authentication operation. With this configuration, the operator does not need to be authenticated every time the operator performs an input operation, and the processing load of collating authentication operations can be suppressed.
In Embodiment 3 described above, the driver database 106 and the authentication processing unit 104a are provided in place of the authentication database 105 and the authentication processing unit 104 of the in-vehicle information processing apparatus 100 of Embodiment 1; however, the driver database 106 and the authentication processing unit 104a may instead be provided in place of the authentication database 105 and the authentication processing unit 104 of the in-vehicle information processing apparatus 100 of Embodiment 2.
In Embodiments 1 to 3 described above, the case where feature amounts associated in advance with each authentication pattern are stored in the authentication database 105 or the driver database 106 has been described as an example; however, the driver and a passenger may themselves determine the shape used for the authentication operation and register the feature amount of that shape in the authentication database 105 or the driver database 106.
In Embodiments 1 to 3 described above, passengers include a passenger seated in the front passenger seat and a passenger seated in a rear seat.
In addition to the above, within the scope of the present invention, the embodiments may be freely combined, any component of each embodiment may be modified, and any component of each embodiment may be omitted.
Since the in-vehicle information processing apparatus according to the present invention does not need to identify the operator every time the operator performs an operation, it is suitable for suppressing the processing load of authenticating the operator, and can be applied to, for example, an apparatus that determines and controls whether to accept an input operation to an in-vehicle apparatus.
100, 100a In-vehicle information processing apparatus, 101 Detection information acquisition unit, 102 Vehicle information acquisition unit, 103 Control unit, 104, 104a Authentication processing unit, 105 Authentication database, 106 Driver database, 200 Touch panel, 201 Infrared camera, 202 H / W switch, 300 in-vehicle device, 400 display device, 500 speaker.
Claims (12)
- An in-vehicle information processing apparatus comprising:
a detection information acquisition unit that acquires detection information indicating that an input operation and an authentication operation of an operator have been detected;
a vehicle information acquisition unit that acquires vehicle information indicating a traveling state of a vehicle;
a control unit that controls, according to the detection information acquired by the detection information acquisition unit and the vehicle information acquired by the vehicle information acquisition unit, output of information requesting input of the authentication operation or output of the detection information of the input operation; and
an authentication processing unit that collates the input authentication operation against the output information requesting input of the authentication operation.
- The in-vehicle information processing apparatus according to claim 1, wherein the control unit controls the output of the information requesting input of the authentication operation when the vehicle is traveling and the operator who performed the input operation has not been authenticated, and extracts a feature amount of the operator from the detection information of the authentication operation input in response to the output information requesting input of the authentication operation.
- The in-vehicle information processing apparatus according to claim 2, wherein the authentication processing unit collates the feature amount of the operator extracted by the control unit and identifies whether the operator is a driver of the vehicle or a passenger other than the driver of the vehicle, and the control unit authenticates the operator when the authentication processing unit identifies the operator as the passenger, and outputs the detection information of the input operation acquired by the detection information acquisition unit as control information.
- The in-vehicle information processing apparatus according to claim 2, wherein the authentication processing unit collates the feature amount of the operator extracted by the control unit and identifies whether the operator is a driver of the vehicle or a passenger other than the driver of the vehicle, and the control unit does not output the detection information of the input operation acquired by the detection information acquisition unit as control information when the authentication processing unit identifies the operator as the driver.
- The in-vehicle information processing apparatus according to claim 1, wherein the control unit outputs the detection information of the input operation acquired by the detection information acquisition unit as control information when the vehicle is not traveling.
- The in-vehicle information processing apparatus according to claim 2, wherein the detection information acquisition unit acquires, as the detection information of the input operation, information indicating proximity or contact of an object of the operator detected by at least one of a touch panel, a touch pad, and a hardware switch, and the control unit extracts, in response to the information requesting input of the authentication operation, the feature amount of the operator from the detection information of the authentication operation detected by the touch panel or the touch pad.
- The in-vehicle information processing apparatus according to claim 2, wherein the detection information acquisition unit acquires, as the detection information of the input operation, information indicating proximity or contact of an object of the operator detected by a touch panel, a touch pad, or a hardware switch, or information indicating proximity or contact of an object of the operator detected from an image captured by an imaging unit, and the control unit extracts, in response to the information requesting input of the authentication operation, the feature amount of the operator from the detection information of the authentication operation detected from the image captured by the imaging unit.
- The in-vehicle information processing apparatus according to claim 6, wherein the control unit extracts a feature amount of the object of the operator as the feature amount of the operator.
- The in-vehicle information processing apparatus according to claim 7, wherein the control unit extracts, as the feature amount of the operator, at least one of a feature amount of a shape of the operator's hand, feature amounts of the shape of the operator's hand and an arrival direction of the operator's arm, and a feature amount of the operator's face.
- The in-vehicle information processing apparatus according to claim 9, wherein the control unit extracts an inclination of a central axis of the operator's face as the feature amount of the operator.
- The in-vehicle information processing apparatus according to claim 3, wherein, in a case where the operator who performed the input operation has been authenticated, the control unit sets the authenticated operator to unauthenticated when the detection information acquisition unit does not acquire the detection information of the input operation for a preset period, or when the detection information acquisition unit acquires the detection information of the input operation in a plurality of areas.
- An in-vehicle information processing method comprising:
a step in which a detection information acquisition unit acquires detection information indicating that an input operation and an authentication operation of an operator have been detected;
a step in which a vehicle information acquisition unit acquires vehicle information indicating a traveling state of a vehicle;
a step in which a control unit controls, according to the detection information and the vehicle information, output of information requesting input of the authentication operation or output of the detection information of the input operation; and
a step in which an authentication processing unit collates the input authentication operation against the output information requesting input of the authentication operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/062143 WO2017179201A1 (en) | 2016-04-15 | 2016-04-15 | Vehicle-mounted information processing device and vehicle-mounted information processing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/062143 WO2017179201A1 (en) | 2016-04-15 | 2016-04-15 | Vehicle-mounted information processing device and vehicle-mounted information processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017179201A1 true WO2017179201A1 (en) | 2017-10-19 |
Family
ID=60041462
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/062143 WO2017179201A1 (en) | 2016-04-15 | 2016-04-15 | Vehicle-mounted information processing device and vehicle-mounted information processing method |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017179201A1 (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1183504A (en) * | 1997-09-11 | 1999-03-26 | Alpine Electron Inc | Navigator |
JPH11137847A (en) * | 1997-11-12 | 1999-05-25 | Snk:Kk | Game system with sensors and its controlling method |
JP2000329577A (en) * | 1999-05-18 | 2000-11-30 | Fujitsu Ten Ltd | Electronic apparatus |
JP2001273498A (en) * | 2000-03-24 | 2001-10-05 | Matsushita Electric Ind Co Ltd | Device, system, card and method for personal identification based on biometric |
JP2004259149A (en) * | 2003-02-27 | 2004-09-16 | Calsonic Kansei Corp | Operation input device |
JP2007122579A (en) * | 2005-10-31 | 2007-05-17 | Equos Research Co Ltd | Vehicle controller |
JP2008191871A (en) * | 2007-02-02 | 2008-08-21 | Toyota Motor Corp | Drunken driving prevention support system |
JP2009252105A (en) * | 2008-04-09 | 2009-10-29 | Denso Corp | Prompter-type operation device |
JP2012032879A (en) * | 2010-07-28 | 2012-02-16 | Nissan Motor Co Ltd | Input operation device |
JP2013029455A (en) * | 2011-07-29 | 2013-02-07 | Sanyo Electric Co Ltd | Display device and power consumption reduction method |
JP2013120434A (en) * | 2011-12-06 | 2013-06-17 | Denso It Laboratory Inc | Operator identification device, method, and on-vehicle navigation apparatus |
WO2013136776A1 (en) * | 2012-03-15 | 2013-09-19 | パナソニック株式会社 | Gesture input operation processing device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9235269B2 (en) | System and method for manipulating user interface in vehicle using finger valleys | |
US9703472B2 (en) | Method and system for operating console with touch screen | |
US20170320501A1 (en) | Systems and methodologies for controlling an autonomous vehicle | |
WO2015015843A1 (en) | Gesture determination device and method, gesture-operated device, program, and recording medium | |
KR102084032B1 (en) | User interface, means of transport and method for distinguishing a user | |
JP6851482B2 (en) | Operation support device and operation support method | |
JP2009252105A (en) | Prompter-type operation device | |
US20140168068A1 (en) | System and method for manipulating user interface using wrist angle in vehicle | |
WO2018061603A1 (en) | Gestural manipulation system, gestural manipulation method, and program | |
US11040722B2 (en) | Driving authorization transfer determination device | |
KR101892390B1 (en) | User interface, transport means and method for recognizing a hand of a user | |
JP6385624B2 (en) | In-vehicle information processing apparatus, in-vehicle apparatus, and in-vehicle information processing method | |
US20150234515A1 (en) | Determination of an Input Position on a Touchscreen | |
WO2018116565A1 (en) | Information display device for vehicle and information display program for vehicle | |
KR101976498B1 (en) | System and method for gesture recognition of vehicle | |
WO2017179201A1 (en) | Vehicle-mounted information processing device and vehicle-mounted information processing method | |
US20140098998A1 (en) | Method and system for controlling operation of a vehicle in response to an image | |
JP2007132678A (en) | Navigation device | |
JP6167932B2 (en) | Input device and input acquisition method | |
JP2016110269A (en) | Manipulation input device | |
CN112074801B (en) | Method and user interface for detecting input through pointing gestures | |
JP6188468B2 (en) | Image recognition device, gesture input device, and computer program | |
US20200218347A1 (en) | Control system, vehicle and method for controlling multiple facilities | |
WO2017017938A1 (en) | Gesture operating system, method, and program | |
US20230249552A1 (en) | Control apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16898661 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16898661 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: JP |