WO2006013783A1 - Input Device - Google Patents
Input device
- Publication number
- WO2006013783A1 (PCT/JP2005/013866; JP2005013866W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- unit
- display
- shape
- input device
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0425—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/02—Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
- B60R11/0264—Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof for control means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Definitions
- the present invention relates to an input device for a user to input commands and information to a device.
- the present invention relates to an input device that allows a user to input commands and information using a body such as a hand based on information displayed on a display or the like.
- a touch panel display as an input device for a user to input commands and information using a finger or the like based on information displayed on a display screen such as a display.
- the touch panel display has a touch panel on the display screen.
- a GUI (Graphical User Interface) displayed on the screen includes display components represented by menus and button switches (hereinafter referred to as GUI components).
- the user can input a command or information associated with the GUI component by touching the desired GUI component on the screen.
- touch panel displays are widely used in bank ATMs (Automated Teller Machines) and car navigation systems.
- in order for a user to touch the touch panel display with a finger, the display screen must be placed close to the body. Ergonomics defines optimal arrangement conditions for input interfaces used in VDT (Video Display Terminal) operation and the like, and assumes that the interface is within a radius of about 50 cm from the body.
- on the other hand, it is preferable that the display screen itself be installed as far from the eyes as possible.
- One example is a large television installed in a home or a car navigation device installed in a car. Viewing TV at a short distance of about 30cm is not good for the eyes.
- in the case of a car navigation system, the time it takes for the driver to adjust the focal length of the eyes has been reported to hinder attention to driving. In other words, it is safer if the difference between the focal distance during driving (several meters ahead) and the focal distance when viewing the display screen of the car navigation system is small.
- as a display device that can increase the focal distance when viewing the display screen, there is the far-focus display, typified by a head-up display (HUD), which uses a lens or a mirror.
- however, in such a far-focus display, a touch panel cannot be applied because the display screen cannot be touched directly.
- in addition, the touch panel display has an inherent problem that fingerprints left on the display screen by the user's input operations degrade the visibility of the display.
- as an input interface in which, unlike a touch panel display, the user does not touch the display screen, there are the touch tracer and the tablet, which is generally used as an input device for a PC (Personal Computer) (for example, see Patent Document 1).
- with these devices, a cursor with a shape such as an arrow is displayed on the display screen, and when the user moves a finger or pen while keeping it in contact with a predetermined operation surface provided on the operation unit, the cursor on the screen moves accordingly.
- the user can input a command or information associated with a desired GUI component by confirming that the cursor has moved onto that GUI component and then performing a predetermined confirmation operation (for example, a click operation).
- however, such an input interface, in which the display screen and the operation unit are separated, requires the cursor displayed on the screen to be moved by sliding a finger or pen on the operation surface of the operation unit, so a desired GUI component cannot be selected with a single touch as on a touch panel display. In other words, after confirming the current position of the cursor, the user must slide the finger on the operation surface to move the cursor onto the desired GUI component, so quick input such as that of a touch panel display is difficult. In this way, an input interface with a separate display screen and operation unit does not allow intuitive input like a touch panel display and is less operable than a touch panel display.
- an input device has also been considered in which the display and the touch panel are arranged separately, the finger of the user operating the touch panel is captured by a camera, and the captured image is combined with the display image and displayed (for example, see Patent Document 2).
- with this device, the user can perform input operations while looking at his or her finger shown on the display, so accurate input can be performed using the touch panel placed at hand while keeping the line of sight fixed on the display.
- Patent Document 1: Japanese Patent Laid-Open No. 11-3169
- Patent Document 2: Japanese Patent Laid-Open No. 10-269012
- the input device using the camera as described above has a problem that the installation position of the camera is restricted.
- in order to photograph the finger of the user operating the touch panel, the camera must be installed in a direction perpendicular to the operation surface of the touch panel.
- a camera is provided directly below the operation surface.
- therefore, the purpose of the present invention is to provide an input device that allows intuitive and accurate input, as if the user were directly touching the display screen, even when the display screen and the operation unit are separated, and that has a high degree of freedom in installation.
- a first aspect of the present invention comprises: an operation unit (300) that is operated by a user and outputs an operation signal corresponding to the operation; a display information creation unit (700) for creating a display image that assists the operation by the user; a body shape input unit (100) including a camera for imaging a body part of the user involved in the operation and outputting image data obtained as a result of the imaging; and a body shape extraction unit that detects the shape and position of the body part based on the image data output from the body shape input unit and creates a body image representing the body part by performing shape correction processing that corrects the detected shape and position of the body part.
- as the body shape input unit, an arbitrary optical imaging device such as a camera, a plurality of light receiving units, or a line sensor can typically be used. Furthermore, information specifying the shape and position of the user's body part may be acquired by a method other than an optical method.
- a second aspect of the present invention is that, in the first aspect, the body shape extraction unit performs viewpoint conversion processing in consideration of a relative positional relationship between the camera and the operation unit as the shape correction processing.
- thereby (FIGS. 26 and 27), the shape of the body part can be displayed in a more realistic form.
- the body shape extraction unit determines whether the shape detected from the image data output from the body shape input unit is the shape of a body part by pattern matching using a shape pattern of the body part held in advance. As a result, it is possible to determine whether or not the detected shape is the shape of a body part, enabling arbitrary control according to the determination result.
- the body shape extraction unit corrects the shape of the body part detected from the image data output from the body shape input unit based on the shape pattern. This makes it possible to display the body image without a sense of incongruity even if part of the body obtained by image recognition is missing, for example because the user is wearing nail polish.
- a mark used in the shape correction processing is attached to a predetermined position of the operation unit, and the shape correction processing is performed so that the position of the mark in the image data output from the camera is converted to a predetermined position on the screen of the display unit.
- a sixth aspect of the present invention is characterized in that, in the first aspect, the image composition unit emphasizes the outline of the body image when the display image and the body image are synthesized. Thereby, it is possible to prevent display information from being hidden by the displayed body image.
- a seventh aspect of the present invention is characterized in that, in the first aspect, when the display image and the body image are composed, the image composition unit changes the transparency of the body image and emphasizes its outline (FIG. 42C). Thereby, it is possible to prevent display information from being hidden by the displayed body image.
- the image composition unit detects the fingertip portion of the user in the body image when the display image and the body image are composed, and highlights that fingertip portion (FIG. 42D). As a result, the user's attention can be directed to the fingertip, which plays a large part in the input operation.
- the image composition unit adds a shadow to the body image when the display image and the body image are composed (FIG. 42E). Thereby, the body image can be displayed more realistically.
- when the display image and the body image are composed, the image composition unit displays the display information in the display image that would be hidden by the body image in a pop-up in an area where it is not hidden by the body image (FIG. 42F). As a result, the user can also check the display information that would otherwise be hidden by the displayed body image.
- an eleventh aspect of the present invention is that, in the first aspect, when the display image and the body image are composed, the image composition unit displays the display information in the display image that would be hidden by the body image in front of the body image (FIG. 42G). As a result, the user can also check the display information hidden by the displayed body image.
- the image composition unit detects the fingertip portion of the user in the body image when the display image and the body image are synthesized, and emphasizes the display information in the display image overlapping that fingertip portion (FIG. 42H). As a result, the user can easily confirm which display information corresponds to the position of his or her fingertip.
- the image composition unit emphasizes the display information in the display image overlapping the fingertip portion by, for example, enlarging the display information, changing its color, fading it, changing it so that it appears three-dimensional, or displaying auxiliary information related to it in a pop-up manner. Thereby, the user can easily confirm which display information corresponds to the current fingertip position.
- the display information creation unit changes the display image to be created according to the result of detecting the shape of the body part in the body shape extraction unit. This makes it possible to create display information according to the situation.
- the display information creation unit creates a display image only when the shape of the body part is detected by the body shape extraction unit. As a result, power consumption can be reduced by not performing the image display processing when no body part is detected.
- the display information creation unit emphasizes the GUI components in the display image to be created when the shape of the body part is detected by the body shape extraction unit (FIGS. 38B and 39B). As a result, GUI components are displayed inconspicuously while no body part is detected, making display information other than the GUI components easier to see and increasing the amount of information that can be shown.
- the body shape extraction unit determines whether the shape detected based on the information acquired by the body shape input unit is the shape of the body part.
- the display information creating unit changes display information to be created according to the judgment result of the body shape extracting unit. Accordingly, for example, when something other than the body is detected, it is possible to suppress power consumption by not performing the image display process.
- the body shape extraction unit determines whether the body part is a right hand or a left hand based on the detected shape of the body part.
- the display information creation unit changes a display image to be created according to a determination result of the body shape extraction unit (FIGS. 40A and 40B).
- the display information creation unit creates the display information only when the shape of the body part detected by the body shape extraction unit is that of a right hand (or a left hand). Thus, for example, display information can be shown only when the user performs an input operation from the right side of the operation unit, or only when the user performs an input operation from the left side of the operation unit.
- when the shape of the body part detected by the body shape extraction unit is that of a right hand or a left hand, the display information creation unit highlights GUI components in the display image to be created, changes the position of the GUI components, or changes the validity of the GUI components. As a result, GUI components can be highlighted, enabled, or shown as disabled only when the user performs an input operation from the right side (or left side) of the operation unit. In addition, the position of the GUI components can be changed to match the position from which the user performs the input operation.
- the body shape extraction unit determines whether the body part is an adult or a child based on the detected shape of the body part.
- the display information creation unit changes the display image to be created according to the determination result of the body shape extraction unit (FIGS. 41A and 41B).
- the display information creation unit creates the display information only when the shape of the body part detected by the body shape extraction unit is that of an adult (or a child). Thereby, for example, the display information can be shown only when the user is an adult, or only when the user is a child.
- when the shape of the body part detected by the body shape extraction unit is that of an adult or a child, the display information creation unit highlights GUI components in the display image to be created, changes the position of the GUI components, or changes the validity of the GUI components. As a result, GUI components can be highlighted, enabled, or shown as disabled only when the user is an adult (or a child). In addition, the position of the GUI components can be changed according to whether the user is an adult or a child.
- the input device has two operation modes: a mode in which an input operation by the user is possible and a mode in which it is not.
- the image synthesis unit changes the method of combining the display image created by the display information creation unit and the body image according to the operation mode.
- for example, the image composition unit synthesizes the display image and the body image so that the body image is displayed in a semi-transparent state, or in a semi-transparent state with its contour emphasized, when the user can perform an input operation, and so that the body image is displayed in an opaque state when the user cannot perform an input operation. As a result, it is possible to indicate to the user that the current mode is one in which input operations are impossible.
- a twenty-seventh aspect of the present invention is characterized in that, in the first aspect, the image composition unit changes a composition method according to each part of the display image. As a result, the user can be informed that the operation form and result differ depending on each part of the display image.
- the display image has an operable region and an inoperable region, and the image composition unit synthesizes the body image only in the operable region of the display image and does not synthesize it in the inoperable region. This makes the display image in the inoperable region easier to see than when the body image is synthesized over it.
- according to the present invention, even though the display screen and the operation unit are separated, intuitive and accurate input is possible as if the user were directly touching the screen, as with a touch panel display.
- realistic body images can be displayed, enabling more intuitive input operations.
- the shape of the body part can be displayed in a realistic form without placing the camera exactly perpendicular to the operation surface of the operation unit, so the degree of freedom in the camera's installation position is high.
- FIG. 1 is a conceptual diagram of an input device according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing the configuration of the input device.
- FIG. 3 is an example of installation of an input device in a vehicle.
- FIG. 4 is an example of installation of an input device in a vehicle.
- FIG. 5 is an installation example of an input device in a vehicle.
- FIG. 6 is an example of installation of an input device in a vehicle.
- FIG. 7 is an installation example of the body shape input unit 100.
- FIG. 8 is an installation example of the body shape input unit 100.
- FIG. 9 is an example in which body shape input unit 100 and operation unit 300 are configured integrally.
- FIG. 10 is a side view of an example in which the body shape input unit 100 is installed below the operation unit 300.
- FIG. 11 is a top view of an example in which body shape input unit 100 is installed below operation unit 300.
- FIG. 12 is an example of a blinking rule for the light source 110.
- FIG. 13 is an example of an image of a hand imaged when the light source 110 is turned on.
- FIG. 14 is an example of a hand image captured when the light source 110 is turned off.
- FIG. 15 is an example of an image of a hand after removing the influence of external light.
- FIG. 16 is a configuration example in which a plurality of light receiving units 150 are used as the body shape input unit 100.
- FIG. 17 is a diagram showing a light emission range of the light emitting unit 160.
- FIG. 18 is a diagram showing a light receiving range of the light receiving unit 150.
- FIG. 19 is a diagram showing a sensitivity range 171.
- FIG. 20 shows an example in which the light emitting unit 160 is disposed above the operation unit 300.
- FIG. 21 is a configuration example using a line sensor as the body shape input unit 100.
- FIG. 22 is a flowchart showing a process flow of the body shape extraction unit 600.
- FIG. 23 is an example of image data processed by the body shape extraction unit 600.
- FIG. 24 is an example of body region 607 extracted by body shape extraction unit 600.
- FIG. 25 is an example of a contour 609 extracted by the body shape extraction unit 600.
- FIG. 26 shows an example of shape correction processing.
- FIG. 27 is an example of shape correction processing.
- FIG. 28 is an example of a body image created by body shape extraction unit 600.
- FIG. 29 is an example of dot image data processed by the body shape extraction unit 600.
- FIG. 30 is an example of the result of extracting the dots in the contour portion by the body shape extraction unit 600
- FIG. 31 is an example of a contour extracted by body shape extraction unit 600.
- FIG. 32 is an example of line image data processed by the body shape extraction unit 600.
- FIG. 33 is an example of a contour extracted by body shape extraction unit 600.
- FIG. 34 is an example of the operation unit 300.
- FIG. 35 is an example of a user operation method.
- FIG. 36 is an example of the operation unit 300.
- FIG. 37 is a sequence diagram showing a process flow of the control unit 500 for displaying a body image.
- FIG. 38A is an example of display information created by the display information creation unit 700 when a body shape is not detected.
- FIG. 38B is an example of display information created by the display information creation unit 700 when a body shape is detected.
- FIG. 39A is an example of display information created by the display information creation unit 700 when a body shape is not detected.
- FIG. 39B is an example of display information created by the display information creation unit 700 when a body shape is detected.
- FIG. 40A is an example of display information created by display information creation section 700 when the right hand is detected.
- FIG. 40B is an example of display information created by display information creation section 700 when the left hand is detected.
- FIG. 41A is an example of display information created by display information creation section 700 when a relatively large hand is detected.
- FIG. 41B is an example of display information created by the display information creation unit 700 when a relatively small hand is detected.
- FIG. 42A is an example of an image synthesized by the image synthesizing unit 800.
- FIG. 42B is an example of an image combined by the image combining unit 800.
- FIG. 42C is an example of an image synthesized by the image synthesis unit 800.
- FIG. 42D is an example of an image combined by the image combining unit 800.
- FIG. 42E is an example of an image combined by the image combining unit 800.
- FIG. 42F is an example of an image combined by the image combining unit 800.
- FIG. 42G is an example of an image synthesized by the image synthesizing unit 800.
- FIG. 42H is an example of an image synthesized by the image synthesizing unit 800.
- FIG. 42I shows an example of an image synthesized by the image synthesis unit 800.
- FIG. 42J shows an example of an image synthesized by the image synthesis unit 800.
- FIG. 43 is a sequence diagram showing a process flow of the control unit 500 when an input operation by the user is detected.
- FIG. 44A is an example of display information created by display information creation section 700 when a user performs an input operation.
- FIG. 44B is an example of display information created by the display information creation unit 700 when an input operation is performed by the user.
- FIG. 1 is a conceptual diagram of an input device according to an embodiment of the present invention.
- a touch pad 4 for operation is installed within reach of the user's hand, and a display 2 is installed at a position away from the user.
- Camera 1 is installed above touch pad 4.
- the display 2 displays one or more GUI parts 3 for the user to input desired commands and information.
- Each point on the operation surface of the touch pad 4 has a one-to-one correspondence with each point on the display screen of the display 2.
- when the user touches a point on the operation surface of the touch pad 4, coordinate data (absolute coordinate data) indicating the contact position is output from the touch pad 4 to the control unit; based on this coordinate data, the GUI part 3 corresponding to the contact position is identified, and the command or information associated with that GUI part 3 is input.
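- purely as an illustration (this code is not part of the patent; the pad size, screen size, and GUI-part rectangle are made-up values), the absolute coordinate correspondence and the hit test on a GUI part could be sketched in Python as follows:

```python
# Illustrative sketch: absolute mapping from touch pad coordinates to screen
# coordinates, followed by a hit test against GUI part rectangles.

def pad_to_screen(x_pad, y_pad, pad_size, screen_size):
    """Map a touch pad point to the corresponding screen point, assuming a
    one-to-one (absolute) correspondence between the two surfaces."""
    sx = x_pad * screen_size[0] / pad_size[0]
    sy = y_pad * screen_size[1] / pad_size[1]
    return sx, sy

def hit_test(screen_point, gui_parts):
    """Return the name of the GUI part whose rectangle contains the point."""
    sx, sy = screen_point
    for name, x, y, w, h in gui_parts:
        if x <= sx <= x + w and y <= sy <= y + h:
            return name
    return None

# Example: a 100x60 (pad units) pad mapped onto an 800x480 px screen.
point = pad_to_screen(50, 30, pad_size=(100, 60), screen_size=(800, 480))
print(hit_test(point, [("menu_button", 380, 220, 60, 40)]))  # -> "menu_button"
```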
- the control unit performs image recognition processing on the image data output from the camera 1, extracts only the hand 6 portion from the entire image, and superimposes the extracted image of the hand 6 on the display 2 as a hand image 7. While viewing the hand image 7 displayed on the display 2, the user moves the hand 6 so that the fingertip of the hand image 7 is on the desired GUI part 3 and presses the touch pad 4. Then, the command or information associated with the GUI part 3 corresponding to the contact position (that is, the GUI part 3 located at the fingertip of the hand image 7) is input.
- suppose the hand image 7 were not displayed on the screen. In that case, when the user wants to select, for example, the GUI part 3 displayed at the center of the screen, the user would have to move the line of sight to the operation surface of the touch pad 4 to check where its center is and then press it with a finger, which is inconvenient. Moving the line of sight in this way is especially dangerous while driving.
- with the input device of the present invention, by viewing the hand image 7 displayed on the display 2, the user can confirm which position on the screen the current finger position corresponds to. Therefore, the user can select the GUI part 3 while looking only at the display 2, without moving the line of sight to the touch pad 4.
- in addition, the hand image 7 is displayed on the screen even when the user's finger is slightly lifted from the operation surface of the touch pad 4, so the user can check the current finger position on the screen without actually touching the touch pad 4 and sliding the finger on it. Therefore, input can be performed simply and quickly just by pressing the touch pad 4.
- here, coordinate data is output from the touch pad 4, but as will be described later, the touch pad 4 does not necessarily need a function of outputting the coordinate data of the contact position; it may be one that only detects whether or not it has been pressed. In this case, the position of the fingertip is detected from the image captured by the camera 1, and from that it is determined which GUI part 3 the user has selected.
- FIG. 2 is a block diagram showing a configuration of the input device.
- the input device 1000 includes a body shape input unit 100, a display unit 200, an operation unit 300, and a calculation unit 400.
- the calculation unit 400 includes: a control unit 500 that controls the calculation unit 400 as a whole; a body shape extraction unit 600 that extracts the body shape by processing the output of the body shape input unit 100; a display information creation unit 700 that creates the display information (that is, an image including the GUI parts 3) necessary for the user to input commands and information; and an image composition unit 800 that combines an image representing the body part of the user operating the operation unit 300 with the display information created by the display information creation unit 700.
- the body shape input unit 100 is a means for inputting to the apparatus the shape and position of a body part, such as a hand, that the user uses for operation; in some cases a camera that receives visible light, near-infrared light, or the like is used, and in other cases discrete sensors such as photodiodes are used. Each case is described below.
- as the body shape input unit 100, for example, a visible-light camera, a near-infrared camera, an infrared camera, or an ultrasonic camera can be used.
- the body shape input unit 100 is arranged at a position where the operation unit 300 can be imaged, images the operation unit 300, and outputs the image data.
- the hand 6 is included in the image photographed by the body shape input unit 100.
- the body shape input unit 100 is preferably disposed on a normal line passing through the center of the operation surface of the operation unit 300, and installed such that the optical axis of the lens is parallel to the normal line.
- when the body shape input unit 100 is a visible-light camera, it is preferable to provide a light source 110 that emits visible light so that the hand 6 can be clearly photographed even at night.
- the operation surface of the operation unit 300 is preferably a uniform color, and particularly preferably black or blue.
- when the body shape input unit 100 is a near-infrared camera, it is preferable to provide a light source 110 that emits near-infrared light so that the hand 6 is always clearly photographed, and the operation surface of the operation unit 300 is preferably black.
- FIG. 3 shows a first installation example.
- the operation unit 300 is an extension of the center console, and is installed at a position where the driver can operate with the elbow resting on the elbow rest.
- the body shape input unit 100 is installed at a position where the operation unit 300 can capture an image.
- the body shape input unit 100 is preferably installed at position B (normally the vehicle ceiling) on the normal line of the operation unit 300, but it may also be built into the map lamp or rear-view mirror located in front of position B (position A), or installed in the room lamp unit behind position B (position C).
- when the body shape input unit 100 is installed at a position other than position B, the shape of the hand 6 that it images may differ from the shape of the hand 6 as seen from a direction perpendicular to the operation surface; in that case, the viewpoint conversion processing described later (image processing that converts an image viewed obliquely with respect to the operation surface into an image viewed from a direction perpendicular to it) may be required when creating the hand image 7.
- FIG. 4 shows a second installation example.
- the operation unit 300 is installed obliquely upward in the center of the steering.
- the body shape input unit 100 is installed at a position where the operation unit 300 can capture an image (such as the ceiling of a vehicle).
- in this case, the image of the operation unit 300 photographed by the body shape input unit 100 rotates according to the steering angle of the steering wheel, but by correcting the captured image data, the effect of this rotation can be removed.
- as one image data correction method, it is conceivable to provide a means for detecting the steering angle of the steering wheel and to apply rotation processing to the image data in accordance with the detected angle.
- alternatively, one or more reference marks may be attached to the operation surface of the operation unit 300, the steering angle detected by finding the positions of the marks in the image data, and rotation processing applied to the image data in accordance with that steering angle. A sketch of such a correction is shown below.
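- as a sketch only (not the patent's implementation; the OpenCV-based function, its parameters, and the sign convention are assumptions), the rotation correction according to the detected steering angle might look like this:

```python
# Illustrative sketch: undo the rotation of the camera image caused by the
# steering angle before further processing.
import cv2
import numpy as np

def derotate(image, steering_angle_deg):
    """Rotate the captured image back by the detected steering angle so the
    operation surface appears in its neutral orientation."""
    h, w = image.shape[:2]
    center = (w / 2, h / 2)
    # Rotation matrix that cancels the steering rotation; the sign depends on
    # how the steering angle is measured (assumption here: opposite sign).
    m = cv2.getRotationMatrix2D(center, -steering_angle_deg, 1.0)
    return cv2.warpAffine(image, m, (w, h))

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder camera frame
corrected = derotate(frame, steering_angle_deg=30.0)
```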
- FIG. 5 shows a third installation example.
- the operation unit 300 is installed inside the driver's door so that the driver can operate with the elbow resting on the elbow rest.
- the body shape input unit 100 is installed at a position where the operation unit 300 can capture an image (such as a vehicle ceiling).
- FIG. 6 shows a fourth installation example.
- the operation unit 300 is installed on an armrest in the center of the rear seat.
- the body shape input unit 100 is installed at a position where the operation unit 300 can take an image (a vehicle ceiling or a room lamp).
- in the above installation examples, the body shape input unit 100 is installed at a location separate from the operation unit 300, such as the vehicle ceiling.
- the body shape input unit 100 and the operation unit 300 may be configured integrally.
- an example in which the body shape input unit 100 and the operation unit 300 are integrally configured will be described with reference to FIGS.
- FIG. 7 shows an example in which the body shape input unit 100 is installed at a predetermined position above the operation unit 300.
- a light source 110 may be installed near the body shape input unit 100 as necessary.
- the light source 110 needs to emit visible light if the body shape input unit 100 creates a color image, but if the body shape input unit 100 creates a black-and-white image, near-infrared light may be used.
- FIG. 8 is an example in which a mirror 120 is installed at a predetermined position above the operation unit 300, and the operation unit 300 and the hand 6 reflected on the mirror 120 are imaged by the body shape input unit 100.
- the relative position of the body shape input unit 100 with respect to the operation unit 300 may be fixed using, for example, a housing 130 as shown in FIG.
- in the above examples, the operation unit 300 and the hand 6 are imaged by the body shape input unit 100 from above (that is, from the operation surface side of the operation unit 300), but it is also conceivable for the body shape input unit 100 to take the image from below (that is, from the side opposite to the operation surface of the operation unit 300).
- with reference to FIG. 10 and FIG. 11, a configuration example in which the body shape input unit 100 images the operation unit 300 from below will be described.
- FIG. 10 shows an example in which a transparent touch panel is adopted as the operation unit 300 and the body shape input unit 100 and the light source 110 are installed below the operation panel 300.
- FIG. 11 is a view of the operation unit 300 in FIG. In this example, the visible light cut filter 140 is disposed on the lower surface of the touch panel.
- the light source 110 emits near-infrared light
- the body shape input unit 100 is an imaging device that is sensitive to near-infrared light.
- in order to remove the influence of near-infrared light contained in external light, the near-infrared light emitted from the light source 110 may be made to blink in accordance with a certain temporal rule. This will be described with reference to FIGS. 12 to 15.
- FIG. 12 as an example of the simplest rule in blinking of the light source 110, a pattern in which turning on and off is repeated every certain period is shown.
- FIG. 13 is an example of an image of a body part (here, a hand) captured by the body shape input unit 100 when the light source 110 is turned on.
- FIG. 14 is an example of an image of a body part (hand) when the light source 110 is turned off.
- when the light source 110 is turned on, an image formed by both the light from the light source 110 and external light is captured, as shown in FIG. 13; when the light source 110 is turned off, an image formed by external light alone is captured, as shown in FIG. 14.
- the body shape extraction unit 600 (or the body shape input unit 100), described later, performs a subtraction process that subtracts the image of FIG. 14 from the image of FIG. 13 for each pixel, so that only the image formed by the light from the light source 110 is extracted, as shown in FIG. 15. In this way, the influence of external light can be removed.
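- a minimal sketch of this per-pixel subtraction, assuming 8-bit grayscale frames captured with the light source on and off (illustrative only, not the patent's code):

```python
# Keep only the light contributed by the near-infrared source: subtract the
# "light off" frame from the "light on" frame for every pixel.
import numpy as np

def remove_external_light(frame_light_on, frame_light_off):
    """Per-pixel subtraction, clamped at zero so external light does not
    produce negative values; inputs are uint8 grayscale images."""
    on = frame_light_on.astype(np.int16)
    off = frame_light_off.astype(np.int16)
    return np.clip(on - off, 0, 255).astype(np.uint8)
```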
- next, cases in which a sensor other than a camera is used as the body shape input unit 100 will be described. The first example functions as a dot sensor by arranging a plurality of light receiving parts, such as photodiodes, two-dimensionally.
- a touch panel is adopted as the operation unit 300, and a plurality of pairs each including a light receiving unit 150 and a light emitting unit 160 are two-dimensionally arranged below the operation panel 300.
- the light emitting unit 160 emits light in the hatched area 151 in FIG. 17, and the light receiving unit 150 has sensitivity to the hatched area 161 in FIG. 18 (that is, can receive light from the hatched area 161). ). Therefore, if the user's hand 6 exists in the range where the range 151 and the range 161 intersect, that is, the shaded sensitivity range 171 in FIG. 19, the light emitted from the light emitting unit 160 is reflected on the hand 6, and the reflected light Is received by the light receiving unit 150.
- the body shape input unit 100 outputs data indicating whether or not the reflected light is received by each light receiving unit 150 to the calculation unit 400.
- the sensitivity range 171 shown in FIG. 19 can be arbitrarily set depending on how the light receiving unit 150 and the light emitting unit 160 are installed, but is preferably set to a distance of about 10 cm from the operation surface. Thereby, it is possible to prevent the position and shape of the hand 6 from being detected in vain when the user moves the hand 6 above the operation unit 300 with an intention other than the operation.
- alternatively, as shown in FIG. 20, one or more light emitting units 160 may be provided above the operation unit 300, the light from the light emitting units 160 transmitted through the operation unit 300, and this light received by a plurality of light receiving units 150 arranged two-dimensionally below the operation unit 300.
- in this case, the hand 6 is not present above a light receiving unit 150 that has received the light from the light emitting unit 160, and the hand 6 is present above a light receiving unit 150 that has not received the light.
- the second example is an example in which one or more line sensors are used.
- a line sensor is arranged as a body shape input unit 100 above the operation unit 300.
- the line sensor is a sensor configured to detect light from one line on the operation surface of the operation unit 300 (light reflected by the operation unit 300 or light transmitted through the operation unit 300). By moving the line sensor parallel to the operation unit 300, the entire operation surface can be scanned. Depending on restrictions such as the installation location, the line sensor may be rotated instead of moved linearly; however, in order to identify the position and shape of the hand 6 as seen from above the operation unit 300 based on the scanning result obtained while rotating the line sensor, correction processing according to the movement pattern of the line sensor is required.
- it is also possible to scan the entire operation surface without moving the line sensor, by providing a plurality of line sensors or by using a line sensor that can simultaneously detect light from a plurality of lines on the operation surface. In this case, the entire operation surface can be scanned at a higher speed than when the line sensor is moved.
- the color of the operation surface of operation unit 300 be uniform.
- it is preferable that the line sensor be installed in a direction perpendicular to the operation surface of the operation unit 300; however, the line sensor may also be installed in a direction oblique to the operation surface (that is, a direction different from the perpendicular direction).
- in that case, the shape of the hand 6 detected by the line sensor may differ from the shape of the hand 6 as seen from the direction perpendicular to the operation surface, and the viewpoint conversion processing described later may be required when creating the hand image 7.
- a line sensor may be disposed below the operation unit 300.
- in this case, some contrivance is necessary, such as using a transparent operation unit 300, reflecting the light from the operation unit 300 with a mirror and guiding it to the line sensor, or forming a plurality of holes in the operation unit 300 so that the position and shape of the hand can be detected by the line sensor.
- Body shape extraction section 600 extracts body shape data based on the output of body shape input section 100.
- the body shape data is data indicating, for example, the shape and position of the body part (such as a hand or foot) that the user places on the operation surface of the operation unit 300 (the part may also be floating above the operation surface).
- the specific operation of the body shape extraction unit 600 differs depending on whether a camera is used as the body shape input unit 100 or a sensor other than the camera. Hereinafter, the operation of the body shape extraction unit 600 in each case will be described.
- body shape extraction unit 600 when a camera is used as body shape input unit 100 will be described.
- FIG. 22 is a flowchart showing the flow of body shape extraction processing by body shape extraction unit 600.
- the body shape extraction unit 600 first takes in the image data output from the body shape input unit 100 (here, a camera) into the frame memory (S602).
- FIG. 23 shows an example of the image data captured in the frame memory.
- reference numeral 603 denotes an operation surface of the operation unit 300
- reference numeral 605 denotes a user's hand 6.
- next, the body shape extraction unit 600 extracts the region corresponding to the user's body (here, the hand 6) from the image data captured into the frame memory in step S602 as a body region 607, as shown in FIG. 24 (S604).
- the following three methods can be considered as methods for extracting a body region from image data.
- the first extraction method is a background subtraction method.
- in the background subtraction method, the operation unit 300 is first photographed by the body shape input unit 100 in a state where nothing exists between the body shape input unit 100 and the operation unit 300, and the result is stored as background image data. Then, the image data output from the body shape input unit 100 and the background image data are compared pixel by pixel or block by block, and the portions that differ between the two are extracted as the body region.
- the background subtraction method is advantageous in that the operation surface of the operation unit 300 does not need to be a single color.
- the background image data may be stored in memory in advance, or, when the image data output from the body shape input unit 100 does not change for a certain period of time, that image data may be stored as the background image data.
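- an illustrative sketch of the background subtraction method, assuming grayscale frames and an arbitrarily chosen difference threshold (not taken from the patent):

```python
# Pixels that differ sufficiently from the stored background image are
# treated as the body region.
import cv2

def extract_body_region(frame_gray, background_gray, diff_threshold=30):
    """Return a binary mask (255 = body region) of pixels differing from the
    background by more than diff_threshold."""
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    return mask
```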
- the second extraction method is a luminance threshold method.
- in the luminance threshold method, the body region is extracted by comparing the luminance value of each pixel of the image data output from the body shape input unit 100 with a predetermined threshold value. Therefore, so that the luminance difference between the hand 6 portion and the operation surface portion of the operation unit 300 becomes large in the image data output from the body shape input unit 100, it is preferable that the operation surface of the operation unit 300 be black or a color close to black, and that the operation surface be matte so as to minimize reflection of light.
- the threshold value is set to a value larger than the luminance value of any pixel corresponding to the operation unit 300 in the image data output from the body shape input unit 100.
- a body region can be extracted by extracting the region having luminance values higher than the threshold value set in this way.
- note that in order to extract the body region even in a dark environment such as at night, the body region must be made to appear brighter than the threshold, for example by irradiating the range of the angle of view of the body shape input unit 100 with visible light or near-infrared light.
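- a minimal sketch of the luminance threshold method, assuming a dark operation surface and an arbitrarily chosen threshold value (illustrative only):

```python
# With a dark, matte operation surface, any pixel brighter than the threshold
# is taken as part of the body region.
import cv2

def extract_body_by_luminance(frame_bgr, threshold=60):
    """Return a binary mask (255 = body region) of pixels brighter than the
    threshold; the threshold value here is an assumed tuning parameter."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return mask
```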
- the third extraction method is a chroma key processing method.
- the chroma key processing method is commonly used in television broadcasting and the like: a person or object is shot against a single-color background, and only the person is extracted by removing the background-color portion from the image data obtained by shooting. Therefore, in order to apply the chroma key processing method to the present invention, the operation surface of the operation unit 300 needs to be a single color (for example, blue), and in order to extract a stable body region even in the dark, such as at night, it is necessary to irradiate the range of the angle of view of the body shape input unit 100 with visible light.
- note that what is placed on the operation surface of the operation unit 300 is not necessarily the user's body part; for example, luggage may be placed there.
- in that case, an image of the luggage is displayed on the display unit 200, which hinders the display of the display information. Such a problem can be avoided by determining whether or not the extracted shape is the shape of a body part and, if it is not, skipping the subsequent processing.
- whether the shape extracted in step S606 is the shape of a body part can be determined by holding a shape pattern of the body part in advance and comparing the extracted shape with this shape pattern (pattern matching).
- using the result of this pattern matching, it is also conceivable to enable the input operation on the operation unit 300 only when the extracted shape is found to be the shape of a body part. As a result, malfunction caused by an object placed on the operation surface of the operation unit 300 can be prevented.
- for example, when the user has applied nail polish of the same color as the operation surface of the operation unit 300, the shape of a hand with the nail portions missing is detected in step S606. If a shape pattern as described above is prepared, such partially missing portions of the detected body part shape can be complemented with reference to the shape pattern.
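- one possible (hypothetical) realization of this pattern matching is a contour similarity test; the function below and its acceptance threshold are assumptions for illustration, not the method prescribed by the patent:

```python
# Compare the extracted contour against a hand-shape contour held in advance
# and accept it only if the two shapes are sufficiently similar.
import cv2

def is_body_shape(extracted_contour, hand_pattern_contour, max_distance=0.3):
    """cv2.matchShapes returns a small value for similar contour shapes;
    max_distance is an assumed tuning parameter."""
    d = cv2.matchShapes(extracted_contour, hand_pattern_contour,
                        cv2.CONTOURS_MATCH_I1, 0.0)
    return d < max_distance
```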
- the body shape extraction unit 600 extracts the contour 609 as shown in Fig. 25 based on the body region 607 extracted in step S604 (S606).
- this contour extraction processing extracts, from the pixels included in the body region 607, those pixels that are adjacent to a pixel outside the body region 607. More specifically, among all the pixels included in the body region 607, those whose four neighboring pixels (above, below, left, and right; or eight neighboring pixels if the upper-right, upper-left, lower-right, and lower-left pixels of the target pixel are also included) contain a pixel outside the body region 607 are extracted. Note that smoothing processing may be applied to the extracted contour 609 as necessary; this eliminates the aliasing that occurs in the contour 609.
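- the 4-neighborhood contour extraction described above can be sketched as follows (illustrative only; the array-based formulation is an assumption, not the patent's code):

```python
# A body-region pixel is a contour pixel if any of its 4 neighbours
# (up/down/left/right) lies outside the body region.
import numpy as np

def extract_contour(body_mask):
    """body_mask: 2-D boolean array, True inside the body region 607."""
    padded = np.pad(body_mask, 1, mode="constant", constant_values=False)
    up = padded[:-2, 1:-1]
    down = padded[2:, 1:-1]
    left = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    has_outside_neighbour = ~(up & down & left & right)
    return body_mask & has_outside_neighbour
```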
- the body shape extraction unit 600 performs shape correction processing (S608).
- in the shape correction processing, camera lens distortion correction, viewpoint conversion, and other correction processing are applied, among the pixels of the image data output from the body shape input unit 100, to the contour 609 extracted in step S606 and the pixels inside the contour 609.
- the camera lens distortion correction processing is performed using lens distortion data, particularly when a wide-angle lens is used in the body shape input unit 100. Therefore, when a lens with a small distortion (for example, a standard lens or a telephoto lens) is used in the body shape input unit 100, the distortion correction of the camera lens is unnecessary.
- the viewpoint conversion processing is performed when the body shape input unit 100 cannot be installed at a desired viewpoint (in this example, a viewpoint from directly above the operation unit 300) due to restrictions on the installation position. To do.
- the viewpoint conversion technique itself is well known. An example of the viewpoint conversion processing will be described with reference to FIGS. 26 and 27.
- FIG. 26 is an example of viewpoint conversion processing applied when the body shape input unit 100 is installed at a position such as position A in FIG.
- in the viewpoint conversion processing of FIG. 26, the image is stretched so that the four corners (a, b, c, d) of the operation unit 300 in the image data 601 output from the body shape input unit 100 correspond to the four corners of the screen of the display unit 200.
- when the relative positions of the body shape input unit 100 and the operation unit 300 are fixed, the positions of the four corners (a, b, c, d) of the operation unit 300 in the image data 601 are also fixed, so the viewpoint conversion processing shown in FIG. 26 can be performed without detecting the corner positions each time.
- FIG. 27 is an example of viewpoint conversion processing applied when the body shape input unit 100 is installed at the position shown in FIG. 4.
- in this case, since the operation unit 300 rotates together with the steering wheel, the operation unit 300 in the image data 601 output from the body shape input unit 100 also rotates according to the steering rotation. In this case as well, the four corners (a, b, c, d) of the operation unit 300 in the image data 601 are detected, and the image is stretched so that these four corners correspond to the four corners of the screen of the display unit 200.
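- a sketch of this viewpoint conversion as a perspective warp from the four detected corners to the screen corners (one possible realization; the OpenCV calls and the corner ordering are assumptions):

```python
# Stretch the quadrilateral (a, b, c, d) of the operation surface in the
# camera image to fill the display screen.
import cv2
import numpy as np

def viewpoint_convert(image, corners_abcd, screen_w, screen_h):
    """corners_abcd: 4x2 array of the operation-surface corners in the image,
    ordered to match the screen corners listed below."""
    src = np.float32(corners_abcd)
    dst = np.float32([[0, 0], [screen_w, 0],
                      [screen_w, screen_h], [0, screen_h]])
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, m, (screen_w, screen_h))
```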
- in step S608, shape correction processing other than camera lens distortion correction and viewpoint conversion is also performed as necessary; for example, processing that converts the resolution and aspect ratio of the image data output from the body shape input unit 100 into the screen resolution and aspect ratio of the display unit 200 is performed as appropriate.
- note that in this embodiment it is assumed that the body shape input unit 100 uses a standard lens that does not require lens distortion correction, that the body shape input unit 100 is installed on a normal line passing through the center of the operation surface of the operation unit 300 with its optical axis parallel to that normal line, and that the angle of view of the body shape input unit 100 is set so that the entire operation surface of the operation unit 300 exactly fills the image. In this case, the shape correction processing in step S608 is unnecessary.
- as a result of the shape correction processing in step S608, a hand image 611 as shown in FIG. 28 is obtained.
- next, the body shape extraction unit 600 performs body image creation processing (S610).
- this step creates the body image to be displayed on the display unit 200.
- if the body image created in step S608 (for example, the hand image 611 in FIG. 28) is displayed on the display unit 200 as it is, no particular processing needs to be performed in this step.
- if the body image created in step S608 is dark, its brightness may be increased; smoothing may be applied to make the fine lines of the hand inconspicuous; the color tone of the body image may be corrected so that the hand is displayed more clearly; or a texture image prepared in advance may be pasted into the area inside the contour. Examples of textures include animal skin, wood, concrete, metal, and artificial patterns.
- the operation of the body shape extraction unit 600 when a sensor other than a camera is used as the body shape input unit 100 will be described.
- first, the operation of the body shape extraction unit 600 when a dot sensor in which sensors such as photodiodes are arranged two-dimensionally, as shown in FIG. 16, is used as the body shape input unit 100 will be described.
- Body shape extraction unit 600 first takes in the data output from body shape input unit 100 and creates dot image data as shown in FIG.
- the value of each dot in the dot image data is either 0 or 1.
- a dot having a value of 1 is indicated by a black circle.
- the body shape extraction unit 600 performs contour dot extraction processing on the dot image data.
- Figure 30 shows the result of contour dot extraction processing for the dot image data in Fig. 29.
- the body shape extraction unit 600 creates a contour image as shown in FIG. 31 by approximating the dots selected by the contour dot extraction process with a spline curve or the like.
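- the spline approximation of the selected contour dots could be sketched as follows (illustrative only; the use of SciPy and the smoothing parameter are assumptions, not part of the patent):

```python
# Turn the selected contour dots into a smooth closed contour with a
# periodic smoothing spline (requires more than a handful of dots).
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_contour(contour_dots, n_points=200):
    """contour_dots: list of (x, y) dots selected by the contour dot
    extraction; returns n_points samples of a closed smoothing spline."""
    x, y = np.asarray(contour_dots, dtype=float).T
    tck, _ = splprep([x, y], s=1.0, per=True)   # periodic -> closed curve
    xs, ys = splev(np.linspace(0.0, 1.0, n_points), tck)
    return np.column_stack([xs, ys])
```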
- the body shape extraction unit 600 executes a process of converting the resolution and aspect ratio of the created contour image into the resolution and aspect ratio of the screen of the display unit 200, as necessary. If necessary, smoothing processing is performed on the contour image, or a texture image prepared in advance is pasted in the area inside the contour. Examples of textures include human hands, animal skins, wood, concrete, metal, and artificial patterns.
- next, the operation of the body shape extraction unit 600 when a line sensor as shown in FIG. 21 is used as the body shape input unit 100 will be described.
- Body shape extraction unit 600 first takes in the data output from body shape input unit 100 and creates line image data as shown in FIG.
- next, the body shape extraction unit 600 performs contour extraction processing on the line image data. Specifically, a contour image as shown in FIG. 33 is created by approximating the end points of each line included in the line image data with a spline curve or the like.
- then, the body shape extraction unit 600 executes processing for converting the resolution and aspect ratio of the created contour image into the resolution and aspect ratio of the screen of the display unit 200 as necessary; in addition, smoothing processing may be applied to the contour image, or a texture image prepared in advance may be pasted into the area inside the contour. Examples of textures include human hands, animal skin, wood, concrete, metal, and artificial patterns.
- the display unit 200 displays an image synthesized by the image synthesis unit 800 on the screen.
- as the display unit 200, a liquid crystal display, a CRT (Cathode Ray Tube) display, an EL (Electroluminescence) display, or the like can be used.
- the display unit 200 may also be a display, such as a HUD (Head-Up Display) or an HMD (Head-Mounted Display), that forms the image synthesized by the image synthesis unit 800 in the air using a half mirror, mirror, lens, or the like. In this case, an image can be displayed even at a position where the display unit 200 is difficult to install, such as above the front hood of the vehicle.
- a projector may be used as the display unit 200.
- the image synthesized by the image synthesis unit 800 is projected onto the screen by the projector, a large screen display can be realized at low cost.
- the configuration of the display unit 200 may be appropriately selected according to the installation location, the display purpose, and the like.
- the operation unit 300 detects an input operation by the user and outputs a signal corresponding to the input operation.
- As the operation unit 300, a coordinate input device such as a touch panel or a touch pad, or switches (switches as hardware) such as button switches and jog dials, can be used.
- the operation unit 300 may include both a coordinate input device and a switch.
- Hereinafter, an example in which a coordinate input device is used as the operation unit 300 and an example in which switches are used will be described.
- First, assume that the operation unit 300 is a coordinate input device such as a touch pad or a touch panel, which outputs, at a predetermined timing, coordinate data indicating the position touched (or pressed) by the user.
- The user's input operation detected by the operation unit 300 is typically an operation of touching or pressing the operation surface of the operation unit 300, but this depends on the device type and settings. For example, a capacitive touch pad generally detects whether the user has touched the operation surface. A pressure-sensitive touch pad, on the other hand, detects not whether the operation surface has been touched but whether the user has pressed it with a pressure above a certain value. If the pressure threshold is raised, an input operation can be detected only when the user presses the operation surface firmly.
- In addition to contact and pressing, various other input operations such as double clicks and drags can be considered.
- These input operations do not necessarily have to be detected by the operation unit 300 itself. For example, the control unit 500 may detect an input operation such as a double click or a drag based on the coordinate data output by the operation unit 300.
- Further, for example, a rotary switch for volume adjustment may be displayed on the display unit 200 in a simulated manner as a GUI component, and when a drag operation drawing a circle on the operation surface of the operation unit 300 is detected, the volume may be changed according to that drag operation.
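As an illustration of how the control unit 500 might recognize such a circular drag from the coordinate data stream, the following sketch accumulates the angle swept around an assumed switch center and converts it into volume steps. The class name, center point, detent angle, and step size are hypothetical and not taken from the patent.

```python
# Recognize a circular drag and map it to volume detents (illustrative only).
import math

class CircularDragVolume:
    def __init__(self, center, degrees_per_step=30, step=2):
        self.cx, self.cy = center          # assumed centre of the simulated rotary switch
        self.deg_per_step = degrees_per_step
        self.step = step                   # volume change per detent
        self.prev_angle = None
        self.accum = 0.0

    def on_drag_point(self, x, y, volume):
        """Feed each coordinate sample from the operation unit 300; returns new volume."""
        angle = math.degrees(math.atan2(y - self.cy, x - self.cx))
        if self.prev_angle is not None:
            delta = (angle - self.prev_angle + 180) % 360 - 180  # shortest signed turn
            self.accum += delta
            # Every 'deg_per_step' degrees of rotation counts as one detent.
            while abs(self.accum) >= self.deg_per_step:
                volume += self.step if self.accum > 0 else -self.step
                self.accum -= math.copysign(self.deg_per_step, self.accum)
        self.prev_angle = angle
        return max(0, min(100, volume))
```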
- In this way, the shape, position, function, and number of the GUI parts displayed on the display unit 200 can be changed arbitrarily whenever necessary. Further, for example, an arbitrary point on a map displayed on the display unit 200 can be specified easily. Therefore, a highly general-purpose input device can be realized.
- Note that the operation unit 300 does not necessarily need the coordinate-data output function of a general touch pad or touch panel; it may output only a signal indicating whether the user has touched (or pressed) the operation surface. In that case, since the position touched (or pressed) by the user cannot be detected from the output of the operation unit 300, the position must be detected based on the data output from the body shape input unit 100. For example, if there is a constraint that the user selects a GUI part with the index finger raised as shown in FIG. 1, and the position of the tip of the index finger is detected from the hand image obtained by the body shape extraction unit 600, it is possible to determine which GUI part the user has selected.
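One simple way to realize this (not necessarily the patent's method) is to take the contour point farthest from the hand centroid as the fingertip of the raised index finger and then hit-test it against the GUI part rectangles. The following sketch assumes the contour and GUI layout are already available as NumPy data; all names are illustrative.

```python
# Locate the raised fingertip on the hand contour and map it to a GUI part.
import numpy as np

def find_fingertip(contour_pts):
    """contour_pts: (N, 2) array in display coordinates.
    Returns the contour point farthest from the hand centroid as the fingertip."""
    centroid = contour_pts.mean(axis=0)
    dists = np.linalg.norm(contour_pts - centroid, axis=1)
    return tuple(contour_pts[int(np.argmax(dists))])

def hit_test(fingertip, gui_parts):
    """gui_parts: list of (label, (x, y, w, h)) rectangles; returns the selected label."""
    fx, fy = fingertip
    for label, (x, y, w, h) in gui_parts:
        if x <= fx <= x + w and y <= fy <= y + h:
            return label
    return None
```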
- If the operation unit 300 has a function of outputting coordinate data, then, for example, as shown in FIG. 35, a plurality of fingers can be positioned on a plurality of GUI parts in advance, and the desired GUI part can be selected from among them as appropriate without moving the hand.
- the operation unit 300 includes a base unit 301 and a plurality of switches 310.
- Examples of the switches 310 include a button switch 311, a toggle switch, a rotary switch 312, a jog dial 313, and a joystick 314.
- the display unit 200 displays GUI components corresponding to the position of the switch 310 installed in the operation unit 300.
- the shape of these GUI parts is preferably almost the same as the shape of the switch 310.
- the function of the GUI component displayed on the display unit 200 can be arbitrarily changed whenever necessary.
- For example, the rotary switch 312 can be used as a volume control in the audio mode and as a control for changing the map display magnification in the navigation mode.
- In this way, the user can perform an input operation while sensing the behavior of the switch by touch, which allows more intuitive and reliable input than input that relies only on the visual sense, as with a touch pad.
- the operation unit 300 may include both the coordinate input device and the switch.
- For example, a coordinate input device may be further arranged in the center of the base unit 301 in FIG.
- For frequently used GUI parts, corresponding switches are provided in the operation unit 300 so that those parts can be selected with the switches, while the other GUI parts are selected with the coordinate input device. This makes the placement of the GUI parts flexible and enables effective use of the screen.
- Next, the calculation unit 400 will be described. Since the body shape extraction unit 600 included in the calculation unit 400 has already been described, its description is omitted here.
- The processing of the calculation unit 400 is roughly divided into processing executed to display an image of the user's body part on the display unit 200 and processing executed when the user performs an input operation.
- When the body shape extraction unit 600 detects the body shape in step S501, it sends a message to the control unit 500 indicating that the body shape has been detected.
- the body shape extraction unit 600 may detect characteristics related to the body shape (eg, hand size, left hand or right hand) and send the characteristics to the control unit 500 together with a message.
- control unit 500 checks the operation mode at that time.
- In step S503, the control unit 500 instructs the display information creation unit 700 to change the display information to be displayed on the display unit 200.
- the display information creation unit 700 changes the display information according to this instruction.
- FIG. 38A and FIG. 38B show a first example in which the display information is changed between the case where the body shape extraction unit 600 detects the body shape (that is, the case where the user's body part exists on the operation unit 300) and the case where it does not (that is, the case where the user's body part does not exist on the operation unit 300).
- FIG. 38A is a screen display example when the body shape extraction unit 600 does not detect the body shape.
- Here, the GUI parts (here, buttons) are displayed simply, showing only their outlines.
- FIG. 38B is a screen display example corresponding to FIG. 38A when the body shape extraction unit 600 detects the body shape.
- Since the buttons are displayed three-dimensionally, the user can recognize at a glance which areas can be selected.
- FIG. 39A and FIG. 39B show another example in which the display information is changed between the case where the body shape extraction unit 600 detects the body shape and the case where it does not.
- FIG. 39A is another screen display example when the body shape extraction unit 600 does not detect the body shape.
- Here, the GUI parts (buttons in this case) are displayed relatively small.
- FIG. 39B is a screen display example corresponding to FIG. 39A when the body shape extraction unit 600 detects the body shape.
- Here, the buttons are displayed larger than in FIG. 39A.
- the user can easily select the button.
- By changing the display information in this way, information other than the buttons is easier to see while the user is not performing an input operation, and when the user is about to perform an input operation the buttons are enlarged, so that operability can be improved.
- FIG. 40A and FIG. 40B are examples in which the display information is changed between when the body shape extraction unit 600 detects the right hand and when the left hand is detected.
- In this case, the body shape extraction unit 600 determines whether the detected body shape is that of a right hand or a left hand and notifies the control unit 500 of the determination result as a characteristic of the body shape. Based on this determination result, the control unit 500 instructs the display information creation unit 700 to change the display information. Whether the detected body shape is a right hand or a left hand can be determined from data such as those shown in FIG. 24 and FIG. 25 using various existing algorithms.
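The patent leaves the choice of algorithm open ("various existing algorithms"). Purely as an illustration, the sketch below guesses handedness from which side the thumb protrudes near the wrist in a binary hand mask; the orientation assumptions and percentile cut-off are arbitrary and would need tuning for a real camera setup.

```python
# Naive handedness heuristic on a binary hand mask (hand = 1), wrist at the bottom.
import numpy as np

def is_right_hand(hand_mask):
    """Guess handedness from where the thumb sticks out near the wrist.

    For a hand resting on the operation surface, viewed from above the back of the
    hand with the fingers pointing up in the image, the thumb of a right hand
    protrudes to the left. We compare how far the silhouette extends left vs. right
    of the hand centroid in the lower (wrist-side) part of the region.
    This is only an illustrative rule, not the patent's algorithm.
    """
    ys, xs = np.nonzero(hand_mask)
    cx = xs.mean()
    lower = ys > np.percentile(ys, 60)      # lower part of the hand (near the wrist)
    left_extent = cx - xs[lower].min()       # protrusion toward the left
    right_extent = xs[lower].max() - cx      # protrusion toward the right
    return left_extent > right_extent        # True -> treat as a right hand
```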
- FIG. 40A is a screen display example when the body shape extraction unit 600 detects the right hand
- FIG. 40B is a screen display example when the body shape extraction unit 600 detects the left hand.
- Here, it is assumed that there are users on both the left and right sides of the operation unit 300, that the user on the right side of the operation unit 300 operates it with the left hand, and that the user on the left side operates it with the right hand. That is, when the body shape extraction unit 600 detects a right hand, the user operating the operation unit 300 is considered to be on the left side of the operation unit 300.
- Therefore, when the right hand is detected, the GUI parts (here, buttons) are displayed on the left side of the screen as shown in FIG. 40A, and when the left hand is detected, the buttons are displayed on the right side of the screen as shown in FIG. 40B.
- In the above example, the placement of the buttons is changed depending on whether the body shape extraction unit 600 detects a right hand or a left hand, but the function, shape, size, number, and the like of the buttons may be changed instead.
- For example, suppose the operation unit 300 is installed between the driver's seat and the front passenger seat of a right-hand-drive car. If the right hand (that is, the hand of the passenger in the front passenger seat) is detected while the vehicle is moving, both buttons that require relatively complicated input operations, such as character input, and buttons that require relatively simple input operations, such as screen scrolling, are displayed; if the left hand (that is, the driver's hand) is detected while the vehicle is moving, only the buttons that require relatively simple input operations may be displayed for safety.
- Conversely, in a left-hand-drive car, buttons requiring relatively complicated input operations and buttons requiring relatively simple input operations are both displayed when the passenger's hand is detected, but if the right hand (that is, the driver's hand) is detected while the vehicle is moving, only the buttons that require relatively simple input operations may be displayed.
- FIGS. 41A and 41B show an example in which the display information is changed between when the body shape extraction unit 600 detects a relatively large hand (that is, an adult's hand) and when it detects a relatively small hand (that is, a child's hand).
- the body shape extraction unit 600 determines whether the detected body shape is a relatively large hand or a relatively small hand, and notifies the control unit 500 of the determination result as characteristics relating to the body shape. Based on the determination result, the control unit 500 instructs the display information creation unit 700 to change the display information.
- Whether the detected body shape is a relatively large hand or a relatively small hand can be determined, for example, by comparing the area and width of the body region 607 in FIG. 24 with a predetermined threshold.
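A direct sketch of that size check follows. The threshold values and the assumption that the body region is available as a binary mask are illustrative; the patent does not specify concrete numbers.

```python
# Classify the extracted body region as a large or small hand by area and width.
import numpy as np

AREA_THRESHOLD_PX = 18000   # assumed threshold for an "adult-sized" area, in pixels
WIDTH_THRESHOLD_PX = 160    # assumed threshold for an "adult-sized" width, in pixels

def is_large_hand(body_region_mask):
    """body_region_mask: binary NumPy array where the body region (607) is 1."""
    ys, xs = np.nonzero(body_region_mask)
    area = xs.size                          # number of pixels in the region
    width = xs.max() - xs.min() + 1         # horizontal extent of the region
    return area >= AREA_THRESHOLD_PX and width >= WIDTH_THRESHOLD_PX
```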
- FIG. 41A is a screen display example when the body shape extraction unit 600 detects a relatively large hand. When a relatively large hand is detected, it is considered that an adult is trying to operate the device, so the input operation is not particularly restricted.
- FIG. 41B is a screen display example when the body shape extraction unit 600 detects a relatively small hand. When a relatively small hand is detected, it is considered that a child is trying to operate the device, so some or all of the buttons are disabled to restrict input operations, and the color of the disabled buttons is changed or a mark is added to them so that the user can see that they are disabled.
- In the above example, the color of the disabled buttons is changed or a mark is added to them, but the present invention is not limited to this, and various variations are possible. For example, difficult words included in the display information may be changed to easier words, or the screen configuration and coloring may be changed to a child-oriented style.
- The display information creation unit 700 may generate display information only when the shape of a body part is detected by the body shape extraction unit 600.
- the processing related to image display is interrupted when the user does not perform an input operation, so that power consumption can be reduced.
- Similarly, the display information creation unit 700 may generate display information only when the body shape extraction unit 600 detects a right hand (or a left hand).
- the display information creation unit 700 may generate display information only when an adult hand (or a child's hand) is detected by the body shape extraction unit 600.
- Note that the shape detected by the body shape extraction unit 600 is not necessarily the shape of a body part. If the detected shape is not that of a body part, the display information creation unit 700 may not generate GUI parts.
- In step S504, the control unit 500 instructs the image composition unit 800 to combine the body image created by the body shape extraction unit 600 with the display information created (changed) by the display information creation unit 700.
- the image composition unit 800 synthesizes the body image and the display information according to this instruction. An example of an image synthesized by the image synthesis unit 800 will be described below.
- FIG. 42A is a screen example when the body image created by the body shape extraction unit 600 (eg, FIG. 28) is overwritten on the display information created by the display information creation unit 700.
- FIG. 42B is a screen example when the contour image created by the body shape extraction unit 600 (e.g., FIG. 25, FIG. 31, FIG. 33) is overwritten as the body image on the display information created by the display information creation unit 700. With such image composition, the shape and position of the body part are shown while the display information inside the body image remains visible, making it easy to check the display information during the operation.
- FIG. 42C is a screen example when the body image created by the body shape extraction unit 600 is processed so that its contour is opaque and its interior is translucent, and is then overwritten on the display information created by the display information creation unit 700. Such image composition enables an intuitive input operation and, at the same time, makes it easy to check the display information during the operation.
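The composition of FIG. 42C (translucent interior, opaque contour) can be sketched with simple alpha blending, assuming the display information, the hand image, and a binary hand mask are available as equally sized images; the alpha value and the OpenCV-based implementation are illustrative assumptions.

```python
# Translucent hand interior with an opaque contour drawn on top of the display info.
import numpy as np
import cv2

def compose_translucent_hand(display_img, hand_img, hand_mask, alpha=0.5):
    """display_img, hand_img: BGR uint8 images of equal size; hand_mask: 0/1 array."""
    out = display_img.copy()
    mask = hand_mask.astype(bool)
    # Translucent interior: blend the hand pixels with the display information.
    blended = cv2.addWeighted(hand_img, alpha, display_img, 1.0 - alpha, 0)
    out[mask] = blended[mask]
    # Opaque contour: redraw the hand outline at full opacity.
    contours, _ = cv2.findContours(hand_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(out, contours, -1, color=(255, 255, 255), thickness=2)
    return out
```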
- FIG. 42D is a screen example when the contour image created by the body shape extraction unit 600 is processed so that the fingertip portion is highlighted, and is then overwritten on the display information created by the display information creation unit 700.
- This kind of image composition allows the user to quickly confirm the position he / she is trying to press, and the display information inside the body image is also displayed, making it easier to confirm the display information during operation.
- As a method of detecting the fingertip portion, for example, a method of identifying the fingertip from the shape of the contour using pattern matching, or a method of extracting the nail portion from image data such as that shown in FIG. 23, is conceivable.
- Examples of the highlighting include displaying a mark, changing the color, changing the transparency, gradually deepening the color of the contour toward the fingertip, and gradually reducing the transparency of the contour toward the fingertip.
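As an illustration of the pattern-matching approach mentioned above for locating the fingertip, the sketch below slides an assumed fingertip template over the contour image and takes the best match. The template itself and the score threshold are hypothetical; the patent does not prescribe a specific matcher.

```python
# Template-matching fingertip detection on the contour image (illustrative sketch).
import cv2

def detect_fingertip_by_template(contour_img, fingertip_template, threshold=0.6):
    """contour_img, fingertip_template: single-channel uint8 images.
    Returns the (x, y) centre of the best match, or None if below the threshold."""
    result = cv2.matchTemplate(contour_img, fingertip_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None
    th, tw = fingertip_template.shape[:2]
    return (max_loc[0] + tw // 2, max_loc[1] + th // 2)
```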
- FIG. 42E is a screen example when a translucent or opaque shadow is added to the body image created by the body shape extraction unit 600, which is then overwritten on the display information created by the display information creation unit 700.
- Such image composition can give the user the feeling of actually touching the screen.
- In the examples so far, the body image is processed as necessary and then overwritten on the display information created by the display information creation unit 700; however, processing other than processing of the hand image is also possible. Hereinafter, such examples will be described.
- FIG. 42F is an example in which, when the body image created by the body shape extraction unit 600 is overwritten on the display information, auxiliary information (such as the label or an auxiliary explanation of a GUI part) is displayed in a pop-up near each GUI part that is partially or entirely hidden by the body image.
- To realize this display, the image composition unit 800 first determines, using a known algorithm, whether each GUI part included in the display information created by the display information creation unit 700 overlaps the body image. If a GUI part overlaps the body image, an area that is away from the position of that GUI part in a predetermined direction (such as rightward or upward) and that does not overlap the body image is found, and the auxiliary information for the GUI part is displayed in that area.
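The placement rule just described can be sketched as follows: test each GUI part rectangle for overlap with the body image, then search in a fixed direction for a free spot for the pop-up. Rectangle formats, the search step, and the search order (rightward, then upward) are illustrative assumptions.

```python
# Find a pop-up position near a GUI part that does not overlap the body image.
import numpy as np

def part_overlaps_hand(part_rect, hand_mask):
    x, y, w, h = part_rect
    return bool(hand_mask[y:y + h, x:x + w].any())

def popup_position(part_rect, label_size, hand_mask, step=10):
    """part_rect: (x, y, w, h) of the GUI part; label_size: (w, h) of the pop-up.
    hand_mask: binary array (1 = body image). Returns (x, y) for the pop-up or None."""
    H, W = hand_mask.shape
    lw, lh = label_size

    def free(x, y):
        if x < 0 or y < 0 or x + lw > W or y + lh > H:
            return False
        return not hand_mask[y:y + lh, x:x + lw].any()   # no overlap with the hand

    x0, y0, w, h = part_rect
    # Try positions progressively further to the right of the part, then above it.
    for dx in range(0, W, step):
        if free(x0 + w + dx, y0):
            return (x0 + w + dx, y0)
    for dy in range(0, H, step):
        if free(x0, y0 - lh - dy):
            return (x0, y0 - lh - dy)
    return None
```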
- FIG. 42G is an example in which, when the body image created by the body shape extraction unit 600 is overwritten on the display information, the label of a GUI part hidden by the body image is overwritten on top of the body image. In the example of FIG. 42G, only the label of the hidden GUI part is overwritten on the body image, but the shape of the hidden GUI part may be overwritten together with the label. With such image composition, the user can identify the GUI parts hidden by the body image without moving the hand, which improves operability.
- FIG. 42H is an example in which a GUI component that overlaps the fingertip portion of the body image is highlighted when the body image created by the body shape extraction unit 600 is overwritten on the display information.
- the user can easily confirm which GUI part the fingertip of the body image is on.
- As a method of detecting the position of the fingertip portion, for example, a method of identifying the fingertip from the shape of the contour using pattern matching, or a method of extracting the nail portion from image data such as that shown in FIG. 23, is conceivable.
- The highlighting can be performed by various methods, for example, changing the color, changing the transparency, changing the shape, changing the line type or line thickness, changing the text format, changing the icon, or changing the color or transparency continuously (gradation).
- FIG. 42I is an example in which the screen is divided into a map area and a TV area, and the user's hand image is composited only in the map area and is not composited in the TV area. This is because, for example, there are no buttons to be operated in the TV area, so there is no need to composite the hand image there. In this way, the visibility of the TV area is improved, and the user is informed that there is nothing to operate in the TV area.
- FIG. 42J is an example in which, as in FIG. 42I, the screen is divided into a map area and a TV area, and when the user's hand lies across both areas, the hand image is composited in the map area while only its contour is composited in the TV area. This is because, in the example of FIG. 42I, the part of the hand over the TV area is completely hidden and the hand appears cut off, which the user may find unnatural; by compositing only the contour in the TV area, the hand is displayed in a more natural shape while the reduction in the visibility of the TV area is kept to a minimum.
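The per-area composition of FIGS. 42I and 42J can be sketched as a masked blend: the hand is composited only inside the map area, and at most its contour is drawn in the TV area. The rectangle-based area definition, the alpha value, and the OpenCV calls are assumptions made for illustration.

```python
# Composite the hand per screen area: blended hand in the map area, contour only
# in the TV area (omit the contour step to hide the hand entirely, as in FIG. 42I).
import numpy as np
import cv2

def compose_by_area(display_img, hand_img, hand_mask, map_rect, alpha=0.5):
    """map_rect: (x, y, w, h) of the map area; the rest is treated as the TV area."""
    out = display_img.copy()
    x, y, w, h = map_rect
    area_mask = np.zeros(hand_mask.shape, bool)
    area_mask[y:y + h, x:x + w] = True

    # Map area: translucent hand image, as in the earlier composition examples.
    inside = hand_mask.astype(bool) & area_mask
    blended = cv2.addWeighted(hand_img, alpha, display_img, 1.0 - alpha, 0)
    out[inside] = blended[inside]

    # TV area: draw only the hand contour so the broadcast stays visible.
    contours, _ = cv2.findContours(hand_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour_layer = np.zeros_like(out)
    cv2.drawContours(contour_layer, contours, -1, (255, 255, 255), 2)
    outside = contour_layer.any(axis=2) & ~area_mask
    out[outside] = contour_layer[outside]
    return out
```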
- In step S511, the operation unit 300 sends a message to the control unit 500 when it detects a contact operation or a pressing operation by the user.
- the operation unit 300 may simply output only the coordinate data to the control unit 500, and the control unit 500 may detect a contact operation or the like based on the coordinate data.
- control unit 500 instructs display information creation unit 700 to change the display information.
- Display information creation unit 700 changes the display information in accordance with instructions from control unit 500. An example of changing the display information here will be described with reference to FIGS. 44A and 44B.
- FIG. 44A is an example in which the GUI part (here, a button) corresponding to the point on the operation unit 300 touched (or pressed) by the user (that is, the GUI part selected by the user) is highlighted.
- In FIG. 44A, a hand image is shown for convenience, but the display information created by the display information creation unit 700 does not actually include the hand image.
- FIG. 44B is an example in which a point on the screen corresponding to a point on the operation unit 300 touched (or pressed) by the user is highlighted.
- Here, a circular mark is overwritten at the point on the screen corresponding to the point on the operation unit 300 touched (or pressed) by the user, as if a fingerprint had been left there.
- This circular mark is displayed until a certain time has elapsed after the mark is displayed or until the user touches (or presses) the operation unit 300 again.
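A small sketch of this feedback behavior follows: the mark is stored with a timestamp when a touch is reported and drawn onto the composed frame until the timeout expires or a new touch replaces it. The timeout, radius, and color are illustrative values.

```python
# Show a circular mark at the last touch point until a timeout or the next touch.
import time
import cv2

class TouchMark:
    def __init__(self, timeout_s=2.0, radius=12):
        self.timeout_s = timeout_s
        self.radius = radius
        self.point = None
        self.stamp = 0.0

    def on_touch(self, x, y):
        """Called with the coordinates reported by the operation unit 300."""
        self.point = (int(x), int(y))      # a new touch replaces the previous mark
        self.stamp = time.monotonic()

    def draw(self, frame):
        """Overlay the mark on the composed frame while it is still valid."""
        if self.point and time.monotonic() - self.stamp < self.timeout_s:
            cv2.circle(frame, self.point, self.radius, (0, 255, 255), thickness=2)
        return frame
```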
- This allows the user to easily confirm whether the point on the screen that he or she intended to specify has actually been specified. In particular, when the intended point and the actually specified point are misaligned, the misalignment can be confirmed.
- In step S513, the control unit 500 instructs the image composition unit 800 to combine the body image created by the body shape extraction unit 600 with the display information created by the display information creation unit 700.
- the image composition unit 800 synthesizes the body image and the display information in accordance with this instruction.
- As described above, according to the present embodiment, an intuitive input operation using a GUI can be performed without the user directly touching the screen and without the user looking at the hand during the operation.
- The input device of the present invention enables an intuitive input operation like that of a touch panel display without the user directly touching the screen. Since the input operation is performed at a position away from the display, it is also suitable when a far-focus display is used as the display means. Further, since it is not necessary to look at the hand during the input operation, it is also suitable as an input device for a car navigation device.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Position Input By Displaying (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-228619 | 2004-08-04 | ||
JP2004228619 | 2004-08-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006013783A1 true WO2006013783A1 (ja) | 2006-02-09 |
Family
ID=35787071
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/013866 WO2006013783A1 (ja) | 2004-08-04 | 2005-07-28 | 入力装置 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2006013783A1 (ja) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0573196A (ja) * | 1991-04-12 | 1993-03-26 | Alpine Electron Inc | データ入力装置 |
JPH10269012A (ja) * | 1997-03-28 | 1998-10-09 | Yazaki Corp | タッチパネルコントローラ及びこれを用いた情報表示装置 |
JPH1124839A (ja) * | 1997-07-07 | 1999-01-29 | Sony Corp | 情報入力装置 |
JP2003271294A (ja) * | 2002-03-15 | 2003-09-26 | Canon Inc | データ入力装置、データ入力方法、及びプログラム |
JP2004120374A (ja) * | 2002-09-26 | 2004-04-15 | Fuji Photo Optical Co Ltd | 資料提示装置 |
JP2004196260A (ja) * | 2002-12-20 | 2004-07-15 | Nissan Motor Co Ltd | 車両用表示装置 |
JP2003323553A (ja) * | 2003-03-05 | 2003-11-14 | Hitachi Ltd | 帳票処理方法およびシステム |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7970173B2 (en) | 2006-03-31 | 2011-06-28 | Denso Corporation | Object-detecting device and method of extracting operation object |
JP2007272596A (ja) * | 2006-03-31 | 2007-10-18 | Denso Corp | 移動体用操作物体抽出装置 |
JP2007276615A (ja) * | 2006-04-06 | 2007-10-25 | Denso Corp | プロンプター方式操作装置 |
JP2007286696A (ja) * | 2006-04-12 | 2007-11-01 | Toyota Motor Corp | 入力装置 |
JP2008020406A (ja) * | 2006-07-14 | 2008-01-31 | Matsushita Electric Ind Co Ltd | ナビゲーション装置 |
JP2008123032A (ja) * | 2006-11-08 | 2008-05-29 | Toyota Motor Corp | 情報入力装置 |
JP2008221960A (ja) * | 2007-03-12 | 2008-09-25 | Equos Research Co Ltd | 車載装置 |
JP2008234594A (ja) * | 2007-03-23 | 2008-10-02 | Denso Corp | 操作入力装置 |
JP2009051253A (ja) * | 2007-08-23 | 2009-03-12 | Denso Corp | 車両用遠隔操作装置 |
JP2009083533A (ja) * | 2007-09-27 | 2009-04-23 | Denso Corp | 車両用プロンプター方式操作装置 |
JP2009143373A (ja) * | 2007-12-13 | 2009-07-02 | Denso Corp | 車両用操作入力装置 |
JP2010083206A (ja) * | 2008-09-29 | 2010-04-15 | Denso Corp | 車載用電子機器操作装置 |
JP2010160772A (ja) * | 2009-01-06 | 2010-07-22 | Pixart Imaging Inc | バーチャル入力装置を有する電子装置 |
US8466871B2 (en) | 2009-02-27 | 2013-06-18 | Hyundai Motor Japan R&D Center, Inc. | Input apparatus for in-vehicle devices |
JP2010285148A (ja) * | 2009-06-15 | 2010-12-24 | Tesla Motors Inc | タッチスクリーンを介して車両機能を制御するインターフェース |
JPWO2014073403A1 (ja) * | 2012-11-08 | 2016-09-08 | アルプス電気株式会社 | 入力装置 |
JP2017210198A (ja) * | 2016-05-27 | 2017-11-30 | トヨタ紡織株式会社 | 車両用動き検出システム |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5201999B2 (ja) | 入力装置、及びその方法 | |
WO2006013783A1 (ja) | 入力装置 | |
JP4351599B2 (ja) | 入力装置 | |
TWI450132B (zh) | A portrait recognition device, an operation judgment method, and a computer program | |
JP4783456B2 (ja) | 映像再生装置及び映像再生方法 | |
JP4702959B2 (ja) | ユーザインタフェイスシステム | |
JP5412227B2 (ja) | 映像表示装置、および、その表示制御方法 | |
US20090002342A1 (en) | Information Processing Device | |
US10591988B2 (en) | Method for displaying user interface of head-mounted display device | |
JP5110438B2 (ja) | 入力装置 | |
JP5086560B2 (ja) | 入力装置 | |
US20100275159A1 (en) | Input device | |
US20130135199A1 (en) | System and method for user interaction with projected content | |
JP2010176565A (ja) | 操作装置 | |
JP2010033158A (ja) | 情報処理装置及び情報処理方法 | |
JP5921981B2 (ja) | 映像表示装置および映像表示方法 | |
US20040046747A1 (en) | Providing input signals | |
JP4945694B2 (ja) | 映像再生装置及び映像再生方法 | |
JP2009282637A (ja) | 表示方法および表示装置 | |
JP4757132B2 (ja) | データ入力装置 | |
JP2020071641A (ja) | 入力操作装置及びユーザインタフェースシステム | |
JP4973508B2 (ja) | 画像表示装置および画像表示方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
NENP | Non-entry into the national phase |
Ref country code: JP |
|
122 | Ep: pct application non-entry in european phase |