US20140195016A1 - Input device and method and program
- Publication number
- US20140195016A1 (U.S. application Ser. No. 14/151,667)
- Authority
- US
- United States
- Prior art keywords
- velocity
- line
- gain
- unit
- value
- Prior art date
- Legal status: Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0354—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/038—Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/038—Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
- G06F3/0383—Signal control means within the pointing device
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42204—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42204—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
- H04N21/42206—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
- H04N21/42222—Additional components integrated in the remote control device, e.g. timer, speaker, sensors for detecting position, direction or movement of the remote control, microphone or battery charging device
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q9/00—Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom
- H04Q9/04—Arrangements for synchronous operation
Description
- the present invention relates to an input device and method and a program, particularly to an input device and method and a program capable of improving the operational feeling in an input operation.
- in an EPG (Electronic Program Guide), respective programs are arranged and displayed in a matrix.
- a user operates a remote controller to move a pointer to an arbitrary position and select a given program.
- a remote controller supplied with a television receiver is capable of moving the pointer only in the vertical or horizontal directions. That is, the pointer is not directly moved from a given display position to an intended position located diagonally therefrom.
- a remote controller which detects an operation performed by a user in an arbitrary direction in a three-dimensional free space and moves the pointer in the direction of the operation.
- the operation by the user and the actual movement of the pointer do not match in timing.
- the user has an uncomfortable operational feeling in many cases.
- Japanese Patent No. 3217945 proposes not a remote controller enabling the operation in an arbitrary direction in a three-dimensional free space, but the improvement of the operational feeling of a controller provided at the center of a keyboard of a personal computer to move the pointer in accordance with the operation of a pressure-sensitive device called an isometric joystick.
- the invention of the above patent publication realizes a transfer function capable of providing the output as indicated by the broken line with respect to the input indicated by the solid line, to thereby solve the slow motion of the pointer at the start of the movement thereof, which is caused mainly by the dead zone of the above-described device (i.e., the dead zone in which low pressure is ignored), and the overshoot occurring when the movement is stopped.
- in an AV (Audio Visual) device in which the processing is performed by an MPU (Micro Processing Unit), a relatively long delay occurs between the reception of a movement signal and the movement of the pointer on the screen.
- a user has uncomfortable feeling about the delay which occurs not only at the start or stop of the movement of the pointer but also in the acceleration or deceleration phase during the movement.
- a time delay between the operation and the output of an operation signal corresponding to the operation additionally occurs.
- the hand operating the remote controller is freely movable. Therefore, the user more easily recognizes the delay in the movement of the pointer in response to the operation than in the case of using a joystick or the like. As a result, the uncomfortable feeling felt by the user is more noticeable.
- the present invention has been made in light of the above-described circumstances, and it is desirable to improve the operational feeling in an input operation. Particularly, in a system having a relatively long delay, it is desirable to improve the operational feeling in an input operation.
- an input device includes a detection unit, a first acquisition unit, a second acquisition unit, and a compensation unit.
- the detection unit is configured to detect an operation by a user for controlling an electronic device and output an operation signal corresponding to the operation.
- the first acquisition unit is configured to acquire the detected operation signal and a differential value of the operation signal.
- the second acquisition unit is configured to acquire a function defined by the differential value to compensate for a delay in response of the operation signal with respect to the operation by the user.
- the compensation unit is configured to compensate the operation signal with the acquired function.
- a detection unit detects an operation by a user for controlling an electronic device and outputs an operation signal corresponding to the operation
- a first acquisition unit acquires the detected operation signal and a differential value of the operation signal
- a second acquisition unit acquires a function defined by the differential value to compensate for a delay in response of the operation signal with respect to the operation by the user
- a compensation unit compensates the operation signal with the acquired function
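As a rough illustration of the flow described in the items above, the following Python sketch wires the four units together; all names and constants here are hypothetical, since the patent specifies no implementation.

```python
# Minimal sketch of the claimed pipeline (hypothetical names and constants).
# An angular-velocity sample stands in for the operation signal; its backward
# difference is the differential value; a gain derived from that differential
# compensates the signal before it is turned into a pointer movement.

def acquire_differential(omega_prev: float, omega: float, dt: float) -> float:
    """First acquisition unit: differential value of the operation signal."""
    return (omega - omega_prev) / dt

def acquire_gain(omega_dot: float, k: float = 0.01) -> float:
    """Second acquisition unit: a function defined by the differential value
    (assumed shape: unity plus a term proportional to the acceleration)."""
    return 1.0 + k * omega_dot

def compensate(omega: float, gain: float) -> float:
    """Compensation unit: compensate the operation signal with the function."""
    return omega * gain

# One sample step with arbitrary values: a 10 ms sample period and an angular
# velocity rising from 10 to 10.2 digit/s.
omega_dot = acquire_differential(10.0, 10.2, 0.01)
corrected = compensate(10.2, acquire_gain(omega_dot))
```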
- the operational feeling in an input operation can be improved.
- FIG. 1 is a diagram illustrating a characteristic of a transfer function of an existing input device
- FIG. 2 is a block diagram illustrating a configuration of an input system according to an embodiment of the present invention
- FIG. 3 is a perspective view illustrating a configuration of the exterior of an input device
- FIG. 4 is a diagram illustrating a configuration of the interior of the input device
- FIG. 5 is a perspective view illustrating a configuration of a sensor substrate
- FIG. 6 is a diagram illustrating a use state of the input device
- FIG. 7 is a block diagram illustrating an electrical configuration of the interior of the input device
- FIG. 8 is a block diagram illustrating a functional configuration of an MPU
- FIG. 9 is a flowchart explaining pointer display processing of the input device.
- FIG. 10 is a diagram explaining characteristics of a gain
- FIG. 11 is a diagram illustrating changes in velocity
- FIG. 12 is a diagram illustrating changes in displacement
- FIGS. 13A and 13B are diagrams illustrating changes in characteristics occurring when the input device is vibrated
- FIGS. 14A and 14B are diagrams illustrating changes in characteristics occurring when the input device is vibrated
- FIG. 15 is a diagram illustrating changes in velocity
- FIG. 16 is a diagram illustrating changes in displacement
- FIG. 17 is a diagram illustrating changes in displacement
- FIG. 18 is a diagram illustrating changes in velocity
- FIG. 19 is a diagram illustrating changes in displacement
- FIG. 20 is a diagram illustrating changes in velocity
- FIG. 21 is a diagram illustrating changes in displacement
- FIG. 22 is a diagram illustrating changes in velocity
- FIG. 23 is a flowchart explaining timer processing of a television receiver
- FIG. 24 is a flowchart explaining pointer display processing of the input device
- FIG. 25 is a flowchart explaining pointer display processing of the input device
- FIG. 26 is a diagram illustrating changes in velocity
- FIG. 27 is a diagram illustrating changes in displacement
- FIG. 28 is a diagram illustrating changes in displacement
- FIG. 29 is a diagram illustrating changes in displacement
- FIG. 30 is a diagram illustrating changes in velocity
- FIG. 31 is a diagram illustrating a configuration of another embodiment of the input device.
- FIG. 32 is a diagram illustrating a configuration of another embodiment of the input device.
- FIG. 33 is a diagram illustrating a configuration of another embodiment of the input device.
- FIG. 34 is a block diagram illustrating a configuration of an input system according to another embodiment of the present invention.
- FIG. 35 is a block diagram illustrating a functional configuration of an image processing unit
- FIG. 36 is a flowchart explaining pointer display processing of a television receiver
- FIG. 37 is a block diagram illustrating another functional configuration of the image processing unit.
- FIG. 38 is a flowchart explaining pointer display processing of the television receiver.
- FIGS. 39A to 39C are diagrams illustrating changes in displacement.
- FIG. 2 illustrates a configuration of an input system according to an embodiment of the present invention.
- This input system 1 is configured to include a television receiver 10 functioning as an electronic device and an input device 31 functioning as a pointing device or remote controller for remote-controlling the television receiver 10 .
- the television receiver 10 is configured to include an antenna 11 , a communication unit 12 , an MPU (Micro Processing Unit) 13 , a demodulation unit 14 , a video RAM (Random Access Memory) 15 , and an output unit 16 .
- the antenna 11 receives radio waves from the input device 31 .
- the communication unit 12 demodulates the radio waves received via the antenna 11 , and outputs the demodulated radio waves to the MPU 13 . Further, the communication unit 12 modulates a signal received from the MPU 13 , and transmits the modulated signal to the input device 31 via the antenna 11 .
- the MPU 13 controls the respective units on the basis of an instruction received from the input device 31 .
- the demodulation unit 14 demodulates a television broadcasting signal received via a not-illustrated antenna, and outputs a video signal and an audio signal to the video RAM 15 and the output unit 16 , respectively.
- the video RAM 15 combines an image based on the video signal supplied from the demodulation unit 14 with an image of on-screen data such as a pointer and an icon received from the MPU 13 , and outputs the combined image to an image display unit of the output unit 16 .
- the output unit 16 displays the image on the image display unit, and outputs sound from an audio output unit formed by a speaker and so forth.
- the image display unit of the output unit 16 displays an icon 21 and a pointer 22 .
- the input device 31 is operated by a user to change the display position of the icon 21 or the pointer 22 and to remote-control the television receiver 10 .
- FIG. 3 illustrates a configuration of the exterior of the input device 31 .
- the input device 31 includes a body 32 functioning as an operation unit operated by the user to generate an operation signal for controlling an electronic device.
- the body 32 is provided with buttons 33 and 34 on the upper surface thereof and a jog dial 35 on the right surface thereof.
- FIG. 4 illustrates a configuration of the interior of the body 32 of the input device 31 .
- In the interior of the input device 31, a main substrate 51, a sensor substrate 57, and batteries 56 are stored.
- the main substrate 51 is attached with an MPU 52 , a crystal oscillator 53 , a communication unit 54 , and an antenna 55 .
- the sensor substrate 57 is attached with an angular velocity sensor 58 and an acceleration sensor 59 , which are manufactured by the technique of MEMS (Micro Electro Mechanical Systems).
- the sensor substrate 57 is set to be parallel to the X-axis and the Y-axis, which are two mutually perpendicular sensitivity axes of the angular velocity sensor 58 and the acceleration sensor 59 .
- the angular velocity sensor 58 formed by a biaxial oscillating angular velocity sensor detects the respective angular velocities of a pitch angle and a yaw angle rotating around a pitch rotation axis 71 and a yaw rotation axis 72 parallel to the X-axis and the Y-axis, respectively.
- the acceleration sensor 59 is a biaxial acceleration sensor which detects the acceleration in the directions of the X-axis and the Y-axis.
- the acceleration sensor 59 is capable of detecting the gravitational acceleration as the vector quantity by using the sensor substrate 57 as the sensitivity plane.
- a triaxial acceleration sensor using three axes of the X-axis, the Y-axis, and the Z-axis as the sensitivity axes can also be used as the acceleration sensor 59 .
- the two batteries 56 supply necessary electric power to the respective units.
- FIG. 6 illustrates a use state of the input device 31 .
- the user holds the input device 31 in his hand 81 , and operates the entire input device 31 in an arbitrary direction in a three-dimensional free space.
- the input device 31 detects the direction of the operation, and outputs an operation signal corresponding to the direction of the operation. Further, if the button 33 or 34 or the jog dial 35 is operated, the input device 31 outputs an operation signal corresponding to the operation.
- buttons 33 and 34 correspond to the left and right buttons of a normal mouse, respectively.
- the button 33 , the button 34 , and the jog dial 35 are operated by the index finger, the middle finger, and the thumb, respectively.
- the commands issued when the buttons and the dial are operated are arbitrary, but may be set as follows, for example.
- With a single-press of the button 33, which corresponds to a left-click, a selection operation is performed. With a press-and-hold of the button 33, which corresponds to a drag operation, an icon is moved. With a double-press of the button 33, which corresponds to a double-click, a file or folder is opened, or a program is executed. With a single-press of the button 34, which corresponds to a right-click, the menu is displayed. With rotation of the jog dial 35, a scroll operation is performed. With pressing of the jog dial 35, a confirmation operation is performed.
- the user can use the input device 31 with operational feeling similar to the operational feeling which the user has when operating a normal mouse of a personal computer.
- the button 33 can be configured as a two-stage switch. In this case, when the first-stage switch is operated or kept in the pressed state, an operation signal representing the movement of the input device 31 is output. Further, when the second-stage switch is operated, a selection operation is performed. It is also possible, of course, to provide a special button and output an operation signal representing the movement when the button is operated.
- FIG. 7 illustrates an electrical configuration of the input device 31 .
- the input device 31 includes an input unit 101 and a sensor 102 , in addition to the MPU 52 , the crystal oscillator 53 , the communication unit 54 , and the antenna 55 .
- the crystal oscillator 53 supplies the MPU 52 with a reference clock.
- When the input unit 101, formed by the buttons 33 and 34, the jog dial 35, and other buttons, is operated by the user, it outputs to the MPU 52 a signal corresponding to the operation.
- the sensor 102 formed by the angular velocity sensor 58 and the acceleration sensor 59 detects the angular velocity and the acceleration in the operation, and outputs the detected angular velocity and acceleration to the MPU 52 .
- the sensor 102 functions as a detection unit which detects an operation by a user for controlling an electronic device and outputs an operation signal corresponding to the operation.
- the MPU 52 generates an operation signal corresponding to an input, and outputs the operation signal in the form of radio waves from the communication unit 54 to the television receiver 10 via the antenna 55 .
- the radio waves are received by the television receiver 10 via the antenna 11 .
- the communication unit 54 receives the radio waves from the television receiver 10 via the antenna 55 , demodulates the signal, and outputs the demodulated signal to the MPU 52 .
- FIG. 8 illustrates a functional configuration of the MPU 52 which operates in accordance with a program stored in an internal memory thereof.
- the MPU 52 includes a velocity acquisition unit 201 , a storage unit 202 , an acceleration acquisition unit 203 , a compensation processing unit 204 , an acceleration acquisition unit 205 , a velocity operation unit 206 , and a movement amount calculation unit 207 .
- the compensation processing unit 204 is configured to include a function unit 221 and a compensation unit 222 .
- the function unit 221 includes a gain acquisition unit 211 , a correction unit 212 , and a limitation unit 213 .
- the compensation unit 222 includes a multiplication unit 214 .
- the velocity acquisition unit 201 and the acceleration acquisition unit 203 constitute a first acquisition unit which acquires the detected operation signal and a differential value of the operation signal.
- the velocity acquisition unit 201 acquires, as the operation signal corresponding to the operation by the user, an angular velocity signal from the angular velocity sensor 58 of the sensor 102 .
- the storage unit 202 stores the angular velocity signal acquired by the velocity acquisition unit 201 .
- the acceleration acquisition unit 203 which functions as the first acquisition unit that acquires the acceleration of the operated operation unit, calculates the difference between the angular velocity signal at one step and the angular velocity signal at the next step stored in the storage unit 202 , to thereby calculate an angular acceleration signal. That is, the acceleration acquisition unit 203 acquires the angular acceleration signal as the differential value of the angular velocity signal as the operation signal.
- the function unit 221 which functions as a second acquisition unit that acquires a function for compensating for a delay in response of the operation signal on the basis of the acquired acceleration, generates a gain G(t) which is a function defined by the acceleration as the differential value acquired by the acceleration acquisition unit 203 , or generates a gain G(t) which is a function defined by the velocity as the operation signal acquired by the velocity acquisition unit 201 and the acceleration as the differential value acquired by the acceleration acquisition unit 203 . Then, the velocity as the operation signal is multiplied by the generated gain G(t). That is, the operation signal is corrected to perform a process of compensating for the delay.
- the gain acquisition unit 211 acquires the gain G(t) corresponding to the acceleration acquired by the acceleration acquisition unit 203 .
- the correction unit 212 corrects the gain G(t) as appropriate.
- the limitation unit 213 limits the gain G(t) or the corrected gain G(t) not to exceed a threshold value.
- the multiplication unit 214 which constitutes the compensation unit 222 functioning as a compensation unit that compensates the operation signal with a function, multiplies the angular velocity acquired by the velocity acquisition unit 201 by the gain G(t) limited by the limitation unit 213 , and outputs the corrected angular velocity.
- the acceleration acquisition unit 205 acquires the acceleration signal from the acceleration sensor 59 of the sensor 102 .
- the velocity operation unit 206 calculates the velocity by using the corrected angular velocity and the acceleration acquired by the acceleration acquisition unit 205 .
- the movement amount calculation unit 207 calculates the movement amount of the body 32 , and outputs the movement amount to the communication unit 54 as the operation signal of the input device 31 .
- the communication unit 54 modulates this signal, and transmits the modulated signal to the television receiver 10 via the antenna 55 .
- pointer display processing of the input device 31 will be described with reference to FIG. 9 .
- This processing is performed when the user holding the body 32 in his hand operates the first-stage switch of the button 33 or keeps the first-stage switch in the pressed state, and at the same time operates the entire input device 31 in an arbitrary predetermined direction, i.e., the entire input device 31 is operated in an arbitrary direction in a three-dimensional free space to move the pointer 22 displayed on the output unit 16 of the television receiver 10 in a predetermined direction. That is, this processing is performed to output the operation signal for controlling the display on the screen of the television receiver 10 from the input device 31 to the television receiver 10 .
- the velocity acquisition unit 201 acquires the angular velocity signal output from the sensor 102 . That is, the operation performed in a predetermined direction in a three-dimensional free space by the user holding the body 32 in his hand is detected by the angular velocity sensor 58 , and a detection signal representing an angular velocity (ωx(t), ωy(t)) according to the movement of the body 32 is acquired.
- the storage unit 202 buffers the acquired angular velocity (ωx(t), ωy(t)).
- the acceleration acquisition unit 203 acquires an angular acceleration (ω′x(t), ω′y(t)). Specifically, the acceleration acquisition unit 203 divides the difference between the angular velocity (ωx(t), ωy(t)) of this time and the angular velocity (ωx(t−1), ωy(t−1)) stored the last time in the storage unit 202 by the time therebetween, to thereby calculate the angular acceleration (ω′x(t), ω′y(t)).
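In code, Steps S 2 and S 3 reduce to buffering the previous sample and taking a backward difference per axis; the sketch below assumes a fixed sampling period dt, which the patent does not name.

```python
# Sketch of Steps S2-S3: buffer the previous angular-velocity sample (storage
# unit 202) and derive the angular acceleration by a backward difference per
# axis. A fixed sampling period dt (seconds) is an assumption.

class AccelerationAcquisition:
    def __init__(self, dt: float):
        self.dt = dt
        self.prev = None  # last buffered (omega_x, omega_y)

    def update(self, omega):
        """Return (omega'_x, omega'_y) for the new sample."""
        if self.prev is None:
            self.prev = omega
            return (0.0, 0.0)
        dwx = (omega[0] - self.prev[0]) / self.dt
        dwy = (omega[1] - self.prev[1]) / self.dt
        self.prev = omega
        return (dwx, dwy)
```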
- the compensation processing unit 204 performs an operation to compensate for the delay in response of the operation signal on the basis of the acquired velocity and acceleration.
- the gain acquisition unit 211 acquires the gain G(t) according to the angular acceleration (ω′x(t), ω′y(t)) acquired at Step S 3 .
- This gain G(t) as a function is multiplied by the angular velocity at Step S 7 described later. Therefore, a gain G(t) value of 1 serves as a reference value.
- If the gain G(t) is larger than the reference value, the angular velocity as the operation signal is corrected to be increased.
- If the gain G(t) is smaller than the reference value, the angular velocity is corrected to be reduced.
- When the angular acceleration is positive, the gain G(t) is a value equal to or larger than the reference value (equal to or larger than the value of 1).
- When the angular acceleration is negative, the gain G(t) is a value smaller than the reference value (smaller than the value of 1).
- the larger the absolute value of the acceleration is, the larger the difference between the absolute value of the gain G(t) and the reference value (the value of 1) is.
- the gain G(t) may be acquired by performing an operation or by reading the gain G(t) from a previously mapped table. Further, the gain G(t) may be obtained separately for the X-direction and the Y-direction. Alternatively, the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single gain G(t).
- the correction unit 212 corrects the gain G(t) on the basis of the angular velocity (ωx(t), ωy(t)) acquired by the velocity acquisition unit 201 . Specifically, the gain G(t) is corrected such that the larger the angular velocity (ωx(t), ωy(t)) is, the closer to the reference value (the value of 1) the gain G(t) is. That is, in this embodiment, with the process of Step S 4 (the process based on the angular acceleration) and the process of Step S 5 (the process based on the angular velocity), the gain G(t) is acquired which is the function defined by both the angular velocity as the operation signal and the angular acceleration as the differential value of the angular velocity.
- the corrected value may be obtained separately for the X-direction and the Y-direction, or the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single corrected value.
- the limitation unit 213 limits the gain G(t) not to exceed the threshold value. That is, the corrected gain G(t) is limited to be within the range of the predetermined threshold value.
- the threshold value is set to be the maximum or minimum value, and the absolute value of the gain G(t) is limited not to exceed the threshold value. If the input device 31 is vibrated, therefore, a situation is suppressed in which the absolute value of the gain G(t) is too small to compensate for the delay or too large to prevent oscillation.
- Steps S 4 to S 6 can be performed by a single reading process, if the gain G(t) has previously been mapped in the gain acquisition unit 211 to satisfy the conditions of the respective steps.
- FIG. 10 illustrates an example of mapping satisfying these conditions.
- the horizontal axis and the vertical axis represent the angular acceleration and the gain G(t), respectively.
- the gain G(t) is represented by a straight line with an intercept of 1 and a positive slope for each of angular velocities.
- the angular velocity is represented in absolute value.
- If the angular acceleration is positive (in the right half region of FIG. 10 ), the gain G(t) represented by the vertical axis is a value equal to or larger than the reference value (the value of 1). If the angular acceleration is negative (in the left half region of FIG. 10 ), the gain G(t) is a value smaller than the reference value (the value of 1) (Step S 4 ).
- the gain G(t) is a value represented by a straight line with an intercept corresponding to the reference value (the value of 1) and a positive slope. Therefore, the larger the absolute value of the angular acceleration is, the larger the absolute value of the difference between the absolute value of the gain G(t) and the reference value (the value of 1) is (Step S 4 ). In other words, the gain G(t) is set to a value with which the larger the absolute value of the angular acceleration as the differential value is, the larger the correction amount of the angular velocity as the operation signal is.
- For example, for a relatively small absolute value of the angular acceleration, the value of the gain G(t) is approximately 3 (the absolute value of the difference from the value of 1 is 2, which is small), whereas for a larger absolute value of a negative angular acceleration, the value of the gain G(t) is approximately −5 (the absolute value of the difference from the value of 1 is 6, which is large).
- the gain G(t) is set to a value with which the smaller the angular velocity as the operation signal is, the larger the correction amount of the angular velocity is. For example, in the case of an angular acceleration of 15 digit/s², when the angular velocity is 1 digit/s (i.e., when the angular velocity is small), the gain G(t) is approximately 8 (i.e., the absolute value of the gain G(t) is large).
- When the angular velocity is larger, the gain G(t) is approximately 5 (i.e., the absolute value of the gain G(t) is small).
- Conversely, for a negative angular acceleration, when the angular velocity is small, the gain G(t) is approximately −6 (i.e., the absolute value of the gain G(t) is large).
- When the angular velocity is large, the gain G(t) is approximately −1 (i.e., the absolute value of the gain G(t) is small).
- the above indicates that practically the correction of the angular velocity is performed only when the angular velocity is small, and is not performed when the angular velocity is large.
- When the angular acceleration is close to zero, the gain G(t) is a value equal to or close to the reference value (the value of 1). Practically, therefore, the velocity is not corrected. That is, the correction of the angular velocity is performed immediately after the start of the movement of the input device 31 and immediately before the stop of the movement.
- the value of the gain G(t) is limited to be within the range from a threshold value of −10 to a threshold value of 10, i.e., limited not to exceed these threshold values (Step S 6 ). That is, the maximum value of the gain G(t) is set to be the threshold value of 10, and the minimum value of the gain G(t) is set to be the threshold value of −10.
- the respective lines representing the characteristics of the respective velocities in FIG. 10 may not be straight lines, and may be curved lines.
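One way to express the mapping of FIG. 10 in code is sketched below. Only the qualitative shape comes from the text (an intercept of 1, a positive slope in angular acceleration, a slope that shrinks as the angular velocity grows, and clamping at the thresholds of −10 and 10); the slope function and the constants k and v0 are assumptions.

```python
# Sketch of Steps S4-S6 folded into a single mapping. Constants are assumed;
# only the qualitative shape follows FIG. 10: intercept 1, positive slope in
# angular acceleration, a gentler slope for larger |angular velocity|, and the
# result clamped to the thresholds -10 and +10.

G_MAX, G_MIN = 10.0, -10.0

def gain(omega: float, omega_dot: float, k: float = 0.5, v0: float = 1.0) -> float:
    slope = k / (abs(omega) + v0)        # larger |velocity| -> slope closer to 0
    g = 1.0 + slope * omega_dot          # intercept 1, proportional to acceleration
    return max(G_MIN, min(G_MAX, g))     # Step S6: keep within the thresholds

def corrected_angular_velocity(omega: float, omega_dot: float) -> float:
    # Step S7 / formula (1): multiply the angular velocity by the gain.
    return omega * gain(omega, omega_dot)
```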
- the multiplication unit 214 multiplies the angular velocity (ωx(t), ωy(t)) as the operation signal by the gain G(t). That is, the angular velocity is multiplied by the gain G(t) as a coefficient, and thereby the corrected angular velocity (ωx1(t), ωy1(t)) is generated. For example, if the gain G(t) is used as the representative value integrating the value in the X-axis direction and the value in the Y-axis direction, the corrected angular velocity (ωx1(t), ωy1(t)) is calculated with the following formula.
- ωx1(t) = ωx(t) × G(t), ωy1(t) = ωy(t) × G(t) (1)
- the velocity operation unit 206 calculates a velocity (Vx(t), Vy(t)).
- the velocity is obtained by the multiplication of the angular velocity by the radius of gyration. That is, the motion of the input device 31 occurring when the user operates the input device 31 corresponds to the combination of rotational motions centering around a shoulder, elbow, or wrist of the user. Further, the radius of gyration of the motion corresponds to the distance from the rotational center of the combined rotational motions, which changes over time, to the input device 31 .
- (Vx(t), Vy(t)) and (ωx(t), ωy(t)) on the right side represent the dimension of the velocity. Even if each of the velocity and the angular velocity represented by the right side of this formula (2) is differentiated to represent the dimension of the acceleration or the time rate of change of the acceleration, the correlation is not lost. Similarly, even if each of the velocity and the angular velocity is integrated to represent the dimension of the displacement, the correlation is not lost.
- the radius of gyration (Rx, Ry) is obtained if the value of change (a′x(t), a′y(t)) of the acceleration (ax(t), ay(t)) and the value of change (ω″x(t), ω″y(t)) of the angular acceleration (ω′x(t), ω′y(t)) are known.
- the radius (Rx, Ry) is obtained on the basis of the formula (5).
- the acceleration acquisition unit 205 acquires the acceleration (ax(t), ay(t)) detected by the acceleration sensor 59 constituting the sensor 102 . Therefore, the velocity operation unit 206 differentiates the acceleration (ax(t), ay(t)) to calculate the value of change (a′x(t), a′y(t)) of the acceleration. Further, the velocity operation unit 206 performs a second-order differentiation on the angular velocity (ωx(t), ωy(t)) acquired by the velocity acquisition unit 201 , to thereby calculate the rate of change (ω″x(t), ω″y(t)) of the angular acceleration (ω′x(t), ω′y(t)).
- the velocity operation unit 206 divides the rate of change (a′x(t), a′y(t)) of the acceleration by the rate of change (ω″x(t), ω″y(t)) of the angular acceleration (ω′x(t), ω′y(t)) to calculate the radius of gyration (Rx, Ry).
- the velocity operation unit 206 multiplies the obtained radius (Rx, Ry) by the angular velocity to calculate the velocity (Vx(t), Vy(t)).
- the corrected angular velocity (ωx1(t), ωy1(t)), i.e., the angular velocity (ωx(t), ωy(t)) multiplied by the gain G(t), is used.
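Written out per axis, Step S 8 divides the rate of change of the acceleration by the rate of change of the angular acceleration to estimate the radius of gyration, then multiplies that radius by the corrected angular velocity. The sketch below adds a small guard against a near-zero denominator, which is an assumption; the patent does not describe that case.

```python
# Sketch of Step S8 per axis: R = a' / omega'' (formula (5)), then V = R * omega1,
# where omega1 is the gain-corrected angular velocity. The epsilon guard and the
# zero fallback are assumptions added here for numerical safety.

EPS = 1e-6

def velocity_from_rotation(a_dot: float, omega_ddot: float, omega1: float) -> float:
    radius = a_dot / omega_ddot if abs(omega_ddot) > EPS else 0.0
    return radius * omega1

# Per sample: a_dot comes from differentiating the accelerometer output once,
# omega_ddot from differentiating the gyro output twice, e.g.
#   vx = velocity_from_rotation(ax_dot, wx_ddot, wx1)
#   vy = velocity_from_rotation(ay_dot, wy_ddot, wy1)
```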
- the movement amount calculation unit 207 calculates the pointer movement amount by using the corrected angular velocity (ωx1(t), ωy1(t)), and outputs the calculated pointer movement amount.
- the movement amount calculation unit 207 adds the velocity to the immediately preceding position coordinates of the pointer 22 to calculate new position coordinates. That is, the displacement per unit time in the X-direction and the Y-direction of the input device 31 is converted into the displacement amount per unit time in the X-direction and the Y-direction of the pointer 22 displayed on the image display unit of the output unit 16 .
- the pointer movement amount is calculated such that the larger the gain G(t) is, i.e., the larger the correction amount of the angular velocity as the operation signal is, the larger the compensation amount of the delay in response is. That is, as the gain G(t) is increased, the delay between the operation of the input device 31 and the movement of the pointer 22 is reduced. If the value of the gain G(t) is further increased, the movement of the pointer 22 is more advanced in phase than the operation of the input device 31 .
- Step S 8 may be omitted, and the pointer movement amount may be obtained with the use of the corrected angular velocity obtained at Step S 7 .
- the processes performed here include a process of removing a hand-shake component of the input device 31 through a low-pass filter, and a process of, when the operation velocity is low (a low velocity and a low acceleration), setting an extremely low moving velocity of the pointer 22 to make it easy to stop the pointer 22 on the icon 21 .
- other processes are performed to prevent a situation in which the movement of the input device 31 occurs during the operation of the button 33 or 34 , for example, and is erroneously identified as the operation of the entire input device 31 to cause the movement of the pointer 22 .
- These processes include a process of prohibiting the movement of the pointer 22 during the button operation, and a process for correcting the inclination of the input device 31 by setting the gravity direction detected by the acceleration sensor 59 as the lower direction.
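The patent does not give filter parameters for this additional processing, so the following is only one plausible form: a first-order low-pass to suppress the hand-shake component, plus a strong attenuation of the pointer velocity when both velocity and acceleration are small so that the pointer is easy to stop on the icon 21. All thresholds and coefficients are assumptions.

```python
# Illustrative only: a first-order (exponential) low-pass for hand-shake removal
# and extra damping of the pointer velocity during slow operation (low velocity
# and low acceleration). Cut-off, thresholds, and damping factor are assumptions.

class PointerConditioning:
    def __init__(self, alpha: float = 0.3, v_th: float = 2.0, a_th: float = 5.0,
                 slow_scale: float = 0.2):
        self.alpha = alpha          # low-pass smoothing factor in (0, 1]
        self.v_th = v_th            # "low velocity" threshold
        self.a_th = a_th            # "low acceleration" threshold
        self.slow_scale = slow_scale
        self.state = 0.0            # filtered velocity

    def apply(self, v: float, a: float) -> float:
        self.state += self.alpha * (v - self.state)      # exponential low-pass
        out = self.state
        if abs(v) < self.v_th and abs(a) < self.a_th:    # slow operation: damp hard
            out *= self.slow_scale
        return out
```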
- the operation signal representing the pointer movement amount is transmitted from the communication unit 54 to the television receiver 10 via the antenna 55 .
- the communication unit 12 receives the signal from the input device 31 via the antenna 11 .
- the MPU 13 maps the video RAM 15 such that the pointer 22 is displayed at a position corresponding to the received signal.
- On the output unit 16 , the pointer 22 is displayed at a position corresponding to the operation by the user.
- a part or all of the respective processes of Steps S 1 to S 9 in FIG. 9 can also be performed by the television receiver 10 .
- With this configuration, it is possible to simplify the configuration of the input device 31 and reduce the load on the input device 31 .
- In that case, a part or all of the functional blocks in FIG. 8 are provided to the television receiver 10 .
- angular velocity and angular acceleration used in the above-described processes can be replaced by simple velocity and acceleration.
- FIGS. 11 and 12 illustrate the movement of the pointer 22 occurring when the user performs an operation of moving the input device 31 in a predetermined direction and then stopping the input device 31 .
- the vertical axis represents the velocity
- the horizontal axis represents the time.
- the vertical axis represents the displacement
- the horizontal axis represents the time.
- the unit shown in the respective drawings is a relative value used in a simulation. The same applies to other drawings illustrating characteristics described later.
- a line L 1 represents the velocity corresponding to an actual operation (i.e., the ideal state in which the delay of the pointer 22 is absent).
- the velocity gradually increases from a velocity of 0 at a constant rate, and reaches a velocity of 30. Then, the velocity is maintained for a predetermined time. Thereafter, the velocity gradually decreases from the velocity of 30 at a constant rate, and reaches the velocity of 0.
- a line L 2 represents the velocity in a system having a delay in response, i.e., the velocity of the pointer 22 in a system having a time delay between the operation of the input device 31 and the movement of the pointer 22 in response to the operation.
- the line L 2 is similar in characteristic to the line L 1 , but is delayed (i.e., delayed in phase) from the line L 1 by a time T 0 . That is, when the input device 31 is operated by the user, the velocity of the operation changes as indicated by the line L 1 . However, the operation signal corresponding to the operation is detected with a delay. Therefore, the velocity of the operation signal (which corresponds to the velocity of the pointer 22 controlled on the basis of the operation signal) changes as indicated by the line L 2 .
- a line L 3 represents the result of the process of compensating for the delay, as illustrated in the flowchart of FIG. 9 .
- the start point of the change in velocity of the line L 3 is the same as the start point of the line L 2 .
- the velocity of the line L 3 rapidly increases at the start point, at which the velocity is 0, with a slope steeper than the slope of the line L 2 (i.e., with a larger absolute value of the acceleration) to exceed the line L 2 , and reaches a value located slightly below and close to the line L 1 .
- the line L 3 representing the result of compensation rapidly obtains a characteristic substantially the same as the characteristic of the line L 1 which has no delay.
- the line L 3 is located above the line L 2 and close to and below the line L 1 . That is, the delay time rapidly shifts from the maximum time T 0 to the minimum time T 1 . This means that a prompt response is made upon start of the operation by the user. That is, it is understood that the line L 3 is a line resembling the line L 1 and compensating for the delay of the line L 2 .
- the line L 3 gradually increases with a constant slope substantially similar to the slope of the line L 1 (therefore, the line L 2 ) (i.e., with the constant delay time T 1 ).
- the line L 3 is more advanced in phase than the line L 2 , but is slightly delayed in phase from the line L 1 (i.e., in FIG. 11 , the line L 3 is located above and on the left side of the line L 2 , but is located slightly below and on the right side of the line L 1 ). That is, immediately after the start of the movement, the pointer 22 is accelerated with little delay (with the minimum delay time T 1 ).
- the line L 3 exceeds the velocity of 30 and further increases. Then, at timing immediately before the line L 2 reaches the constant velocity of 30, the line L 3 reaches a velocity of 40, and thereafter rapidly decreases with a steep slope to fall to the velocity of 30. This means that the transitional increase in velocity rapidly ceases and the line L 3 reaches a stable velocity.
- the velocity of the line L 3 remains at the constant value of 30 for a predetermined time. That is, the velocity of the pointer 22 gradually increases from the value of 0, and thereafter is stabilized at the value of 30.
- the velocity of the line L 3 remains at the value of 30 for a while, even after the velocity of the line L 1 starts to fall below the value of 30. Then, at timing immediately before the velocity of the line L 2 starts to decrease from the value of 30, the line L 3 rapidly decreases with a steep slope (i.e., with a larger absolute value of the acceleration) to fall to a velocity of 18, which is a value close to and above the line L 1 . That is, the delay time rapidly shifts from the maximum value T 0 to the minimum value T 1 . This means that a prompt response is made when the user attempts to stop the operation. That is, it is understood that the line L 3 is a line resembling the line L 1 and compensating for the delay of the line L 2 , and that the delay has been compensated for.
- the line L 3 gradually decreases with a slope substantially similar to the slope of the line L 1 (therefore, the line L 2 ), which is a constant slope (i.e., with the constant delay time T 1 ).
- the line L 3 is more advanced in phase than the line L 2 , but is slightly delayed in phase from the line L 1 (i.e., in FIG. 11 , the line L 3 is located below and on the left side of the line L 2 , but is slightly above and on the right side of the line L 1 ). That is, immediately after the start of the stopping operation of the movement, the pointer 22 is decelerated with little delay (with the minimum delay time T 1 ).
- the line L 3 falls below the velocity of 0 and further decreases. Then, at timing immediately before the velocity of the line L 2 reaches the velocity of 0, the velocity of the line L 3 reaches a velocity of approximately −9, and thereafter increases with a steep slope (i.e., rapidly) to reach the velocity of 0. This means that the transitional decrease in velocity rapidly ceases and the line L 3 reaches the velocity of 0.
- the line L 3 has a characteristic close to the characteristic of the line L 1 , in which the delay of the line L 2 has been compensated for.
- FIG. 12 illustrates displacements of the pointer 22 corresponding to the changes in velocity of FIG. 11 .
- a line L 11 represents the displacement corresponding to the actual operation (i.e., the displacement with no delay).
- a line L 12 represents the displacement of the system having a delay.
- a line L 13 represents the result of the process of compensating for the delay, as illustrated in the flowchart of FIG. 9 .
- the line L 11 has a characteristic of increasing from a displacement of 0 with a substantially constant slope and thereafter reaching a displacement of approximately 2900.
- the line L 12 is substantially the same in characteristic of change as the line L 11 , but is delayed (i.e., delayed in phase) from the line L 11 . In the drawing, the line L 12 is located below and on the right side of the line L 11 .
- the line L 13 starts to be displaced at a start point substantially the same as the start point of the line L 12 , and swiftly reaches a value close to the line L 11 . Thereafter, the line L 13 gradually increases at a constant rate with a slope substantially similar to the slope of the line L 11 (therefore, the line L 12 ).
- the line L 13 is higher than the line L 12 but slightly lower than the line L 11 . That is, in FIG. 12 , the line L 13 is higher than the line L 12 and close to and lower than the line L 11 .
- the line L 13 is a line having a characteristic resembling the characteristic of the line L 11 , and compensating for the delay of the line L 12 .
- the line L 13 slightly exceeds the line L 11 (i.e., in FIG. 12 , the line L 13 is located slightly above the line L 11 ), and thereafter converges to the constant value of 2900.
- the line L 13 is a line having a characteristic resembling the characteristic of the line L 11 , and compensating for the delay of the line L 12 .
- the user can operate the input device 31 in an arbitrary direction in a free space to, for example, swiftly move the pointer 22 to the desired icon 21 located in the direction of the operation and stop the pointer 22 at the location.
- the uncomfortable operational feeling felt by the user is suppressed. That is, a situation is suppressed in which the user feels that the movement of the pointer 22 starts later than the start of the operation of the input device 31 , or that the movement of the pointer 22 stops later than the stop of the operation of the input device 31 .
- the operational feeling can be improved.
- FIGS. 13A and 13B illustrate characteristics obtained when the input device 31 is vibrated.
- the vertical axis represents the velocity in FIG. 13A and the displacement in FIG. 13B .
- the horizontal axis represents the time in both drawings.
- lines L 21 , L 22 , and L 23 represent the result of a case in which there is no delay, the result of a case in which there is a delay, and the result of a case in which the delay has been compensated for, respectively.
- lines L 31 , L 32 , and L 33 represent the result of the case in which there is no delay, the result of the case in which there is a delay, and the result of the case in which the delay has been compensated for, respectively. It is understood that, when the frequency of the vibration of the input device 31 is high, the delay has not been compensated for and oscillation is occurring.
- FIGS. 14A and 14B illustrate characteristics obtained when the gain G(t) has been limited and the input device 31 is vibrated.
- the vertical axis represents the velocity in FIG. 14A and the displacement in FIG. 14B .
- the horizontal axis represents the time in both drawings.
- lines L 51 , L 52 , and L 53 represent the result of a case in which there is no delay, the result of a case in which there is a delay, and the result of a case in which the delay has been compensated for, respectively.
- lines L 61 , L 62 , and L 63 represent the result of the case in which there is no delay, the result of the case in which there is a delay, and the result of the case in which the delay has been compensated for, respectively. It is understood from these drawings that the oscillation is suppressed. This suppression of oscillation is the effect of the process of Step S 6 in FIG. 9 .
- a similar effect can also be achieved by the elimination of oscillation frequency through a low-pass filter.
- FIGS. 15 and 16 illustrate characteristics obtained in this case.
- FIGS. 15 and 16 correspond to FIGS. 11 and 12 , respectively.
- Lines L 81 , L 82 , and L 83 in FIG. 15 correspond to the lines L 1 , L 2 , and L 3 in FIG. 11 , respectively.
- Lines L 91 , L 92 , and L 93 in FIG. 16 correspond to the lines L 11 , L 12 , and L 13 in FIG. 12 , respectively.
- the line L 83 immediately after the start of the motion rapidly increases at the same start point as the start point of the line L 82 , and thereafter exceeds the line L 81 (the line L 83 is located above and on the left side of the line L 81 in the drawing). Thereafter, the line L 83 gradually increases with the same slope as the slope of the line L 81 . Further, when the motion is stopped, the line L 83 rapidly decreases from the constant value of 30 to fall below the line L 81 (the line L 83 is located below and on the left side of the line L 81 in the drawing), and thereafter gradually decreases with the same slope as the slope of the line L 81 .
- the line L 93 is higher than the line L 92 , and also rapidly increases to exceed the line L 91 . Thereafter, in the vicinity of the line L 91 , the line L 93 increases substantially similarly to the line L 91 , and converges to the displacement of 2900.
- FIGS. 17 to 22 illustrate the results of the comparison.
- FIGS. 17 and 18 illustrate the case in which compensation has been made to maintain a slight delay.
- FIGS. 19 and 20 illustrate the case in which compensation has been made to advance in phase the movement of the pointer 22 .
- FIGS. 21 and 22 illustrate the case in which compensation has been made to set the delay to be substantially zero. All of the drawings illustrate a case in which the pointer 22 is moved in a predetermined direction and thereafter moved in the opposite direction.
- the vertical axis represents the displacement amount in FIGS. 17 , 19 , and 21 and the velocity in FIGS. 18 , 20 , and 22 .
- the horizontal axis represents the time in all of the drawings.
- lines L 101 and L 111 represent the results of a case in which there is no delay
- lines L 102 and L 112 represent the results of a case in which there is a delay of 0.2 seconds in the system (a case in which compensation is not made).
- lines L 103 and L 113 represent the results of a case in which compensation has been made to maintain a slight delay.
- the line L 103 is located between the lines L 101 and L 102 .
- the line L 113 is located between the lines L 111 and L 112 . It is therefore understood that the compensation has been made to reduce the delay to a time shorter than 0.2 seconds.
- lines L 121 and L 131 represent the results of a case in which there is no delay
- lines L 122 and L 132 represent the results of a case in which there is a delay in the system (a case in which compensation is not made).
- lines L 123 and L 133 represent the results of a case in which compensation has been made to advance in phase the movement of the pointer 22 .
- In FIG. 19 , when the displacement increases, the line L 123 is located above and on the left side of the line L 121 . When the displacement decreases, the line L 123 is located below and on the left side of the line L 121 .
- Also in FIG. 20 , when the velocity increases, the line L 133 is located above and on the left side of the line L 131 . When the velocity decreases, the line L 133 is located below and on the left side of the line L 131 . It is therefore understood that the lines L 123 and L 133 are more advanced in phase than the lines L 121 and L 131 , respectively.
- lines L 141 and L 151 represent the results of a case in which there is no delay
- lines L 142 and L 152 represent the results of a case in which there is a delay (a case in which compensation is not made).
- lines L 143 and L 153 represent the results of a case in which compensation has been made to eliminate the delay.
- the line L 143 changes substantially along the line L 141 .
- the line L 153 changes substantially along the line L 151 . It is therefore understood that appropriate compensation has been performed.
- the value of the gain G(t) can also be changed in accordance with the delay amount in the television receiver 10 .
- FIGS. 23 and 24 illustrate the processing of the television receiver 10 and the processing of the input device 31 , respectively, which are performed in this case.
- the television receiver 10 performs the timer processing illustrated in FIG. 23 .
- At Step S 31 , the television receiver 10 sets the timer value to zero.
- At Step S 32 , the television receiver 10 stands by until the completion of a processing cycle. That is, upon completion of the processing cycle from the reception of the information of the pointer movement amount output from the input device 31 to the completion of the movement of the pointer 22 on the screen, the television receiver 10 at Step S 33 transmits the timer value measured during the processing cycle. Thereafter, the processing returns to Step S 31 to repeatedly perform similar processes.
- the television receiver 10 transmits to the input device 31 the timer value corresponding to the time taken for the processing of the processing cycle.
- the time taken for the television receiver 10 to perform the above-described processing varies, depending on the capability of the MPU 13 used in the television receiver 10 and on the state of the load on the MPU 13 during the processing and so forth. Therefore, the television receiver 10 measures the processing time by using a timer, and transmits the measured processing time to the input device 31 .
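A minimal sketch of this measurement loop on the receiver side is shown below; process_one_movement and transmit_to_remote are placeholders for the receiver's redraw handling and its radio link, since neither API is specified in the patent.

```python
# Sketch of the timer processing of FIG. 23. The receiver measures the time from
# the reception of a pointer-movement report to the completion of the redraw and
# sends that time back to the input device. Both callables are placeholders.
import time

def timer_processing(process_one_movement, transmit_to_remote):
    while True:
        start = time.monotonic()                       # Step S31: reset the timer
        process_one_movement()                         # Step S32: wait for the cycle to end
        transmit_to_remote(time.monotonic() - start)   # Step S33: report the delay
```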
- the input device 31 controls the value of the gain G(t), as illustrated in the flowchart of FIG. 24 .
- the processes of Steps S 51 to S 60 in FIG. 24 are basically similar to the processes of Steps S 1 to S 9 in FIG. 9 . In FIG. 24 , however, the process of Step S 5 in FIG. 9 of correcting the gain G(t) on the basis of the angular velocity is omitted. Alternatively, the process may not be omitted. If the correction process is not omitted, the gain G(t) is a function defined by the angular velocity and the angular acceleration. If the correction process is omitted, the gain G(t) is a function defined by the angular acceleration.
- A process of receiving the timer value is performed as Step S 55 after the process of Step S 54 corresponding to Step S 4 in FIG. 9.
- The angular velocity (ωx(t), ωy(t)) acquired at Step S 51 is temporarily buffered in the storage unit 202 at Step S 52.
- At Step S 53, the difference between the angular velocity of this time (ωx(t), ωy(t)) and the stored angular velocity of the last time (ωx(t−1), ωy(t−1)) (the difference between the angular velocity at one step and the angular velocity at the next step) is calculated, and thereby the angular acceleration (ω′x(t), ω′y(t)) is obtained. That is, the angular velocity is differentiated, and the angular acceleration as the differential value is acquired.
- At Step S 54, the gain G(t) according to the angular acceleration (ω′x(t), ω′y(t)) is acquired.
- the correction unit 212 receives the timer value transmitted from the television receiver 10 at Step S 33 in FIG. 23 . Specifically, the signal from the television receiver 10 is received by the communication unit 54 via the antenna 55 , demodulated, and acquired by the correction unit 212 . Then, at Step S 56 , the correction unit 212 corrects the gain G(t) in accordance with the timer value. Specifically, an operation with the following formula is performed.
- In formula (6), the correction term is a positive value which increases as the timer value increases.
- This correction value is calculated on the basis of a predetermined function, or is acquired from a mapped memory. Therefore, in the acceleration phase, the longer the delay is, the larger the value to which the gain G(t) is corrected; in the deceleration phase, the longer the delay is, the smaller the value to which the gain G(t) is corrected.
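- The text does not reproduce formula (6) itself, so the following is only an assumed illustration of the behavior described above: a correction term that grows with the reported timer value pushes the gain further above 1 while accelerating and further below 1 while decelerating. The additive form and the constant k are assumptions, not values from the patent.

```python
def correct_gain_for_delay(gain, angular_accel, timer_value, k=0.5):
    """Illustrative stand-in for formula (6); not the patent's exact formula.

    The correction term grows with the measured processing delay
    (timer_value). k is a hypothetical scaling constant."""
    alpha = k * timer_value                # positive, larger for longer delays
    if angular_accel >= 0.0:               # acceleration phase: raise the gain
        return gain + alpha
    return max(gain - alpha, 0.0)          # deceleration phase: lower the gain
```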
- Steps S 57 to S 60 are similar to the processes of Steps S 6 to S 9 in FIG. 9 . Description thereof is redundant, and thus will be omitted.
- the gain G(t) is changed in accordance with the delay amount, as illustrated in the above formula (6). Thereby, the operational feeling can be further improved.
- the gain G(t) is determined on the basis of the velocity and the acceleration.
- the gain G(t) can be determined solely on the basis of the acceleration.
- FIG. 25 illustrates pointer display processing performed in this case.
- the velocity acquisition unit 201 acquires the angular velocity (ωx(t), ωy(t)) from the output of the angular velocity sensor 58.
- the angular velocity is temporarily stored by the storage unit 202 .
- the acceleration acquisition unit 203 calculates the difference between the angular velocity of this time (ωx(t), ωy(t)) and the angular velocity of the last time (ωx(t−1), ωy(t−1)) stored in the storage unit 202 (the difference between the angular velocity at one step and the angular velocity at the next step), to thereby acquire the angular acceleration (ω′x(t), ω′y(t)). That is, the angular velocity is differentiated, and the angular acceleration as the differential value is acquired.
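- For reference, the step-to-step differentiation performed by the acceleration acquisition unit 203 can be sketched as a finite difference; treating the sampling period as 1, so that the raw difference itself serves as the angular acceleration, mirrors the description above, and the dt parameter is only there for completeness.

```python
def angular_acceleration(omega_now, omega_prev, dt=1.0):
    """Angular acceleration as the step-to-step difference of (ωx, ωy).

    With a fixed sampling period, dt can stay at 1 so the raw difference
    is used directly, as in the text."""
    return ((omega_now[0] - omega_prev[0]) / dt,
            (omega_now[1] - omega_prev[1]) / dt)
```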
- the gain acquisition unit 211 acquires the gain G(t) according to the angular acceleration (ω′x(t), ω′y(t)).
- When the angular acceleration is positive, the gain G(t) is a value larger than one.
- When the angular acceleration is negative, the gain G(t) is a value smaller than one.
- The limitation unit 213 limits the gain G(t) not to exceed the threshold value.
- At Step S 86, the multiplication unit 214 multiplies the angular velocity (ωx(t), ωy(t)) by the gain G(t) to calculate the corrected angular velocity (ωx1, ωy1). That is, the operation with the following formula is performed.
- This formula (7) is the same as the above-described formula (1).
- ωx1(t)=ωx(t)×G(t), ωy1(t)=ωy(t)×G(t)  (7)
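- A minimal sketch of the gain acquisition, limitation, and multiplication leading up to Step S 86, assuming a simple linear mapping from angular acceleration to gain and symmetric clamp limits; the constants k, g_min, and g_max are illustrative, not values from the patent.

```python
def compensate_angular_velocity(omega, omega_accel, g_min=0.2, g_max=2.0, k=0.05):
    """Gain solely from the angular acceleration, clamped, then applied.

    omega and omega_accel are (x, y) tuples; k, g_min, and g_max are assumed."""
    corrected = []
    for w, a in zip(omega, omega_accel):
        g = 1.0 + k * a                  # >1 while accelerating, <1 while decelerating
        g = min(max(g, g_min), g_max)    # limitation so vibration cannot run away
        corrected.append(w * g)          # formula (7): ω1 = ω × G(t)
    return tuple(corrected)
```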
- the velocity operation unit 206 calculates the velocity (Vx(t), Vy(t)). That is, the velocity operation unit 206 divides the rate of change (a′x(t), a′y(t)) of the acceleration by the rate of change (ω′′x(t), ω′′y(t)) of the angular acceleration, to thereby obtain the radius (Rx, Ry) of the motion of the input device 31 occurring when the user operates the input device 31.
- the velocity operation unit 206 multiplies the obtained radius (Rx, Ry) by the angular velocity to calculate the velocity (Vx(t), Vy(t)).
- As this angular velocity, the corrected angular velocity (ωx1(t), ωy1(t)), i.e., the angular velocity (ωx(t), ωy(t)) multiplied by the gain G(t), is used.
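- A sketch of the radius-and-velocity operation described for the velocity operation unit 206; skipping near-zero denominators is an added safeguard, not something the text specifies.

```python
def pointer_velocity(accel_rate, ang_accel_rate, omega_corrected, eps=1e-6):
    """Radius (Rx, Ry) = rate of change of acceleration divided by rate of
    change of angular acceleration; velocity = radius × corrected angular
    velocity."""
    velocity = []
    for a, w2, w1 in zip(accel_rate, ang_accel_rate, omega_corrected):
        if abs(w2) < eps:
            velocity.append(0.0)         # denominator too small; no usable radius
        else:
            velocity.append((a / w2) * w1)
    return tuple(velocity)
```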
- the movement amount calculation unit 207 calculates the pointer movement amount by using the velocity (Vx(t), Vy(t)) calculated in the process of Step S 87 , and outputs the calculated pointer movement amount.
- the movement amount calculation unit 207 adds the velocity to the immediately preceding position coordinates of the pointer 22 , to thereby calculate new position coordinates. That is, the displacement per unit time in the X-direction and the Y-direction of the input device 31 is converted into the displacement amount per unit time in the X-direction and the Y-direction of the pointer 22 displayed on the image display unit of the output unit 16 .
- the pointer movement amount is calculated such that the larger the gain G(t) is, i.e., the larger the correction amount of the angular velocity is, the larger the compensation amount of the delay in response is.
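- The coordinate update described above is simply an accumulation of the per-step velocity; a one-line sketch (screen-boundary clipping, which the text does not discuss, is omitted):

```python
def update_pointer(position, velocity):
    """New pointer coordinates = previous coordinates + velocity for this step."""
    return (position[0] + velocity[0], position[1] + velocity[1])
```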
- the process at Step S 5 in FIG. 9 of correcting the gain G(t) on the basis of the angular velocity (ωx(t), ωy(t)) is not performed. That is, the gain G(t) is acquired and determined solely on the basis of the angular acceleration.
- FIGS. 26 and 27 illustrate changes in velocity and displacement occurring when the process of compensating for the delay is performed with the use of the gain G(t) determined solely on the basis of the acceleration, as illustrated in FIG. 25 .
- FIG. 26 corresponds to FIG. 11
- FIG. 27 corresponds to FIG. 12 .
- Lines L 161 , L 162 , and L 163 in FIG. 26 correspond to the lines L 1 , L 2 , and L 3 in FIG. 11 , respectively.
- the line L 163 representing the compensated velocity starts at a start point substantially the same as the start point of the line L 162 representing the delayed velocity, and increases at a constant rate with a slope steeper than the slope of the line L 161 (therefore, the line L 162 ) to reach a velocity of approximately 50. Then, the line L 163 decreases with a steep slope to fall to the velocity of 30. Thereafter, the line L 163 remains at the constant velocity of 30 for a predetermined time.
- the line L 163 representing the compensated velocity starts to decrease at a point substantially the same as the point at which the line L 162 representing the delayed velocity starts to decrease, and decreases at a constant rate with a slope steeper than the slope of the line L 161 to fall to a velocity of approximately −17. Further, the line L 163 increases with a steep slope to reach the velocity of 0.
- Lines L 171 , L 172 , and L 173 in FIG. 27 correspond to the lines L 11 , L 12 , and L 13 in FIG. 12 , respectively.
- the line L 173 starts to be displaced at a start point substantially the same as the start point of the line L 172 , and swiftly reaches a value close to the line L 171 .
- the line L 173 gradually increases at a constant rate with a slope substantially similar to the slope of the line L 171 (therefore, the line L 172 ).
- the line L 173 is higher than the line L 172 but slightly lower than the line L 171. That is, in FIG. 27, the line L 173 is higher than the line L 172 and close to and lower than the line L 171.
- the line L 173 is a line having a characteristic resembling the characteristic of the line L 171 , and compensating for the delay of the line L 172 .
- Immediately before reaching the displacement of approximately 2900, the line L 173 exceeds the line L 171 (i.e., in FIG. 27, the line L 173 is located slightly above the line L 171), and thereafter converges to the constant value of 2900.
- the line L 173 is a line having a characteristic resembling the characteristic of the line L 171 , and compensating for the delay of the line L 172 .
- FIGS. 28 and 29 illustrate the results of an operation of moving the pointer 22 in a predetermined direction and thereafter moving the pointer 22 back to the opposite direction by using the gain G(t) determined solely on the basis of the acceleration.
- Lines L 181 and L 191 represent the results of a case in which there is no delay
- lines L 182 and L 192 represent the results of a case in which there is a delay (a case in which compensation is not made).
- lines L 183 and L 193 represent the results of a case in which compensation has been made to set the delay to be substantially zero.
- FIG. 28 illustrates the result of a case in which the delay in a high-velocity region has been compensated for. According to the evaluation of the subjects in this case, the delay was unnoticed when the velocity was high but noticed when the velocity was low.
- FIG. 29 illustrates the result of a case in which the delay in a low-velocity region has been compensated for. According to the evaluation of the subjects in this case, the delay was unnoticed when the velocity was low but noticed when the velocity was high.
- FIG. 30 illustrates changes in velocity occurring in the operation of moving the pointer 22 in a predetermined direction and thereafter moving the pointer 22 back to the opposite direction by using the gain G(t) determined solely on the basis of the acceleration.
- a line L 201 represents the result of a case in which there is no delay
- a line L 202 represents the result of a case in which there is a delay (a case in which compensation is not made).
- a line L 203 represents the result of a case in which compensation has been made to set the delay to be substantially zero.
- the subjects felt that the delay was substantially compensated for, but had an uncomfortable feeling about the sensitivity of the pointer 22 at the start and end of its movement. That is, when the user operates the input device 31, excessive acceleration of the pointer 22 starts abruptly after a momentary delay. Also in the stopping operation, the pointer 22 rapidly decelerates and stops. As a result, the movement feels unnatural to the user.
- In the above description, the angular velocity sensor 58 and the acceleration sensor 59 are used as the sensor.
- an image sensor can also be used.
- FIG. 31 illustrates a configuration of this case.
- a leading end of the input device 31 is attached with an image sensor 401 , such as a CMOS (Complementary Metal Oxide Semiconductor).
- the user operates the input device 31 to have the image sensor 401 pick up the image in the direction in which the image sensor 401 is oriented.
- the velocity (Vx, Vy) is calculated in accordance with the following formula.
- Vx=(X1−X2)/Δt
- Vy=(Y1−Y2)/Δt (8)
- the compensation process can be performed in a similar manner as in the above-described case.
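- A sketch of formula (8), assuming (X1, Y1) and (X2, Y2) are the coordinates of the same tracked point in two images picked up Δt apart; which frame is taken first only fixes the sign convention.

```python
def image_sensor_velocity(p1, p2, dt):
    """Formula (8): Vx = (X1 - X2)/Δt, Vy = (Y1 - Y2)/Δt."""
    (x1, y1), (x2, y2) = p1, p2
    return ((x1 - x2) / dt, (y1 - y2) / dt)
```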
- FIG. 32 illustrates an embodiment of this case.
- the input device 31 includes a sensor 501 and an operation unit 502 .
- the sensor 501 includes a geomagnetic sensor 511 and an acceleration sensor 512 .
- the user moves the input device 31 in an arbitrary direction.
- the geomagnetic sensor 511 detects the absolute angle (direction) of the operated input device 31 .
- the operation unit 502 divides the difference between two temporally adjacent angles by the time therebetween to calculate the angular velocity.
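- A sketch of the angle differencing performed by the operation unit 502; wrapping the difference into (−π, π] is an added assumption so that a heading crossing the 0/2π boundary does not produce a spurious spike.

```python
import math

def angular_velocity_from_headings(theta_prev, theta_now, dt):
    """Angular velocity = (difference of two temporally adjacent absolute
    angles, in radians) / (time between them)."""
    diff = (theta_now - theta_prev + math.pi) % (2.0 * math.pi) - math.pi
    return diff / dt
```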
- the compensation process can be performed in a similar manner as in the above-described case.
- the operation unit 502 calculates a pitch angle and a roll angle. Then, on the basis of the calculated angles, the operation unit 502 compensates for the slope to correct the position coordinates to more accurate values. In this process, a commonly used slope compensation algorithm can be used.
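- The patent only says that a commonly used slope compensation algorithm may be used; the following is one generic possibility (not the patent's specific algorithm): estimate the roll angle from the gravity vector reported by the acceleration sensor 512 and rotate the detected coordinates back by that angle. The sign convention depends on how the sensor is mounted.

```python
import math

def tilt_compensate(x, y, accel):
    """Rotate (x, y) back by the roll angle estimated from gravity (ax, ay, az)."""
    ax, ay, az = accel
    roll = math.atan2(ay, az)                 # rotation about the device's X-axis
    c, s = math.cos(roll), math.sin(roll)
    return (x * c + y * s, -x * s + y * c)    # rotate by -roll
```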
- A variable resistor can also be used as the sensor.
- FIG. 33 illustrates an embodiment of this case.
- the input device 31 includes a variable resistor 600 as the sensor.
- slide portions 604 and 605 are guided by rod-like resistors 612 and 613 , respectively, which are disposed in the horizontal direction in the drawing (the X-axis direction). Thereby, the slide portions 604 and 605 are slidable in the horizontal direction.
- slide portions 608 and 609 are guided by rod-like resistors 610 and 611 , respectively, which are disposed in the vertical direction in the drawing (the Y-axis direction). Thereby, the slide portions 608 and 609 are slidable in the vertical direction in the drawing (the Y-axis direction).
- a bar 602 attached with the slide portions 604 and 605 at end portions thereof is formed with a groove 603 .
- a bar 606 attached with the slide portions 608 and 609 at end portions thereof is formed with a groove 607 .
- an operation unit 601 is slidably disposed in the grooves 603 and 607 .
- the resistance value in the X-direction and the resistance value in the Y-direction at the position of the operation unit 601 are changed.
- These resistance values represent the coordinates in the X-direction and the Y-direction in the frame 614 . Therefore, in a similar manner as illustrated in the formula (8), the difference between two coordinate points is divided by the time. Thereby, the velocity can be obtained.
- the compensation process can be performed in a similar manner as in the above-described case.
- the mass of the operation unit 601 may be increased such that, when the entire input device 31 is tilted in a predetermined direction, the operation unit 601 is moved within the frame 614 .
- the operation unit 601 may be operated by the user with his finger.
- FIG. 34 illustrates a configuration of an input system according to another embodiment of the present invention.
- In the input system 701 of FIG. 34, the operation by the user using a gesture with his hand or finger is detected, and thereby a command is input.
- a television receiver 711 of the input system 701 includes a demodulation unit 721 , a video RAM 722 , an image processing unit 723 , an MPU 724 , and an output unit 725 . Further, an upper portion of the television receiver 711 is attached with an image sensor 726 .
- the demodulation unit 721 demodulates a television broadcasting signal received via a not-illustrated antenna, and outputs a video signal and an audio signal to the video RAM 722 and the output unit 725 , respectively.
- the video RAM 722 stores the video signal supplied from the demodulation unit 721 , and stores the image picked up by the image sensor 726 .
- the image processing unit 723 detects the gesture with a hand or finger (which corresponds to the operation unit of the input device 31 , and thus will be hereinafter referred to also as the operation unit), and assigns a command to the gesture.
- This function can be realized by commonly used techniques, such as the techniques of Japanese Unexamined Patent Application Publication Nos. 59-132079 and 10-207618, for example.
- the image processing unit 723 detects the gesture of the operation unit of the user picked up by the image sensor 726 . In this embodiment, therefore, a part of the configuration of the television receiver 711 functioning as an electronic device constitutes an input device.
- the image processing unit 723 outputs the coordinates of the pointer 22 or the like to the MPU 724 .
- the MPU 724 controls the display position of the pointer 22 displayed on the output unit 725 .
- the image processing unit 723 and the MPU 724 can be integrally configured.
- the output unit 725 includes an image display unit and an audio output unit.
- the image sensor 726 functioning as a detection unit picks up the image of the operation unit, which is at least a part of the body of the user performing a gesture motion while viewing the image displayed on the image display unit of the output unit 725 .
- FIG. 35 illustrates a functional configuration of the image processing unit 723 which operates in accordance with a program stored in an internal memory thereof.
- the image processing unit 723 includes a displacement acquisition unit 821 , a storage unit 822 , a velocity acquisition unit 823 , a storage unit 824 , an acceleration acquisition unit 825 , a compensation processing unit 826 , and an output unit 827 .
- the displacement acquisition unit 821 acquires the displacement of the operation unit of the user stored in the video RAM 722 .
- the storage unit 822 stores the displacement acquired by the displacement acquisition unit 821 .
- the velocity acquisition unit 823 calculates the difference between the displacement at one step and the displacement at the next step stored in the storage unit 822 , to thereby calculate a velocity signal. That is, the displacement is differentiated to acquire the velocity as the operation signal.
- the storage unit 824 stores the velocity acquired by the velocity acquisition unit 823 .
- the acceleration acquisition unit 825 calculates the difference between the velocity signal at one step and the velocity signal at the next step stored in the storage unit 824 , to thereby calculate an acceleration signal. That is, the velocity as the operation signal is differentiated to acquire the acceleration as the differential value of the velocity.
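- The two differencing stages (units 823 and 825) can be sketched as successive finite differences over the tracked coordinates; a fixed frame interval dt is assumed.

```python
def derivative_chain(displacements, dt):
    """Velocities from successive displacements, accelerations from successive
    velocities, mirroring units 823 and 825."""
    velocities = [((x2 - x1) / dt, (y2 - y1) / dt)
                  for (x1, y1), (x2, y2) in zip(displacements, displacements[1:])]
    accelerations = [((vx2 - vx1) / dt, (vy2 - vy1) / dt)
                     for (vx1, vy1), (vx2, vy2) in zip(velocities, velocities[1:])]
    return velocities, accelerations
```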
- the velocity acquisition unit 823 and the acceleration acquisition unit 825 constitute a first acquisition unit.
- the compensation processing unit 826 generates a gain G(t) defined by the acceleration as the differential value acquired by the acceleration acquisition unit 825 .
- the compensation processing unit 826 generates a gain G(t) defined by the velocity as the operation signal acquired by the velocity acquisition unit 823 and the acceleration as the differential value acquired by the acceleration acquisition unit 825 . Then, the compensation processing unit 826 multiplies the velocity as the operation signal by the generated gain G(t). That is, the velocity as the operation signal is corrected.
- the compensation processing unit 826 includes a function unit 841 and a compensation unit 842 .
- the function unit 841 includes a gain acquisition unit 831 , a correction unit 832 , and a limitation unit 833 .
- the compensation unit 842 includes a multiplication unit 834 .
- the gain acquisition unit 831 acquires the gain G(t) defined by the acceleration as the differential value acquired by the acceleration acquisition unit 825 .
- the correction unit 832 corrects the gain G(t) as appropriate.
- the limitation unit 833 limits the uncorrected gain G(t) or the corrected gain G(t) not to exceed a threshold value.
- the multiplication unit 834 multiplies the velocity as the operation signal acquired by the velocity acquisition unit 823 by the gain G(t), which is a function limited by the limitation unit 833 , to thereby correct the velocity as the operation signal and compensate for the delay.
- the output unit 827 calculates the coordinates of the pointer 22 , and outputs the calculated coordinates.
- pointer display processing of the television receiver 711 will be described.
- This processing is performed when the user operates the operation unit in an arbitrary predetermined direction, i.e., when the entire operation unit is moved in an arbitrary direction in a three-dimensional free space to move the pointer 22 displayed on the output unit 725 of the television receiver 711 in a predetermined direction.
- This processing is performed to generate, in the television receiver 711 which practically includes therein (i.e., is integrated with) the input device, the operation signal for controlling the display on the screen of the television receiver 711 .
- the displacement acquisition unit 821 acquires a displacement (x(t), y(t)). Specifically, the image of the operation unit of the user is picked up by the image sensor 726 and stored in the video RAM 722 . The displacement acquisition unit 821 acquires the coordinates of the operation unit from this image.
- the storage unit 822 buffers the acquired displacement (x(t), y(t)).
- the velocity acquisition unit 823 acquires a velocity (x′(t), y′(t)). Specifically, the velocity acquisition unit 823 divides the difference between the displacement (x(t), y(t)) of this time and the displacement (x(t−1), y(t−1)) stored the last time in the storage unit 822 by the time therebetween, to thereby calculate the velocity (x′(t), y′(t)) as the operation signal. That is, the differential value is calculated.
- the storage unit 824 buffers the acquired velocity (x′(t), y′(t)).
- the acceleration acquisition unit 825 acquires an acceleration (x′′(t), y′′(t)). Specifically, the acceleration acquisition unit 825 divides the difference between the velocity (x′(t), y′(t)) of this time and the velocity (x′(t−1), y′(t−1)) stored the last time in the storage unit 824 by the time therebetween, to thereby calculate the acceleration (x′′(t), y′′(t)) as the differential value. That is, the differential value of the operation signal is acquired.
- At Steps S 106 to S 109, on the basis of the velocity acquired as the operation signal and the acceleration as the differential value of the velocity, the compensation processing unit 826 performs an operation for compensating for the delay in response of the operation signal.
- the gain acquisition unit 831 acquires the gain G(t) defined by the acceleration (x′′(t), y′′(t)) acquired at Step S 105 .
- This gain G(t) as a function is multiplied by the velocity as the operation signal at Step S 109 described later. Therefore, a gain G(t) value of 1 serves as a reference value.
- When the gain G(t) is larger than the reference value, the velocity is corrected to be increased.
- When the gain G(t) is smaller than the reference value, the velocity is corrected to be reduced.
- When the acceleration is positive, the gain G(t) is a value equal to or larger than the reference value (equal to or larger than the value of 1).
- When the acceleration is negative, the gain G(t) is a value smaller than the reference value (smaller than the value of 1).
- the larger the absolute value of the acceleration is, the larger the difference between the absolute value of the gain G(t) and the reference value (the value of 1) is.
- the gain G(t) may be acquired by performing an operation or by reading the gain G(t) from a previously mapped table. Further, the gain G(t) may be obtained separately for the X-direction and the Y-direction. Alternatively, the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single gain G(t).
- the correction unit 832 corrects the gain G(t) on the basis of the velocity (x′(t), y′(t)) acquired by the velocity acquisition unit 823 . Specifically, the gain G(t) is corrected such that the larger the velocity (x′(t), y′(t)) is, the closer to the reference value (the value of 1) the gain G(t) is.
- the corrected value may be obtained separately for the X-direction and the Y-direction, or the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single corrected value.
- This correction process can be omitted. If the correction process is not omitted, the gain G(t) is a function defined by the velocity and the acceleration. If the correction process is omitted, the gain G(t) is a function defined by the acceleration.
- the limitation unit 833 limits the gain G(t) not to exceed the threshold value. That is, the corrected gain G(t) is limited to be within the range of the predetermined threshold value.
- the threshold value is set to be the maximum or minimum value, and the absolute value of the gain G(t) is limited not to exceed the threshold value. If the operation unit of the user is vibrated, therefore, a situation is suppressed in which the absolute value of the gain G(t) is too small to compensate for the delay or too large to prevent oscillation.
- Steps S 106 to S 108 can be performed by a single reading process, if the gain G(t) has previously been mapped in the gain acquisition unit 831 to satisfy the conditions of the respective steps.
- the multiplication unit 834 constituting the compensation unit 842 multiplies the velocity (x′(t), y′(t)) as the operation signal by the gain G(t). That is, the velocity is multiplied by the gain G(t) as a coefficient, and thereby the corrected velocity (x′1(t), y′1(t)) is generated. For example, if the gain G(t) is used as the representative value integrating the value in the X-axis direction and the value in the Y-axis direction, the corrected velocity (x′1(t), y′1(t)) is calculated with the following formula: x′1(t)=x′(t)×G(t), y′1(t)=y′(t)×G(t).
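- Putting Steps S 106 to S 109 together, a minimal sketch with assumed constants (the linear gain mapping, the pull toward 1 at higher speeds, and the clamp limits are all illustrative choices, not values from the patent):

```python
def compensate_velocity(velocity, accel, k=0.05, pull=0.01, g_lo=0.5, g_hi=1.5):
    """Gain from acceleration (S106), corrected toward 1 as speed rises (S107),
    clamped (S108), then applied to the velocity (S109)."""
    vx, vy = velocity
    ax, ay = accel
    a = ax if abs(ax) >= abs(ay) else ay          # representative value of the two axes
    g = 1.0 + k * a                               # S106: larger |a|, farther from 1
    speed = max(abs(vx), abs(vy))
    g = 1.0 + (g - 1.0) / (1.0 + pull * speed)    # S107: closer to 1 at high speed
    g = min(max(g, g_lo), g_hi)                   # S108: limitation unit 833
    return (vx * g, vy * g)                       # S109: corrected (x'1(t), y'1(t))
```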
- the output unit 827 calculates the coordinates on the basis of the corrected velocity (x′1(t), y′1(t)), and outputs the calculated coordinates.
- the output unit 827 adds the velocity to the immediately preceding position coordinates of the pointer 22 to calculate new position coordinates. That is, the displacement per unit time in the X-direction and the Y-direction of the operation unit of the user is converted into the displacement amount per unit time in the X-direction and the Y-direction of the pointer 22 displayed on the image display unit of the output unit 725 .
- the pointer movement amount is calculated such that the larger the gain G(t) is, i.e., the larger the correction amount of the velocity is, the larger the compensation amount of the delay in response is. That is, as the gain G(t) is increased, the delay between the operation of the operation unit and the movement of the pointer 22 is reduced. If the value of the gain G(t) is further increased, the movement of the pointer 22 is more advanced in phase than the operation of the operation unit.
- the processes performed here include a process of removing a hand-shake component of the operation unit through a low-pass filter, and a process of, when the operation velocity is low (a low velocity and a low acceleration), setting an extremely low moving velocity of the pointer 22 to make it easy to stop the pointer 22 on the icon 21 .
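- The hand-shake removal is only named, not specified; a simple exponential low-pass filter is one common choice, sketched below with assumed constants (alpha, the slow-speed threshold, and the slow-down factor are not from the patent).

```python
class HandShakeFilter:
    """Exponential low-pass over the velocity plus a slow-down near standstill,
    as a stand-in for the hand-shake and easy-stop processing mentioned above."""

    def __init__(self, alpha=0.3, slow_speed=2.0, slow_scale=0.3):
        self.alpha = alpha
        self.slow_speed = slow_speed
        self.slow_scale = slow_scale
        self.state = (0.0, 0.0)

    def step(self, velocity):
        sx, sy = self.state
        vx, vy = velocity
        sx = self.alpha * vx + (1.0 - self.alpha) * sx   # smooth each axis
        sy = self.alpha * vy + (1.0 - self.alpha) * sy
        self.state = (sx, sy)
        if max(abs(sx), abs(sy)) < self.slow_speed:      # nearly stationary hand:
            return (sx * self.slow_scale, sy * self.slow_scale)
        return (sx, sy)
```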
- the gain G(t) is determined in accordance with the velocity as the operation signal and the acceleration as the differential value of the velocity.
- the gain G(t) can also be determined in accordance with the displacement as the operation signal and the velocity as the differential value of the displacement. With reference to FIGS. 37 and 38 , an embodiment of this case will be described.
- FIG. 37 is a block diagram illustrating a functional configuration of the image processing unit 723 in this case.
- the storage unit 824 and the acceleration acquisition unit 825 of FIG. 35 are omitted, and the output from the velocity acquisition unit 823 is directly supplied to the gain acquisition unit 831 . Further, the correction unit 832 and the multiplication unit 834 are supplied with the displacement acquired by the displacement acquisition unit 821 in place of the velocity acquired by the velocity acquisition unit 823 .
- the other parts of the configuration of the image processing unit 723 in FIG. 37 are similar to the corresponding parts in FIG. 35 . Description thereof is redundant, and thus will be omitted.
- the displacement acquisition unit 821 and the velocity acquisition unit 823 constitute a first acquisition unit.
- pointer display processing of the television receiver 711 will be described. This processing is performed when the user operates the operation unit in an arbitrary predetermined direction, i.e., when the entire operation unit is moved in an arbitrary direction in a three-dimensional free space to move the pointer 22 displayed on the output unit 725 of the television receiver 711 in a predetermined direction. This processing is also performed to generate, in the television receiver 711 which practically includes therein (i.e., is integrated with) the input device, the operation signal for controlling the display on the screen of the television receiver 711 .
- the displacement acquisition unit 821 acquires a displacement (x(t), y(t)). Specifically, the image of the operation unit of the user is picked up by the image sensor 726 and stored in the video RAM 722 . The displacement acquisition unit 821 acquires the coordinates of the operation unit from this image.
- the storage unit 822 buffers the acquired displacement (x(t), y(t)).
- the velocity acquisition unit 823 acquires a velocity (x′(t), y′(t)). Specifically, the velocity acquisition unit 823 divides the difference between the displacement (x(t), y(t)) of this time and the displacement (x(t−1), y(t−1)) stored the last time in the storage unit 822 by the time therebetween, to thereby calculate the velocity (x′(t), y′(t)). That is, the velocity (x′(t), y′(t)) as the differential value of the displacement (x(t), y(t)) as the operation signal is acquired.
- the compensation processing unit 826 performs an operation for compensating for the delay in response of the operation signal on the basis of the acquired displacement and velocity.
- the gain acquisition unit 831 acquires the gain G(t) according to the velocity (x′(t), y′(t)) acquired at Step S 153 .
- This gain G(t) as a function is multiplied by the displacement at Step S 157 described later. Therefore, a gain G(t) value of 1 serves as a reference value.
- When the gain G(t) is larger than the reference value, the displacement as the operation signal is corrected to be increased.
- When the gain G(t) is smaller than the reference value, the displacement is corrected to be reduced.
- When the velocity is positive (e.g., when the operation unit moves in the left direction (or the upper direction)), the gain G(t) is a value equal to or larger than the reference value (equal to or larger than the value of 1). When the velocity is negative (e.g., when the operation unit moves in the right direction (or the lower direction)), the gain G(t) is a value smaller than the reference value (smaller than the value of 1). Further, the larger the absolute value of the velocity is, the larger the difference between the absolute value of the gain G(t) and the reference value (the value of 1) is.
- the gain G(t) may be acquired by performing an operation or by reading the gain G(t) from a previously mapped table. Further, the gain G(t) may be obtained separately for the X-direction and the Y-direction. Alternatively, the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single gain G(t).
- the correction unit 832 corrects the gain G(t) on the basis of the displacement (x(t), y(t)) as the operation signal acquired by the displacement acquisition unit 821 . Specifically, the gain G(t) is corrected such that the larger the displacement (x(t), y(t)) is, the closer to the reference value (the value of 1) the gain G(t) is.
- the corrected value may be obtained separately for the X-direction and the Y-direction, or the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single corrected value.
- This correction process can be omitted. If the correction process is not omitted, the gain G(t) is a function defined by the displacement and the velocity. If the correction process is omitted, the gain G(t) is a function defined by the velocity.
- the limitation unit 833 limits the gain G(t) not to exceed the threshold value. That is, the corrected gain G(t) is limited to be within the range of the predetermined threshold value.
- the threshold value is set to be the maximum or minimum value, and the absolute value of the gain G(t) is limited not to exceed the threshold value. If the operation unit of the user is vibrated, therefore, a situation is suppressed in which the absolute value of the gain G(t) is too small to compensate for the delay or too large to prevent oscillation.
- Steps S 154 to S 156 can be performed by a single reading process, if the gain G(t) has previously been mapped in the gain acquisition unit 831 to satisfy the conditions of the respective steps.
- the multiplication unit 834 multiplies the displacement (x(t), y(t)) by the gain G(t). That is, the displacement is multiplied by the gain G(t) as a coefficient, and thereby the corrected displacement (x1(t), y1(t)) is generated. For example, if the gain G(t) is used as the representative value integrating the value in the X-axis direction and the value in the Y-axis direction, the corrected displacement (x1(t), y1(t)) is calculated with the following formula: x1(t)=x(t)×G(t), y1(t)=y(t)×G(t).
- the output unit 827 outputs the corrected displacement (x1(t), y1(t)). That is, the larger the gain G(t) is, i.e., the larger the correction amount of the displacement is, the larger the compensation amount of the delay in response is. That is, as the gain G(t) is increased, the delay between the operation of the operation unit and the movement of the pointer 22 is reduced. If the value of the gain G(t) is further increased, the movement of the pointer 22 is more advanced in phase than the operation of the operation unit.
- the processes performed here include a process of removing a hand-shake component of the operation unit through a low-pass filter, and a process of, when the operation velocity is low (a low velocity and a low acceleration), setting an extremely low moving velocity of the pointer 22 to make it easy to stop the pointer 22 on the icon 21 .
- the gain G(t) is determined in accordance with the displacement and the velocity.
- FIGS. 39A to 39C are diagrams illustrating the changes in displacement.
- the vertical axis represents the coordinate (pixel) as the displacement, and the horizontal axis represents the time.
- FIG. 39B illustrates the changes in displacement occurring in a case in which the delay of the velocity as the operation signal has been compensated for with the use of the gain G(t) defined on the basis of the velocity and the acceleration, as in the embodiment of FIG. 36 .
- FIG. 39C illustrates the changes in displacement occurring in a case in which the delay of the displacement as the operation signal has been compensated for with the use of the gain G(t) defined on the basis of the displacement and the velocity, as in the embodiment of FIG. 38 .
- FIG. 39A illustrates the changes in displacement occurring in a case in which the compensation for the delay of the operation signal as in the embodiments of FIGS. 36 and 38 is not performed.
- a line L 301 represents the displacement of the operation unit
- a line L 302 represents the displacement of the pointer 22 occurring in a case in which the display of the pointer 22 is controlled on the basis of the detection result of the displacement of the operation unit.
- the delay of the operation signal with respect to the operation is not compensated for. Therefore, the line L 302 is delayed in phase from the line L 301 .
- a line L 311 represents the displacement of the operation unit, similarly to the line L 301 of FIG. 39A .
- a line L 312 represents the change in displacement occurring in the case in which the delay of the velocity as the operation signal has been compensated for with the use of the gain G(t) defined on the basis of the detection result of the velocity and the acceleration of the operation signal, as in the embodiment of FIG. 36 .
- the delay of the operation signal with respect to the operation has been compensated for. Therefore, the line L 312 is hardly delayed in phase with respect to the line L 311 , and thus is substantially the same in phase as the line L 311 .
- a line L 321 represents the displacement of the operation unit, similarly to the line L 301 of FIG. 39A .
- a line L 322 represents the change in displacement occurring in the case in which the delay of the displacement as the operation signal has been compensated for with the use of the gain G(t) defined on the basis of the detection result of the displacement and the velocity of the operation signal, as in the embodiment of FIG. 38 .
- the delay of the operation signal with respect to the operation has been compensated for. Therefore, the line L 322 is hardly delayed in phase with respect to the line L 321 , and thus is substantially the same in phase as the line L 321 .
- In the above description, the electronic device operated by the input device 31 is the television receiver 10.
- the present invention is also applicable to the control of a personal computer and other electronic devices.
- the input device 31 can be configured separately from or integrally with the mobile electronic device. If the input device 31 is integrated with the mobile electronic device, the entire mobile electronic device is operated in a predetermined direction to perform an input operation.
- the series of processes described above can be performed both by hardware and software.
- a program forming the software is installed from a program recording medium on a computer incorporated in special hardware or a general-purpose personal computer, for example, which can perform a variety of functions by installing a variety of programs thereon.
- The steps describing a program include not only processes performed chronologically in the described order but also processes that are not necessarily performed chronologically but are performed concurrently or individually.
- a system refers to the entirety of a device formed by a plurality of devices.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Position Input By Displaying (AREA)
- Details Of Television Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An input device includes a detection unit, a first acquisition unit, a second acquisition unit, and a compensation unit. The detection unit is configured to detect an operation by a user for controlling an electronic device and output an operation signal corresponding to the operation. The first acquisition unit is configured to acquire the detected operation signal and a differential value of the operation signal. The second acquisition unit is configured to acquire a function defined by the differential value to compensate for a delay in response of the operation signal with respect to the operation by the user. The compensation unit is configured to compensate the operation signal with the acquired function.
Description
- The present application claims the benefit under 35 U.S.C. §120 as a continuation application of U.S. patent application Ser. No. 12/606,464, filed on Oct. 27, 2009, under Attorney Docket No. 51459.70656US00 and entitled “INPUT DEVICE AND METHOD AND PROGRAM”, which claims priority to Japanese Patent Application No. JP2008-280764, filed on Oct. 31, 2008, the entire contents of both of which are incorporated herein by reference.
- 1. Field of the Invention
- The present invention relates to an input device and method and a program, particularly to an input device and method and a program capable of improving the operational feeling in an input operation.
- 2. Description of the Related Art
- With the recent start of terrestrial digital television broadcasting, an EPG (Electronic Program Guide) can be displayed on a television receiver. In the EPG, respective programs are arranged and displayed in a matrix. A user operates a remote controller to move a pointer to an arbitrary position and select a given program.
- Generally, a remote controller supplied with a television receiver is capable of moving the pointer only in the vertical or horizontal directions. That is, the pointer is not directly moved from a given display position to an intended position located diagonally therefrom.
- In view of this, a remote controller has been proposed which detects an operation performed by a user in an arbitrary direction in a three-dimensional free space and moves the pointer in the direction of the operation. According to this type of remote controller, however, the operation by the user and the actual movement of the pointer do not match in timing. As a result, the user has uncomfortable operational feeling in many cases.
- Japanese Patent No. 3217945 proposes not a remote controller enabling the operation in an arbitrary direction in a three-dimensional free space, but the improvement of the operational feeling of a controller provided at the center of a keyboard of a personal computer to move the pointer in accordance with the operation of a pressure-sensitive device called an isometric joystick.
- As illustrated in FIG. 1, the invention of the above patent publication realizes a transfer function capable of providing the output as indicated by the broken line with respect to the input indicated by the solid line, to thereby solve the slow motion of the pointer at the start of the movement thereof, which is caused mainly by the dead zone of the above-described device (i.e., the dead zone in which low pressure is ignored), and the overshoot occurring when the movement is stopped.
- In a so-called consumer-use AV (Audio Visual) device, such as a television receiver, the clock of an MPU (Micro Processing Unit) is slower than in a personal computer or the like. As a result, in the movement of a pointer on a screen, for example, a relatively long delay occurs between the reception of a movement signal and the movement of the pointer on the screen. In this case, a user has an uncomfortable feeling about the delay, which occurs not only at the start or stop of the movement of the pointer but also in the acceleration or deceleration phase during the movement.
- Further, in the case of using the type of remote controller which is operated in a three-dimensional free space by a user, a time delay between the operation and the output of an operation signal corresponding to the operation additionally occurs. Further, the hand operating the remote controller is freely movable. Therefore, the user more easily recognizes the delay in the movement of the pointer in response to the operation than in the case of using a joystick or the like. As a result, the uncomfortable feeling felt by the user is more noticeable.
- According to the technique of the above patent publication, however, it is difficult in the above-described situation to promptly start the movement of the pointer, move the pointer as desired, or promptly stop the movement of the pointer without causing uncomfortable feeling to the user during the operation.
- The present invention has been made in light of the above-described circumstances, and it is desirable to improve the operational feeling in an input operation. Particularly, in a system having a relatively long delay, it is desirable to improve the operational feeling in an input operation.
- According to an embodiment of the present invention, an input device includes a detection unit, a first acquisition unit, a second acquisition unit, and a compensation unit. The detection unit is configured to detect an operation by a user for controlling an electronic device and output an operation signal corresponding to the operation. The first acquisition unit is configured to acquire the detected operation signal and a differential value of the operation signal. The second acquisition unit is configured to acquire a function defined by the differential value to compensate for a delay in response of the operation signal with respect to the operation by the user. The compensation unit is configured to compensate the operation signal with the acquired function.
- According to an embodiment of the present invention, a detection unit detects an operation by a user for controlling an electronic device and outputs an operation signal corresponding to the operation, a first acquisition unit acquires the detected operation signal and a differential value of the operation signal, a second acquisition unit acquires a function defined by the differential value to compensate for a delay in response of the operation signal with respect to the operation by the user, and a compensation unit compensates the operation signal with the acquired function.
- As described above, according to an embodiment of the present invention, the operational feeling in an input operation can be improved. Particularly, in a system having a relatively long delay, the operational feeling in an input operation can be improved.
- FIG. 1 is a diagram illustrating a characteristic of a transfer function of an existing input device;
- FIG. 2 is a block diagram illustrating a configuration of an input system according to an embodiment of the present invention;
- FIG. 3 is a perspective view illustrating a configuration of the exterior of an input device;
- FIG. 4 is a diagram illustrating a configuration of the interior of the input device;
- FIG. 5 is a perspective view illustrating a configuration of a sensor substrate;
- FIG. 6 is a diagram illustrating a use state of the input device;
- FIG. 7 is a block diagram illustrating an electrical configuration of the interior of the input device;
- FIG. 8 is a block diagram illustrating a functional configuration of an MPU;
- FIG. 9 is a flowchart explaining pointer display processing of the input device;
- FIG. 10 is a diagram explaining characteristics of a gain;
- FIG. 11 is a diagram illustrating changes in velocity;
- FIG. 12 is a diagram illustrating changes in displacement;
- FIGS. 13A and 13B are diagrams illustrating changes in characteristics occurring when the input device is vibrated;
- FIGS. 14A and 14B are diagrams illustrating changes in characteristics occurring when the input device is vibrated;
- FIG. 15 is a diagram illustrating changes in velocity;
- FIG. 16 is a diagram illustrating changes in displacement;
- FIG. 17 is a diagram illustrating changes in displacement;
- FIG. 18 is a diagram illustrating changes in velocity;
- FIG. 19 is a diagram illustrating changes in displacement;
- FIG. 20 is a diagram illustrating changes in velocity;
- FIG. 21 is a diagram illustrating changes in displacement;
- FIG. 22 is a diagram illustrating changes in velocity;
- FIG. 23 is a flowchart explaining timer processing of a television receiver;
- FIG. 24 is a flowchart explaining pointer display processing of the input device;
- FIG. 25 is a flowchart explaining pointer display processing of the input device;
- FIG. 26 is a diagram illustrating changes in velocity;
- FIG. 27 is a diagram illustrating changes in displacement;
- FIG. 28 is a diagram illustrating changes in displacement;
- FIG. 29 is a diagram illustrating changes in displacement;
- FIG. 30 is a diagram illustrating changes in velocity;
- FIG. 31 is a diagram illustrating a configuration of another embodiment of the input device;
- FIG. 32 is a diagram illustrating a configuration of another embodiment of the input device;
- FIG. 33 is a diagram illustrating a configuration of another embodiment of the input device;
- FIG. 34 is a block diagram illustrating a configuration of an input system according to another embodiment of the present invention;
- FIG. 35 is a block diagram illustrating a functional configuration of an image processing unit;
- FIG. 36 is a flowchart explaining pointer display processing of a television receiver;
- FIG. 37 is a block diagram illustrating another functional configuration of the image processing unit;
- FIG. 38 is a flowchart explaining pointer display processing of the television receiver; and
- FIGS. 39A to 39C are diagrams illustrating changes in displacement.
- Preferred embodiments (hereinafter referred to as embodiments) for implementing the invention will be described below. The description will be made in the following order: 1. First Embodiment (Configuration of System), 2. First Embodiment (Configuration of Input Device), 3. First Embodiment (Electrical Configuration of Input Device), 4. First Embodiment (Functional Configuration of MPU in Input Device), 5. First Embodiment (Operation of Input Device), 6. First Embodiment (Characteristics of Input Device), 7. Second Embodiment (Operation of Television Receiver), 8. Second Embodiment (Operation of Input Device), 9. Third Embodiment (Operation of Input Device), 10. Third Embodiment (Characteristics of Input Device), 11. Fourth Embodiment (Configuration of Input Device), 12. Fifth Embodiment (Configuration of Input Device), 13. Sixth Embodiment (Configuration of Input Device), 14. Seventh Embodiment (Configuration of Input System), 15. Seventh Embodiment (Functional Configuration of Image Processing Unit), 16. Seventh Embodiment (Operation of Television Receiver), 17. Eighth Embodiment (Functional Configuration of Image Processing Unit), 18. Eighth Embodiment (Operation of Television Receiver), 19. Changes in Displacement, and 20. Modified Examples.
- FIG. 2 illustrates a configuration of an input system according to an embodiment of the present invention.
- This input system 1 is configured to include a television receiver 10 functioning as an electronic device and an input device 31 functioning as a pointing device or remote controller for remote-controlling the television receiver 10.
- The television receiver 10 is configured to include an antenna 11, a communication unit 12, an MPU (Micro Processing Unit) 13, a demodulation unit 14, a video RAM (Random Access Memory) 15, and an output unit 16.
- The antenna 11 receives radio waves from the input device 31. The communication unit 12 demodulates the radio waves received via the antenna 11, and outputs the demodulated radio waves to the MPU 13. Further, the communication unit 12 modulates a signal received from the MPU 13, and transmits the modulated signal to the input device 31 via the antenna 11. The MPU 13 controls the respective units on the basis of an instruction received from the input device 31.
- The demodulation unit 14 demodulates a television broadcasting signal received via a not-illustrated antenna, and outputs a video signal and an audio signal to the video RAM 15 and the output unit 16, respectively. The video RAM 15 combines an image based on the video signal supplied from the demodulation unit 14 with an image of on-screen data such as a pointer and an icon received from the MPU 13, and outputs the combined image to an image display unit of the output unit 16. The output unit 16 displays the image on the image display unit, and outputs sound from an audio output unit formed by a speaker and so forth.
- In the display example of FIG. 2, the image display unit of the output unit 16 displays an icon 21 and a pointer 22. The input device 31 is operated by a user to change the display position of the icon 21 or the pointer 22 and to remote-control the television receiver 10.
- Configuration of Input Device:
FIG. 3 illustrates a configuration of the exterior of theinput device 31. Theinput device 31 includes abody 32 functioning as an operation unit operated by the user to generate an operation signal for controlling an electronic device. Thebody 32 is provided withbuttons jog dial 35 on the right surface thereof. -
FIG. 4 illustrates a configuration of the interior of thebody 32 of theinput device 31. In the interior of theinput device 31, amain substrate 51, asensor substrate 57, andbatteries 56 are stored. Themain substrate 51 is attached with anMPU 52, acrystal oscillator 53, acommunication unit 54, and anantenna 55. - As illustrated on an enlarged scale in
FIG. 5 , thesensor substrate 57 is attached with anangular velocity sensor 58 and anacceleration sensor 59, which are manufactured by the technique of MEMS (Micro Electro Mechanical Systems). Thesensor substrate 57 is set to be parallel to the X-axis and the Y-axis, which are two mutually perpendicular sensitivity axes of theangular velocity sensor 58 and theacceleration sensor 59. - If the
entire body 32 is operated by the user in, for example, an arbitrary direction D1 or direction D2 illustrated inFIG. 6 with the head of the body 32 (an end portion in the left direction inFIG. 6 described later) directed toward thetelevision receiver 10 typically located ahead thereof (located in the left direction, although not illustrated inFIG. 6 ), theangular velocity sensor 58 formed by a biaxial oscillating angular velocity sensor detects the respective angular velocities of a pitch angle and a yaw angle rotating around apitch rotation axis 71 and ayaw rotation axis 72 parallel to the X-axis and the Y-axis, respectively. Theacceleration sensor 59 is a biaxial acceleration sensor which detects the acceleration in the directions of the X-axis and the Y-axis. Theacceleration sensor 59 is capable of detecting the gravitational acceleration as the vector quantity by using thesensor substrate 57 as the sensitivity plane. A triaxial acceleration sensor using three axes of the X-axis, the Y-axis, and the Z-axis as the sensitivity axes can also be used as theacceleration sensor 59. - The two
batteries 56 supply necessary electric power to the respective units. -
FIG. 6 illustrates a use state of theinput device 31. As illustrated in the drawing, the user holds theinput device 31 in hishand 81, and operates theentire input device 31 in an arbitrary direction in a three-dimensional free space. Theinput device 31 detects the direction of the operation, and outputs an operation signal corresponding to the direction of the operation. Further, if thebutton jog dial 35 is operated, theinput device 31 outputs an operation signal corresponding to the operation. - The
buttons button 33, thebutton 34, and thejog dial 35 are operated by the index finger, the middle finger, and the thumb, respectively. The commands issued when the buttons and the dial are operated are arbitrary, but may be set as follows, for example. - With a single-press of the
button 33, which corresponds to a left-click, a selection operation is performed. With a press-and-hold of thebutton 33, which corresponds to a drag operation, an icon is moved. With a double-press of thebutton 33, which corresponds to a double-click, a file or folder is opened, or a program is executed. With a single-press of thebutton 34, which corresponds to a right-click, the menu is displayed. With rotation of thejog dial 35, a scroll operation is performed. With pressing of thejog dial 35, a confirmation operation is performed. - With the above-described settings, the user can use the
input device 31 with operational feeling similar to the operational feeling which the user has when operating a normal mouse of a personal computer. - The
button 33 can be configured as a two-stage switch. In this case, when the first-stage switch is operated or kept in the pressed state, an operation signal representing the movement of theinput device 31 is output. Further, when the second-stage switch is operated, a selection operation is performed. It is also possible, of course, to provide a special button and output an operation signal representing the movement when the button is operated. - Electrical Configuration of Input Device:
FIG. 7 illustrates an electrical configuration of theinput device 31. As illustrated in the drawing, theinput device 31 includes aninput unit 101 and asensor 102, in addition to theMPU 52, thecrystal oscillator 53, thecommunication unit 54, and theantenna 55. - The
crystal oscillator 53 supplies theMPU 52 with a reference clock. When theinput unit 101 formed by thebuttons jog dial 35, and other buttons is operated by the user, theinput unit 101 outputs to the MPU 52 a signal corresponding to the operation. When theentire body 32 is operated by the user, thesensor 102 formed by theangular velocity sensor 58 and theacceleration sensor 59 detects the angular velocity and the acceleration in the operation, and outputs the detected angular velocity and acceleration to theMPU 52. Thesensor 102 functions as a detection unit which detects an operation by a user for controlling an electronic device and outputs an operation signal corresponding to the operation. - The
MPU 52 generates an operation signal corresponding to an input, and outputs the operation signal in the form of radio waves from thecommunication unit 54 to thetelevision receiver 10 via theantenna 55. The radio waves are received by thetelevision receiver 10 via theantenna 11. Further, thecommunication unit 54 receives the radio waves from thetelevision receiver 10 via theantenna 55, demodulates the signal, and outputs the demodulated signal to theMPU 52. - Functional Configuration of MPU in Input Device:
FIG. 8 illustrates a functional configuration of theMPU 52 which operates in accordance with a program stored in an internal memory thereof. TheMPU 52 includes avelocity acquisition unit 201, astorage unit 202, anacceleration acquisition unit 203, acompensation processing unit 204, anacceleration acquisition unit 205, avelocity operation unit 206, and a movementamount calculation unit 207. - The
compensation processing unit 204 is configured to include afunction unit 221 and acompensation unit 222. Thefunction unit 221 includes again acquisition unit 211, acorrection unit 212, and alimitation unit 213. Thecompensation unit 222 includes amultiplication unit 214. - In this embodiment, the
velocity acquisition unit 201 and the acceleration acquisition unit 203 constitute a first acquisition unit which acquires the detected operation signal and a differential value of the operation signal. The velocity acquisition unit 201 acquires, as the operation signal corresponding to the operation by the user, an angular velocity signal from the angular velocity sensor 58 of the sensor 102. The storage unit 202 stores the angular velocity signal acquired by the velocity acquisition unit 201. The acceleration acquisition unit 203, which functions as the first acquisition unit that acquires the acceleration of the operated operation unit, calculates the difference between the angular velocity signal at one step and the angular velocity signal at the next step stored in the storage unit 202, to thereby calculate an angular acceleration signal. That is, the acceleration acquisition unit 203 acquires the angular acceleration signal as the differential value of the angular velocity signal as the operation signal. - The
function unit 221, which functions as a second acquisition unit that acquires a function for compensating for a delay in response of the operation signal on the basis of the acquired acceleration, generates a gain G(t) which is a function defined by the acceleration as the differential value acquired by the acceleration acquisition unit 203, or generates a gain G(t) which is a function defined by the velocity as the operation signal acquired by the velocity acquisition unit 201 and the acceleration as the differential value acquired by the acceleration acquisition unit 203. Then, the velocity as the operation signal is multiplied by the generated gain G(t). That is, the operation signal is corrected to perform a process of compensating for the delay. - The
gain acquisition unit 211 acquires the gain G(t) corresponding to the acceleration acquired by the acceleration acquisition unit 203. On the basis of the angular velocity acquired by the velocity acquisition unit 201 or the timer value received from the television receiver 10, the correction unit 212 corrects the gain G(t) as appropriate. The limitation unit 213 limits the gain G(t) or the corrected gain G(t) not to exceed a threshold value. The multiplication unit 214, which constitutes the compensation unit 222 functioning as a compensation unit that compensates the operation signal with a function, multiplies the angular velocity acquired by the velocity acquisition unit 201 by the gain G(t) limited by the limitation unit 213, and outputs the corrected angular velocity. - The
acceleration acquisition unit 205 acquires the acceleration signal from the acceleration sensor 59 of the sensor 102. The velocity operation unit 206 calculates the velocity by using the corrected angular velocity and the acceleration acquired by the acceleration acquisition unit 205. - On the basis of the velocity supplied from the
velocity operation unit 206, the movement amount calculation unit 207 calculates the movement amount of the body 32, and outputs the movement amount to the communication unit 54 as the operation signal of the input device 31. - As described above, the
communication unit 54 modulates this signal, and transmits the modulated signal to the television receiver 10 via the antenna 55. - Operation of Input Device:
- Subsequently, pointer display processing of the
input device 31 will be described with reference to FIG. 9. This processing is performed when the user holding the body 32 in his hand operates the first-stage switch of the button 33 or keeps the first-stage switch in the pressed state, and at the same time operates the entire input device 31 in an arbitrary predetermined direction, i.e., the entire input device 31 is operated in an arbitrary direction in a three-dimensional free space to move the pointer 22 displayed on the output unit 16 of the television receiver 10 in a predetermined direction. That is, this processing is performed to output the operation signal for controlling the display on the screen of the television receiver 10 from the input device 31 to the television receiver 10. - At Step S1, the
velocity acquisition unit 201 acquires the angular velocity signal output from the sensor 102. That is, the operation performed in a predetermined direction in a three-dimensional free space by the user holding the body 32 in his hand is detected by the angular velocity sensor 58, and a detection signal representing an angular velocity (ωx(t), ωy(t)) according to the movement of the body 32 is acquired. - At Step S2, the
storage unit 202 buffers the acquired angular velocity (ωx(t), ωy(t)). At Step S3, the acceleration acquisition unit 203 acquires an angular acceleration (ω′x(t), ω′y(t)). Specifically, the acceleration acquisition unit 203 divides the difference between the angular velocity (ωx(t), ωy(t)) of this time and the angular velocity (ωx(t−1), ωy(t−1)) stored the last time in the storage unit 202 by the time therebetween, to thereby calculate the angular acceleration (ω′x(t), ω′y(t)). - Then, at Steps S4 to S7, the
compensation processing unit 204 performs an operation to compensate for the delay in response of the operation signal on the basis of the acquired velocity and acceleration. - That is, at Step S4, the
gain acquisition unit 211 acquires the gain G(t) according to the angular acceleration (ω′x(t), ω′y(t)) acquired at Step S3. This gain G(t) as a function is multiplied by the angular velocity at Step S7 described later. Therefore, a gain G(t) value of 1 serves as a reference value. When the gain G(t) is larger than the reference value, the angular velocity as the operation signal is corrected to be increased. When the gain G(t) is smaller than the reference value, the angular velocity is corrected to be reduced. - In the acceleration phase (i.e., when the angular acceleration as the differential value is positive), the gain G(t) is a value equal to or larger than the reference value (equal to or larger than the value of 1). In the deceleration phase (i.e., when the angular acceleration as the differential value is negative), the gain G(t) is a value smaller than the reference value (smaller than the value of 1). Further, the larger the absolute value of the acceleration is, the larger the difference between the absolute value of the gain G(t) and the reference value (the value of 1) is.
- The gain G(t) may be acquired by performing an operation or by reading the gain G(t) from a previously mapped table. Further, the gain G(t) may be obtained separately for the X-direction and the Y-direction. Alternatively, the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single gain G(t).
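- As a rough illustration of Step S4 and of reading the gain from a previously prepared mapping, the sketch below derives G(t) from the angular acceleration with a simple linear map (an intercept of 1 and an assumed slope), and forms a single representative value from the two axes by taking the component with the larger absolute value. The function names and the slope constant are assumptions for this sketch, not values taken from the embodiment.

```python
def gain_from_acceleration(ang_accel, k=0.4):
    """Illustrative Step S4: G(t) as a function of the angular acceleration.

    Positive acceleration (speeding up) gives G >= 1, negative acceleration
    (slowing down) gives G < 1, and a larger |acceleration| moves G further
    away from the reference value of 1. The slope k is an assumed example.
    """
    return 1.0 + k * ang_accel


def representative(value_x, value_y):
    """Pick the component whose absolute value is larger as a single representative."""
    return value_x if abs(value_x) >= abs(value_y) else value_y


if __name__ == "__main__":
    accel = representative(5.0, -2.0)      # illustrative angular accelerations, digit/s^2
    print(gain_from_acceleration(accel))   # greater than 1 in the acceleration phase
    print(gain_from_acceleration(-10.0))   # smaller than 1 in the deceleration phase
```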
- At Step S5, the
correction unit 212 corrects the gain G(t) on the basis of the angular velocity (ωx(t), ωy(t)) acquired by thevelocity acquisition unit 201. Specifically, the gain G(t) is corrected such that the larger the angular velocity (ωx(t), ωy(t)) is, the closer to the reference value (the value of 1) the gain G(t) is. That is, in this embodiment, with the process of Step S4 (the process based on the angular acceleration) and the process of Step S5 (the process based on the angular velocity), the gain G(t) is acquired which is the function defined by both the angular velocity as the operation signal and the angular acceleration as the differential value of the angular velocity. - Also in this case, the corrected value may be obtained separately for the X-direction and the Y-direction, or the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single corrected value.
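- A minimal sketch of the Step S5 correction, assuming a simple 1/(1 + |ω|) weighting that pulls the gain toward the reference value of 1 as the angular velocity grows; the weighting function is an assumption for illustration, not the mapping of FIG. 10.

```python
def correct_gain_by_velocity(gain, ang_vel):
    """Illustrative Step S5: blend the gain toward the reference value of 1.

    The larger |ang_vel| is, the smaller the weight, so the corrected gain
    approaches 1 and the correction acts mainly just after the start of a
    movement and just before it stops, where the angular velocity is small.
    """
    weight = 1.0 / (1.0 + abs(ang_vel))   # assumed weighting
    return 1.0 + weight * (gain - 1.0)


if __name__ == "__main__":
    print(correct_gain_by_velocity(7.0, 1.0))    # small velocity: gain stays well above 1
    print(correct_gain_by_velocity(7.0, 64.0))   # large velocity: gain is close to 1
```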
- At Step S6, the
limitation unit 213 limits the gain G(t) not to exceed the threshold value. That is, the corrected gain G(t) is limited to be within the range of the predetermined threshold value. In other words, the threshold value is set to be the maximum or minimum value, and the absolute value of the gain G(t) is limited not to exceed the threshold value. If the input device 31 is vibrated, therefore, a situation is suppressed in which the absolute value of the gain G(t) is too small to compensate for the delay or too large to prevent oscillation. - The processes of Steps S4 to S6 can be performed by a single reading process, if the gain G(t) has previously been mapped in the
gain acquisition unit 211 to satisfy the conditions of the respective steps. -
FIG. 10 illustrates an example of mapping satisfying these conditions. In this embodiment, the horizontal axis and the vertical axis represent the angular acceleration and the gain G(t), respectively. The gain G(t) is represented by a straight line with an intercept of 1 and a positive slope for each angular velocity. The angular velocity is represented in absolute value. - Therefore, if the angular acceleration represented by the horizontal axis in the drawing is positive (in the right half region of
FIG. 10 ), the gain G(t) represented by the vertical axis is a value equal to or larger than the reference value (the value of 1). If the angular acceleration is negative (in the left half region ofFIG. 10 ), the gain G(t) is a value smaller than the reference value (the value of 1) (Step S4). - Further, the gain G(t) is a value represented by a straight line with an intercept corresponding to the reference value (the value of 1) and a positive slope. Therefore, the larger the absolute value of the angular acceleration is, the larger the absolute value of the difference between the absolute value of the gain G(t) and the reference value (the value of 1) is (Step S4). In other words, the gain G(t) is set to a value with which the larger the absolute value of the angular acceleration as the differential value is, the larger the correction amount of the angular velocity as the operation signal is. For example, in the case of an angular velocity of 1 digit/s, when the angular acceleration is 5 digit/s2 (i.e., the absolute value of the angular acceleration is small), the value of the gain G(t) is approximately 3 (the absolute value of the difference from the value of 1 is 2, which is small). Meanwhile, when the angular acceleration is −10 digit/s2 (i.e., the absolute value of the angular acceleration is large), the value of the gain G(t) is approximately −5 (the absolute value of the difference from the value of 1 is 6, which is large).
- Further, the larger the angular velocity is, the closer to the reference value (the value of 1) the gain G(t) is (Step S5). In other words, the gain G(t) is set to a value with which the smaller the angular velocity as the operation signal is, the larger the correction amount of the angular velocity is. For example, in the case of an angular acceleration of 15 digit/s2, when the angular velocity is 1 digit/s (i.e., when the angular velocity is small), the gain G(t) is approximately 8 (i.e., the absolute value of the gain G(t) is large). Meanwhile, when the angular velocity is 2 digit/s (i.e., the angular velocity is large), the gain G(t) is approximately 5 (i.e., the absolute value of the gain G(t) is small). In the case of an angular acceleration of −50 digit/s2, when the angular velocity is 4 digit/s (i.e., when the angular velocity is small), the gain G(t) is approximately −6 (i.e., the absolute value of the gain G(t) is large). Meanwhile, when the angular velocity is 16 digit/s (i.e., when the angular velocity is large), the gain G(t) is approximately −1 (i.e., the absolute value of the gain G(t) is small).
- Further, the above indicates that, in practice, the correction of the angular velocity is performed only when the angular velocity is small, and is not performed when the angular velocity is large. In
FIG. 10 , when the angular velocity is large, as in the angular velocity of 64 digit/s or 128 digit/s, the gain G(t) is a value equal to or close to the reference value (the value of 1). Practically, therefore, the velocity is not corrected. That is, the correction of the angular velocity is performed immediately after the start of the movement of the input device 31 and immediately before the stop of the movement. - With the absolute value of the gain G(t) thus increased in accordance with the reduction in the angular velocity, natural operational feeling can be realized.
- Further, the value of the gain G(t) is limited to be within the range from a threshold value of −10 to a threshold value of 10, i.e., limited not to exceed these threshold values (Step S6). That is, the maximum value of the gain G(t) is set to the threshold value of 10, and the minimum value of the gain G(t) is set to the threshold value of −10.
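- The limiting and multiplication steps can be pictured as below. The ±10 bounds follow the example given here, the final multiplication corresponds to Step S7, and the function names are assumptions for this sketch.

```python
def limit_gain(gain, low=-10.0, high=10.0):
    """Illustrative Step S6: clamp G(t) into [low, high] so that a vibrated
    device cannot drive the compensation into oscillation."""
    return max(low, min(high, gain))


def apply_gain(ang_vel_x, ang_vel_y, gain):
    """Illustrative Step S7: multiply the angular velocity by G(t)."""
    return ang_vel_x * gain, ang_vel_y * gain


if __name__ == "__main__":
    g = limit_gain(14.5)                 # clipped to the threshold value of 10
    print(apply_gain(1.2, -0.4, g))      # corrected angular velocity (omega_x1, omega_y1)
```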
- The respective lines representing the characteristics of the respective velocities in
FIG. 10 may not be straight lines, and may be curved lines. - At Step S7, the
multiplication unit 214 multiplies the angular velocity (ωx(t), ωy(t)) as the operation signal by the gain G(t). That is, the angular velocity is multiplied by the gain G(t) as a coefficient, and thereby the corrected angular velocity (ωx1(t), ωy1(t)) is generated. For example, if the gain G(t) is used as the representative value integrating the value in the X-axis direction and the value in the Y-axis direction, the corrected angular velocity (ωx1(t), ωy1(t)) is calculated with the following formula. -
ωx1(t)=ωx(t)·G(t) -
ωy1(t)=ωy(t)·G(t) (1) - At Step S8, the
velocity operation unit 206 calculates a velocity (Vx(t), Vy(t)). The velocity is obtained by the multiplication of the angular velocity by the radius of gyration. That is, the motion of the input device 31 occurring when the user operates the input device 31 corresponds to the combination of rotational motions centering around a shoulder, elbow, or wrist of the user. Further, the radius of gyration of the motion corresponds to the distance from the rotational center of the combined rotational motions, which changes over time, to the input device 31. - When the velocity of the
input device 31 is represented as (Vx(t), Vy(t)), the radius of gyration (Rx, Ry) is represented by the following formula. -
(Rx,Ry)=(Vx(t),Vy(t))/(ωx(t),ωy(t)) (2) - In the formula (2), (Vx(t), Vy(t)) and (ωx(t), ωy(t)) on the right side represent the dimension of the velocity. Even if each of the velocity and the angular velocity represented by the right side of this formula (2) is differentiated to represent the dimension of the acceleration or the time rate of change of the acceleration, the correlation is not lost. Similarly, even if each of the velocity and the angular velocity is integrated to represent the dimension of the displacement, the correlation is not lost.
- Therefore, when the velocity and the angular velocity represented by the right side of the formula (2) are used to represent the dimension of the displacement, the acceleration, or the time rate of change of the acceleration, the following formulae (3) to (5) are obtained.
-
(Rx,Ry)=(x(t),y(t))/(ψ(t),θ(t)) (3) -
(Rx,Ry)=(ax(t),ay(t))/(ω′x(t),ω′y(t)) (4) -
(Rx,Ry)=(a′x(t),a′y(t))/(ω″x(t),ω″y(t)) (5) - It is understood from, for example, the formula (5) of the above formulae that the radius of gyration (Rx, Ry) is obtained if the rate of change (a′x(t), a′y(t)) of the acceleration (ax(t), ay(t)) and the rate of change (ω″x(t), ω″y(t)) of the angular acceleration (ω′x(t), ω′y(t)) are known. In this embodiment, the radius (Rx, Ry) is obtained on the basis of the formula (5).
- That is, the
acceleration acquisition unit 205 acquires the acceleration (ax(t), ay(t)) detected by the acceleration sensor 59 constituting the sensor 102. Therefore, the velocity operation unit 206 differentiates the acceleration (ax(t), ay(t)) to calculate the rate of change (a′x(t), a′y(t)) of the acceleration. Further, the velocity operation unit 206 performs a second-order differentiation on the angular velocity (ωx(t), ωy(t)) detected by the velocity acquisition unit 201, to thereby calculate the rate of change (ω″x(t), ω″y(t)) of the angular acceleration (ω′x(t), ω′y(t)). Then, the velocity operation unit 206 divides the rate of change (a′x(t), a′y(t)) of the acceleration by the rate of change (ω″x(t), ω″y(t)) of the angular acceleration (ω′x(t), ω′y(t)) to calculate the radius of gyration (Rx, Ry). - Further, the
velocity operation unit 206 multiplies the obtained radius (Rx, Ry) by the angular velocity to calculate the velocity (Vx(t), Vy(t)). As the angular velocity, the corrected angular velocity (ωx1(t), ωy1(t)), i.e., the angular velocity (ωx(t), ωy(t)) multiplied by the gain G(t) is used. - At Step S9, the movement
amount calculation unit 207 calculates the pointer movement amount by using the corrected angular velocity (ωx1(t), ωy1(t)), and outputs the calculated pointer movement amount. The movement amount calculation unit 207 adds the velocity to the immediately preceding position coordinates of the pointer 22 to calculate new position coordinates. That is, the displacement per unit time in the X-direction and the Y-direction of the input device 31 is converted into the displacement amount per unit time in the X-direction and the Y-direction of the pointer 22 displayed on the image display unit of the output unit 16. Thereby, the pointer movement amount is calculated such that the larger the gain G(t) is, i.e., the larger the correction amount of the angular velocity as the operation signal is, the larger the compensation amount of the delay in response is. That is, as the gain G(t) is increased, the delay between the operation of the input device 31 and the movement of the pointer 22 is reduced. If the value of the gain G(t) is further increased, the movement of the pointer 22 is more advanced in phase than the operation of the input device 31. - As a simpler method, Step S8 may be omitted, and the pointer movement amount may be obtained with the use of the corrected angular velocity obtained at Step S7.
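- A compact sketch of Steps S8 and S9 under the formulas above: the radius of gyration is estimated from the ratio in formula (5), the velocity is the radius multiplied by the corrected angular velocity, and the pointer position is updated by adding that velocity to the previous coordinates. The zero-division guard and the helper names are assumptions added for the sketch.

```python
def radius_of_gyration(a_rate_x, a_rate_y, w_rate_x, w_rate_y, eps=1e-9):
    """Illustrative formula (5): (Rx, Ry) = (a'x, a'y) / (omega''x, omega''y)."""
    rx = a_rate_x / (w_rate_x if abs(w_rate_x) > eps else eps)
    ry = a_rate_y / (w_rate_y if abs(w_rate_y) > eps else eps)
    return rx, ry


def step_pointer(pos, corrected_ang_vel, radius):
    """Illustrative Steps S8 and S9: V = R * omega1, then add V to the coordinates."""
    vx = radius[0] * corrected_ang_vel[0]
    vy = radius[1] * corrected_ang_vel[1]
    return pos[0] + vx, pos[1] + vy


if __name__ == "__main__":
    r = radius_of_gyration(0.8, 0.5, 0.02, 0.01)          # rates of change, illustrative values
    print(step_pointer((100.0, 200.0), (1.5, -0.7), r))   # new pointer coordinates
```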
- Further, the processes performed here include a process of removing a hand-shake component of the
input device 31 through a low-pass filter, and a process of, when the operation velocity is low (a low velocity and a low acceleration), setting an extremely low moving velocity of the pointer 22 to make it easy to stop the pointer 22 on the icon 21. Also, other processes are performed to prevent a situation in which movement of the input device 31 occurring during the operation of the buttons or of the entire input device 31 causes an unintended movement of the pointer 22. These processes include a process of prohibiting the movement of the pointer 22 during the button operation, and a process for correcting the inclination of the input device 31 by setting the gravity direction detected by the acceleration sensor 59 as the lower direction. - The above-described processes are repeatedly performed during the operation of the
body 32. - The operation signal representing the pointer movement amount is transmitted from the
communication unit 54 to the television receiver 10 via the antenna 55. - In the
television receiver 10, the communication unit 12 receives the signal from the input device 31 via the antenna 11. The MPU 13 maps the video RAM 15 such that the pointer 22 is displayed at a position corresponding to the received signal. As a result, in the output unit 16, the pointer 22 is displayed at a position corresponding to the operation by the user. - A part or all of the respective processes of Steps S1 to S9 in
FIG. 9 can also be performed by thetelevision receiver 10. For example, it is possible to perform the processes up to Step S8 in theinput device 31 and perform the process of Step S9 in thetelevision receiver 10. With this configuration, it is possible to simplify the configuration of theinput device 31 and reduce the load on theinput device 31. In this case, a part or all of the functional blocks inFIG. 8 is provided to thetelevision receiver 10. - Further, the angular velocity and angular acceleration used in the above-described processes can be replaced by simple velocity and acceleration.
- Characteristics of Input Device:
FIGS. 11 and 12 illustrate the movement of the pointer 22 occurring when the user performs an operation of moving the input device 31 in a predetermined direction and then stopping the input device 31. In FIG. 11, the vertical axis represents the velocity, and the horizontal axis represents the time. In FIG. 12, the vertical axis represents the displacement, and the horizontal axis represents the time. The unit shown in the respective drawings is a relative value used in a simulation. The same applies to other drawings illustrating characteristics described later. - In
FIG. 11 illustrating changes in velocity, a line L1 represents the velocity corresponding to an actual operation (i.e., the ideal state in which the delay of thepointer 22 is absent). The velocity gradually increases from a velocity of 0 at a constant rate, and reaches a velocity of 30. Then, the velocity is maintained for a predetermined time. Thereafter, the velocity gradually decreases from the velocity of 30 at a constant rate, and reaches the velocity of 0. A line L2 represents the velocity in a system having a delay in response, i.e., the velocity of thepointer 22 in a system having a time delay between the operation of theinput device 31 and the movement of thepointer 22 in response to the operation. The line L2 is similar in characteristic to the line L1, but is delayed (i.e., delayed in phase) from the line L1 by a time T0. That is, when theinput device 31 is operated by the user, the velocity of the operation changes as indicated by the line L1. However, the operation signal corresponding to the operation is detected with a delay. Therefore, the velocity of the operation signal (which corresponds to the velocity of thepointer 22 controlled on the basis of the operation signal) changes as indicated by the line L2. - A line L3 represents the result of the process of compensating for the delay, as illustrated in the flowchart of
FIG. 9 . At the start of the motion, the start point of the change in velocity of the line L3 is the same as the start point of the line L2. The velocity of the line L3 rapidly increases at the start point, at which the velocity is 0, with a slope steeper than the slope of the line L2 (i.e., with a larger absolute value of the acceleration) to exceed the line L2, and reaches a value located slightly below and close to the line L1. In other words, immediately after the start of the motion, the line L3 representing the result of compensation rapidly obtains a characteristic substantially the same as the characteristic of the line L1 which has no delay. In terms of the position in the drawing, immediately after the start of the motion, the line L3 is located above the line L2 and close to and below the line L1. That is, the delay time rapidly shifts from the maximum time T0 to the minimum time T1. This means that a prompt response is made upon start of the operation by the user. That is, it is understood that the line L3 is a line resembling the line L1 and compensating for the delay of the line L2. - Thereafter, in the vicinity of the line L1, the line L3 gradually increases with a constant slope substantially similar to the slope of the line L1 (therefore, the line L2) (i.e., with the constant delay time T1). The line L3 is more advanced in phase than the line L2, but is slightly delayed in phase from the line L1 (i.e., in
FIG. 11 , the line L3 is located above and on the left side of the line L2, but is located slightly below and on the right side of the line L1). That is, immediately after the start of the movement, thepointer 22 is accelerated with little delay (with the minimum delay time T1). - The line L3 exceeds the velocity of 30 and further increases. Then, at timing immediately before the line L2 reaches the constant velocity of 30, the line L3 reaches a velocity of 40, and thereafter rapidly decreases with a steep slope to fall to the velocity of 30. This means that the transitional increase in velocity rapidly ceases and the line L3 reaches a stable velocity.
- Thereafter, the velocity of the line L3 remains at the constant value of 30 for a predetermined time. That is, the velocity of the
pointer 22 gradually increases from the value of 0, and thereafter is stabilized at the value of 30. - When the motion is stopped, the velocity of the line L3 remains at the value of 30 for a while, even after the velocity of the line L1 starts to fall below the value of 30. Then, at timing immediately before the velocity of the line L2 starts to decrease from the value of 30, the line L3 rapidly decreases with a steep slope (i.e., with a larger absolute value of the acceleration) to fall to a velocity of 18, which is a value close to and above the line L1. That is, the delay time rapidly shifts from the maximum value T0 to the minimum value T1. This means that a prompt response is made when the user attempts to stop the operation. That is, it is understood that the line L3 is a line resembling the line L1 and compensating for the delay of the line L2, and that the delay has been compensated for.
- Thereafter, in the vicinity of the line L1, the line L3 gradually decreases with a slope substantially similar to the slope of the line L1 (therefore, the line L2), which is a constant slope (i.e., with the constant delay time T1). The line L3 is more advanced in phase than the line L2, but is slightly delayed in phase from the line L1 (i.e., in
FIG. 11 , the line L3 is located below and on the left side of the line L2, but is slightly above and on the right side of the line L1). That is, immediately after the start of the stopping operation of the movement, thepointer 22 is decelerated with little delay (with the minimum delay time T1). - The line L3 falls below the velocity of 0 and further decreases. Then, at timing immediately before the velocity of the line L2 reaches the velocity of 0, the velocity of the line L3 reaches a velocity of approximately −9, and thereafter increases with a steep slope (i.e., rapidly) to reach the velocity of 0. This means that the transitional decrease in velocity rapidly ceases and the line L3 reaches the velocity of 0.
- Eventually, the line L3 has a characteristic close to the characteristic of the line L1, in which the delay of the line L2 has been compensated for.
-
FIG. 12 illustrates displacements of thepointer 22 corresponding to the changes in velocity ofFIG. 11 . A line L11 represents the displacement corresponding to the actual operation (i.e., the displacement with no delay). A line L12 represents the displacement of the system having a delay. A line L13 represents the result of the process of compensating for the delay, as illustrated in the flowchart ofFIG. 9 . - The line L11 has a characteristic of increasing from a displacement of 0 with a substantially constant slope and thereafter reaching a displacement of approximately 2900. The line L12 is substantially the same in characteristic of change as the line L11, but is delayed (i.e., delayed in phase) from the line L11. In the drawing, the line L12 is located below and on the right side of the line L11.
- The line L13 starts to be displaced at a start point substantially the same as the start point of the line L12, and swiftly reaches a value close to the line L11. Thereafter, the line L13 gradually increases at a constant rate with a slope substantially similar to the slope of the line L11 (therefore, the line L12). The line L13 is higher than the line L12 but slightly lower than the line L11. That is, in
FIG. 12 , the line L13 is higher than the line L12 and close to and lower than the line L11. As described above, the line L13 is a line having a characteristic resembling the characteristic of the line L11, and compensating for the delay of the line L12. - Immediately before reaching the displacement of 2900, the line L13 slightly exceeds the line L11 (i.e., in
FIG. 12 , the line L13 is located slightly above the line L11), and thereafter converges to the constant value of 2900. As described above, the line L13 is a line having a characteristic resembling the characteristic of the line L11, and compensating for the delay of the line L12. - As a result, the user can operate the
input device 31 in an arbitrary direction in a free space to, for example, swiftly move thepointer 22 to the desiredicon 21 located in the direction of the operation and stop thepointer 22 at the location. In this operation, the uncomfortable operational feeling felt by the user is suppressed. That is, a situation is suppressed in which the user feels that the movement of thepointer 22 starts later than the start of the operation of theinput device 31, or that the movement of thepointer 22 stops later than the stop of the operation of theinput device 31. As a result, the operational feeling can be improved. - This is noticeable in a so-called consumer-use electronic device. That is, in the consumer-use electronic device, the clock of the MPU is slower and the delay is longer than in a business-use electronic device. Even in the case of such an electronic device, however, the unconformable operational feeling felt by the user is suppressed, and the operational feeling can be improved.
- Description has been made above on the operation of moving the
pointer 22. The operational feeling can also be similarly improved, when the present invention is applied to other operations on a GUI (Graphical User Interface) screen, such as scrolling, zooming (scaling up and down), and rotation. -
FIGS. 13A and 13B illustrate characteristics obtained when theinput device 31 is vibrated. The vertical axis represents the velocity inFIG. 13A and the displacement inFIG. 13B . The horizontal axis represents the time in both drawings. - In
FIG. 13A , lines L21, L22, and L23 represent the result of a case in which there is no delay, the result of a case in which there is a delay, and the result of a case in which the delay has been compensated for, respectively. - In
FIG. 13B , lines L31, L32, and L33 represent the result of the case in which there is no delay, the result of the case in which there is a delay, and the result of the case in which the delay has been compensated for, respectively. It is understood that, when the frequency of the vibration of theinput device 31 is high, the delay has not been compensated for and oscillation is occurring. -
FIGS. 14A and 14B illustrate characteristics obtained when the gain G(t) has been limited and theinput device 31 is vibrated. The vertical axis represents the velocity inFIG. 14A and the displacement inFIG. 14B . The horizontal axis represents the time in both drawings. - In
FIG. 14A , lines L51, L52, and L53 represent the result of a case in which there is no delay, the result of a case in which there is a delay, and the result of a case in which the delay has been compensated for, respectively. InFIG. 14B , lines L61, L62, and L63 represent the result of the case in which there is no delay, the result of the case in which there is a delay, and the result of the case in which the delay has been compensated for, respectively. It is understood from these drawings that the oscillation is suppressed. This suppression of oscillation is the effect of the process of Step S6 inFIG. 9 . - A similar effect can also be achieved by the elimination of oscillation frequency through a low-pass filter.
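- As a point of comparison with the gain limitation, a first-order low-pass filter applied to the corrected angular velocity attenuates the high-frequency vibration components in a similar spirit; a minimal sketch, with the smoothing factor chosen as an assumed example value.

```python
def low_pass(samples, alpha=0.3):
    """First-order low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    filtered = []
    y = 0.0
    for x in samples:
        y = y + alpha * (x - y)
        filtered.append(y)
    return filtered


if __name__ == "__main__":
    # Alternating samples mimic a vibrated device; the filtered output is much smoother.
    print(low_pass([5.0, -5.0, 5.0, -5.0, 5.0, -5.0]))
```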
- It is also possible in the present embodiment to look ahead the operation by the user. That is, the delay can be overly compensated for to make the movement of the
pointer 22 more advanced in phase than the operation by the user. This compensation can be achieved by increasing the value of the gain G(t).FIGS. 15 and 16 illustrate characteristics obtained in this case.FIGS. 15 and 16 correspond toFIGS. 11 and 12 , respectively. Lines L81, L82, and L83 inFIG. 15 correspond to the lines L1, L2, and L3 inFIG. 11 , respectively. Lines L91, L92, and L93 inFIG. 16 correspond to the lines L11, L12, and L13 inFIG. 12 , respectively. - As obvious from the comparison of
FIG. 15 withFIG. 11 , inFIG. 15 , the line L83 immediately after the start of the motion rapidly increases at the same start point as the start point of the line L82, and thereafter exceeds the line L81 (the line L83 is located above and on the left side of the line L81 in the drawing). Thereafter, the line L83 gradually increases with the same slope as the slope of the line L81. Further, when the motion is stopped, the line L83 rapidly decreases from the constant value of 30 to fall below the line L81 (the line L83 is located below and on the left side of the line L81 in the drawing), and thereafter gradually decreases with the same slope as the slope of the line L81. - Also in
FIG. 16 , immediately after the start of the motion, the line L93 is higher than the line L92, and also rapidly increases to exceed the line L91. Thereafter, in the vicinity of the line L91, the line L93 increases substantially similarly to the line L91, and converges to the displacement of 2900. - In this case, the movement of the
pointer 22 precedes the operation by the user. - Then, comparison is made among three cases, i.e., a case in which compensation has been made to maintain a slight delay, a case in which compensation has been made to advance in phase the movement of the
pointer 22, and a case in which compensation has been made to set the delay to be substantially zero.FIGS. 17 to 22 illustrate the results of the comparison.FIGS. 17 and 18 illustrate the case in which compensation has been made to maintain a slight delay.FIGS. 19 and 20 illustrate the case in which compensation has been made to advance in phase the movement of thepointer 22.FIGS. 21 and 22 illustrate the case in which compensation has been made to set the delay to be substantially zero. All of the drawings illustrate a case in which thepointer 22 is moved in a predetermined direction and thereafter moved in the opposite direction. - The vertical axis represents the displacement amount in
FIGS. 17 , 19, and 21 and the velocity inFIGS. 18 , 20, and 22. The horizontal axis represents the time in all of the drawings. - In
FIGS. 17 and 18 , lines L101 and L111 represent the results of a case in which there is no delay, and lines L102 and L112 represent the results of a case in which there is a delay of 0.2 seconds in the system (a case in which compensation is not made). Further, lines L103 and L113 represent the results of a case in which compensation has been made to maintain a slight delay. InFIG. 17 , the line L103 is located between the lines L101 and L102. Also inFIG. 18 , the line L113 is located between the lines L111 and L112. It is therefore understood that the compensation has been made to reduce the delay to a time shorter than 0.2 seconds. - In
FIGS. 19 and 20 , lines L121 and L131 represent the results of a case in which there is no delay, and lines L122 and L132 represent the results of a case in which there is a delay in the system (a case in which compensation is not made). Further, lines L123 and L133 represent the results of a case in which compensation has been made to advance in phase the movement of thepointer 22. InFIG. 19 , when the displacement increases, the line L123 is located above and on the left side of the line L121. When the displacement decreases, the line L123 is located below and on the left side of the line L121. Also inFIG. 20 , when the velocity increases, the line L133 is located above and on the left side of the line L131. When the velocity decreases, the line L133 is located below and on the left side of the line L131. It is therefore understood that the lines L123 and L133 are more advanced in phase than the line L121 and L131, respectively. - In
FIGS. 21 and 22 , lines L141 and L151 represent the results of a case in which there is no delay, and lines L142 and L152 represent the results of a case in which there is a delay (a case in which compensation is not made). Further, lines L143 and L153 represent the results of a case in which compensation has been made to eliminate the delay. InFIG. 21 , when the displacement increases and decreases, the line L143 changes substantially along the line L141. Also inFIG. 22 , when the velocity increases and decreases, the line L153 changes substantially along the line L151. It is therefore understood that appropriate compensation has been performed. - An experiment was conducted with a plurality of subjects. According to the evaluation of the subjects, the subjects had the least uncomfortable operational feeling in the case in which compensation was made to set the delay in response to be substantially zero, as illustrated in
FIGS. 21 and 22 , as compared with the cases illustrated inFIGS. 17 to 20 . - The value of the gain G(t) can also be changed in accordance with the delay amount in the
television receiver 10. FIGS. 23 and 24 illustrate the processing of the television receiver 10 and the processing of the input device 31, respectively, which are performed in this case. - The
television receiver 10 performs the timer processing illustrated in FIG. 23. - At Step S31, the
television receiver 10 sets the timer value to zero. At Step S32, the television receiver 10 stands by until the completion of a processing cycle. That is, upon completion of the processing cycle from the reception of the information of the pointer movement amount output from the input device 31 to the completion of the movement of the pointer 22 on the screen, the television receiver 10 at Step S33 transmits the timer value measured during the processing cycle. Thereafter, the processing returns to Step S31 to repeatedly perform similar processes. - That is, every time the processing cycle is completed, the
television receiver 10 transmits to theinput device 31 the timer value corresponding to the time taken for the processing of the processing cycle. In other words, the time taken for thetelevision receiver 10 to perform the above-described processing varies, depending on the capability of theMPU 13 used in thetelevision receiver 10 and on the state of the load on theMPU 13 during the processing and so forth. Therefore, thetelevision receiver 10 measures the processing time by using a timer, and transmits the measured processing time to theinput device 31. - Operation of Input Device:
- The longer the processing time is, the larger the delay amount is. Therefore, on the basis of the timer value received from the
television receiver 10, the input device 31 controls the value of the gain G(t), as illustrated in the flowchart of FIG. 24. The processes of Steps S51 to S60 in FIG. 24 are basically similar to the processes of Steps S1 to S9 in FIG. 9. In FIG. 24, however, the process of Step S5 in FIG. 9 of correcting the gain G(t) on the basis of the angular velocity is omitted. Alternatively, the process may not be omitted. If the correction process is not omitted, the gain G(t) is a function defined by the angular velocity and the angular acceleration. If the correction process is omitted, the gain G(t) is a function defined by the angular acceleration. - Further, in
FIG. 24 , a process of receiving the timer value is performed as Step S55 after the process of Step S54 corresponding to Step S4 inFIG. 9 . - That is, after the angular velocity (ωx(t), ωy(t)) is acquired at Step S51, the angular velocity (ωx(t), ωy(t)) is temporarily buffered in the
storage unit 202 at Step S52. At Step S53, the difference between the angular velocity of this time (ωx(t), ωy(t)) and the stored angular velocity of the last time (ωx(t−1), ωy(t−1)) (the difference between the angular velocity at one step and the angular velocity at the next step) is calculated, and thereby the angular acceleration (ω′x(t), ω′y(t)) is calculated. That is, the angular velocity is differentiated, and the angular acceleration as the differential value is acquired. At Step S54, the gain G(t) according to the angular acceleration (ω′x(t), ω′y(t)) is acquired. - At Step S55, the
correction unit 212 receives the timer value transmitted from the television receiver 10 at Step S33 in FIG. 23. Specifically, the signal from the television receiver 10 is received by the communication unit 54 via the antenna 55, demodulated, and acquired by the correction unit 212. Then, at Step S56, the correction unit 212 corrects the gain G(t) in accordance with the timer value. Specifically, an operation with the following formula is performed. -
In acceleration phase: G(t)+α -
In deceleration phase: G(t)−α (6) - In the above formula, α represents a positive value which increases as the timer value increases. The value α is calculated on the basis of a predetermined function, or is acquired from a mapped memory. Therefore, in the acceleration phase, the longer the delay is, the larger value the gain G(t) is corrected to. In the deceleration phase, the longer the delay is, the smaller value the gain G(t) is corrected to.
- The subsequent processes of Steps S57 to S60 are similar to the processes of Steps S6 to S9 in
FIG. 9 . Description thereof is redundant, and thus will be omitted. - If the time taken for the processing increases, the delay amount increases proportionally thereto. Therefore, the gain G(t) is changed in accordance with the delay amount, as illustrated in the above formula (6). Thereby, the operational feeling can be further improved.
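- One way to read formula (6) in code: the processing time reported by the television receiver 10 is mapped to a positive value α, which is added to the gain in the acceleration phase and subtracted in the deceleration phase. The linear mapping from the timer value to α is an assumption for this sketch; the embodiment only requires that α increase with the timer value.

```python
def alpha_from_timer(timer_value, scale=0.05):
    """Assumed mapping: the longer the measured processing cycle, the larger alpha."""
    return scale * timer_value


def correct_gain_for_delay(gain, ang_accel, timer_value):
    """Illustrative formula (6): G(t) + alpha while accelerating, G(t) - alpha while decelerating."""
    alpha = alpha_from_timer(timer_value)
    return gain + alpha if ang_accel >= 0 else gain - alpha


if __name__ == "__main__":
    print(correct_gain_for_delay(3.0, ang_accel=5.0, timer_value=40.0))    # longer delay, larger gain
    print(correct_gain_for_delay(0.5, ang_accel=-8.0, timer_value=40.0))   # deceleration, smaller gain
```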
- Also in this case, a part or all of the processes can be performed by the
television receiver 10. - In the embodiment of
FIG. 9 , the gain G(t) is determined on the basis of the velocity and the acceleration. Alternatively, the gain G(t) can be determined solely on the basis of the acceleration.FIG. 25 illustrates pointer display processing performed in this case. - At Step S81, the
velocity acquisition unit 201 acquires the angular velocity (ωx(t), ωy(t)) from the output of theangular velocity sensor 58. At Step S82, the angular velocity is temporarily stored by thestorage unit 202. Then, at Step S83, theacceleration acquisition unit 203 calculates the difference between the angular velocity of this time (ωx(t), ωy(t)) and the angular velocity of the last time (ωx(t−1), ωy(t−1)) stored in the storage unit 202 (the difference between the angular velocity at one step and the angular velocity at the next step), to thereby acquire the angular acceleration (ω′x(t), ω′y(t)). That is, the angular velocity is differentiated, and the angular acceleration as the differential value is acquired. - At Step S84, the
gain acquisition unit 211 acquires the gain G(t) according to the angular acceleration (ω′x(t), ω′y(t)). In the acceleration phase, the gain G(t) is a value larger than one. In the deceleration phase, the gain G(t) is a value smaller than one. - At Step S85, the
limitation unit 213 limits the gain G(t) not to exceed the reference value. - At Step S86, the
multiplication unit 214 multiplies the angular velocity (ωx(t), ωy(t)) by the gain G(t) to calculate the corrected angular velocity (ωx1, ωy1). That is, the operation with the following formula is performed. This formula (7) is the same as the above-described formula (1). -
ωx1(t)=ωx(t)·G(t) -
ωy1(t)=ωy(t)·G(t) (7) - At Step S87, the
velocity operation unit 206 calculates the velocity (Vx(t), Vy(t)). That is, thevelocity operation unit 206 divides the rate of change (a′x(t), a′y(t)) of the acceleration by the rate of change (ω″x(t), ω″y(t)) of the angular acceleration, to thereby obtain the radius (Rx, Ry) of the motion of theinput device 31 occurring when the user operates theinput device 31. - Then, the
velocity operation unit 206 multiplies the obtained radius (Rx, Ry) by the angular velocity to calculate the velocity (Vx(t), Vy(t)). As the angular velocity of this case, the corrected angular velocity (ωx1(t), ωy1(t)), i.e., the angular velocity (ωx(t), ωy(t)) multiplied by the gain G(t) is used. - At Step S88, the movement
amount calculation unit 207 calculates the pointer movement amount by using the velocity (Vx(t), Vy(t)) calculated in the process of Step S87, and outputs the calculated pointer movement amount. The movementamount calculation unit 207 adds the velocity to the immediately preceding position coordinates of thepointer 22, to thereby calculate new position coordinates. That is, the displacement per unit time in the X-direction and the Y-direction of theinput device 31 is converted into the displacement amount per unit time in the X-direction and the Y-direction of thepointer 22 displayed on the image display unit of theoutput unit 16. Thereby, the pointer movement amount is calculated such that the larger the gain G(t) is, i.e., the larger the correction amount of the angular velocity is, the larger the compensation amount of the delay in response is. - As described above, in this embodiment, the process at Step S5 in
FIG. 9 of correcting the gain G(t) on the basis of the angular velocity (ωx(t), ωy(t)) is not performed. That is, the gain G(t) is acquired and determined solely on the basis of the angular acceleration. - Also in this case, a part or all of the processes can be performed by the
television receiver 10. - Characteristics of Input Device:
-
FIGS. 26 and 27 illustrate changes in velocity and displacement occurring when the process of compensating for the delay is performed with the use of the gain G(t) determined solely on the basis of the acceleration, as illustrated inFIG. 25 .FIG. 26 corresponds toFIG. 11 , andFIG. 27 corresponds toFIG. 12 . - Lines L161, L162, and L163 in
FIG. 26 correspond to the lines L1, L2, and L3 inFIG. 11 , respectively. At the start of the movement, the line L163 representing the compensated velocity starts at a start point substantially the same as the start point of the line L162 representing the delayed velocity, and increases at a constant rate with a slope steeper than the slope of the line L161 (therefore, the line L162) to reach a velocity of approximately 50. Then, the line L163 decreases with a steep slope to fall to the velocity of 30. Thereafter, the line L163 remains at the constant velocity of 30 for a predetermined time. Then, when the movement is completed, the line L163 representing the compensated velocity starts to decrease at a point substantially the same as the point at which the line L162 representing the delayed velocity starts to decrease, and decreases at a constant rate with a slope steeper than the slope of the line L161 to fall to a velocity of approximately −17. Further, the line L163 increases with a steep slope to reach the velocity of 0. - Lines L171, L172, and L173 in
FIG. 27 correspond to the lines L11, L12, and L13 inFIG. 12 , respectively. At the start of the movement, the line L173 starts to be displaced at a start point substantially the same as the start point of the line L172, and swiftly reaches a value close to the line L171. Thereafter, the line L173 gradually increases at a constant rate with a slope substantially similar to the slope of the line L171 (therefore, the line L172). The line L173 is higher than the line L172 but slightly lower than the line L171. That is, inFIG. 27 , the line L173 is higher than the line L172 and close to and lower than the line L171. As described above, the line L173 is a line having a characteristic resembling the characteristic of the line L171, and compensating for the delay of the line L172. - Immediately before reaching the displacement of approximately 2900, the line L173 exceeds the line L171 (i.e., in
FIG. 27 , the line L173 is located slightly above the line L171), and thereafter converges to the constant value of 2900. As described above, the line L173 is a line having a characteristic resembling the characteristic of the line L171, and compensating for the delay of the line L172. - As obvious from the comparison of
FIG. 26 withFIG. 11 , however, the time taken for the line L163 to move to the vicinity of the line L161 in the acceleration and deceleration phases is longer than inFIG. 11 . - Further, as obvious from the comparison of
FIG. 27 withFIG. 12 , immediately after the start of the movement, the time taken for the line L173 to move to the vicinity of the line L171 is longer than inFIG. 12 , and the distance between the lines L173 and L171 is longer than inFIG. 12 . Also, at the start of the stopping operation, the time taken for the line L173 to move to the vicinity of the line L171 is longer than inFIG. 12 , and the distance between the lines L173 and L171 is longer than inFIG. 12 . -
FIGS. 28 and 29 illustrate the results of an operation of moving thepointer 22 in a predetermined direction and thereafter moving thepointer 22 back to the opposite direction by using the gain G(t) determined solely on the basis of the acceleration. Lines L181 and L191 represent the results of a case in which there is no delay, and lines L182 and L192 represent the results of a case in which there is a delay (a case in which compensation is not made). Further, lines L183 and L193 represent the results of a case in which compensation has been made to set the delay to be substantially zero. -
FIG. 28 illustrates the result of a case in which the delay in a high-velocity region has been compensated for. According to the evaluation of the subjects in this case, the delay was unnoticed when the velocity was high but noticed when the velocity was low. -
FIG. 29 illustrates the result of a case in which the delay in a low-velocity region has been compensated for. According to the evaluation of the subjects in this case, the delay was unnoticed when the velocity was low but noticed when the velocity was high. -
FIG. 30 illustrates changes in velocity occurring in the operation of moving thepointer 22 in a predetermined direction and thereafter moving thepointer 22 back to the opposite direction by using the gain G(t) determined solely on the basis of the acceleration. A line L201 represents the result of a case in which there is no delay, and a line L202 represents the result of a case in which there is a delay (a case in which compensation is not made). Further, a line L203 represents the result of a case in which compensation has been made to set the delay to be substantially zero. - According to the evaluation of the subjects in this case, the subjects felt that the delay was substantially compensated for, but had uncomfortable feeling about the sensitivity of the
pointer 22 at the start and end of the movement thereof. That is, when the user operates theinput device 31, excessive acceleration of thepointer 22 abruptly starts with a delay of one beat. Also in the stopping operation, thepointer 22 rapidly decelerates and stops. As a result, the user feels unnaturalness. - As described above, an evaluation of the subjects was obtained, according to which the delay of the movement of the
pointer 22 is compensated for to a certain degree, although not sufficiently. However, another evaluation was obtained, according to which the velocity profile deviates from the reality and theinput device 31 arbitrarily performs unnatural acceleration and deceleration, i.e., very artificial compensation is made. Therefore, it is preferable to determine the gain G(t) both on the basis of the acceleration and the velocity, not solely on the basis of the acceleration. - In the above, the
angular velocity sensor 58 and theacceleration sensor 59 are used as the sensor. Alternatively, an image sensor can also be used.FIG. 31 illustrates a configuration of this case. - In this embodiment, a leading end of the
input device 31 is attached with animage sensor 401, such as a CMOS (Complementary Metal Oxide Semiconductor). The user operates theinput device 31 to have theimage sensor 401 pick up the image in the direction in which theimage sensor 401 is oriented. With the current coordinates (X1, Y1) of the image picked up by theimage sensor 401 and the coordinates (X2, Y2) preceding the current coordinates by a time Δt, the velocity (Vx, Vy) is calculated in accordance with the following formula. -
Vx=(X1−X2)/Δt -
Vy=(Y1−Y2)/Δt (8) - Subsequently, with the use of this velocity, the compensation process can be performed in a similar manner as in the above-described case.
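- Formula (8) is a plain finite difference over the tracked image coordinates; a minimal sketch, with the sampling interval and the coordinate values assumed for illustration.

```python
def velocity_from_coordinates(curr, prev, dt):
    """Illustrative formula (8): Vx = (X1 - X2) / dt, Vy = (Y1 - Y2) / dt."""
    return (curr[0] - prev[0]) / dt, (curr[1] - prev[1]) / dt


if __name__ == "__main__":
    # Two successive coordinates reported by the image sensor, dt seconds apart.
    print(velocity_from_coordinates((320.0, 185.0), (312.0, 190.0), dt=0.016))
```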
- Further, a geomagnetic sensor can be used as the sensor.
FIG. 32 illustrates an embodiment of this case. - In this embodiment, the
input device 31 includes asensor 501 and anoperation unit 502. Thesensor 501 includes ageomagnetic sensor 511 and anacceleration sensor 512. - The user moves the
input device 31 in an arbitrary direction. When theinput device 31 is operated, thegeomagnetic sensor 511 detects the absolute angle (direction) of the operatedinput device 31. In a similar manner as illustrated in the formula (8) (wherein the coordinate values are replaced by angle values), theoperation unit 502 divides the difference between two temporally adjacent angles by the time therebetween to calculate the angular velocity. - Subsequently, with the use of this angular velocity, the compensation process can be performed in a similar manner as in the above-described case.
- On the basis of the detection output from the
acceleration sensor 512, theoperation unit 502 calculates a pitch angle and a roll angle. Then, on the basis of the calculated angles, theoperation unit 502 compensates for the slope to correct the position coordinates to more accurate values. In this process, a commonly used slope compensation algorithm can be used. - Further, a variable resistor can also be used as the sensor.
FIG. 33 illustrates an embodiment of this case. - In this embodiment, the
input device 31 includes avariable resistor 600 as the sensor. In thevariable resistor 600,slide portions like resistors slide portions slide portions like resistors slide portions - A
bar 602 attached with theslide portions groove 603. Abar 606 attached with theslide portions groove 607. In thegrooves operation unit 601 is slidably disposed. - Therefore, when the user moves the
operation unit 601 in an arbitrary direction within aframe 614, the resistance value in the X-direction and the resistance value in the Y-direction at the position of theoperation unit 601 are changed. These resistance values represent the coordinates in the X-direction and the Y-direction in theframe 614. Therefore, in a similar manner as illustrated in the formula (8), the difference between two coordinate points is divided by the time. Thereby, the velocity can be obtained. - Subsequently, with the use of this velocity, the compensation process can be performed in a similar manner as in the above-described case.
- In the
input device 31 ofFIG. 33 , the mass of theoperation unit 601 may be increased such that, when theentire input device 31 is tilted in a predetermined direction, theoperation unit 601 is moved within theframe 614. Alternatively, theoperation unit 601 may be operated by the user with his finger. -
FIG. 34 illustrates a configuration of an input system according to another embodiment of the present invention. - In an
input system 701 ofFIG. 34 , the operation by the user using a gesture with his hand or finger is detected, and thereby a command is input. - A
television receiver 711 of theinput system 701 includes ademodulation unit 721, avideo RAM 722, animage processing unit 723, anMPU 724, and anoutput unit 725. Further, an upper portion of thetelevision receiver 711 is attached with animage sensor 726. - The
demodulation unit 721 demodulates a television broadcasting signal received via a not-illustrated antenna, and outputs a video signal and an audio signal to thevideo RAM 722 and theoutput unit 725, respectively. Thevideo RAM 722 stores the video signal supplied from thedemodulation unit 721, and stores the image picked up by theimage sensor 726. From the image of the user stored in thevideo RAM 722, theimage processing unit 723 detects the gesture with a hand or finger (which corresponds to the operation unit of theinput device 31, and thus will be hereinafter referred to also as the operation unit), and assigns a command to the gesture. This function can be realized by commonly used techniques, such as the techniques of Japanese Unexamined Patent Application Publication Nos. 59-132079 and 10-207618, for example. - The
image processing unit 723 detects the gesture of the operation unit of the user picked up by theimage sensor 726. In this embodiment, therefore, a part of the configuration of thetelevision receiver 711 functioning as an electronic device constitutes an input device. Theimage processing unit 723 outputs the coordinates of thepointer 22 or the like to theMPU 724. On the basis of the input coordinates, theMPU 724 controls the display position of thepointer 22 displayed on theoutput unit 725. - The
image processing unit 723 and theMPU 724 can be integrally configured. - The
output unit 725 includes an image display unit and an audio output unit. The image sensor 726 functioning as a detection unit picks up the image of the operation unit, which is at least a part of the body of the user performing a gesture motion while viewing the image displayed on the image display unit of the output unit 725. - Functional Configuration of Image Processing Unit:
-
FIG. 35 illustrates a functional configuration of the image processing unit 723, which operates in accordance with a program stored in an internal memory thereof. The image processing unit 723 includes a displacement acquisition unit 821, a storage unit 822, a velocity acquisition unit 823, a storage unit 824, an acceleration acquisition unit 825, a compensation processing unit 826, and an output unit 827. - The
displacement acquisition unit 821 acquires the displacement of the operation unit of the user stored in the video RAM 722. The storage unit 822 stores the displacement acquired by the displacement acquisition unit 821. The velocity acquisition unit 823 calculates the difference between the displacement at one step and the displacement at the next step stored in the storage unit 822, to thereby calculate a velocity signal. That is, the displacement is differentiated to acquire the velocity as the operation signal. The storage unit 824 stores the velocity acquired by the velocity acquisition unit 823. The acceleration acquisition unit 825 calculates the difference between the velocity signal at one step and the velocity signal at the next step stored in the storage unit 824, to thereby calculate an acceleration signal. That is, the velocity as the operation signal is differentiated to acquire the acceleration as the differential value of the velocity. In this embodiment, the velocity acquisition unit 823 and the acceleration acquisition unit 825 constitute a first acquisition unit. - The
compensation processing unit 826 generates a gain G(t) defined by the acceleration as the differential value acquired by the acceleration acquisition unit 825. - Alternatively, the
compensation processing unit 826 generates a gain G(t) defined by the velocity as the operation signal acquired by the velocity acquisition unit 823 and the acceleration as the differential value acquired by the acceleration acquisition unit 825. Then, the compensation processing unit 826 multiplies the velocity as the operation signal by the generated gain G(t). That is, the velocity as the operation signal is corrected. - The
compensation processing unit 826 includes a function unit 841 and a compensation unit 842. The function unit 841 includes a gain acquisition unit 831, a correction unit 832, and a limitation unit 833. The compensation unit 842 includes a multiplication unit 834. - The
gain acquisition unit 831 acquires the gain G(t) defined by the acceleration as the differential value acquired by the acceleration acquisition unit 825. On the basis of the velocity as the operation signal acquired by the velocity acquisition unit 823, the correction unit 832 corrects the gain G(t) as appropriate. The limitation unit 833 limits the uncorrected gain G(t) or the corrected gain G(t) not to exceed a threshold value. The multiplication unit 834 multiplies the velocity as the operation signal acquired by the velocity acquisition unit 823 by the gain G(t), which is a function limited by the limitation unit 833, to thereby correct the velocity as the operation signal and compensate for the delay. - On the basis of the velocity as the operation signal compensated by the
multiplication unit 834, the output unit 827 calculates the coordinates of the pointer 22, and outputs the calculated coordinates. - Operation of Television Receiver:
- Subsequently, with reference to
FIG. 36, pointer display processing of the television receiver 711 will be described. This processing is performed when the user operates the operation unit in an arbitrary predetermined direction, i.e., when the entire operation unit is moved in an arbitrary direction in a three-dimensional free space to move the pointer 22 displayed on the output unit 725 of the television receiver 711 in a predetermined direction. This processing is performed to generate, in the television receiver 711 which practically includes therein (i.e., is integrated with) the input device, the operation signal for controlling the display on the screen of the television receiver 711. - At Step S101, the
displacement acquisition unit 821 acquires a displacement (x(t), y(t)). Specifically, the image of the operation unit of the user is picked up by the image sensor 726 and stored in the video RAM 722. The displacement acquisition unit 821 acquires the coordinates of the operation unit from this image. - At Step S102, the
storage unit 822 buffers the acquired displacement (x(t), y(t)). At Step S103, the velocity acquisition unit 823 acquires a velocity (x′(t), y′(t)). Specifically, the velocity acquisition unit 823 divides the difference between the displacement (x(t), y(t)) of this time and the displacement (x(t−1), y(t−1)) stored the last time in the storage unit 822 by the time therebetween, to thereby calculate the velocity (x′(t), y′(t)) as the operation signal. That is, the differential value is calculated. - At Step S104, the
storage unit 824 buffers the acquired velocity (x′(t), y′(t)). At Step S105, the acceleration acquisition unit 825 acquires an acceleration (x″(t), y″(t)). Specifically, the acceleration acquisition unit 825 divides the difference between the velocity (x′(t), y′(t)) of this time and the velocity (x′(t−1), y′(t−1)) stored the last time in the storage unit 824 by the time therebetween, to thereby calculate the acceleration (x″(t), y″(t)) as the differential value. That is, the differential value of the operation signal is acquired.
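- The two differentiation steps above are plain finite differences. As a rough sketch only (not the patent's code), assuming samples arrive at a fixed interval dt, Steps S101 to S105 can be pictured as follows:

```python
def differentiate(prev, curr, dt):
    """Finite difference between two successive 2-D samples."""
    return ((curr[0] - prev[0]) / dt, (curr[1] - prev[1]) / dt)

def velocity_and_acceleration(p_prev, p_curr, v_prev, dt):
    """One iteration of the buffering/differencing steps. The positions
    p_prev and p_curr come from the image of the operation unit, and dt
    is the frame interval of the image sensor (an assumption here)."""
    v_curr = differentiate(p_prev, p_curr, dt)   # Step S103: velocity
    a_curr = differentiate(v_prev, v_curr, dt)   # Step S105: acceleration
    return v_curr, a_curr
```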
- Then, at Steps S106 to S109, on the basis of the velocity acquired as the operation signal and the acceleration as the differential value of the velocity, the compensation processing unit 826 performs an operation for compensating for the delay in response of the operation signal. - That is, at Step S106, the
gain acquisition unit 831 acquires the gain G(t) defined by the acceleration (x″(t), y″(t)) acquired at Step S105. This gain G(t) as a function is multiplied by the velocity as the operation signal at Step S109 described later. Therefore, a gain G(t) value of 1 serves as a reference value. When the gain G(t) is larger than the reference value, the velocity is corrected to be increased. When the gain G(t) is smaller than the reference value, the velocity is corrected to be reduced. - In the acceleration phase (i.e., when the acceleration is positive), the gain G(t) is a value equal to or larger than the reference value (equal to or larger than the value of 1). In the deceleration phase (i.e., when the acceleration is negative), the gain G(t) is a value smaller than the reference value (smaller than the value of 1). Further, the larger the absolute value of the acceleration is, the larger the difference between the absolute value of the gain G(t) and the reference value (the value of 1) is.
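- The gain G(t) is at or above 1 while the operation is accelerating and below 1 while it is decelerating, and it moves further from 1 as the magnitude of the acceleration grows. One simple way to realize such behavior, shown here only as an illustrative sketch with a hypothetical sensitivity constant k (the patent leaves the exact mapping open, and it may also be read from a previously mapped table), is a linear function of the acceleration around the reference value:

```python
def gain_from_acceleration(ax, ay, k=0.05):
    """Illustrative G(t): >= 1 while accelerating, < 1 while decelerating,
    and further from 1 as the magnitude of the acceleration grows.
    k is a hypothetical sensitivity constant, not taken from the patent."""
    a = ax if abs(ax) >= abs(ay) else ay   # representative value (larger magnitude)
    return 1.0 + k * a
```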
- The gain G(t) may be acquired by performing an operation or by reading the gain G(t) from a previously mapped table. Further, the gain G(t) may be obtained separately for the X-direction and the Y-direction. Alternatively, the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single gain G(t).
- At Step S107, the
correction unit 832 corrects the gain G(t) on the basis of the velocity (x′(t), y′(t)) acquired by thevelocity acquisition unit 823. Specifically, the gain G(t) is corrected such that the larger the velocity (x′(t), y′(t)) is, the closer to the reference value (the value of 1) the gain G(t) is. - Also in this case, the corrected value may be obtained separately for the X-direction and the Y-direction, or the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single corrected value.
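- The correction of Step S107 pulls G(t) back toward the reference value of 1 as the speed grows. A minimal sketch, with a hypothetical scale constant v_scale that is not specified in the patent, could blend the gain toward 1 by a weight that shrinks at high speed:

```python
def correct_gain(gain, vx, vy, v_scale=500.0):
    """Illustrative Step S107: the larger the velocity magnitude, the closer
    the corrected gain is to the reference value 1. v_scale is a hypothetical
    tuning constant."""
    speed = max(abs(vx), abs(vy))           # representative value
    weight = 1.0 / (1.0 + speed / v_scale)  # 1 at rest, approaches 0 at high speed
    return 1.0 + (gain - 1.0) * weight
```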
- This correction process can be omitted. If the correction process is not omitted, the gain G(t) is a function defined by the velocity and the acceleration. If the correction process is omitted, the gain G(t) is a function defined by the acceleration.
- At Step S108, the
limitation unit 833 limits the gain G(t) not to exceed the threshold value. That is, the corrected gain G(t) is limited to be within the range of the predetermined threshold value. In other words, the threshold value is set as the maximum or minimum value, and the absolute value of the gain G(t) is limited not to exceed it. Therefore, even if the operation unit of the user vibrates, the limitation suppresses situations in which the absolute value of the gain G(t) becomes too small to compensate for the delay or so large that oscillation cannot be prevented.
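- Step S108 amounts to clamping the gain into a band around 1. A minimal sketch, with hypothetical bounds g_min and g_max (the patent only refers to threshold values):

```python
def limit_gain(gain, g_min=0.5, g_max=2.0):
    """Illustrative Step S108: keep G(t) within [g_min, g_max] so that it can
    neither vanish (leaving the delay uncompensated) nor grow enough to cause
    oscillation. The bounds are hypothetical, not taken from the patent."""
    return max(g_min, min(g_max, gain))
```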
- The processes of Steps S106 to S108 can be performed by a single reading process, if the gain G(t) has previously been mapped in the gain acquisition unit 831 to satisfy the conditions of the respective steps. - At Step S109, the
multiplication unit 834 constituting the compensation unit 842 multiplies the velocity (x′(t), y′(t)) as the operation signal by the gain G(t). That is, the velocity is multiplied by the gain G(t) as a coefficient, and thereby the corrected velocity (x′1(t), y′1(t)) is generated. For example, if the gain G(t) is used as the representative value integrating the value in the X-axis direction and the value in the Y-axis direction, the corrected velocity (x′1(t), y′1(t)) is calculated with the following formula.
x′1(t)=x′(t)·G(t) -
y′1(t)=y′(t)·G(t) (9) - At Step S110, the
output unit 827 calculates the coordinates on the basis of the corrected velocity (x′1(t), y′1(t)), and outputs the calculated coordinates. The output unit 827 adds the velocity to the immediately preceding position coordinates of the pointer 22 to calculate new position coordinates. That is, the displacement per unit time in the X-direction and the Y-direction of the operation unit of the user is converted into the displacement amount per unit time in the X-direction and the Y-direction of the pointer 22 displayed on the image display unit of the output unit 725. Thereby, the pointer movement amount is calculated such that the larger the gain G(t) is, i.e., the larger the correction amount of the velocity is, the larger the compensation amount of the delay in response is. That is, as the gain G(t) is increased, the delay between the operation of the operation unit and the movement of the pointer 22 is reduced. If the value of the gain G(t) is further increased, the movement of the pointer 22 is more advanced in phase than the operation of the operation unit. - Further, the processes performed here include a process of removing a hand-shake component of the operation unit through a low-pass filter, and a process of, when the operation velocity is low (a low velocity and a low acceleration), setting an extremely low moving velocity of the
pointer 22 to make it easy to stop thepointer 22 on theicon 21. - The above-described processes are repeatedly performed during the operation of the operation unit.
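- Tying Steps S109 and S110 to the additional processing just mentioned, the sketch below is only an illustration under stated assumptions (the constants alpha, dt, the slow-motion thresholds, and the damping factor are hypothetical, not taken from the patent): it applies formula (9), adds the corrected velocity to the previous pointer coordinates, attenuates hand shake with a first-order low-pass filter, and slows the pointer further when both the velocity and the acceleration are small.

```python
def lowpass(prev, value, alpha=0.2):
    """First-order low-pass (exponential smoothing) for hand-shake removal."""
    return value if prev is None else alpha * value + (1.0 - alpha) * prev

def move_pointer(prev_xy, prev_filtered_v, vx, vy, ax, ay, gain, dt,
                 slow_v=5.0, slow_a=5.0, damping=0.1):
    """Step S109: corrected velocity = velocity * G(t) (formula (9)).
    Step S110: new coordinates = previous coordinates + corrected velocity * dt.
    When both velocity and acceleration are small, the pointer is slowed so
    that it is easy to stop on an icon. All constants are hypothetical."""
    vx_c, vy_c = vx * gain, vy * gain                        # formula (9)
    fx = lowpass(prev_filtered_v[0], vx_c)
    fy = lowpass(prev_filtered_v[1], vy_c)
    if abs(vx) < slow_v and abs(vy) < slow_v and abs(ax) < slow_a and abs(ay) < slow_a:
        fx, fy = fx * damping, fy * damping                  # easier to stop on an icon
    x = prev_xy[0] + fx * dt
    y = prev_xy[1] + fy * dt
    return (x, y), (fx, fy)
```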
- As described above, in this embodiment, the gain G(t) is determined in accordance with the velocity as the operation signal and the acceleration as the differential value of the velocity.
- The gain G(t) can also be determined in accordance with the displacement as the operation signal and the velocity as the differential value of the displacement. With reference to
FIGS. 37 and 38 , an embodiment of this case will be described. -
FIG. 37 is a block diagram illustrating a functional configuration of the image processing unit 723 in this case. - In the
image processing unit 723 of FIG. 37, the storage unit 824 and the acceleration acquisition unit 825 of FIG. 35 are omitted, and the output from the velocity acquisition unit 823 is directly supplied to the gain acquisition unit 831. Further, the correction unit 832 and the multiplication unit 834 are supplied with the displacement acquired by the displacement acquisition unit 821 in place of the velocity acquired by the velocity acquisition unit 823. The other parts of the configuration of the image processing unit 723 in FIG. 37 are similar to the corresponding parts in FIG. 35, and redundant description thereof is omitted. In this embodiment, the displacement acquisition unit 821 and the velocity acquisition unit 823 constitute a first acquisition unit. - Operation of Television Receiver:
- Subsequently, with reference to
FIG. 38, pointer display processing of the television receiver 711 will be described. This processing is performed when the user operates the operation unit in an arbitrary predetermined direction, i.e., when the entire operation unit is moved in an arbitrary direction in a three-dimensional free space to move the pointer 22 displayed on the output unit 725 of the television receiver 711 in a predetermined direction. This processing is also performed to generate, in the television receiver 711 which practically includes therein (i.e., is integrated with) the input device, the operation signal for controlling the display on the screen of the television receiver 711. - At Step S151, the
displacement acquisition unit 821 acquires a displacement (x(t), y(t)). Specifically, the image of the operation unit of the user is picked up by the image sensor 726 and stored in the video RAM 722. The displacement acquisition unit 821 acquires the coordinates of the operation unit from this image. - At Step S152, the
storage unit 822 buffers the acquired displacement (x(t), y(t)). At Step S153, the velocity acquisition unit 823 acquires a velocity (x′(t), y′(t)). Specifically, the velocity acquisition unit 823 divides the difference between the displacement (x(t), y(t)) of this time and the displacement (x(t−1), y(t−1)) stored the last time in the storage unit 822 by the time therebetween, to thereby calculate the velocity (x′(t), y′(t)). That is, the velocity (x′(t), y′(t)) as the differential value of the displacement (x(t), y(t)) as the operation signal is acquired. - Then, at Steps S154 to S157, the
compensation processing unit 826 performs an operation for compensating for the delay in response of the operation signal on the basis of the acquired displacement and velocity. - That is, at Step S154, the
gain acquisition unit 831 acquires the gain G(t) according to the velocity (x′(t), y′(t)) acquired at Step S153. This gain G(t) as a function is multiplied by the displacement at Step S157 described later. Therefore, a gain G(t) value of 1 serves as a reference value. When the gain G(t) is larger than the reference value, the displacement as the operation signal is corrected to be increased. When the gain G(t) is smaller than the reference value, the displacement is corrected to be reduced. - When the velocity is positive (e.g., when the operation unit moves in the left direction (or the upper direction)), the gain G(t) is a value equal to or larger than the reference value (equal to or larger than the value of 1). When the velocity is negative (e.g., when the operation unit moves in the right direction (or the lower direction)), the gain G(t) is a value smaller than the reference value (smaller than the value of 1). Further, the larger the absolute value of the velocity is, the larger the difference between the absolute value of the gain G(t) and the reference value (the value of 1) is.
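- In this variant the roles shift down one derivative: the displacement is the operation signal and the velocity plays the part the acceleration played before. A correspondingly minimal, illustrative gain (again with a hypothetical constant k, not the patent's exact mapping) is:

```python
def gain_from_velocity(vx, vy, k=0.05):
    """Illustrative G(t) for the displacement-based variant: >= 1 when the
    velocity is positive, < 1 when it is negative, and further from 1 as the
    magnitude of the velocity grows. k is a hypothetical constant."""
    v = vx if abs(vx) >= abs(vy) else vy   # representative value (larger magnitude)
    return 1.0 + k * v
```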
- The gain G(t) may be acquired by performing an operation or by reading the gain G(t) from a previously mapped table. Further, the gain G(t) may be obtained separately for the X-direction and the Y-direction. Alternatively, the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single gain G(t).
- At Step S155, the
correction unit 832 corrects the gain G(t) on the basis of the displacement (x(t), y(t)) as the operation signal acquired by thedisplacement acquisition unit 821. Specifically, the gain G(t) is corrected such that the larger the displacement (x(t), y(t)) is, the closer to the reference value (the value of 1) the gain G(t) is. - Also in this case, the corrected value may be obtained separately for the X-direction and the Y-direction, or the larger one of the respective absolute values of the two values may be selected as a representative value, for example, to obtain a single corrected value.
- This correction process can be omitted. If the correction process is not omitted, the gain G(t) is a function defined by the displacement and the velocity. If the correction process is omitted, the gain G(t) is a function defined by the velocity.
- At Step S156, the
limitation unit 833 limits the gain G(t) not to exceed the threshold value. That is, the corrected gain G(t) is limited to be within the range of the predetermined threshold value. In other words, the threshold value is set as the maximum or minimum value, and the absolute value of the gain G(t) is limited not to exceed it. Therefore, even if the operation unit of the user vibrates, the limitation suppresses situations in which the absolute value of the gain G(t) becomes too small to compensate for the delay or so large that oscillation cannot be prevented. - The processes of Steps S154 to S156 can be performed by a single reading process, if the gain G(t) has previously been mapped in the
gain acquisition unit 831 to satisfy the conditions of the respective steps. - At Step S157, the
multiplication unit 834 multiplies the displacement (x(t), y(t)) by the gain G(t). That is, the displacement is multiplied by the gain G(t) as a coefficient, and thereby the corrected displacement (x1(t), y1(t)) is generated. For example, if the gain G(t) is used as the representative value integrating the value in the X-axis direction and the value in the Y-axis direction, the corrected displacement (x1(t), y1(t)) is calculated with the following formula. -
x1(t)=x(t)·G(t) -
y1(t)=y(t)·G(t) (10) - At Step S158, the
output unit 827 outputs the corrected displacement (x1(t), y1(t)). That is, the larger the gain G(t) is, i.e., the larger the correction amount of the displacement is, the larger the compensation amount of the delay in response is. That is, as the gain G(t) is increased, the delay between the operation of the operation unit and the movement of the pointer 22 is reduced. If the value of the gain G(t) is further increased, the movement of the pointer 22 is more advanced in phase than the operation of the operation unit. - Further, the processes performed here include a process of removing a hand-shake component of the operation unit through a low-pass filter, and a process of, when the operation velocity is low (a low velocity and a low acceleration), setting an extremely low moving velocity of the
pointer 22 to make it easy to stop thepointer 22 on theicon 21. - The above-described processes are repeatedly performed during the operation of the operation unit.
- As described above, in this embodiment, the gain G(t) is determined in accordance with the displacement and the velocity.
- Changes in Displacement
- Subsequently, with reference to
FIGS. 39A to 39C, description will be made of changes in displacement occurring when the delay of the operation signal is compensated for, as in the embodiments of FIGS. 36 and 38 described above. -
FIGS. 39A to 39C are diagrams illustrating the changes in displacement. The vertical axis represents the coordinate (pixel) as the displacement, and the horizontal axis represents the time. FIG. 39B illustrates the changes in displacement occurring in a case in which the delay of the velocity as the operation signal has been compensated for with the use of the gain G(t) defined on the basis of the velocity and the acceleration, as in the embodiment of FIG. 36. FIG. 39C illustrates the changes in displacement occurring in a case in which the delay of the displacement as the operation signal has been compensated for with the use of the gain G(t) defined on the basis of the displacement and the velocity, as in the embodiment of FIG. 38. Meanwhile, FIG. 39A illustrates the changes in displacement occurring in a case in which the compensation for the delay of the operation signal as in the embodiments of FIGS. 36 and 38 is not performed. - In
FIG. 39A, a line L301 represents the displacement of the operation unit, and a line L302 represents the displacement of the pointer 22 occurring in a case in which the display of the pointer 22 is controlled on the basis of the detection result of the displacement of the operation unit. The delay of the operation signal with respect to the operation is not compensated for. Therefore, the line L302 is delayed in phase from the line L301. - In
FIG. 39B, a line L311 represents the displacement of the operation unit, similarly to the line L301 of FIG. 39A. A line L312 represents the change in displacement occurring in the case in which the delay of the velocity as the operation signal has been compensated for with the use of the gain G(t) defined on the basis of the detection result of the velocity and the acceleration of the operation signal, as in the embodiment of FIG. 36. The delay of the operation signal with respect to the operation has been compensated for. Therefore, the line L312 is hardly delayed in phase with respect to the line L311, and thus is substantially the same in phase as the line L311. - In
FIG. 39C, a line L321 represents the displacement of the operation unit, similarly to the line L301 of FIG. 39A. A line L322 represents the change in displacement occurring in the case in which the delay of the displacement as the operation signal has been compensated for with the use of the gain G(t) defined on the basis of the detection result of the displacement and the velocity of the operation signal, as in the embodiment of FIG. 38. The delay of the operation signal with respect to the operation has been compensated for. Therefore, the line L322 is hardly delayed in phase with respect to the line L321, and thus is substantially the same in phase as the line L321. - In the above, the electronic device operated by the
input device 31 is the television receiver 10. However, the present invention is also applicable to the control of a personal computer and other electronic devices. - Further, if the electronic device to be controlled is a mobile electronic device, such as a mobile phone and a PDA (Personal Digital Assistant), for example, the
input device 31 can be configured separately from or integrally with the mobile electronic device. If theinput device 31 is integrated with the mobile electronic device, the entire mobile electronic device is operated in a predetermined direction to perform an input operation. - The series of processes described above can be performed both by hardware and software. To have the series of processes performed by software, a program forming the software is installed from a program recording medium on a computer incorporated in special hardware or a general-purpose personal computer, for example, which can perform a variety of functions by installing a variety of programs thereon.
- In the present specification, the steps of describing a program include not only processes performed chronologically in the described order but also processes not necessarily performed chronologically but performed concurrently or individually.
- Further, in the present specification, a system refers to the entirety of a device formed by a plurality of devices.
- The embodiments of the present invention are not limited to the embodiments described above. Thus, the present invention can be modified in a variety of ways within the scope not departing from the gist of the present invention.
- The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2008-280764 filed in the Japan Patent Office on Oct. 31, 2008, the entire content of which is hereby incorporated by reference.
- It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Claims (1)
1. An input device comprising:
a detection unit configured to detect an operation by a user for controlling an electronic device and output an operation signal corresponding to the operation;
a first acquisition unit configured to acquire the detected operation signal and a differential value of the operation signal;
a second acquisition unit configured to acquire a function defined by the differential value to compensate for a delay in response of the operation signal with respect to the operation by the user; and
a compensation unit configured to compensate the operation signal with the acquired function.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/151,667 US20140195016A1 (en) | 2008-10-31 | 2014-01-09 | Input device and method and program |
US14/878,392 US9990056B2 (en) | 2008-10-31 | 2015-10-08 | Input device and method and program |
US15/985,211 US10474250B2 (en) | 2008-10-31 | 2018-05-21 | Input device and method and program |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008280764 | 2008-10-31 | ||
JP2008-280764 | 2008-10-31 | ||
US12/606,484 US8648798B2 (en) | 2008-10-31 | 2009-10-27 | Input device and method and program |
US14/151,667 US20140195016A1 (en) | 2008-10-31 | 2014-01-09 | Input device and method and program |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/606,484 Continuation US8648798B2 (en) | 2008-10-31 | 2009-10-27 | Input device and method and program |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/878,392 Continuation US9990056B2 (en) | 2008-10-31 | 2015-10-08 | Input device and method and program |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140195016A1 true US20140195016A1 (en) | 2014-07-10 |
Family
ID=42130764
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/606,484 Expired - Fee Related US8648798B2 (en) | 2008-10-31 | 2009-10-27 | Input device and method and program |
US14/151,667 Abandoned US20140195016A1 (en) | 2008-10-31 | 2014-01-09 | Input device and method and program |
US14/878,392 Active US9990056B2 (en) | 2008-10-31 | 2015-10-08 | Input device and method and program |
US15/985,211 Active US10474250B2 (en) | 2008-10-31 | 2018-05-21 | Input device and method and program |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/606,484 Expired - Fee Related US8648798B2 (en) | 2008-10-31 | 2009-10-27 | Input device and method and program |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/878,392 Active US9990056B2 (en) | 2008-10-31 | 2015-10-08 | Input device and method and program |
US15/985,211 Active US10474250B2 (en) | 2008-10-31 | 2018-05-21 | Input device and method and program |
Country Status (5)
Country | Link |
---|---|
US (4) | US8648798B2 (en) |
JP (1) | JP5464416B2 (en) |
KR (1) | KR101676030B1 (en) |
CN (1) | CN101727220B (en) |
TW (1) | TWI442263B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110255764A1 (en) * | 2010-04-15 | 2011-10-20 | Roger Lin | Orientating an oblique plane in a 3d representation |
US9524579B2 (en) * | 2010-04-15 | 2016-12-20 | Roger Lin | Orientating an oblique plane in a 3D representation |
US9569012B2 (en) | 2008-12-24 | 2017-02-14 | Sony Corporation | Input apparatus, control apparatus, and control method for input apparatus |
US9990056B2 (en) | 2008-10-31 | 2018-06-05 | Sony Corporation | Input device and method and program |
US10139913B1 (en) * | 2017-07-19 | 2018-11-27 | Sunrex Technology Corp. | Rotational input device |
CN109284019A (en) * | 2017-07-19 | 2019-01-29 | 精元电脑股份有限公司 | Rotating input device |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8223121B2 (en) | 2008-10-20 | 2012-07-17 | Sensor Platforms, Inc. | Host system and method for determining an attitude of a device undergoing dynamic acceleration |
US8587519B2 (en) * | 2009-01-07 | 2013-11-19 | Sensor Platforms, Inc. | Rolling gesture detection using a multi-dimensional pointing device |
JP2010170388A (en) * | 2009-01-23 | 2010-08-05 | Sony Corp | Input device and method, information processing apparatus and method, information processing system, and program |
US8957909B2 (en) | 2010-10-07 | 2015-02-17 | Sensor Platforms, Inc. | System and method for compensating for drift in a display of a user interface state |
KR101304407B1 (en) * | 2011-11-02 | 2013-09-05 | 연세대학교 산학협력단 | Apparatus for estimating laser pointing location of remote mouse and method thereof |
ITTO20111144A1 (en) * | 2011-12-13 | 2013-06-14 | St Microelectronics Srl | SYSTEM AND METHOD OF COMPENSATION OF THE ORIENTATION OF A PORTABLE DEVICE |
US9459276B2 (en) | 2012-01-06 | 2016-10-04 | Sensor Platforms, Inc. | System and method for device self-calibration |
US9316513B2 (en) | 2012-01-08 | 2016-04-19 | Sensor Platforms, Inc. | System and method for calibrating sensors for different operating environments |
CN102611849A (en) * | 2012-03-20 | 2012-07-25 | 深圳市金立通信设备有限公司 | Anti-shaking system and anti-shaking method for mobile phone photographing |
US9228842B2 (en) | 2012-03-25 | 2016-01-05 | Sensor Platforms, Inc. | System and method for determining a uniform external magnetic field |
CN103391300B (en) * | 2012-05-08 | 2014-11-05 | 腾讯科技(深圳)有限公司 | Method and system for achieving synchronous movement in remote control |
JP5550124B2 (en) * | 2012-08-17 | 2014-07-16 | Necシステムテクノロジー株式会社 | INPUT DEVICE, DEVICE, INPUT METHOD, AND PROGRAM |
TWI467467B (en) * | 2012-10-29 | 2015-01-01 | Pixart Imaging Inc | Method and apparatus for controlling object movement on screen |
CN107783669B (en) * | 2016-08-23 | 2021-04-16 | 群光电子股份有限公司 | Cursor generation system, method and computer program product |
JP7318557B2 (en) * | 2020-02-18 | 2023-08-01 | トヨタ自動車株式会社 | Communication system, control method and control program |
CN116909241B (en) * | 2023-09-14 | 2023-11-24 | 中科合肥技术创新工程院 | Digital full-automatic production line control system of intelligent radiator |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070252813A1 (en) * | 2004-04-30 | 2007-11-01 | Hillcrest Laboratories, Inc. | 3D pointing devices and methods |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63128417A (en) * | 1986-11-19 | 1988-06-01 | Toshiba Corp | Control device for external input means of picture processor |
JP3776206B2 (en) * | 1997-05-07 | 2006-05-17 | 株式会社リコー | Pen-type input device |
JPH11305895A (en) * | 1998-04-21 | 1999-11-05 | Toshiba Corp | Information processor |
JP2004062656A (en) * | 2002-07-30 | 2004-02-26 | Canon Inc | Coordinate input device, control method for the same, and program |
US7586654B2 (en) | 2002-10-11 | 2009-09-08 | Hewlett-Packard Development Company, L.P. | System and method of adding messages to a scanned image |
US7489299B2 (en) * | 2003-10-23 | 2009-02-10 | Hillcrest Laboratories, Inc. | User interface devices and methods employing accelerometers |
EP1743322A4 (en) * | 2004-04-30 | 2008-04-30 | Hillcrest Lab Inc | Methods and devices for removing unintentional movement in free space pointing devices |
KR100704630B1 (en) * | 2005-05-25 | 2007-04-09 | 삼성전자주식회사 | Computer system including wireless input device and coordinates processing method for the same |
JP2007034525A (en) * | 2005-07-25 | 2007-02-08 | Fuji Xerox Co Ltd | Information processor, information processing method and computer program |
CN100565434C (en) * | 2006-09-04 | 2009-12-02 | 达方电子股份有限公司 | Mouse and displacement amount compensation process thereof |
CN101206537B (en) * | 2006-12-22 | 2010-05-19 | 财团法人工业技术研究院 | Inertia sensing type coordinate input device and method |
CN201127066Y (en) * | 2007-12-18 | 2008-10-01 | 广州市弘元互动数字技术开发有限公司 | Television space remote control based on acceleration induction |
JP5464416B2 (en) | 2008-10-31 | 2014-04-09 | ソニー株式会社 | Input device and method, and program |
JP2010152493A (en) | 2008-12-24 | 2010-07-08 | Sony Corp | Input device, control apparatus, and control method for the input device |
JP2010152761A (en) | 2008-12-25 | 2010-07-08 | Sony Corp | Input apparatus, control apparatus, control system, electronic apparatus, and control method |
JP4702475B2 (en) | 2008-12-25 | 2011-06-15 | ソニー株式会社 | Input device, handheld device and control method |
US8868798B1 (en) | 2010-09-24 | 2014-10-21 | Emc Corporation | Techniques for modeling disk performance |
-
2009
- 2009-09-17 JP JP2009215255A patent/JP5464416B2/en not_active Expired - Fee Related
- 2009-10-22 TW TW098135775A patent/TWI442263B/en not_active IP Right Cessation
- 2009-10-27 US US12/606,484 patent/US8648798B2/en not_active Expired - Fee Related
- 2009-10-29 CN CN200910207178.2A patent/CN101727220B/en not_active Expired - Fee Related
- 2009-10-30 KR KR1020090104353A patent/KR101676030B1/en active IP Right Grant
-
2014
- 2014-01-09 US US14/151,667 patent/US20140195016A1/en not_active Abandoned
-
2015
- 2015-10-08 US US14/878,392 patent/US9990056B2/en active Active
-
2018
- 2018-05-21 US US15/985,211 patent/US10474250B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070252813A1 (en) * | 2004-04-30 | 2007-11-01 | Hillcrest Laboratories, Inc. | 3D pointing devices and methods |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9990056B2 (en) | 2008-10-31 | 2018-06-05 | Sony Corporation | Input device and method and program |
US10474250B2 (en) | 2008-10-31 | 2019-11-12 | Sony Corporation | Input device and method and program |
US9569012B2 (en) | 2008-12-24 | 2017-02-14 | Sony Corporation | Input apparatus, control apparatus, and control method for input apparatus |
US9823757B2 (en) | 2008-12-24 | 2017-11-21 | Sony Corporation | Input apparatus, control apparatus, and control method for input apparatus |
US20110255764A1 (en) * | 2010-04-15 | 2011-10-20 | Roger Lin | Orientating an oblique plane in a 3d representation |
US9189890B2 (en) * | 2010-04-15 | 2015-11-17 | Roger Lin | Orientating an oblique plane in a 3D representation |
US9524579B2 (en) * | 2010-04-15 | 2016-12-20 | Roger Lin | Orientating an oblique plane in a 3D representation |
US10139913B1 (en) * | 2017-07-19 | 2018-11-27 | Sunrex Technology Corp. | Rotational input device |
CN109284019A (en) * | 2017-07-19 | 2019-01-29 | 精元电脑股份有限公司 | Rotating input device |
Also Published As
Publication number | Publication date |
---|---|
JP5464416B2 (en) | 2014-04-09 |
US10474250B2 (en) | 2019-11-12 |
US20160098102A1 (en) | 2016-04-07 |
US9990056B2 (en) | 2018-06-05 |
KR20100048941A (en) | 2010-05-11 |
CN101727220B (en) | 2015-07-22 |
TWI442263B (en) | 2014-06-21 |
US20180267628A1 (en) | 2018-09-20 |
TW201032089A (en) | 2010-09-01 |
US20100110001A1 (en) | 2010-05-06 |
CN101727220A (en) | 2010-06-09 |
US8648798B2 (en) | 2014-02-11 |
KR101676030B1 (en) | 2016-11-14 |
JP2010134912A (en) | 2010-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10474250B2 (en) | Input device and method and program | |
US8542188B2 (en) | Pointing input device, pointing control device, pointing control system, and pointing control method | |
US10747338B2 (en) | Input apparatus, control apparatus, control system, control method, and handheld apparatus | |
KR101969318B1 (en) | Display apparatus and control method thereof | |
US8558787B2 (en) | Input device and method, information processing device and method, information processing system, and program | |
US8952993B2 (en) | Information processing apparatus, method, and program | |
EP2161651B1 (en) | Control device, input device, control system, hand-held type information processing device, control method and its program | |
JP5463790B2 (en) | Operation input system, control device, handheld device, and operation input method | |
USRE47433E1 (en) | Input apparatus, control apparatus, control system, control method, and handheld apparatus | |
US8884991B2 (en) | Control system, control apparatus, handheld apparatus, control method, and program | |
KR101893601B1 (en) | Input apparatus of display apparatus, display system and control method thereof | |
CN101149651A (en) | Input device and method and medium for providing movement information of the input device | |
JP2000267799A (en) | System and method for coordinate position control, and computer-readable recording medium for recording program for allowing computer to execute the method | |
JP2015049822A (en) | Display control apparatus, display control method, display control signal generating apparatus, display control signal generating method, program, and display control system | |
CN117891343A (en) | Method and device for debouncing input of an input device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SONY CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YAMAMOTO, KAZUYUKI;REEL/FRAME:032658/0218 Effective date: 20140120 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |