WO2014061096A1 - Information display device and information display method - Google Patents

Information display device and information display method

Info

Publication number
WO2014061096A1
WO2014061096A1 PCT/JP2012/076678 JP2012076678W
Authority
WO
WIPO (PCT)
Prior art keywords
information
window
display
map
control unit
Prior art date
Application number
PCT/JP2012/076678
Other languages
French (fr)
Japanese (ja)
Inventor
英一 有田
下谷 光生
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to JP2014541845A priority Critical patent/JP6000367B2/en
Priority to PCT/JP2012/076678 priority patent/WO2014061096A1/en
Publication of WO2014061096A1 publication Critical patent/WO2014061096A1/en

Links

Images

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/10 Map spot or coordinate position indicators; Map reading aids
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3664 Details of the user input interface, e.g. buttons, knobs or sliders, including those provided on a touch screen; remote controllers; input using gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/003 Maps
    • G09B29/006 Representation of non-cartographic information on maps, e.g. population distribution, wind direction, radiation levels, air and sea routes
    • G09B29/007 Representation of non-cartographic information on maps, e.g. population distribution, wind direction, radiation levels, air and sea routes using computer methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04808 Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Definitions

  • The present invention relates to an information display device and an information display method.
  • Patent Document 1 discloses a system that links an in-vehicle device and a portable device. Specifically, when the navigation application is executed on the mobile device, the mobile device detects the current position and acquires map data around the current position. Then, the mobile device processes the acquired map data according to the display device of the in-vehicle device, and sends the processed data to the in-vehicle device. Thereby, map data around the host vehicle is displayed on the display device of the in-vehicle device. Similarly, if the route to the destination is calculated by the navigation function of the portable device, the map data on which the route to the destination is superimposed is displayed on the display device of the in-vehicle device.
  • In this system, the portable device employs a capacitive touch panel display capable of multipoint detection, whereas the in-vehicle device employs a resistive touch panel display, for which multipoint detection is difficult.
  • Consequently, the mobile device can accept a flick operation, a two-finger tap, a pinch-in (pinching two fingers together), and a pinch-out (spreading two fingers apart), but the in-vehicle device cannot accept these operations.
  • Patent Document 1 discloses a technique for enabling an in-vehicle device to handle an input operation that is accepted by a mobile device but not accepted by the in-vehicle device.
  • Patent Document 2 discloses a map information display device for moving objects.
  • In this device, a touch panel is attached to the display means.
  • When a base screen on which a map centered on the vehicle position is displayed is touched, a window screen is displayed on the base screen.
  • A map centered on the vehicle position is also displayed on the window screen.
  • The map scale of the base screen is 1/1,250,000, whereas the map scale of the window screen is 1/25,000.
  • When a part of the base screen is touched in the two-screen display state, with both the base screen and the window screen displayed, the window screen is erased and a single 1/1,250,000-scale map centered on the vehicle position is displayed. Likewise, when a part of the window screen is touched in the two-screen display state, the window screen is erased and a single 1/25,000-scale map centered on the vehicle position is displayed on the base screen.
  • In Patent Document 1, the navigation application is executed by the mobile device, and the map data generated as a result is output to the display device of the in-vehicle device. At this time, only a single map is displayed on the in-vehicle device.
  • In Patent Document 2, maps with different scales can be displayed on the base screen and the window screen.
  • However, the display contents themselves are basically the same; only the map scale differs between the base screen and the window screen. For this reason, the display content may not meet the user's needs. Further, the position and size of the window screen are predetermined and fixed, so they may not match the user's preference.
  • An object of the present invention is to provide a technique that allows a user to manipulate the position, size, content, and the like of display information according to preferences.
  • An information display device includes a display unit having a display surface, an input unit that receives a user operation, and a control unit.
  • When a window opening operation, which instructs the formation of a window on the display surface and designates the window formation range, is performed as a user operation on first display information on the display surface, the control unit forms a window according to the formation range designated by the operation.
  • The control unit then causes the window to display second display information that is related to the first display information but differs from it in content or expression format.
  • By adopting the window opening operation, the position, size, content, and the like of the display information can be adjusted to the user's preference.
  • Since the window is formed at the position where the window opening operation is performed, the information in the window can be viewed without a large shift of the viewpoint. For this reason, the user's cognitive load can be kept small.
  • The second display information displayed in the window is related to the first display information but differs from it in content or expression format. Displaying such varied information on one screen improves the efficiency of information recognition, and the reduction in cognitive load produced by the window opening operation also contributes to this efficiency.
  • FIG. 1 illustrates a block diagram of an information display device 10 according to an embodiment.
  • The information display device 10 includes a display unit 12, an input unit 14, a control unit 16, and a storage unit 18.
  • The display unit 12 displays various information.
  • The display unit 12 includes, for example, a display surface configured by arranging a plurality of pixels in a matrix, and a driving device that drives each pixel (in other words, controls the display state of each pixel) based on image data acquired from the control unit 16.
  • The image displayed on the display unit 12 may be a still image, a moving image, or a combination of the two.
  • The display unit 12 can be configured by, for example, a liquid crystal display device.
  • In this case, the display area of the display panel corresponds to the display surface, and a drive circuit externally attached to the display panel corresponds to the driving device.
  • A part of the drive circuit may be incorporated in the display panel.
  • Alternatively, the display unit 12 can be configured by an electroluminescence (EL) display device, a plasma display device, or the like.
  • The input unit 14 receives various information from the user.
  • The input unit 14 includes, for example, a detection unit that detects the indicator used by the user for input, and a detection signal output unit that outputs the result detected by the detection unit to the control unit 16 as a detection signal.
  • Here, the input unit 14 is configured by a so-called contact-type touch panel and may be referred to as the "touch panel 14" below.
  • A touch panel is also sometimes called a "touch pad" or the like.
  • It is assumed below that the indicator used for input is the user's finger.
  • The detection unit of the touch panel 14 provides an input surface on which the user places a fingertip, and detects the presence of a finger on the input surface by a sensor group provided for the input surface.
  • The area in which a finger can be detected by the sensor group corresponds to the input area in which user input can be received; here, the input area corresponds to the input surface, a two-dimensional area.
  • The sensor group may be electrical, optical, mechanical, or the like, or a combination thereof.
  • Various position detection methods have been developed, and any of them may be adopted for the touch panel 14.
  • A configuration capable of detecting the pressing force of the finger on the input surface may also be employed.
  • The position of the fingertip on the input surface can be specified from the combination of the output signals of the sensors.
  • The identified position is expressed, for example, by coordinate data on a coordinate system set on the input surface.
  • When the finger moves, the coordinate data indicating the finger position changes, so the movement of the finger can be detected from a series of continuously acquired coordinate data.
  • The finger position may also be expressed by a method other than coordinates; that is, the coordinate data is one example of finger position data expressing the position of the finger.
  • The detection signal output unit of the touch panel 14 generates the coordinate data indicating the finger position from the output signals of the sensors and transmits the coordinate data to the control unit 16 as a detection signal.
  • Alternatively, the conversion to coordinate data may be performed by the control unit 16.
  • In that case, the detection signal output unit converts the output signal of each sensor into a signal in a format that the control unit 16 can acquire, and transmits the obtained signal to the control unit 16 as a detection signal.
  • A structure is illustrated in which the input surface 34 of the touch panel 14 (see FIG. 1) and the display surface 32 of the display unit 12 (see FIG. 1) are overlapped, in other words, in which the input surface 34 and the display surface 32 are integrated. This integrated structure provides the input/display unit 20 (see FIG. 1), more specifically, the touch screen 20.
  • With the integrated structure, the input surface 34 and the display surface 32 appear as one to the user, giving the user the feeling of performing input operations directly on the display surface 32. An intuitive operation environment is thus provided.
  • For this reason, the expression "the user operates the display surface 32" may be used below.
  • The control unit 16 performs various processes and controls in the information display device 10. For example, the control unit 16 analyzes information input from the touch panel 14, generates image data according to the analysis result, and outputs the image data to the display unit 12.
  • The control unit 16 includes a central processing unit (configured with, for example, one or more microprocessors) and a main storage unit (for example, one or more storage devices such as ROM, RAM, and flash memory).
  • Various programs may be stored in the main storage unit of the control unit 16 in advance, or may be read from the storage unit 18 during execution and stored in the main storage unit.
  • The main storage unit is used not only for storing programs but also for storing various data.
  • The main storage unit also provides a work area for the central processing unit to execute the programs.
  • Further, the main storage unit provides an image holding unit into which an image to be displayed on the display unit 12 is written.
  • The image holding unit may be referred to as "video memory", "graphics memory", or the like.
  • Alternatively, all or part of the control unit 16 may be configured as hardware (for example, an arithmetic circuit configured to perform a specific calculation).
  • The storage unit 18 stores various information.
  • The storage unit 18 is provided as an auxiliary storage unit used by the control unit 16.
  • The storage unit 18 can be configured using one or more storage devices such as a hard disk device, an optical disk, and a rewritable nonvolatile semiconductor memory.
  • The information display device 10 may further include elements other than the above elements 12, 14, 16, and 18.
  • For example, one or more of the following may be added: an audio output unit that outputs auditory information; a communication unit that performs wired or wireless communication with various devices; and a current position detection unit that detects the current position of the information display device 10 in conformance with, for example, the GPS (Global Positioning System).
  • The audio output unit can output, for example, operation sounds, sound effects, guidance voice, and the like.
  • The communication unit can be used, for example, for new acquisition and updating of information stored in the storage unit 18. The current position detection unit can be used for a navigation function, for example.
  • The information display device 10 may be a portable or desktop information device.
  • Alternatively, the information display device 10 may be applied to a navigation device or an audio/visual device mounted on a moving body such as an automobile.
  • A touch operation is an operation in which at least one fingertip is brought into contact with the input surface of the touch panel and the contacting finger is then moved away from the input surface without being moved along it.
  • A gesture operation is an operation in which at least one fingertip is brought into contact with the input surface, the contacting finger is moved along (in other words, slid on) the input surface, and the finger is then released from the input surface.
  • The coordinate data (in other words, finger position data) detected by a touch operation is basically static, not changing with time.
  • In contrast, the coordinate data detected by a gesture operation changes with time and is dynamic. From such a series of changing coordinate data, the point where the finger starts moving, the point where the movement ends, the locus from the movement start point to the movement end point, the moving direction, the moving amount, the moving speed, the moving acceleration, and the like can be acquired.
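  • As a rough illustration of how such quantities might be derived, the following sketch (in TypeScript; the type names, fields, and threshold value are assumptions of this illustration, not details given in the embodiment) summarizes a series of timestamped coordinate samples and distinguishes static touch data from dynamic gesture data:

```typescript
// Hypothetical finger position sample: input-surface coordinates plus a timestamp.
interface FingerSample {
  x: number; // input-surface coordinate (e.g. pixels)
  y: number;
  t: number; // time in milliseconds
}

// Quantities recoverable from a series of continuously acquired coordinate data.
interface TrajectorySummary {
  start: FingerSample;  // movement start point
  end: FingerSample;    // movement end point
  distance: number;     // moving amount (path length)
  direction: number;    // overall moving direction (radians)
  averageSpeed: number; // moving speed (distance per millisecond)
}

// Assumes a non-empty sample series for one contact.
function summarizeTrajectory(samples: FingerSample[]): TrajectorySummary {
  const start = samples[0];
  const end = samples[samples.length - 1];
  let distance = 0;
  for (let i = 1; i < samples.length; i++) {
    distance += Math.hypot(samples[i].x - samples[i - 1].x,
                           samples[i].y - samples[i - 1].y);
  }
  const direction = Math.atan2(end.y - start.y, end.x - start.x);
  const elapsed = Math.max(end.t - start.t, 1); // avoid division by zero
  return { start, end, distance, direction, averageSpeed: distance / elapsed };
}

// Static data (touch) versus dynamic data (gesture): a small displacement
// threshold, here an assumed value, separates the two cases.
const TOUCH_GESTURE_THRESHOLD = 10; // pixels; illustrative assumption

function isGestureOperation(samples: FingerSample[]): boolean {
  const s = summarizeTrajectory(samples);
  const displacement = Math.hypot(s.end.x - s.start.x, s.end.y - s.start.y);
  return displacement > TOUCH_GESTURE_THRESHOLD;
}
```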
  • FIG. 3 is a conceptual diagram illustrating a one-point touch operation (also simply referred to as “one-point touch”) as a first example of the touch operation.
  • In FIG. 3, a top view of the input surface 34 is shown in the upper stage, and a side view or sectional view of the input surface 34 is shown in the lower stage.
  • Touch points (in other words, finger detection points) are schematically indicated by black circles. This illustration technique is also used in the drawings described later.
  • Note that the black circles may actually be displayed on the display surface.
  • One-point touch can be classified into single tap, multi-tap and long press operations, for example.
  • Single tap is an operation of tapping the input surface 34 once with a fingertip.
  • A single tap is sometimes simply referred to as a "tap".
  • Multi-tap is an operation of repeating a tap a plurality of times.
  • A double tap is a typical multi-tap.
  • A long press is an operation in which the point contact of the fingertip is maintained.
  • FIG. 4 is a conceptual diagram illustrating a two-point touch operation (also simply referred to as “two-point touch”) as a second example of the touch operation.
  • The two-point touch is basically the same as the one-point touch except that two fingers are used. For this reason, tap, multi-tap, and long-press operations, for example, can also be performed by two-point touch.
  • Two fingers of one hand may be used, or one finger of the right hand and one finger of the left hand may be used.
  • The positional relationship between the two fingers is not limited to the example in FIG. 4.
  • FIG. 5 is a conceptual diagram illustrating a drag operation (also simply referred to as “drag”) as a first example of the gesture operation.
  • Dragging is an operation of sliding the fingertip while it remains on the input surface 34.
  • The moving direction and moving distance of the finger are not limited to the example in FIG. 5.
  • In FIG. 5, the movement start point of the finger is schematically shown by a black circle, the movement end point by a black triangle, the direction of movement by the orientation of the triangle, and the trajectory by a line connecting the black circle and the black triangle.
  • Such an illustration technique is also used in the drawings described later. Note that the black circle, the black triangle, and the locus may be actually displayed on the display surface.
  • FIG. 6 is a conceptual diagram illustrating a flick operation (also simply referred to as “flick”) as a second example of the gesture operation.
  • A flick is an operation of quickly sweeping the fingertip across the input surface 34.
  • The moving direction and moving distance of the finger are not limited to the example in FIG. 6.
  • In a flick, unlike a drag, the finger leaves the input surface 34 while still moving.
  • Since the touch panel 14 is a contact type, finger movement after leaving the input surface 34 is, in principle, not detected.
  • Therefore, a flick can be identified when the moving speed at the end of the trajectory is equal to or higher than a predetermined threshold (referred to as the "drag/flick identification threshold").
  • When a flick is identified, the point where the finger would finally arrive after leaving the input surface 34 (more specifically, that point projected onto the input surface 34) is estimated, and the information display device 10 treats the estimated point as the end point of the finger movement.
  • This estimation process can be interpreted as a process of converting a flick into a virtual drag.
  • The estimation process may be executed by the touch panel 14 or by the control unit 16.
  • Alternatively, the information display device 10 may be modified so that the point at which the finger leaves the input surface 34 is handled as the end point of the finger movement.
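  • The estimation itself is not detailed in the text; one plausible sketch, assuming a simple constant-deceleration model with hypothetical parameter values, is the following:

```typescript
// Convert a flick into a virtual drag by estimating where the finger "would
// have arrived". Both constants below are assumptions of this sketch, not
// values given in the embodiment.
const DRAG_FLICK_THRESHOLD = 0.5; // px/ms; assumed drag/flick identification threshold
const DECELERATION = 0.002;       // px/ms^2; assumed friction-like deceleration

function isFlick(endSpeed: number): boolean {
  // A flick is identified when the moving speed at the end of the trajectory
  // is at or above the drag/flick identification threshold.
  return endSpeed >= DRAG_FLICK_THRESHOLD;
}

function estimateEndPoint(
  lastX: number, lastY: number, // last detected point on the input surface
  vx: number, vy: number        // velocity components at that point (px/ms)
): { x: number; y: number } {
  const speed = Math.hypot(vx, vy);
  if (!isFlick(speed)) {
    return { x: lastX, y: lastY }; // plain drag: the detected end point is used
  }
  // Distance travelled before stopping under constant deceleration: v^2 / (2a).
  const extra = (speed * speed) / (2 * DECELERATION);
  return { x: lastX + (vx / speed) * extra, y: lastY + (vy / speed) * extra };
}
```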
  • FIG. 7 is a conceptual diagram illustrating a pinch out operation (also simply referred to as “pinch out”) as a third example of the gesture operation.
  • Pinch out is an operation of moving two fingertips away on the input surface 34.
  • Pinch out is also called “pinch open”.
  • FIG. 7 illustrates the case where both fingers are dragged.
  • As illustrated in FIG. 8, it is also possible to pinch out by fixing one fingertip on the input surface 34 (in other words, keeping one fingertip in the touch state) and dragging only the other fingertip. The method in FIG. 7 is referred to as the "two-point movement type", and the method in FIG. 8 as the "one-point movement type".
  • FIG. 9 is a conceptual diagram illustrating a pinch-in operation (also simply referred to as “pinch-in”) as a fifth example of the gesture operation.
  • Pinch-in is an operation of bringing two fingertips closer on the input surface 34.
  • Pinch-in is also referred to as “pinch close”.
  • FIG. 9 illustrates a two-point movement type pinch-in
  • FIG. 10 illustrates a one-point movement type pinch-in as a sixth example of the gesture operation.
  • Hereinafter, pinch-out and pinch-in are collectively referred to as a "pinch operation" or simply a "pinch", and the direction of finger movement in a pinch is referred to as the "pinch direction".
  • When the pinch direction is the direction in which the two fingers move apart, the pinch operation is specifically called a pinch-out; when it is the direction in which the two fingers approach each other, the pinch operation is specifically called a pinch-in.
  • In pinch-out and pinch-in, two fingers of one hand may be used, or one finger of the right hand and one finger of the left hand may be used. Further, the positional relationship, moving direction, and moving distance of the two fingers are not limited to the examples in FIGS. 7 to 10. In the one-point movement type pinch-out and pinch-in, the finger to be dragged is not limited to the examples of FIGS. 8 and 10. It is also possible to pinch out and pinch in using flicks instead of drags.
  • Each user operation is associated with a specific function. Specifically, when a user operation is detected, a process associated with the user operation is executed by the control unit 16, thereby realizing a corresponding function. In view of this point, user operations can be classified based on functions to be realized.
  • For example, a double tap performed on an icon on the display surface 32 is associated with a function for executing a program or command associated with the icon.
  • In other words, the double tap functions as an execution instruction operation.
  • Also, a drag performed on display information is associated with a slide function for sliding the display information.
  • In other words, the drag functions as a slide operation.
  • The slide function and the slide operation are also referred to as a scroll function and a scroll operation.
  • Note that the slide direction and the scroll direction differ by 180°.
  • A pinch-out or pinch-in performed on display information changes the size (in other words, the scale) of the information display.
  • That is, pinch-out and pinch-in function as a display size change operation (which may also be called a "display scale change operation"). More specifically, in the example of FIG. 12, the pinch-out corresponds to an enlargement operation and the pinch-in to a reduction operation.
  • The rotation drag is associated with a function that rotates the information display.
  • In other words, the two-point movement type rotation drag functions as a rotation operation.
  • The associated function may be changed according to the number of fingers used in the rotation drag.
  • For example, a double tap may be assigned not only to the above execution instruction operation but also to a folder opening operation for opening a folder associated with an icon.
  • Also, a drag may be assigned to both a slide function and a drawing function.
  • Conversely, the execution instruction function for an icon may be associated with double tap, long press, and flick.
  • In that case, a program or the like associated with the icon can be executed by any of double tap, long press, and flick.
  • Similarly, the slide function may be associated with both dragging and flicking.
  • The rotation function may be associated with both the two-point movement type rotation drag and the one-point movement type rotation drag.
  • Hereinafter, a gesture operation associated with a screen movement/deformation type function may be expressed as a "screen movement/deformation type gesture operation".
  • The screen movement/deformation type function associated with a gesture operation is a function of controlling (in other words, manipulating) display information on the display surface in a control direction set according to the gesture direction.
  • The screen movement/deformation type functions include, for example, a slide function, a display size change function, a rotation function, and a bird's-eye view display function (more specifically, an elevation/depression angle change function).
  • The slide function can be classified as a screen movement function. If the rotation function is viewed as angular movement, it can also be classified as a screen movement function. The display size change function and the bird's-eye view display function can be classified as screen deformation functions.
  • In the slide function, a slide direction (that is, a control direction) is set according to the gesture direction (for example, the drag direction or flick direction), and the display information is slid in the set slide direction.
  • In the display size change function, when the gesture direction (for example, the pinch direction) is the enlargement direction, the control direction is set to the enlargement direction; when the gesture direction is the reduction direction, the control direction is set to the reduction direction. The display information size is then changed in the set control direction.
  • In the rotation function, the control direction is set to the clockwise direction when the gesture direction (for example, the rotation direction of the rotation drag) is clockwise, and to the counterclockwise direction when the gesture direction is counterclockwise. The display information is then rotated in the set control direction.
  • The screen movement/deformation type function may control the display information using not only the gesture direction but also the gesture amount (for example, the length of the gesture trajectory).
  • In that case, the control amount (for example, the slide amount, display size change amount, or rotation amount) may be set larger as the gesture amount becomes larger.
  • The screen movement/deformation type function may also control the display information using the gesture speed, in addition to or instead of the gesture amount.
  • In that case, the display information control speed (for example, the slide speed, display size change speed, or rotation speed) may be set higher as the gesture speed becomes higher.
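  • As a sketch of how the gesture direction and gesture amount might set the control direction and control amount for the slide function (the gain values are assumptions of this illustration):

```typescript
// Slide control derived from a gesture: the control direction follows the
// gesture direction, and the control amount grows with the gesture amount.
interface Offset { dx: number; dy: number; }

function slideControl(
  gestureDirection: number, // radians, e.g. from a trajectory summary
  gestureAmount: number,    // trajectory length in pixels
  gain = 1.0                // illustrative assumption: 1 px of slide per px of gesture
): Offset {
  return {
    dx: gain * gestureAmount * Math.cos(gestureDirection),
    dy: gain * gestureAmount * Math.sin(gestureDirection),
  };
}

// Analogously, a control speed could be set proportional to the gesture speed.
function slideSpeed(gestureSpeed: number, speedGain = 1.0): number {
  return speedGain * gestureSpeed;
}
```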
  • In contrast, a non-movement/deformation type function does not use the gesture direction to realize the function, even if it is associated with a gesture operation. For example, even if a flick on an icon is associated with an execution instruction function for a specific program, that function belongs to the non-movement/deformation type. Likewise, when a drag is used for a drawing function or a handwritten character input function, only the trajectory corresponding to the drag is displayed, and the display information is not controlled according to the drag direction.
  • FIG. 14 illustrates a block diagram of the control unit 16.
  • In FIG. 14, the display unit 12, the input unit 14, and the storage unit 18 are also illustrated for reference.
  • The control unit 16 includes an input analysis unit 40, an overall control unit 42, a first image forming unit 44, a first image holding unit 46, a second image forming unit 48, a second image holding unit 50, an image composition unit 52, a composite image holding unit 54, and a window management unit 56.
  • The input analysis unit 40 analyzes the user operation detected by the input unit 14 and identifies the user operation. Specifically, the input analysis unit 40 acquires, from the input unit 14, the coordinate data detected along with the user operation, and obtains user operation information from the coordinate data.
  • The user operation information is, for example, information such as the type of user operation, the start and end points of the finger movement, the trajectory from the start point to the end point, the moving direction, the moving amount, the moving speed, and the moving acceleration.
  • For example, a touch operation and a gesture operation can be distinguished by comparing the difference between the start point and the end point with a predetermined threshold value (referred to as the "touch/gesture identification threshold"). Further, as described above, a drag and a flick can be identified from the finger moving speed at the end of the trajectory.
  • Pinch-out and pinch-in can be identified from the moving directions of the two fingers.
  • If two drags draw a circle while maintaining the distance between them, it can be identified that a rotation drag has been performed.
  • If a drag and a one-point touch are identified at the same time, it can be identified that the pinch-out, pinch-in, or rotation drag is of the one-point movement type.
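  • A condensed sketch of the two-finger part of these identification rules (the type names and threshold value are assumptions of this illustration):

```typescript
type TwoFingerGesture = "pinch-out" | "pinch-in" | "rotation-drag";

// Minimal stand-ins for the trajectory data of each finger.
interface Point { x: number; y: number; }
interface Traj { start: Point; end: Point; }

// Fingers moving apart: pinch-out. Fingers approaching: pinch-in.
// Distance roughly maintained while both points move: treated here as a
// rotation drag (a fuller check would confirm the circular shape of the loci).
function identifyTwoFingerGesture(a: Traj, b: Traj): TwoFingerGesture {
  const startGap = Math.hypot(b.start.x - a.start.x, b.start.y - a.start.y);
  const endGap = Math.hypot(b.end.x - a.end.x, b.end.y - a.end.y);
  const GAP_CHANGE = 20; // px; assumed significance threshold
  if (endGap - startGap > GAP_CHANGE) return "pinch-out";
  if (startGap - endGap > GAP_CHANGE) return "pinch-in";
  return "rotation-drag";
}
```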
  • The overall control unit 42 performs various processes in the control unit 16. For example, the overall control unit 42 associates positions on the input surface of the input unit 14 with positions on the display surface of the display unit 12. In this way, the touch position of a touch operation, the gesture locus of a gesture operation, and the like are mapped onto the display surface, making it possible to identify the position on the display surface at which the user operation is aimed. Such association can be realized by so-called graphical user interface (GUI) techniques.
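  • When the input surface and the display surface are integrated one-to-one, this association can be a simple coordinate scaling; a minimal sketch (assuming the two coordinate systems share an origin and differ only in resolution, which a real device might not):

```typescript
// Associate a point on the input surface with a point on the display surface.
interface Size { width: number; height: number; }

function toDisplayCoords(
  p: { x: number; y: number }, // point in input-surface coordinates
  input: Size,                 // input-surface resolution
  display: Size                // display-surface resolution
): { x: number; y: number } {
  return {
    x: p.x * (display.width / input.width),
    y: p.y * (display.height / input.height),
  };
}
```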
  • The overall control unit 42 also identifies the function desired by the user, that is, the user instruction, based on, for example, the user operation information and function identification information.
  • The function identification information is, for example, information in which the association between a user operation and the function to be executed is defined through the operation status information.
  • The operation status information includes, for example, information such as the usage status of the information display device 10 (in other words, the usage mode), the operation target on which the user operation was performed, and the types of user operations that can be accepted according to the usage status and the operation target.
  • In one situation, for example, a drag is identified as instructing execution of the slide function.
  • In another, a tap is identified as instructing execution of the display size enlargement function. If no function is associated with a flick on the enlarged icon, that flick is determined to be an invalid operation.
  • The overall control unit 42 controls the display information on the display surface by controlling the first image forming unit 44, the second image forming unit 48, and the image composition unit 52.
  • The display information may be changed based on the identification result of a user instruction, or based on an instruction arising from program execution irrespective of any user instruction.
  • The overall control unit 42 also performs general control over the other functional units 40, 44, 46, 48, 50, 52, 54, and 56, for example, adjustment of their execution timing.
  • The first image forming unit 44 reads the first information 60 designated by the instruction from the overall control unit 42 out of the storage unit 18, forms a first image from the first information 60, and stores the first image in the first image holding unit 46.
  • Similarly, the second image forming unit 48 reads the second information 62 designated by the instruction from the overall control unit 42 out of the storage unit 18, forms a second image from the second information 62, and stores the second image in the second image holding unit 50.
  • The image composition unit 52 reads the first image from the first image holding unit 46, reads the second image from the second image holding unit 50, and composes the first image and the second image.
  • The composed image is stored in the composite image holding unit 54.
  • The image composition is performed so that the first image and the second image are displayed in an overlapping manner.
  • A configuration in which the first image is the lower image (in other words, the lower layer) and the second image is the upper image (in other words, the upper layer) is illustrated here.
  • Here, "up" and "down" refer to the normal direction of the display surface, and the side closer to the user viewing the display surface is expressed as "up".
  • The image data are superimposed based on this concept: the lower image is displayed through the transparent portions of the upper image, while the drawn portions of the upper image hide the lower image. By adjusting the transparency, a composite image in which the lower image shows through can also be formed.
  • The setting of which of the first image and the second image is the upper image may be unchangeable, or it may be changeable.
  • The composite image stored in the composite image holding unit 54 is transferred to the display unit 12 and displayed on the display unit 12.
  • The display screen changes when the composite image is updated, that is, when at least one of the first image and the second image is updated.
  • The window management unit 56 manages the windows formed on the display surface under the control of the overall control unit 42. Specifically, the window management unit 56 manages information such as the window formation range (position, shape, etc.) and display attributes (whether the window is decorated with a modification, its type, etc.), and manages the windows by controlling the image composition unit 52 based on this window management information.
  • FIG. 15 illustrates a processing flow S10 up to window formation.
  • In step S11, the input unit 14 receives a user operation.
  • In step S12, the control unit 16 identifies the input user operation.
  • In step S13, the control unit 16 determines whether the input user operation is a predefined window opening operation, based on the identification result of step S12.
  • If it is not, the control unit 16 executes the function associated with the input user operation in step S14. Thereafter, the processing of the information display device 10 returns to step S11.
  • When it is determined in step S13 that the user operation is a window opening operation, the control unit 16 forms a window and displays information in the window in step S15.
  • The processing flow S10 in FIG. 15 ends with the formation of the window.
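  • A compact sketch of this flow as a dispatch loop (the operation and handler types are assumptions of this illustration; in the embodiment, the steps run inside the control unit 16):

```typescript
// Sketch of processing flow S10 (steps S11 to S15).
interface UserOperation { kind: string; /* plus coordinates, trajectory, ... */ }

function processFlowS10(
  nextOperation: () => UserOperation,                // S11: receive a user operation
  identify: (op: UserOperation) => string,           // S12: identify the operation
  isWindowOpening: (id: string) => boolean,          // S13: predefined window opening?
  executeAssociated: (id: string) => void,           // S14: run the associated function
  formWindowAndDisplay: (op: UserOperation) => void  // S15: form window, display info
): void {
  for (;;) {
    const op = nextOperation();     // S11
    const id = identify(op);        // S12
    if (!isWindowOpening(id)) {     // S13: not a window opening operation
      executeAssociated(id);        // S14, then back to S11
      continue;
    }
    formWindowAndDisplay(op);       // S15
    return;                         // flow S10 ends with the formation of the window
  }
}
```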
  • The window opening operation is an operation that instructs the formation of a window on the display surface and designates the window formation range. Examples of window opening operations are shown in FIGS. 16 to 18.
  • In the example of FIG. 16, a two-point-touch long-press operation is assigned to the window opening operation.
  • In this case, the window 80 is formed in a rectangular range having the two touched points as vertices at diagonal positions. That is, the user designates the formation range of the window 80 by the two touch points. Note that the positional relationship between the two touched points is not limited to the example of FIG. 16.
  • In FIG. 16, to make the window 80 easy to see in the drawing, the area in the window 80 is shown with sand hatching. This sand hatching is given only for the description of the embodiment and does not limit the design of the window 80 or the like. Such sand hatching may also be used in the drawings described later.
  • In this example, when the control unit 16 identifies in step S12 that the two-point touch state has continued for a predetermined time (referred to as the "window opening instruction time"), it determines in step S13 that a window opening operation has been input. The control unit 16 then forms the window 80 on the display surface 32 according to the range designated by the window opening operation. For example, the control unit 16 associates the two touched points with the coordinate system of the display surface 32 and adopts the two points on the display surface 32 as vertices at diagonal positions to form the rectangular window 80.
  • Alternatively, the formation range of the window 80 may be obtained on the coordinate system of the input surface 34, and the obtained range may then be associated with the coordinate system of the display surface 32.
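  • A minimal sketch of forming the rectangular window range from the two touched points (assuming the points have already been associated with the display-surface coordinate system; the Rect type is an assumption of this illustration):

```typescript
// Rectangle with the two touched points as vertices at diagonal positions.
interface Rect { x: number; y: number; width: number; height: number; }

function windowFromTwoPoints(
  p1: { x: number; y: number },
  p2: { x: number; y: number }
): Rect {
  return {
    x: Math.min(p1.x, p2.x),
    y: Math.min(p1.y, p2.y),
    width: Math.abs(p2.x - p1.x),
    height: Math.abs(p2.y - p1.y),
  };
}
```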
  • In the example of FIG. 17, a combination operation of a one-point touch and a drag that encircles an arbitrary range starting from the one-point-touched point or its vicinity (referred to as an "enclosing drag" or "enclosing gesture") is assigned to the window opening operation.
  • For the one-point touch in this example, any of single tap, multi-tap, and long press can be adopted.
  • The end of the drag may be a flick.
  • The vicinity of the one-point touch point refers, for example, to a range within a predetermined distance of the one-point touch point.
  • The user designates the formation range of the window 80 by the range enclosed by the enclosing drag, or by the combination of the one-point touch point and the range enclosed by the enclosing drag.
  • The direction of the enclosing drag is not limited to the example of FIG. 17.
  • When the control unit 16 identifies in step S12 that a one-point touch and an enclosing drag have been performed continuously, it determines in step S13 that a window opening operation has been input.
  • Here, the condition "continuously" includes the condition that the one-point touch and the enclosing drag are performed within a predetermined operation time interval, and the condition that no other operation is performed in between.
  • The control unit 16 then forms the window 80 on the display surface 32 according to the range designated by the window opening operation.
  • Specifically, the control unit 16 associates the locus 70 of the enclosing drag with the coordinate system of the display surface 32, converts the range enclosed by the locus 70 into a rectangular range on the coordinate system of the display surface 32 according to a predetermined conversion rule, and forms the window 80 in the converted rectangular range.
  • As the conversion rule, for example, a rule of obtaining the maximum rectangle contained in the range enclosed by the locus 70 can be adopted.
  • Alternatively, the window formation range may be determined so that the point of the one-point touch performed at the beginning of the window opening operation becomes one vertex of the window 80.
  • Also, as in the example of FIG. 16, the formation range of the window 80 may be obtained on the coordinate system of the input surface 34, and the obtained range may then be associated with the coordinate system of the display surface 32.
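  • A sketch of one such conversion (the embodiment's example rule is the maximum rectangle contained in the enclosed range; computing that is more involved, so this sketch substitutes the simpler axis-aligned bounding box of the locus as an illustrative stand-in):

```typescript
interface Rect { x: number; y: number; width: number; height: number; }

// Convert the locus of an enclosing drag into a rectangular window range,
// here the axis-aligned bounding box of the locus points (assumes a
// non-empty locus).
function windowFromLocus(locus: { x: number; y: number }[]): Rect {
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const p of locus) {
    minX = Math.min(minX, p.x);
    minY = Math.min(minY, p.y);
    maxX = Math.max(maxX, p.x);
    maxY = Math.max(maxY, p.y);
  }
  return { x: minX, y: minY, width: maxX - minX, height: maxY - minY };
}
```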
  • In the example of FIG. 18, an icon 90 is used.
  • Specifically, a combination operation of a one-point touch on the icon 90 and a pinch-out on the icon 90 is assigned to the window opening operation.
  • For the one-point touch in this example, any of single tap, multi-tap, and long press can be adopted.
  • The user designates the formation range of the window 80 by the two end points of the pinch-out.
  • FIG. 18 illustrates a two-point movement type pinch-out, but a one-point movement type pinch-out may also be employed. Further, the direction of the pinch-out is not limited to the example of FIG. 18.
  • When the control unit 16 identifies in step S12 that a one-point touch and a pinch-out on the icon 90 have been performed continuously, it determines in step S13 that a window opening operation has been input.
  • Here, the condition "continuously" is the same as in the example of FIG. 17.
  • The control unit 16 then forms the window 80 on the display surface 32 according to the range designated by the window opening operation.
  • The window formation processing can be executed in the same manner as in the example of FIG. 16; that is, the two end points of the pinch-out are used instead of the two touched points of the example of FIG. 16.
  • In this example, information associated in advance with the icon 90 is displayed in the window 80.
  • Although the icon 90 illustrated in FIG. 18 simulates the appearance of a building, a logotype of, for example, a company or a product may be adopted in the design of the icon 90. A map symbol, for example, may also be adopted in the design of the icon 90.
  • Alternatively, another operation on the icon 90 may be identified as the window opening operation by which the information associated with the icon 90 is displayed in the window 80.
  • According to the window opening operations described above, the window 80 can be formed at the execution position of the window opening operation.
  • The window opening operation itself is a natural operation that evokes the window 80 to be formed.
  • The window opening operation also closely resembles the everyday action of opening an object and looking inside. For these reasons, the window 80 can be formed intuitively, and high operability can be realized.
  • Since the window 80 is formed at the execution position of the window opening operation, the information in the window 80 can be viewed without a large shift of the viewpoint. For this reason, the user's cognitive load can be kept small.
  • The window opening operation is not limited to the examples shown in FIGS. 16 to 18.
  • Various user operations or combinations thereof can be pre-assigned as window opening operations.
  • In the drawings, the window 80 is drawn with a simple thick frame to avoid complicating the figures.
  • However, the design of the window 80 is not limited to this.
  • For example, the shadow modification exemplified in FIG. 19 or the depression modification exemplified in FIG. 20 may be employed.
  • With the shadow modification, it is possible to give the impression that the window portion is positioned above its surroundings.
  • Conversely, with the depression modification, it is possible to give the impression that the window portion is positioned below its surroundings.
  • The shape of the window 80 is not limited to a quadrangle.
  • The modification setting for the window 80 may be fixed to one type, or may be selected according to the information displayed in the window 80. Alternatively, the user may be able to set and change it. The same applies to the shape of the window 80.
  • The formation range of the window 80 set according to the window opening operation is managed by the window management unit 56 as described above. More specifically, in the example of FIG. 14, when the overall control unit 42 detects the input of a window opening operation, it determines window management information, including the formation range and display attributes of the window 80, according to the window opening operation, and records the determined information in the window management unit 56. For the display attributes of the window 80, the setting values effective at that time (for example, initial setting values) are applied. Alternatively, the overall control unit 42 may record only the formation range of the window 80 in the window management unit 56, and the window management unit 56 may add the display attributes accordingly.
  • The window management unit 56 controls the composition of the first image and the second image in the image composition unit 52 based on the window management information that it stores.
  • An example of this control will be described with reference to FIGS. 22 and 23. In FIGS. 22 and 23, the lower layer is the first image and the upper layer is the second image.
  • In the example of FIG. 22, the image composition unit 52 reads the second image stored in the second image holding unit 50, excluding the portion corresponding to the formation range of the window 80.
  • The image composition unit 52 also reads the first image stored in the first image holding unit 46, including the portion corresponding to the formation range of the window 80. Then, the image composition unit 52 sets the read first image and second image as the lower layer and the upper layer, respectively, and composes the two images.
  • As a result, the first image of the lower layer is displayed in the window inner area 82, which is the area inside the window 80, and the second image of the upper layer is displayed in the window outer area 83, which is the area outside the window 80.
  • Here, the transparency of the upper layer is set to 0%.
  • Alternatively, the image composition unit 52 may read the second image constituting the upper layer including the portion corresponding to the formation range of the window 80. In this case, under the control of the window management unit 56, the image composition unit 52 sets the transparency of the portion of the read second image corresponding to the formation range of the window 80 to 100% and composes it with the first image.
  • In the example of FIG. 23, the image composition unit 52 reads the portion of the second image stored in the second image holding unit 50 that corresponds to the formation range of the window 80.
  • The image composition unit 52 also reads the first image stored in the first image holding unit 46, including the portion corresponding to the formation range of the window 80. Then, the image composition unit 52 sets the read first image and second image as the lower layer and the upper layer, respectively, and composes the two images.
  • As a result, the second image of the upper layer is displayed in the window inner area 82, and the first image of the lower layer is displayed in the window outer area 84.
  • Here, the transparency of the upper layer is set to 0%.
  • Alternatively, the image composition unit 52 may read the second image constituting the upper layer including the portion other than the formation range of the window 80. In this case, under the control of the window management unit 56, the image composition unit 52 sets the transparency of the read second image to 100% except for the portion corresponding to the formation range of the window 80, and composes it with the first image.
  • Under the control of the window management unit 56, the image composition unit 52 also applies the modification of the window 80 as necessary, for example after composing the upper layer and the lower layer.
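  • A pixel-level sketch of the FIG. 22 style of composition (RGBA frame buffers and the Rect type are assumptions of this illustration; setting the upper layer's transparency to 100% inside the window is equivalent to selecting the lower image there):

```typescript
interface Rect { x: number; y: number; width: number; height: number; }

// Compose upper and lower layers: inside the window formation range the
// lower (first) image shows; outside it the upper (second) image shows.
function composeLayers(
  lower: Uint8ClampedArray, // first image, RGBA, width * height * 4 bytes
  upper: Uint8ClampedArray, // second image, same layout
  width: number,
  height: number,
  window80: Rect            // window formation range from the window management unit
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(lower.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const inWindow =
        x >= window80.x && x < window80.x + window80.width &&
        y >= window80.y && y < window80.y + window80.height;
      const src = inWindow ? lower : upper;
      const i = (y * width + x) * 4;
      out.set(src.subarray(i, i + 4), i);
    }
  }
  return out;
}
```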
  • The window 80 displays display information (hereinafter also referred to as "second display information") that is related to the display information that was the object of the window opening operation (hereinafter also referred to as "first display information") but differs from it in content or expression format.
  • The first display information and the second display information are, for example, map information; this case will mainly be described below.
  • Map information is various information regarding places.
  • The map information is, for example, geographic information.
  • The geographic information includes, for example, terrain information related to so-called terrain (sea and land, mountains and rivers, etc.), place state information related to the state of a place (for example, its usage state), and name information related to the names of elements in the terrain information and the place state information.
  • Terrain information is shape information such as coastline and water system, and may include height information such as altitude.
  • The place state information is roughly classified into, for example, ground state information and underground state information.
  • Here, the ground (the so-called land surface) is assumed as the boundary surface between the aboveground and the underground, but the present invention is not limited to this example.
  • Examples of the ground state information include transportation networks (roads, railways, etc.), buildings, fields, forests, deserts, and the like.
  • The building state information may include state information of each floor (for example, provided as a floor guide).
  • Regional boundaries such as administrative divisions are also examples of ground state information.
  • Examples of the underground state information include underground malls, underground facilities (water supply, electricity, gas, communications, railways, etc.), ruins, and the like.
  • For a building with basement floors, the floor state of the underground part may be classified into the underground state information, or the state of the building may be treated as information combining the floor state information of the underground and aboveground parts.
  • Examples of the name information include place names such as mountains, rivers, and seas, as well as addresses.
  • Examples of the name information also include the names of roads, railways, buildings, stores, facilities, and the like.
  • Terrain information is visualized mainly by graphics such as maps. The place state information can be visualized by diagrams, symbols, characters, and the like, and is mainly displayed at the corresponding place on the map.
  • The name information can be visualized by characters or the like and is displayed, for example, at the corresponding place on the map. Alternatively, the name information may be displayed in a list format or the like instead of being arranged on the map.
  • The map information may also be, for example, weather information.
  • The weather information is, for example, information on weather conditions (sunny, cloudy, rain, snow, etc.), temperature, humidity, rainfall, warnings, and the like.
  • Weather information can be visualized by symbols, characters, and the like. For example, a symbol may be displayed at a corresponding place on the map, or a character may be displayed at a position unrelated to the corresponding place on the map.
  • Alternatively, the weather information may be visualized by a graphic such as a weather map.
  • The map information may also be useful information such as store information, sightseeing information, and traffic jam information.
  • Useful information can be visualized by symbols, characters and the like. For example, a symbol may be displayed at a corresponding place on the map, or a character may be displayed at a position unrelated to the corresponding place on the map.
  • The map information may also be, for example, route information.
  • The route information is information on a route (for example, a navigation route) connecting a plurality of points.
  • The route information can be visualized by applying an expression format such as a thick line to the corresponding route on the map.
  • Alternatively, a symbol such as an arrow may be displayed on the corresponding route on the map.
  • The map information may include only one type of information, such as terrain information, or may include a plurality of types of information.
  • The first display information and the second display information belong to the same category in that both are map information.
  • Hereinafter, the map information serving as the first display information may be referred to as the first map information, and the map information serving as the second display information as the second map information.
  • However, the first map information and the second map information differ in specific configuration. That is, the second map information is information related to the first map information but differs from it in content or expression format.
  • For example, when the first map information and the second map information include the same place, they are related. In this case, the same place is actually displayed as a part of the first map information (that is, the first display information).
  • Also, when the first map information and the second map information include a specific route on the map, they are related.
  • The specific route is, for example, a road or railway having the same name.
  • Alternatively, the specific route may be a route connecting a plurality of points (for example, a navigation route).
  • Note that the portion of the specific route on the second map information side may not appear on the display surface.
  • Regarding the difference in content between the first map information and the second map information: for example, if the target ranges on the map are different, the content that the map information provides to the user is different.
  • Also, a map including only terrain information differs from a map including both terrain information and place state information.
  • Likewise, the contents of the ground state information and the underground state information are different. That is, map information with different contents is configured according to the types and combinations of information displayed on the map. Note that the amount of information displayed on the map may be reduced or added according to the scale of the map.
  • The contents of current map information and old map information are also different.
  • Regarding the expression format of map information: for example, even for maps of the same place, the expression format differs among a line diagram, a monochrome diagram, a colored diagram, a photograph (including aerial photographs), a planar diagram, a three-dimensional diagram, a bird's-eye view, a realistic diagram, a simplified diagram, and a deformed map (a map in which some elements, such as a navigation route, are emphasized).
  • The expression format also differs depending on whether place state information with the same content is displayed as symbols or as characters. The same applies to weather information, useful information, and the like.
  • FIGS. 24 to 27 illustrate display information of the window 80.
  • an underground map is displayed in the window 80 as an example of an underground map. More specifically, a map is displayed as the first display information. On the map, the user can open a window for a place where he / she wants to see the map of the underground mall (in FIG. 24, two-point touch long press is performed). As illustrated). Thereby, the window 80 is formed in the range designated by the window opening operation, and the map of the underground shopping area in the range is displayed on the window 80 as the second display information.
  • the underground map is a visualization of underground state information, and may be a map of underground facilities (water, electricity, gas, communication, railway, etc.), ruins, and the like.
  • when the second display information is an underground map, the vertical spatial relationship between points at the same place can be easily grasped.
  • an aerial photograph, a deformed map, etc. may be displayed as the second display information.
  • with an aerial photograph, it is possible to easily grasp the relationship between the appearance on a normal map and the appearance in the photograph.
  • with a deformed map, a place related to the normal map can be easily grasped through highlighting based on a specific viewpoint (for example, a navigation route).
  • in FIG. 25, an old map of the window position is displayed in the window 80. Comparing FIG. 25 with the upper diagram of FIG. 24, it can be seen that there was no railway in the past and the current station had not yet been built. Thus, when the second display information is an old map, the temporal relationship between the same places can be easily grasped.
  • An old map may be displayed in an expression format such as an aerial photograph or a deformed map.
  • the map of the first display information may be made to show through in the range of the window 80 by adjusting the transparency of the upper layer (see FIGS. 22 and 23) for that range. Since different maps of the same place are then displayed overlapping each other, it is easy to compare the two maps.
  • in the above examples, the second display information is a map, but the second display information may also be displayed by another visualization method, such as characters.
  • in any case, when the window opening operation is performed on the first display information visualized as a map, information related to the location of the window 80 on the map is displayed as the second display information.
  • the map of the first display information is partially replaced by the map of the second display information. With this, the first display information and the second display information can be viewed with continuity of the viewpoint (position) from which the map is viewed, in other words, with continuity of the map information. For this reason, the user's cognitive load can be reduced.
  • in FIG. 26, map information near the current location is displayed as the first display information by a deformed map in which route information is emphasized.
  • in the window 80, route information from the current location to the destination is displayed by a simplified map.
  • the display in the window 80 is shown enlarged at the upper right of the figure. With this, the user can grasp the whole picture of the journey to the destination (specifically, the direction to the destination, the distance, the shape of the route, and so on).
  • the user can thus make use of the information, which is convenient.
  • traffic jam information may be displayed instead of, or in addition to, the route information.
  • with traffic jam information, the user can estimate the travel time to the destination and change the itinerary as necessary.
  • for example, the route can be changed to one that avoids traffic jams, or the time allocation can be adjusted by taking a break earlier.
  • FIG. 27 shows an example in which weather information around the destination is displayed as the second display information in the window 80. According to this, the user can make an action plan when arriving near the destination before arriving at the destination.
  • as the second display information in the window 80, geographical information around the destination may be displayed. With this, the user can make an action plan for the area around the destination before arriving there.
  • the second display information may be information regarding a waypoint instead of the destination. That is, the second display information can relate to a preset location.
  • the set place and the route may be for the purpose of actual navigation, or may be for the purpose of simple route search without navigation.
  • the information related to the set location can be displayed at a preferred position on the display surface 32 in a preferred size. For this reason, improved work efficiency can be expected in planning and changing the itinerary.
  • the second display information displayed in the window 80 is not limited to the one exemplified here.
  • the second display information displayed immediately after the window 80 is formed may be set in advance, for example. Further, the setting may be varied depending on the contents of the first display information, the usage status of the information display device 10, and the like.
  • the second display information displayed in the window 80 is information related to the first display information, but differs from the first display information in content or expression format. For this reason, the efficiency of information recognition can be improved by displaying various information within one screen. Moreover, the high intuitiveness of the window opening operation and the reduction in cognitive load that it provides, as described above, also contribute to efficient information recognition.
  • the first display information and the second display information are information stored in the storage unit 18 (corresponding to the first information 60 and the second information 62 illustrated in FIG. 14).
  • information in the storage unit 18 can be newly acquired or updated by the communication unit as described above. With the communication unit, for example, traffic jam information and the like can be acquired from outside the information display device 10.
  • the second display information is obtained by searching and extracting, as necessary, information stored in the storage unit 18 and in storage devices external to the information display device 10 (including servers on the network).
  • the information to be used as the second display information may be organized into a database in advance according to its relevance to the information used as the first display information.
  • FIG. 28 illustrates a processing flow S30 after the window is formed.
  • steps S31 and S32 are the same as steps S11 and S12 of FIG. 15. That is, in step S31 the input unit 14 receives a user operation, and in step S32 the control unit 16 identifies the input user operation.
  • in step S33, the control unit 16 identifies which of the first display information outside the window 80, the second display information inside the window 80, and the window 80 itself is the target of the user operation received in step S31. Specifically, the control target of the user operation is identified from the input position and type of the user operation. At that time, whether or not the input position of the user operation relates to the window 80 can be identified by referring to the management information held by the window management unit 56 (see FIG. 14).
  • when it is determined in step S33 that the user operation is directed at the first display information outside the window 80, the control unit 16 controls the first display information according to the user operation in step S34. That is, slide (in other words, scroll), enlargement, reduction, rotation, and the like are controlled according to the function associated with the user operation; a minimal dispatch sketch is given below.
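The following is a minimal sketch, not part of the original disclosure, of how the target identification in step S33 and the dispatch to steps S34 to S36 could be realized. The rectangle-based window model, the frame-width heuristic, and all class and function names are assumptions introduced for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

@dataclass
class Window:
    area: Rect                # formation range designated by the window opening operation
    frame_width: float = 8.0  # hypothetical frame thickness used to detect frame touches

    def on_frame(self, px: float, py: float) -> bool:
        inner = Rect(self.area.x + self.frame_width, self.area.y + self.frame_width,
                     self.area.w - 2 * self.frame_width, self.area.h - 2 * self.frame_width)
        return self.area.contains(px, py) and not inner.contains(px, py)

@dataclass
class Operation:
    kind: str  # e.g. "drag", "flick", "pinch", "tap"
    x: float
    y: float

def dispatch(op: Operation, window: Optional[Window]) -> str:
    """Steps S33 to S36: identify the target of the user operation and dispatch."""
    if window is None or not window.area.contains(op.x, op.y):
        return "S34: control the first display information"  # outside the window
    if window.on_frame(op.x, op.y):
        return "S36: control the window itself"              # frame touch: window control
    return "S35: control the second display information"     # inside the window

w = Window(Rect(100, 100, 200, 150))
print(dispatch(Operation("drag", 50, 50), w))    # outside  -> S34
print(dispatch(Operation("drag", 102, 150), w))  # on frame -> S36
print(dispatch(Operation("drag", 200, 170), w))  # inside   -> S35
```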
  • FIG. 29 illustrates the control in step S34.
  • P, Q, and R schematically show information drawn on the upper layer
  • p, q, and r schematically show information drawn on the lower layer.
  • the portion corresponding to the window 80 in the upper layer is set as a non-drawing portion (in other words, a transparent portion), and thus the upper layer displays the first display information.
  • the lower layer provides the second display information.
  • in the example of FIG. 29, the upper layer and the lower layer are interlocked, and the content of the lower layer is also updated according to the slide operation. That is, the content of the second display information in the window 80 changes as well.
  • such interlocking of the display inside and outside the window 80 can be employed, for example, in the examples of FIGS. 24 and 25 (see FIG. 30). That is, when the first display information is partially replaced by the second display information and the two retain continuity or unity on the display even after the replacement, display interlocking is useful.
  • FIG. 29 illustrates the case where the upper layer corresponds to the first display information outside the window 80, but display interlocking can also be applied when the first display information corresponds to the lower layer (see FIG. 23).
  • in FIG. 31, the display inside and outside the window 80 is not interlocked. p, q, and r schematically show information drawn on the upper layer, and P, Q, and R schematically show information drawn on the lower layer.
  • here, a portion corresponding to the window 80 is configured in the upper layer as in FIG. 23, whereby the upper layer provides the second display information and the lower layer provides the first display information.
  • such non-interlocking of the display inside and outside the window 80 can be adopted, for example, in the examples of FIGS. 26 and 27. That is, when the first display information and the second display information do not have continuity or unity on the display, display non-interlocking is useful.
  • FIG. 31 illustrates the case where the lower layer corresponds to the first display information outside the window 80, but display non-interlocking can also be applied when the first display information corresponds to the upper layer (see FIG. 22).
  • whether to interlock the display may be set in advance, for example. The setting may also be varied depending on the visualization method of the first display information and the second display information, the usage status of the information display device 10, and the like. A minimal layer-composition sketch is given below.
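As an illustration of the layer handling of FIGS. 29 to 31, the following minimal sketch, under the assumption that a drawn layer can be modeled as a small grid of glyphs, shows how the window region is composed from two sources and how a slide outside the window can be interlocked or not interlocked with the window content. All names and data shapes are assumptions for illustration.

```python
from typing import List

Image = List[List[str]]  # a minimal stand-in for a drawn layer (one glyph per cell)

def scroll(img: Image, dx: int) -> Image:
    """Horizontal slide by dx cells; blank cells scroll in at the edge."""
    w = len(img[0])
    return [[row[c - dx] if 0 <= c - dx < w else "." for c in range(w)] for row in img]

def compose(first: Image, second: Image, wx: int, wy: int, ww: int, wh: int) -> Image:
    """The window region (wx, wy, ww, wh) shows the second display information;
    everywhere else the first display information is shown (cf. FIGS. 29 to 31)."""
    out = [row[:] for row in first]
    for y in range(wy, wy + wh):
        for x in range(wx, wx + ww):
            out[y][x] = second[y][x]
    return out

def slide_outside_window(first: Image, second: Image, dx: int, interlock: bool):
    """Slide operation on the first display information outside the window.
    With interlocking (FIGS. 29 and 30) the window content follows; without
    interlocking (FIG. 31) the window content stays as it is."""
    first = scroll(first, dx)
    if interlock:
        second = scroll(second, dx)
    return first, second

first = [list("PPPQQQRRR") for _ in range(3)]
second = [list("pppqqqrrr") for _ in range(3)]
f, s = slide_outside_window(first, second, 2, interlock=True)
for row in compose(f, s, wx=3, wy=1, ww=3, wh=1):
    print("".join(row))
```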
  • after step S34, the processing of the information display device 10 returns to step S31.
  • if it is determined in step S33 that the user operation is directed at the second display information inside the window 80, the control unit 16 controls the second display information according to the user operation in step S35. That is, slide (in other words, scroll), enlargement, reduction, rotation, and the like are controlled according to the function associated with the user operation. Note that interlocking of the display inside and outside the window may also be employed in step S35.
  • when a predetermined in-window information switching operation is performed, the control unit 16 causes the window 80 to display second display information having different content or a different expression format.
  • FIG. 32 shows a conceptual diagram of the switching operation of the second display information in the window 80.
  • N pieces of predetermined second display information are cyclically switched in a predetermined order by an in-window information switching operation. For example, it is possible to switch between an aerial photograph, an underground map, and an old map. Alternatively, it is possible to switch between route information from the current location to the set location, weather information around the set location, geographic information around the set location, and traffic information from the current location to the set location.
  • the second display information displayed immediately after the window 80 is formed may be, for example, always the same second display information, or may be the second display information displayed last in the previous display.
  • the number and types of second display information to be switched, the switching order, on/off of cyclic switching, and the like are set in advance; the settings may be fixed or may be changeable.
  • the in-window information switching operation is accepted at an arbitrary position in the window 80. For this reason, compared with, for example, pressing a specific button, the user need not worry about where to perform the in-window information switching operation. That is, the user can perform the switching operation while keeping the eyes on the second display information in the window 80. For this reason, the user's cognitive load can be reduced. Moreover, since the user can control the switching speed, in other words the switching timing, the cognitive load can be reduced in this respect as well. A minimal sketch of the cyclic switching is given below.
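A minimal sketch of the cyclic switching of FIG. 32 could look as follows; the item names and the switcher class are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InWindowSwitcher:
    items: List[str]     # N pieces of predetermined second display information
    index: int = 0
    cyclic: bool = True  # on/off of cyclic switching (settable in advance)

    def current(self) -> str:
        return self.items[self.index]

    def switch(self) -> str:
        """Advance to the next second display information in the predetermined order."""
        if self.index + 1 < len(self.items):
            self.index += 1
        elif self.cyclic:
            self.index = 0  # wrap around cyclically
        return self.current()

sw = InWindowSwitcher(["aerial photograph", "underground map", "old map"])
print(sw.current())  # aerial photograph
print(sw.switch())   # underground map
print(sw.switch())   # old map
print(sw.switch())   # aerial photograph (cyclic wrap-around)
```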
  • the second display information to be switched may be hierarchized information, for example.
  • as such hierarchized information, map information of each floor, which is internal map information of a building, can be exemplified. That is, the map information of the floors can be hierarchized according to position in the direction of gravity. For example, as shown in FIG. 33, the display may be switched in the order of the first floor, the second floor, ..., the top floor, the lowest basement floor, and so on. Alternatively, map information showing an outline of all the floors may be displayed immediately after window formation, and cyclic switching may be started from the floor the user selects.
  • information hierarchized according to the passage of time may be constituted by old map information about the same place. For example, map information from 10 years ago, 20 years ago, and 30 years ago for the same place can be hierarchized along the time axis. The transition of the name information of the same place may also be hierarchized. In addition to old map information, current map information may be added to the hierarchy.
  • when the underground state information concerns ruins or the like, the hierarchization of the ground state information and the underground state information may be understood as being based on position in the direction of gravity, or as being based on the passage of time.
  • the effects described above can also be obtained when hierarchized second display information is switched.
  • furthermore, since hierarchized pieces of information are highly related and continuous with one another, it is efficient to view such highly related and continuous information in the same window 80.
  • the hierarchized second display information may be associated with the icon 90 (see FIG. 18).
  • for example, when the hierarchized second display information is the internal map information of each floor of a building, the window opening operation on the icon 90 and the in-window information switching operation can be combined to provide higher intuitiveness.
  • after step S35, the processing of the information display device 10 returns to step S31.
  • if it is determined in step S33 in FIG. 28 that the user operation is a window control operation for controlling the window 80 itself, the control unit 16 controls the window 80 in step S36 according to the control content assigned to the input window control operation. With this, as exemplified below, the position and size of the window 80 can be controlled after the window 80 is formed, and the window 80 can be deleted, by gesture operations.
  • the window control operation is an operation for moving the window 80, for example.
  • FIG. 34 shows a conceptual diagram of such a window moving operation.
  • in FIG. 34, by performing a drag operation while a predetermined portion of the window 80 (for example, the frame portion of the window) is touched, the window 80 is moved in the drag direction.
  • since the drag operation closely resembles the everyday action of moving an object on a desk, the window 80 can be moved intuitively. For this reason, high operability can be realized.
  • the window control operation is an operation for changing the size of the window 80, for example.
  • FIGS. 35 to 37 show conceptual diagrams of the window size changing operation.
  • the size of the window 80 is changed by a one-point moving type pinch operation performed while a predetermined portion of the window 80 (for example, the frame portion of the window) is touched. If the pinch is a pinch-out, the window 80 is enlarged (see FIGS. 35 to 37); if it is a pinch-in, the window 80 is reduced. That is, whether to enlarge or reduce is instructed by the pinch direction.
  • the window size changing operation therefore has high similarity and continuity with the display size changing operation for the display information inside and outside the window. As a result, the user is kept from getting lost in the operation, and the operation time can be shortened. For this reason as well, high operability can be realized.
  • in addition, the enlargement direction and the reduction direction, that is, the deformation direction, are indicated by the pinch direction.
  • in FIG. 35, a pinch-out is performed in the left direction, and the window 80 extends to the left.
  • in FIG. 36, a pinch-out is performed in the downward direction, and the window 80 extends downward.
  • in FIG. 37, a pinch-out is performed diagonally down and to the left, and the window 80 extends in the left and downward directions. With this, the deformation direction of the window 80 can be instructed intuitively and easily.
  • FIGS. 35 and 36 exemplify the case where the start point of the moving finger in the one-point moving type pinch operation is on the frame portion of the window 80, but the present invention is not limited to this example. That is, as shown in FIG. 37, the start point of the finger movement may be placed inside the window 80. This is because the finger to be fixed in the one-point moving type pinch operation is on the frame portion of the window 80, so the window size changing operation can be distinguished from operations on the display information in the window 80. A minimal sketch of such resizing is given below.
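The window size change could be sketched as follows; this is an assumption-laden illustration in which the moving finger's displacement moves one window edge in the pinch direction, not the patent's exact method. A diagonal pinch is decomposed into two edge movements.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

def resize(win: Rect, edge: str, delta: float, min_size: float = 20.0) -> Rect:
    """Move one window edge by delta, as in FIGS. 35 to 37: an outward delta is a
    pinch-out (enlarge), an inward delta is a pinch-in (reduce)."""
    x, y, w, h = win.x, win.y, win.w, win.h
    if edge == "left":
        x, w = x + delta, w - delta  # delta < 0 extends the window to the left
    elif edge == "right":
        w = w + delta
    elif edge == "top":
        y, h = y + delta, h - delta
    elif edge == "bottom":
        h = h + delta
    return Rect(x, y, max(w, min_size), max(h, min_size))

win = Rect(100, 100, 200, 150)
print(resize(win, "left", -30))                         # pinch-out to the left (cf. FIG. 35)
print(resize(win, "bottom", 40))                        # pinch-out downward (cf. FIG. 36)
print(resize(resize(win, "left", -30), "bottom", 40))   # diagonal pinch-out (cf. FIG. 37)
```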
  • the window control operation is, for example, an operation for deleting the window 80, in other words, an operation for ending the display of the window 80.
  • FIGS. 38 to 40 show conceptual diagrams of the window erasing operation.
  • in FIG. 38, the window 80 is erased by performing a flick operation on the window 80. Since the flick operation closely resembles the everyday action of flicking an object off a desk and out of view, the window 80 can be erased intuitively. For this reason, high operability can be realized.
  • the flick direction is not limited to the example of FIG. 38.
  • in the example of FIG. 38, a predetermined portion of the window 80 (for example, the frame portion of the window) is set as the flick start point.
  • the start point of the flick for the window erasing operation may be in the window 80.
  • in FIG. 39, a drag operation is performed so as to enter the window 80 from outside, cross it, and exit on a side different from the entry side, in other words so as to cut across the window 80; the window 80 is thereby erased. Since such a drag operation closely resembles the everyday action of striking through an unnecessary portion of a document to delete it, the window 80 can be erased intuitively. For this reason, high operability can be realized.
  • the drag direction is not limited to the example of FIG. 39.
  • another gesture, specifically a flick, may also be employed for the window erasing operation in the example of FIG. 39.
  • in FIG. 40, a two-point moving type pinch-in is illustrated as a drag operation that sandwiches the window 80, but a one-point moving type pinch-in may be employed. The pinch-in direction is not limited to the example of FIG. 40. The end of the drag may also be a flick. A rough sketch of erase-gesture detection is given below.
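The two erasing gestures of FIGS. 38 and 39 could be detected roughly as sketched below; the side-of-rectangle heuristic and all names are assumptions for illustration, not the patented algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, p: Point) -> bool:
        return self.x <= p[0] <= self.x + self.w and self.y <= p[1] <= self.y + self.h

    def side_of(self, p: Point) -> Optional[str]:
        """Rough heuristic: which side of the rectangle an outside point lies beyond."""
        if p[0] < self.x:
            return "left"
        if p[0] > self.x + self.w:
            return "right"
        if p[1] < self.y:
            return "top"
        if p[1] > self.y + self.h:
            return "bottom"
        return None  # inside the rectangle

def is_erase_gesture(kind: str, path: List[Point], win: Rect) -> bool:
    if kind == "flick":
        # FIG. 38: a flick starting on the window erases it
        return win.contains(path[0])
    if kind == "drag":
        # FIG. 39: the drag enters the window and exits on a *different* side
        entered = any(win.contains(p) for p in path)
        s0, s1 = win.side_of(path[0]), win.side_of(path[-1])
        return entered and s0 is not None and s1 is not None and s0 != s1
    return False

w = Rect(100, 100, 200, 150)
print(is_erase_gesture("drag", [(50, 175), (200, 175), (350, 175)], w))  # True: cuts across
print(is_erase_gesture("drag", [(50, 175), (200, 175), (60, 175)], w))   # False: exits entry side
```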
  • FIGS. 34 to 38 illustrate the case where the upper layer corresponds to the first display information outside the window 80, but the window control operations are also applicable when the first display information corresponds to the lower layer (see FIG. 23).
  • after step S36, the control unit 16 determines in step S37 whether or not step S36 was deletion control of the window 80.
  • if step S36 was not deletion control of the window 80, the processing of the information display device 10 returns to step S31.
  • if step S36 was deletion control of the window 80, the information display device 10 ends the processing flow S30 of FIG. 28 and returns to the processing flow S10 of FIG. 15.
  • the shape of the window 80 may be set according to the depression angle of the bird's eye view.
  • in that case, the second display information in the window 80 may also be displayed in a bird's-eye view representation with the same depression angle. By adopting the same expression format, the continuity between the first display information and the second display information is increased, and the user's cognitive load can be reduced.
  • various methods of bird's-eye view representation are known, and such known methods are used here. For example, a method of converting an image drawn as a top view or a front view into a bird's-eye view representation is employed. The generation of the bird's-eye view image may be performed by the overall control unit 42, or by the first image forming unit 44 and the second image forming unit 48.
  • the bird's-eye view expression can be applied not only to figures but also to characters and the like.
  • the viewpoint of the bird's-eye view is set with respect to the original image, and parallel lines running in the vertical direction of the original image converge toward the viewpoint.
  • windows 80 are provided at various positions for the sake of convenience in order to facilitate understanding of the bird's-eye view conversion.
  • FIGS. 42 and 43 have different viewpoint positions, and the broken-line rectangle in FIG. 43 corresponds to the window 80 in FIG. 42.
  • the rectangular range (see FIGS. 16 to 18) designated by the user through the window opening operation is converted into a substantially trapezoidal shape when the control unit 16 converts it into the bird's-eye view representation. With this, the user does not need to designate the formation range of the window 80 while being conscious of the bird's-eye view representation of the first display information. That is, if an ordinary rectangular range is designated, the control unit 16 automatically forms a window 80 that matches the bird's-eye view; a minimal sketch of such a conversion follows.
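One way to sketch the rectangle-to-trapezoid conversion is a simple pseudo-perspective in which horizontal extent shrinks toward a vanishing axis near the horizon; the mapping below is an illustrative assumption, not the patent's exact projection.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def rect_to_trapezoid(rect: Tuple[float, float, float, float],
                      vanish_x: float, horizon_y: float) -> List[Point]:
    """Map the four corners of rect = (x, y, w, h) so that horizontal extent
    shrinks linearly as y approaches horizon_y (y grows downward; horizon above)."""
    x, y, w, h = rect

    def shrink(px: float, py: float) -> Point:
        t = (py - horizon_y) / (y + h - horizon_y)  # 0 at the horizon, 1 at the near edge
        return (vanish_x + (px - vanish_x) * t, py)

    top_l, top_r = shrink(x, y), shrink(x + w, y)
    bot_l, bot_r = shrink(x, y + h), shrink(x + w, y + h)
    return [top_l, top_r, bot_r, bot_l]  # substantially trapezoidal

# A rectangle high on the screen becomes narrower at its top (far) edge:
print(rect_to_trapezoid((100, 100, 200, 150), vanish_x=320, horizon_y=0))
```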
  • the trapezoidal window 80 may be further deformed; in FIG. 44, the sides of the trapezoid are curved.
  • FIG. 45 shows a further example of the shape of the window 80.
  • in FIG. 45, the first display information is visualized as a map, and the window 80 is formed in accordance with a partition or the like in the map.
  • specifically, the control unit 16 converts the range designated by the window opening operation according to the following first rule and second rule, and forms the window 80 in the converted range.
  • the content of the first rule is to deform the periphery of the range designated by the window opening operation (the drag trajectory 70 in FIG. 45) so that it is aligned with the partition boundaries in the map and with the periphery of the map display area (the entire display surface 32 in the example of FIG. 45).
  • that is, the user-designated range is expanded as if inflating a balloon, and its periphery is made to coincide with partition boundaries in the map.
  • the partition boundaries are roads, rivers, administrative boundaries, and the like. The reason the first rule considers the periphery of the map display area in addition to the partition boundaries in the map is to prevent the window 80 from extending beyond the display area as a result of the expansion of the user-designated range.
  • the content of the second rule is that the range originally designated by the user must occupy at least a predetermined ratio of the range converted according to the first rule.
  • in other words, the second rule sets an upper limit in order to prevent the window 80 from becoming too much larger than the range designated by the user.
  • basically, the window 80 is formed so as to accommodate the entire initially designated range.
  • however, the window 80 may partially recede from the initially designated range. For example, in order to find a partition boundary that satisfies the second rule, a partition boundary at a position set back from a boundary once selected as a candidate may be selected instead.
  • adopting the first and second rules described above eliminates the need for the user to trace complicated partition boundaries. That is, if an ordinary rectangular range is designated, the control unit 16 automatically forms a window 80 that matches the partition boundaries. In addition, a large partition obtained by combining a plurality of adjacent partitions can easily be designated. A grid-based sketch of these rules is given below.
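A grid-based simplification of the two rules might look as follows; the partition map, the ratio test, and the fallback behavior are assumptions introduced for illustration, not the patent's method.

```python
from typing import Dict, Set, Tuple

Cell = Tuple[int, int]

def snap_to_partitions(designated: Set[Cell],
                       partition_of: Dict[Cell, int],
                       min_ratio: float = 0.3) -> Set[Cell]:
    # First rule: grow the range outward to every partition it touches,
    # like inflating a balloon until it fits the partition boundaries.
    touched = {partition_of[c] for c in designated if c in partition_of}
    grown = {c for c, p in partition_of.items() if p in touched}
    # Second rule: the original range must occupy at least min_ratio of the result;
    # otherwise fall back (here: to the designated cells themselves).
    if len(designated) / max(len(grown), 1) >= min_ratio:
        return grown
    return set(designated)

# Two partitions (0 and 1) on a 4x4 map grid; the user designates three cells.
partition_of = {(x, y): (0 if x < 2 else 1) for x in range(4) for y in range(4)}
designated = {(0, 0), (1, 0), (1, 1)}
print(sorted(snap_to_partitions(designated, partition_of)))  # all cells of partition 0
```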
  • as described above, by adopting the window opening operation and the like, the position, size, content, and the like of the display information can be adjusted to the user's preference, and the various effects described above can be obtained.
  • a further window 80 may be formed in the existing window 80 (window multiplexing).
  • in that case, the second display information in the existing window 80 is regarded as new first display information, and the display information in the further window 80 is regarded as new second display information.
  • multiplexing of windows can be used, for example, for switching display information, in the same way as the in-window information switching operation.
  • in the above description, the first display information and the second display information are map information, but the present invention is not limited to this example.
  • the first display information and the second display information may be music information.
  • the music information can be composed of various information such as a song name, artist name, songwriter name, composer name, arranger name, recorded album name, release date, release source, and the like.
  • a list of a plurality of artist names is displayed as the first display information in an expression format using characters, icons, and the like.
  • when a window opening operation is performed on a certain artist name in the list, a window 80 is displayed, and information on one album by that artist is displayed in the window 80 as the second display information.
  • when the in-window information switching operation is performed, information on another album is displayed. At this time, the album information is switched in order of release date.
  • the music information and the operations are not limited to this example, and all of the above description also applies to the case where the first display information and the second display information are music information.
  • a contact type touch panel is exemplified as the input unit 14.
  • however, a non-contact type (also referred to as a three-dimensional (3D) type) touch panel may also be employed.
  • in the non-contact type, the detectable region of the sensor group (in other words, the input region that can accept user input) is provided as a three-dimensional space above the input surface, and the position of a finger in that three-dimensional space projected onto the input surface is detected.
  • some non-contact types can also detect the distance from the input surface to the finger. With such a method, the finger position can be detected as a three-dimensional position, and the approach and retreat of the finger can also be detected; a minimal sketch follows.
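A minimal sketch of non-contact (3D) input handling, under illustrative assumptions about the sample format, is given below: the 3D finger position is projected onto the input surface, and approach or retreat is derived from the change in distance.

```python
from typing import List, Tuple

Sample = Tuple[float, float, float]  # (x, y, z): z = distance above the input surface

def project(sample: Sample) -> Tuple[float, float]:
    """Project the finger position in 3D space onto the 2D input surface."""
    x, y, _ = sample
    return (x, y)

def approach_state(samples: List[Sample], eps: float = 0.5) -> str:
    """Classify the finger as approaching, retreating, or hovering from z changes."""
    dz = samples[-1][2] - samples[0][2]
    if dz < -eps:
        return "approaching"
    if dz > eps:
        return "retreating"
    return "hovering"

stream = [(120.0, 80.0, 30.0), (121.0, 80.5, 18.0), (121.5, 81.0, 6.0)]
print(project(stream[-1]))     # (121.5, 81.0)
print(approach_state(stream))  # approaching
```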
  • various systems have been developed as non-contact touch panels. For example, the projected capacitance system, which is one of the capacitance systems, is known.
  • the finger is exemplified as the indicator used by the user for input.
  • however, a body part other than a finger can also be used as the indicator.
  • a tool such as a touch pen (also referred to as a stylus pen) may be used as an indicator.
  • a so-called motion sensing technology may be used for the input unit 14.
  • Various methods have been developed as motion sensing technology. For example, a method is known in which a user's movement is detected by a user holding or wearing a controller equipped with an acceleration sensor or the like.
  • a method of extracting a feature point such as a finger from a captured image of a camera and detecting a user's movement from the extraction result is known.
  • An intuitive operation environment is also provided by the input unit 14 using the motion sensing technology.
  • although the input/display unit 20 is exemplified above, the display unit 12 and the input unit 14 may be arranged separately. Even in this case, an intuitive operation environment is provided by configuring the input unit 14 as a touch panel or the like.

Abstract

This information display device includes a display unit which has a display surface, an input unit which receives user operations, and a control unit. If, as a user operation, a window opening operation instructing a window to be formed on the display surface and indicating the formation range of the window is performed on first display information on the display surface, the control unit forms a window corresponding to the formation range indicated by the window opening operation. The control unit displays, in the window, second display information which is related to the first display information but which has content or a display format different from that of the first display information.

Description

Information display device and information display method
 The present invention relates to an information display device and an information display method.
 Patent Document 1 below discloses a system that links an in-vehicle device and a portable device. Specifically, when a navigation application is executed on the portable device, the portable device detects the current position and acquires map data around the current position. The portable device then processes the acquired map data to suit the display device of the in-vehicle device and sends the processed data to the in-vehicle device. Thereby, map data around the host vehicle is displayed on the display device of the in-vehicle device. Similarly, when a route to a destination has been calculated by the navigation function of the portable device, map data on which the route to the destination is superimposed is displayed on the display device of the in-vehicle device.
 In the system of Patent Document 1, the portable device employs a capacitive touch panel display capable of multipoint detection, whereas the in-vehicle device employs a resistive touch panel display for which multipoint detection is fundamentally difficult. In this case, the portable device can accept a flick operation, a tap with two fingers, a pinch with two fingers, and a spread of two fingers, but the in-vehicle device cannot accept these operations. Patent Document 1 discloses a technique for enabling the in-vehicle device to handle input operations that are accepted by the portable device but not by the in-vehicle device.
 Patent Document 2 below discloses a map information display device for a moving body. In this device, a touch panel is attached to the display means. When the base screen, on which a map centered on the vehicle position is displayed, is touched, a window screen is displayed on the base screen. A map centered on the vehicle position is also displayed on the window screen; however, the map scale of the base screen is 1/12,500, whereas the map scale of the window screen is 1/25,000.
 When a part of the base screen is touched in the two-screen display state in which both the base screen and the window screen are displayed, the window screen is erased and a single map at 1/12,500 scale centered on the vehicle position is displayed. When a part of the window screen is touched in the two-screen display state, the window screen is erased and a single map at 1/25,000 scale centered on the vehicle position is displayed on the base screen.
Patent Document 1: JP 2012-8968 A
Patent Document 2: JP H8-201071 A
 In the system of Patent Document 1, the navigation application is executed on the portable device, and the resulting map data is output to the display device of the in-vehicle device. In this case, only a single map is displayed on the in-vehicle device. In contrast, the device of Patent Document 2 can display maps of different scales on the base screen and the window screen.
 However, in the device of Patent Document 2, the base screen and the window screen differ only in map scale, and the display content itself is basically the same. For this reason, the display content may not meet the user's needs. Moreover, the position and size of the window screen are predetermined and fixed, so they may not match the user's preference.
 An object of the present invention is to provide a technique that allows the user to adjust the position, size, content, and the like of display information according to his or her preference.
 An information display device according to one aspect of the present invention includes a display unit having a display surface, an input unit that receives user operations, and a control unit. When a window opening operation, which instructs that a window be formed on the display surface and designates the formation range of the window, is performed as a user operation on first display information on the display surface, the control unit forms a window according to the formation range designated by the window opening operation. The control unit causes the window to display second display information that is related to the first display information but differs from the first display information in content or expression format.
 According to the above aspect, by adopting the window opening operation, the user can adjust the position, size, content, and the like of the display information to his or her preference.
 Also, since the window is formed at the position where the window opening operation is performed, the information in the window can be viewed without greatly moving the viewpoint. For this reason, the user's cognitive load is small.
 Further, the second display information displayed in the window is information related to the first display information, but differs from the first display information in content or expression format. For this reason, the efficiency of information recognition can be improved by displaying various information within one screen. The reduction in cognitive load achieved by the window opening operation also contributes to efficient information recognition.
 The objects, features, and advantages of the present invention will become more apparent from the following detailed description and the accompanying drawings.
FIG. 1 is a block diagram illustrating an information display device.
FIG. 2 is a perspective view illustrating an input/display unit.
FIG. 3 is a conceptual diagram of a one-point touch operation.
FIG. 4 is a conceptual diagram of a two-point touch operation.
FIG. 5 is a conceptual diagram of a drag operation.
FIG. 6 is a conceptual diagram of a flick operation.
FIG. 7 is a conceptual diagram of a pinch-out operation (two-point moving type).
FIG. 8 is a conceptual diagram of a pinch-out operation (one-point moving type).
FIG. 9 is a conceptual diagram of a pinch-in operation (two-point moving type).
FIG. 10 is a conceptual diagram of a pinch-in operation (one-point moving type).
FIG. 11 is a conceptual diagram of a slide operation.
FIG. 12 is a conceptual diagram of display size changing operations (enlargement and reduction).
FIG. 13 is a conceptual diagram of a rotation operation.
FIG. 14 is a block diagram illustrating a control unit.
FIG. 15 is a flowchart illustrating processing up to window display.
FIG. 16 is a conceptual diagram of a window opening operation (two-point touch long press).
FIG. 17 is a conceptual diagram of a window opening operation (one-point touch and encircling drag).
FIG. 18 is a conceptual diagram of a window opening operation (using an icon).
FIG. 19 illustrates shadowed decoration of a window.
FIG. 20 illustrates recessed decoration of a window.
FIG. 21 illustrates a circular window.
FIG. 22 explains a first example of image composition control for window formation.
FIG. 23 explains a second example of image composition control for window formation.
FIGS. 24 to 27 explain first to fourth examples of window display information.
FIG. 28 is a flowchart illustrating processing after window display.
FIGS. 29 and 30 explain first and second examples of display control when a user operation is performed outside the window (display interlocked).
FIG. 31 explains a third example of display control when a user operation is performed outside the window (display not interlocked).
FIG. 32 is a conceptual diagram of an in-window information switching operation.
FIG. 33 is a conceptual diagram explaining the in-window information switching operation for hierarchized information.
FIG. 34 is a conceptual diagram of a window moving operation.
FIGS. 35 to 37 are conceptual diagrams of the window size changing operation.
FIGS. 38 to 40 are conceptual diagrams of first to third examples of the window erasing operation.
FIGS. 41 to 44 explain first to fourth examples of window shapes in bird's-eye view representation.
FIG. 45 explains window shapes according to partitions on a map.
 <Overview of overall configuration>
 FIG. 1 illustrates a block diagram of the information display device 10 according to the embodiment. In the example of FIG. 1, the information display device 10 includes a display unit 12, an input unit 14, a control unit 16, and a storage unit 18.
 The display unit 12 displays various types of information. The display unit 12 includes, for example, a display surface configured by arranging a plurality of pixels in a matrix, and a drive device that drives each pixel (in other words, controls the display state of each pixel) based on image data acquired from the control unit 16. The image displayed on the display unit 12 may be a still image, a moving image, or a combination of a still image and a moving image.
 The display unit 12 can be configured by, for example, a liquid crystal display device. In this example, the display area of the display panel (here, a liquid crystal panel) corresponds to the display surface, and a drive circuit externally attached to the display panel corresponds to the drive device. Note that a part of the drive circuit may be incorporated in the display panel. Besides a liquid crystal display device, the display unit 12 can also be configured by an electroluminescence (EL) display device, a plasma display device, or the like.
 The input unit 14 receives various types of information from the user. The input unit 14 includes, for example, a detection unit that detects an indicator used by the user for input, and a detection signal output unit that outputs the result detected by the detection unit to the control unit 16 as a detection signal.
 Here, the case where the input unit 14 is configured as a so-called contact-type touch panel is exemplified, and the input unit 14 may hereinafter also be referred to as the "touch panel 14". A touch panel is sometimes called a "touch pad" or the like. The case where the indicator used for input is the user's finger (more specifically, a fingertip) is also exemplified.
 The detection unit of the touch panel 14 provides an input surface on which the user places a fingertip, and detects the presence of a finger on the input surface by a group of sensors provided for the input surface. In other words, the area in which a finger can be detected by the sensor group corresponds to the input area that can receive user input; in the case of a contact-type touch panel, the input area corresponds to the two-dimensional input surface.
 The sensor group may be electrical, optical, mechanical, or the like, or a combination thereof. Various position detection methods have been developed, and any of them may be adopted for the touch panel 14. In addition to detecting the position of the finger, a configuration capable of detecting the pressing force of the finger on the input surface may be employed.
 The position of the fingertip on the input surface can be specified from the combination of the output signals of the sensors. The specified position is expressed, for example, by coordinate data in a coordinate system set on the input surface. In this case, when the finger is moved on the input surface, the coordinate data indicating the finger position changes, so the movement of the finger can be detected from a series of continuously acquired coordinate data.
 The finger position may also be expressed by a method other than coordinates. That is, coordinate data is one example of finger position data for expressing the position of the finger.
 Here, an example is given in which the detection signal output unit of the touch panel 14 generates coordinate data indicating the finger position from the output signals of the sensors and transmits the coordinate data to the control unit 16 as a detection signal. However, the conversion to coordinate data may instead be performed by the control unit 16. In such an example, the detection signal output unit converts the output signal of each sensor into a signal in a format that the control unit 16 can acquire, and transmits the obtained signal to the control unit 16 as a detection signal.
 As shown in the perspective view of FIG. 2, a structure in which the input surface 34 of the touch panel 14 (see FIG. 1) and the display surface 32 of the display unit 12 (see FIG. 1) are overlapped, in other words a structure in which the input surface 34 and the display surface 32 are integrated, is exemplified. Such an integrated structure provides the input/display unit 20 (see FIG. 1), more specifically the touch screen 20.
 With the integrated structure of the input surface 34 and the display surface 32, the user identifies the input surface 34 with the display surface 32 and has the sensation of performing input operations directly on the display surface 32. For this reason, an intuitive operation environment is provided. In view of this point, expressions such as "the user operates the display surface 32" may also be used below.
 The control unit 16 performs various processes and controls in the information display device 10. For example, the control unit 16 analyzes information input from the touch panel 14, generates image data according to the analysis result, and outputs the image data to the display unit 12.
 Here, the case is exemplified in which the control unit 16 is configured by a central processing unit (for example, one or a plurality of microprocessors) and a main storage unit (for example, one or a plurality of storage devices such as a ROM, a RAM, and a flash memory). In this example, various functions are realized by the central processing unit executing various programs stored in the main storage unit (in other words, by software). The various functions can also be realized in parallel.
 The various programs may be stored in the main storage unit of the control unit 16 in advance, or may be read from the storage unit 18 at execution time and stored in the main storage unit. The main storage unit is used not only for storing programs but also for storing various data. The main storage unit also provides a work area for the central processing unit to execute programs, and provides an image holding unit into which images to be displayed on the display unit 12 are written. The image holding unit is sometimes called a "video memory", a "graphics memory", or the like.
 All or part of the processing and control performed by the control unit 16 may instead be configured as hardware (for example, an arithmetic circuit configured to perform specific operations).
 The storage unit 18 stores various types of information. Here, the storage unit 18 is provided as an auxiliary storage unit used by the control unit 16. The storage unit 18 can be configured using one or more storage devices such as a hard disk device, an optical disk, and a rewritable nonvolatile semiconductor memory.
 The information display device 10 may further include elements other than the above elements 12, 14, 16, and 18. For example, one or more of the following may be added: an audio output unit that outputs auditory information; a communication unit that performs wired or wireless communication with various devices; and a current position detection unit that detects the current position of the information display device 10 in conformity with, for example, the GPS (Global Positioning System) scheme.
 The audio output unit can output, for example, operation sounds, sound effects, and guidance voice. The communication unit can be used, for example, for newly acquiring and updating information stored in the storage unit 18. The current position detection unit can be used, for example, for a navigation function.
 The application of the information display device 10 is not particularly limited. For example, the information display device 10 may be a portable or desktop information device. Alternatively, the information display device 10 may be applied to a navigation device or an audio-visual device mounted on a moving body such as an automobile.
 <User operations and their functions>
 Before describing a more specific configuration and processing of the information display device 10, user operations on the touch panel 14 will be described.
 User operations are roughly classified, by finger movement, into touch operations and gesture operations. Hereinafter, a touch operation and a gesture operation may also be referred to simply as a "touch" and a "gesture", respectively. A touch operation is an operation of bringing at least one fingertip into contact with the input surface of the touch panel and releasing the finger from the input surface without moving it on the input surface. A gesture operation, by contrast, is an operation of bringing at least one fingertip into contact with the input surface, moving (in other words, sliding) the finger on the input surface, and then releasing it from the input surface.
 The coordinate data (in other words, finger position data) detected by a touch operation is basically unchanging and static. In contrast, the coordinate data detected by a gesture operation changes over time and is dynamic. From such a series of changing coordinate data, information such as the points at which the finger starts and ends its movement on the input surface, the trajectory from the movement start point to the movement end point, the movement direction, the movement amount, the movement speed, and the movement acceleration can be acquired.
 FIG. 3 is a conceptual diagram illustrating a one-point touch operation (also referred to simply as a "one-point touch") as a first example of a touch operation. In FIG. 3 and FIGS. 4 to 10 described later, the upper part shows a plan view of the input surface 34, and the lower part shows a side view or a sectional view of the input surface 34.
 As shown in FIG. 3, in a one-point touch, the user brings one finger into point contact with the input surface 34. In FIG. 3, the touch point (in other words, the finger detection point) is schematically indicated by a filled circle. This notation is also used in later drawings. The filled circle may actually be displayed on the display surface.
 A one-point touch can be classified, for example, into single-tap, multi-tap, and long-press operations. A single tap is an operation of lightly tapping the input surface 34 once with a fingertip, and is sometimes simply called a "tap". A multi-tap is an operation of repeating a tap a plurality of times; a double tap is a typical multi-tap. A long press is an operation of maintaining the point contact of the fingertip. These operations can be distinguished, for example, by the duration and number of finger contacts (in other words, finger detections).
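A minimal sketch of this classification, with illustrative threshold values that are not taken from the disclosure, could look as follows.

```python
from typing import List, Tuple

Contact = Tuple[float, float]  # (touch-down time, lift-off time) in seconds

def classify_touch(contacts: List[Contact],
                   long_press_s: float = 0.8,
                   max_tap_gap_s: float = 0.3) -> str:
    """Distinguish tap, long press, and multi-tap by contact duration and count."""
    if len(contacts) == 1:
        down, up = contacts[0]
        return "long press" if up - down >= long_press_s else "single tap"
    # several short contacts in quick succession form a multi-tap
    gaps_ok = all(contacts[i + 1][0] - contacts[i][1] <= max_tap_gap_s
                  for i in range(len(contacts) - 1))
    return "multi-tap" if gaps_ok else "separate taps"

print(classify_touch([(0.00, 0.10)]))                # single tap
print(classify_touch([(0.00, 1.00)]))                # long press
print(classify_touch([(0.00, 0.10), (0.25, 0.35)]))  # multi-tap (double tap)
```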
 FIG. 4 is a conceptual diagram illustrating a two-point touch operation (also referred to simply as a "two-point touch") as a second example of the touch operation. A two-point touch is basically the same as a one-point touch except that two fingers are used; tap, multi-tap, and long-press operations are therefore also possible with a two-point touch. In a two-point touch, two fingers of one hand may be used, or one finger of the right hand and one finger of the left hand may be used. The positional relationship between the two fingers is not limited to the example of FIG. 4.
 なお、3本以上の指でタッチ操作を行うことも可能である。 It is also possible to perform touch operations with three or more fingers.
 FIG. 5 is a conceptual diagram illustrating a drag operation (also referred to simply as a "drag") as a first example of a gesture operation. A drag is an operation of sliding the fingertip while keeping it on the input surface 34. The movement direction and movement distance of the finger are not limited to the example in FIG. 5.
 In FIG. 5, the movement start point of the finger is schematically indicated by a filled circle, the movement end point by a filled triangle whose orientation represents the movement direction, and the trajectory by a line connecting the filled circle to the filled triangle. This illustration convention is also used in the later drawings. The filled circle, the filled triangle, and the trajectory may also actually be displayed on the display surface.
 FIG. 6 is a conceptual diagram illustrating a flick operation (also referred to simply as a "flick") as a second example of a gesture operation. A flick is an operation of quickly sweeping the fingertip across the input surface 34. The movement direction and movement distance of the finger are not limited to the example in FIG. 6.
 In a flick, unlike a drag, the finger leaves the input surface 34 partway through its movement. Since the touch panel 14 here is of the contact type, finger movement after the finger leaves the input surface 34 is in principle not detected. However, the movement speed of the finger at the last detected point can be calculated from, for example, the changes in the series of coordinate data obtained while the finger moves on the input surface 34. A flick can then be identified by that movement speed being at or above a predetermined threshold (referred to here as the "drag/flick identification threshold").
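 The following sketch illustrates that discrimination; the threshold value and helper names are assumptions for illustration.
    import math

    # Assumed value for the "drag/flick identification threshold"
    # (pixels per second); a real device would tune this.
    DRAG_FLICK_THRESHOLD = 600.0

    def final_speed(samples):
        """Speed over the last two (t, x, y) samples before release."""
        (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
        dt = t1 - t0
        return math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0

    def classify_gesture(samples):
        return "flick" if final_speed(samples) >= DRAG_FLICK_THRESHOLD else "drag"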
 Further, the point the finger would finally reach after leaving the input surface 34 (more precisely, that point projected onto the input surface 34) can be estimated from, for example, the movement direction, movement speed, and movement acceleration of the finger at the last detected point. This estimation process can be interpreted as a process of converting a flick into a virtual drag.
 The information display device 10 therefore treats the point estimated in this way as the end point of the finger movement. In this example, the estimation process may be executed by the touch panel 14 or by the control unit 16.
 However, the information display device 10 may also be modified so that no such estimation is performed and the point at which the finger left the input surface 34 is treated as the end point of the finger movement.
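 A minimal sketch of such an extrapolation under a constant-deceleration model is given below; the deceleration constant and the model itself are assumptions, since the document does not specify the estimation formula.
    import math

    # Assumed constant deceleration (pixels per second squared) applied to
    # the finger's motion after release; the document does not fix this model.
    DECELERATION = 2000.0

    def virtual_drag_endpoint(x, y, speed, direction_rad):
        """Estimate where a flick 'lands', projected onto the input surface.
        x, y: last detected point; speed and direction_rad: the finger's
        speed and movement direction at that point."""
        travel = speed * speed / (2.0 * DECELERATION)  # from v^2 = 2*a*s
        return (x + travel * math.cos(direction_rad),
                y + travel * math.sin(direction_rad))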
 FIG. 7 is a conceptual diagram illustrating a pinch-out operation (also referred to simply as a "pinch-out") as a third example of a gesture operation. A pinch-out is an operation of moving two fingertips apart on the input surface 34. A pinch-out is also called a "pinch open".
 FIG. 7 illustrates the case where both of the two fingers are dragged. In contrast, as shown in FIG. 8 as a fourth example of a gesture operation, a pinch-out can also be performed by fixing one fingertip on the input surface 34 (in other words, keeping one fingertip in the touch state) and dragging only the other fingertip. When the manners of FIG. 7 and FIG. 8 are to be distinguished, the manner of FIG. 7 is referred to as the "two-point moving type" and that of FIG. 8 as the "one-point moving type".
 FIG. 9 is a conceptual diagram illustrating a pinch-in operation (also referred to simply as a "pinch-in") as a fifth example of a gesture operation. A pinch-in is an operation of bringing two fingertips closer together on the input surface 34. A pinch-in is also called a "pinch close". FIG. 9 illustrates a two-point moving type pinch-in, and FIG. 10 illustrates a one-point moving type pinch-in as a sixth example of a gesture operation.
 Here, pinch-out and pinch-in are collectively referred to as a "pinch operation" or a "pinch", and the direction of finger movement is referred to as the "pinch direction". When the pinch direction is a direction in which the spacing between the fingers widens, the pinch operation is specifically called a pinch-out; conversely, when the pinch direction is a direction in which the spacing between the fingers narrows, the pinch operation is specifically called a pinch-in.
 In a pinch-out or pinch-in, two fingers of one hand may be used, or one finger of the right hand and one finger of the left hand may be used. The positional relationship, movement directions, and movement distances of the two fingers are not limited to the examples in FIGS. 7 to 10. In the one-point moving type pinch-out and pinch-in, the finger to be dragged is not limited to the examples of FIGS. 8 and 10. It is also possible to perform a pinch-out or pinch-in using a flick instead of a drag.
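 As an illustration, a pinch and its direction might be identified from the change in spacing between two simultaneously detected finger tracks, as in the following sketch (all names are assumptions):
    import math

    def classify_pinch(track_a, track_b):
        """track_a, track_b: lists of (x, y) points for two fingers detected
        simultaneously. Covers both the two-point and one-point moving types,
        since a fixed finger simply yields an unchanging track."""
        def spacing(pa, pb):
            return math.hypot(pa[0] - pb[0], pa[1] - pb[1])
        start = spacing(track_a[0], track_b[0])
        end = spacing(track_a[-1], track_b[-1])
        if end > start:
            return "pinch_out"   # spacing widens
        if end < start:
            return "pinch_in"    # spacing narrows
        return "not_a_pinch"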
 Each user operation is associated with a specific function. Specifically, when a user operation is detected, the processing associated with that user operation is executed by the control unit 16, whereby the corresponding function is realized. In view of this, user operations can also be classified by the functions they realize.
 For example, a double tap performed on an icon on the display surface 32 is associated with a function of executing the program or command associated with that icon. In this case, the double tap functions as an execution instruction operation.
 Also, as illustrated in FIG. 11, a drag performed on display information (a map image is illustrated in FIG. 11) is associated with a slide function of sliding that display information. In this case, the drag operation functions as a slide operation. A slide can also be performed by a flick instead of a drag. The slide function and the slide operation are also referred to as a scroll function and a scroll operation; note, however, that the slide direction and the scroll direction differ by 180°.
 Also, as illustrated in FIG. 12, a pinch-out or pinch-in performed on display information (a map image is illustrated in FIG. 12) is associated with a function of changing the size (in other words, the scale) of that information display. In this case, the pinch-out and pinch-in function as a display size change operation (which may also be called a "display scale change operation"). More specifically, in the example of FIG. 12, the pinch-out corresponds to an enlargement operation and the pinch-in to a reduction operation.
 Also, as illustrated in FIG. 13, when two fingers are dragged over display information (a map image is illustrated in FIG. 13) so as to draw a circle while keeping their spacing, that drag is associated with a function of rotating the information display. The two-point moving type rotational drag in this case functions as a rotation operation. A rotational drag performed with three or more fingers may also be adopted, and the associated function may differ according to the number of fingers performing the rotational drag.
 A plurality of types of functions can also be assigned to one user operation. For example, a double tap may be assigned, in addition to the above execution instruction operation, to a folder-open operation of opening the folder associated with an icon. A drag may likewise be assigned both to the slide function and to a drawing function. When a plurality of types of functions are assigned to one user operation, the functions are switched according to the operation target, the usage situation (in other words, the usage mode), and so on.
 Conversely, a plurality of types of user operations can be assigned to one function. For example, the execution instruction function for an icon may be associated with a double tap, a long press, and a flick; in that case, the program or the like associated with the icon can be executed by any of the three. Likewise, the slide function may be associated with both a drag and a flick, and the rotation function with both the two-point moving type and the one-point moving type rotational drag.
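 One way to realize such a many-to-many association is a lookup keyed by usage mode, operation target, and operation type, as in the following sketch; the table contents and names are illustrative assumptions.
    # (usage_mode, operation_target, operation) -> function to execute.
    # Several operations may map to one function, and one operation may map
    # to different functions depending on mode and target.
    FUNCTION_TABLE = {
        ("map_browsing", "map", "drag"): "slide",
        ("map_browsing", "map", "flick"): "slide",
        ("map_browsing", "map", "pinch_out"): "enlarge",
        ("map_browsing", "map", "pinch_in"): "reduce",
        ("map_browsing", "icon", "double_tap"): "execute_program",
        ("map_browsing", "icon", "long_press"): "execute_program",
        ("drawing", "canvas", "drag"): "draw_stroke",
    }

    def dispatch(mode, target, operation):
        function = FUNCTION_TABLE.get((mode, target, operation))
        return function if function is not None else "invalid_operation"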
 The functions associated with user operations are here broadly classified, from the viewpoint of screen movement and deformation, into a screen movement/deformation type and a non-movement/deformation type. In the following, a gesture operation associated with a screen movement/deformation type function may also be expressed, for example, as a "gesture operation of a screen movement/deformation type function".
 A screen movement/deformation type function associated with a gesture operation is a function of controlling (in other words, manipulating) the display information on the display surface in a control direction set according to the gesture direction. Screen movement/deformation type functions include, for example, the slide function, the display size change function, the rotation function, and a bird's-eye view display function (more specifically, a function of changing the elevation and depression angles). The slide function can be classified as a screen movement function, and if the rotation function is viewed as movement of an angle, it too can be classified as a screen movement function. The display size change function and the bird's-eye view display function can be classified as screen deformation functions.
 More specifically, the slide function sets a slide direction (that is, the control direction) according to the gesture direction (for example, the drag direction or flick direction) and slides the display information in that slide direction.
 The display size change function sets the control direction to the enlargement direction when the gesture direction (for example, the pinch direction) is the enlargement direction, sets it to the reduction direction when the gesture direction is the reduction direction, and changes the size of the display information in the set control direction.
 The rotation function sets the control direction to the clockwise direction when the gesture direction (for example, the rotation direction of a rotational drag) is clockwise, sets it to the counterclockwise direction when the gesture direction is counterclockwise, and rotates the display information in the set control direction.
 A screen movement/deformation type function may control the display information using not only the gesture direction but also the gesture amount (for example, the length of the gesture trajectory). Specifically, the larger the gesture amount, the larger the control amount of the display information (for example, the slide amount, display size change amount, or rotation amount) may be set.
 A screen movement/deformation type function may also control the display information using the gesture speed in addition to, or instead of, the gesture amount. Specifically, the higher the gesture speed, the higher the control speed of the display information (for example, the slide speed, display size change speed, or rotation speed) may be set.
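 A minimal sketch of deriving control parameters in this proportional way follows; the gain constants and names are illustrative assumptions.
    # Illustrative gains; real values would be tuned per device and function.
    AMOUNT_GAIN = 1.0   # control amount per unit of gesture trajectory length
    SPEED_GAIN = 0.5    # control speed per unit of gesture speed

    def control_parameters(gesture_direction_rad, gesture_amount, gesture_speed):
        """Map gesture direction/amount/speed to the control direction, control
        amount, and control speed of a screen movement/deformation type
        function such as the slide function."""
        return {
            "control_direction_rad": gesture_direction_rad,
            "control_amount": AMOUNT_GAIN * gesture_amount,
            "control_speed": SPEED_GAIN * gesture_speed,
        }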
 A non-movement/deformation type function, on the other hand, does not use the gesture direction to realize the function even when it is associated with a gesture operation. For example, even if a flick on an icon is associated with a function of instructing execution of a specific program, that function belongs to the non-movement/deformation type. Likewise, when a drag is used for a drawing function or a handwritten character input function, only the trajectory corresponding to the drag is displayed; the display information is not controlled according to the drag direction.
 Note that user operations and the functions realized by them are not limited to the various examples above.
 <Configuration Example of Control Unit 16>
 FIG. 14 illustrates a block diagram of the control unit 16. For explanatory purposes, FIG. 14 also shows the display unit 12, the input unit 14, and the storage unit 18. According to the example of FIG. 14, the control unit 16 includes an input analysis unit 40, an overall control unit 42, a first image forming unit 44, a first image holding unit 46, a second image forming unit 48, a second image holding unit 50, an image composition unit 52, a composite image holding unit 54, and a window management unit 56.
 The input analysis unit 40 analyzes the user operation detected by the input unit 14 and identifies the user operation. Specifically, the input analysis unit 40 acquires the coordinate data detected along with the user operation from the input unit 14 and obtains user operation information from that coordinate data. The user operation information is, for example, information such as the type of user operation, the start and end points of the finger movement, the trajectory from the start point to the end point, the movement direction, the movement amount, the movement speed, and the movement acceleration.
 As for identifying the type of user operation, a touch operation and a gesture operation can be distinguished by, for example, comparing the difference between the start point and the end point with a predetermined threshold (referred to here as the "touch/gesture identification threshold"). Further, as described above, a drag and a flick can be distinguished by the finger movement speed at the end of the trajectory.
 Also, for example, when two drags are identified simultaneously, a pinch-out and a pinch-in can be distinguished by the movement directions. When the two drags draw a circle while keeping their spacing, it can be identified that a rotational drag has been performed. When a drag and a one-point touch are identified simultaneously, it can be identified that the pinch-out, pinch-in, or rotational drag is of the one-point moving type.
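 A sketch of the first of these discriminations, touch versus gesture by start-to-end displacement, is given below; the threshold value and names are assumptions.
    import math

    # Assumed value for the "touch/gesture identification threshold" (pixels).
    TOUCH_GESTURE_THRESHOLD = 10.0

    def is_gesture(samples):
        """True if the start-to-end displacement of the (t, x, y) samples
        exceeds the threshold, i.e. the operation is a gesture, not a touch."""
        (_, x0, y0), (_, x1, y1) = samples[0], samples[-1]
        return math.hypot(x1 - x0, y1 - y0) > TOUCH_GESTURE_THRESHOLD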
 The overall control unit 42 performs various kinds of processing in the control unit 16. For example, the overall control unit 42 associates positions on the input surface of the input unit 14 with positions on the display surface of the display unit 12. The touch position of a touch operation, the gesture trajectory of a gesture operation, and so on are thereby mapped onto the display surface, which makes it possible to identify which position on the display surface a user operation was aimed at. Such association can be realized by so-called graphical user interface (GUI) techniques.
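 In the simplest case, where the two surfaces are aligned and differ only in resolution, such an association might be a linear mapping, as in this sketch; the function name and the scaling assumption are illustrative.
    def input_to_display(x_in, y_in, input_size, display_size):
        """Map a point on the input surface 34 to the display surface 32,
        assuming the surfaces are aligned and differ only in resolution."""
        in_w, in_h = input_size
        disp_w, disp_h = display_size
        return (x_in * disp_w / in_w, y_in * disp_h / in_h)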
 The overall control unit 42 also identifies the function desired by the user, that is, the user instruction, based on, for example, the user operation information and function identification information. The function identification information is, for example, information in which the association between a user operation and the function to be executed is defined by way of operation situation information. The operation situation information is, for example, information such as the usage situation of the information display device 10 (in other words, the usage mode), the operation target on which the user operation was performed, and the types of user operation that can be accepted for that usage situation and operation target.
 More specifically, for example, when a drag is performed on a map image while map browsing software is in use, the drag is identified as instructing execution of the slide function. When a tap is performed on an enlargement icon on the map image, the tap is identified as instructing execution of the display size enlargement function. And when no function is associated with a flick on the enlargement icon, that flick is judged to be an invalid operation.
 The overall control unit 42 also controls the display information on the display surface by controlling the first image forming unit 44, the second image forming unit 48, and the image composition unit 52. A change of display information may be based on the identification result of a user instruction, or it may be based on an instruction arising in program execution, independent of any user instruction.
 The overall control unit 42 also performs general control over the other functional units 40, 44, 46, 48, 50, 52, 54 and 56, for example adjustment of their execution timing.
 The first image forming unit 44 reads first information 60 corresponding to an instruction from the overall control unit 42 out of the storage unit 18, forms a first image from the first information 60, and stores the first image in the first image holding unit 46. Similarly, the second image forming unit 48 reads second information 62 corresponding to an instruction from the overall control unit 42 out of the storage unit 18, forms a second image from the second information 62, and stores the second image in the second image holding unit 50.
 Under instructions from the overall control unit 42, the image composition unit 52 reads the first image from the first image holding unit 46 and the second image from the second image holding unit 50, composites the first image and the second image, and stores the composited image in the composite image holding unit 54.
 The images are composited so that the first image and the second image are displayed one over the other. The case where the first image is the lower image (in other words, the lower layer) and the second image is the upper image (in other words, the upper layer) is illustrated here. "Upper" and "lower" here refer to the normal direction of the display surface, the side nearer the user viewing the display surface being expressed as "upper". In practice, the image data are superimposed based on this concept.
 In the composite image, that is, on the display screen, the lower image is shown through the transparent portions of the upper image. In other words, the drawn portions of the upper image hide the lower image. However, by setting a degree of transparency for the drawn portions of the upper image, a composite image in which the lower image shows through can also be formed.
 The setting of which of the first image and the second image is the upper image may be fixed or may be changeable.
 An example of compositing two layers, the first image and the second image, is given here, but a configuration capable of compositing more layers may be adopted, and other compositing techniques may also be adopted.
 The composite image stored in the composite image holding unit 54 is transferred to the display unit 12 and displayed there. The display screen changes when the composite image is updated, that is, when at least one of the first image and the second image is updated.
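 As an illustration of this kind of two-layer composition, the following is a minimal per-pixel blending sketch; the pixel representation and names are assumptions, not the device's actual pipeline.
    def composite(lower, upper, upper_transparency):
        """Composite two images of equal size, given as pixel lists of
        (r, g, b), with None marking a transparent (undrawn) pixel of the
        upper image. upper_transparency runs from 0.0 (opaque) to 1.0
        (fully transparent) and applies to the drawn portions of the
        upper image."""
        out = []
        t = upper_transparency
        for low, up in zip(lower, upper):
            if up is None:              # transparent part: lower image shows
                out.append(low)
            else:                       # drawn part: blend by transparency
                out.append(tuple(round(t * l + (1 - t) * u)
                                 for l, u in zip(low, up)))
        return out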
 The window management unit 56 manages windows formed on the display surface under the control of the overall control unit 42. Specifically, the window management unit 56 manages information such as the window formation range (position, shape, and so on) and display attributes (presence and type of window decoration, and so on), and manages the windows by controlling the image composition unit 52 based on this window management information.
 <Processing Example of Information Display Device 10>
 Processing by the information display device 10 relating to windows (in other words, an information display method) is exemplified below.
 <Processing up to Window Formation>
 FIG. 15 illustrates a processing flow S10 up to window formation. According to the example of FIG. 15, the input unit 14 accepts a user operation in step S11, and the control unit 16 identifies the input user operation in step S12. Then, in step S13, the control unit 16 determines, based on the identification result of step S12, whether or not the input user operation is a predefined window open operation.
 If it determines that the user operation is not a window open operation, the control unit 16 executes the function associated with the input user operation in step S14. The processing of the information display device 10 then returns to step S11.
 If, on the other hand, it determines in step S13 that the user operation is a window open operation, the control unit 16 forms a window and displays information in that window in step S15. With the formation of the window, the processing flow S10 of FIG. 15 ends.
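 A sketch of this flow as an event loop follows; the handler names on the hypothetical input_unit and control_unit objects are assumptions for illustration.
    def processing_flow_s10(input_unit, control_unit):
        """Loop corresponding to steps S11-S15 of FIG. 15."""
        while True:
            operation = input_unit.accept_user_operation()          # S11
            identified = control_unit.identify(operation)           # S12
            if control_unit.is_window_open_operation(identified):   # S13
                control_unit.form_window_and_display(identified)    # S15
                return  # flow S10 ends once the window is formed
            control_unit.execute_associated_function(identified)    # S14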
 The window open operation is an operation that instructs the formation of a window on the display surface and, at the same time, designates the formation range of the window. Examples of the window open operation are shown in FIGS. 16 to 18.
 In the example of FIG. 16, a two-point-touch long press is assigned to the window open operation. In this example, a window 80 is formed in the rectangular range having the two touched points as diagonally opposite vertices. That is, the user designates the formation range of the window 80 by the two points touched. The positional relationship and so on of the two touched points are not limited to the example of FIG. 16.
 In FIG. 16, the region inside the window 80 is given sand-like hatching in order to make the window 80 easy to identify in the drawing. This sand-like hatching is applied merely for the purpose of describing the embodiment and does not limit the design or the like of the window 80. The same sand-like hatching may also be used in later drawings such as FIG. 17.
 In the case of the example of FIG. 16, when the control unit 16 identifies in step S12 that the two-point-touch state has lasted for a predetermined time (referred to here as the "window open instruction time"), it determines in step S13 that a window open operation has been input. The control unit 16 then forms the window 80 on the display surface 32 according to the range designated by the window open operation. For example, the control unit 16 maps the two touched points onto the coordinate system of the display surface 32 and adopts those two points on the display surface 32 as diagonally opposite vertices to form the rectangular window 80. Alternatively, the formation range of the window 80 may be obtained on the coordinate system of the input surface 34 and the obtained range then mapped onto the coordinate system of the display surface 32.
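 A sketch of deriving that rectangle from the two touched points follows; the function name and the (left, top, width, height) representation are illustrative assumptions.
    def window_range_from_two_points(p1, p2):
        """Rectangle with the two touched points as diagonally opposite
        vertices, as (left, top, width, height) in display coordinates."""
        (x1, y1), (x2, y2) = p1, p2
        return (min(x1, x2), min(y1, y2), abs(x1 - x2), abs(y1 - y2))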
 In the example of FIG. 17, a combination of a one-point touch and a drag performed so as to enclose an arbitrary range, starting from the one-point touch point or its vicinity (referred to here as an "enclosing drag" or "enclosing gesture"), is assigned to the window open operation. Any of a single tap, a multi-tap, and a long press can be adopted as the one-point touch in this example, and the end of the drag may be a flick. The vicinity of the one-point touch point means, for example, the range within a predetermined distance from the one-point touch point. In this example, the user designates the formation range of the window 80 by the range enclosed by the enclosing drag, or by the combination of the one-point touch point and the range enclosed by the enclosing drag. The direction and so on of the enclosing drag are not limited to the example of FIG. 17.
 In the case of the example of FIG. 17, when the control unit 16 identifies in step S12 that a one-point touch and an enclosing drag have been performed in succession, it determines in step S13 that a window open operation has been input. The condition "in succession" includes the condition that the one-point touch and the enclosing drag are performed within a predetermined operation time interval and the condition that no other operation is performed in between.
 The control unit 16 then forms the window 80 on the display surface 32 according to the range designated by the window open operation. For example, the control unit 16 maps the locus 70 of the enclosing drag onto the coordinate system of the display surface 32, converts the range enclosed by the locus 70 on that coordinate system into a rectangle according to a predetermined conversion rule, and forms the window 80 in the converted rectangular range. In the example of FIG. 17, a rule of obtaining the largest rectangle contained within the range of the enclosing locus 70, for example, can be adopted as the conversion rule, though other conversion rules may also be adopted. For example, the window formation range may be determined so that the point of the one-point touch performed at the beginning of the window open operation becomes one vertex of the window 80. The formation range of the window 80 may also be obtained on the coordinate system of the input surface 34 and the obtained range then mapped onto the coordinate system of the display surface 32.
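 As one concrete, deliberately simple conversion rule, the following sketch rectangularizes the enclosed range by taking the axis-aligned bounding box of the locus; this is a substitute illustration of "a predetermined conversion rule", not the document's own example rule of the largest inscribed rectangle, which would require a more involved computation.
    def bounding_box_rule(locus):
        """Convert an enclosing locus (a list of (x, y) points) into a
        rectangular window range (left, top, width, height)."""
        xs = [x for x, _ in locus]
        ys = [y for _, y in locus]
        return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))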
 The example of FIG. 18 uses an icon 90. For example, as shown in FIG. 18, a combination of a one-point touch on the icon 90 and a pinch-out on the icon 90 is assigned to the window open operation. Any of a single tap, a multi-tap, and a long press can be adopted as the one-point touch in this example. In this example, the user designates the formation range of the window 80 by the two end points of the pinch-out. Although FIG. 18 illustrates a two-point moving type pinch-out, a one-point moving type pinch-out may also be adopted, and the direction and so on of the pinch-out are not limited to the example of FIG. 18.
 In the case of the example of FIG. 18, when the control unit 16 identifies in step S12 that the one-point touch and the pinch-out on the icon 90 have been performed in succession, it determines in step S13 that a window open operation has been input. The condition "in succession" is the same as in the example of FIG. 17.
 The control unit 16 then forms the window 80 on the display surface 32 according to the range designated by the window open operation. This window formation processing can be executed in the same way as in the example of FIG. 16; that is, the processing can use the two end points of the pinch-out in place of the two touched points in the example of FIG. 16.
 In the example of FIG. 18, information associated in advance with the icon 90 is displayed in the window 80. Although the icon 90 illustrated in FIG. 18 imitates the appearance of a building, a logotype of a company, a product, or the like, for example, may be adopted in the design of the icon 90, as may a map symbol.
 Also, for example, when an icon 90 exists within the range designated by the window open operation illustrated in FIG. 16 or FIG. 17, that window open operation may be identified as an operation for displaying the information associated with the icon 90 in the window 80.
 According to the examples of FIGS. 16 to 18, the window 80 can be formed at the position where the window open operation is performed. In the examples of FIGS. 16 and 17, the window open operation itself is a natural motion that evokes the window 80 to be formed, and in the example of FIG. 18, the window open operation closely resembles the everyday motion of opening an object to look inside. For these reasons, the window 80 can be formed intuitively, and high operability can thus be realized.
 Moreover, since the window 80 is formed at the position where the window open operation is performed, the information in the window 80 can be viewed without greatly moving the viewpoint. The cognitive load on the user is accordingly small.
 The window open operation is not limited to the examples of FIGS. 16 to 18; various user operations, or combinations thereof, can be assigned in advance as the window open operation.
 In FIGS. 16 to 18, the window 80 is drawn as a simple thick frame in order to avoid cluttering the drawings. However, the design of the window 80 is not limited to this. For example, the shadowed decoration illustrated in FIG. 19 or the recessed decoration illustrated in FIG. 20 may be adopted. The shadowed decoration can give the impression that the window portion lies above its surroundings; conversely, the recessed decoration can give the impression that the window portion lies below its surroundings. The window 80 is also not limited to a rectangle; it may be circular, as shown in FIG. 21, for example, and a shape imitating a searchlight may also be adopted.
 The decoration settings for the window 80 (presence of decoration, type of decoration, degree of decoration, and so on) may be fixed to one kind, may be selected according to the information displayed in the window 80, or may be settable and changeable by the user. The same applies to the shape of the window 80.
 The formation range of the window 80 set according to the window open operation is managed by the window management unit 56 as described above. More specifically, in the example of FIG. 14, when the overall control unit 42 detects the input of a window open operation, it determines window management information, including the formation range and display attributes of the window 80, according to the window open operation, and records the determined information in the window management unit 56. For the display attributes of the window 80, the setting values effective at that time (for example, initial setting values) are applied here. Alternatively, the overall control unit 42 may record only the formation range of the window 80 in the window management unit 56, and the window management unit 56 may add the display attributes accordingly.
 The window management unit 56 then controls the composition of the first image and the second image in the image composition unit 52 based on the window management information stored in itself. Examples of this control are described with reference to FIGS. 22 and 23. In the examples of FIGS. 22 and 23, the lower layer is the first image and the upper layer is the second image.
 In the example of FIG. 22, under the control of the window management unit 56, the image composition unit 52 reads the second image stored in the second image holding unit 50 excluding the portion corresponding to the formation range of the window 80, while it reads the first image stored in the first image holding unit 46 including the portion corresponding to the formation range of the window 80. The image composition unit 52 then sets the read first image and second image as the lower layer and the upper layer, respectively, and composites the two images.
 As a result, on the display surface 32, the first image of the lower layer is displayed in the window inner region 82, the region inside the window 80, and the second image of the upper layer is displayed in the window outer region 83, the region outside the window 80. The transparency of the upper layer is assumed here to be set to 0%.
 Alternatively, in the example of FIG. 22, the image composition unit 52 may read the second image constituting the upper layer including the portion corresponding to the formation range of the window 80. In this case, under the control of the window management unit 56, the image composition unit 52 sets the transparency of the portion of the read second image corresponding to the formation range of the window 80 to 100% and composites it with the first image.
 In the example of FIG. 23, under the control of the window management unit 56, the image composition unit 52 reads, of the second image stored in the second image holding unit 50, the portion corresponding to the formation range of the window 80, while it reads the first image stored in the first image holding unit 46 including the portion corresponding to the formation range of the window 80. The image composition unit 52 then sets the read first image and second image as the lower layer and the upper layer, respectively, and composites the two images.
 As a result, on the display surface 32, the second image of the upper layer is displayed in the window inner region 82, and the first image of the lower layer is displayed in the window outer region 84. The transparency of the upper layer is assumed here to be set to 0%.
 Alternatively, in the example of FIG. 23, the image composition unit 52 may read the second image constituting the upper layer including the portions other than the portion corresponding to the formation range of the window 80. In this case, under the control of the window management unit 56, the image composition unit 52 sets the transparency of the read second image to 100% except for the portion corresponding to the formation range of the window 80 and composites it with the first image.
 The image composition unit 52 also decorates the window 80 as necessary under the control of the window management unit 56, for example after the composition of the upper layer and the lower layer.
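 As an illustration of the FIG. 22 and FIG. 23 variants, the following sketch selects, per pixel, which layer is shown inside and outside the window range; the image representation and names are assumptions for illustration.
    def composite_with_window(first_img, second_img, window, show_lower_in_window):
        """first_img / second_img: dicts mapping (x, y) -> pixel, same keys.
        window: (left, top, width, height) formation range of window 80.
        show_lower_in_window=True corresponds to the FIG. 22 example (first
        image inside the window), False to the FIG. 23 example."""
        left, top, w, h = window
        def inside(x, y):
            return left <= x < left + w and top <= y < top + h
        out = {}
        for (x, y), low in first_img.items():
            up = second_img[(x, y)]
            if inside(x, y):
                out[(x, y)] = low if show_lower_in_window else up
            else:
                out[(x, y)] = up if show_lower_in_window else low
        return out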
 <Display Information>
 In step S15 of FIG. 15 above, display information that is related to the display information targeted by the window open operation (hereinafter sometimes called the "first display information") but differs from that first display information in content or expression format (sometimes called the "second display information") is displayed in the window 80.
 The first display information and the second display information are, for example, map information, and this example is mainly described below. Map information is various kinds of information about places.
 Map information is, for example, geographic information. Geographic information includes, for example, terrain information about so-called terrain (the lie of sea and land, mountains and rivers, and so on), place state information about the state of a place (for example, its state of use), and name information about the names of the elements of the terrain information and place state information.
 Terrain information is shape information on coastlines, water systems, and the like, and may further include height information such as elevation.
 Place state information is broadly classified into, for example, aboveground state information and underground state information.
 For simplicity of explanation, the ground surface (the so-called land surface) is assumed as the boundary between aboveground and underground, but the boundary is not limited to this example. For instance, it may be practical or appropriate to use the expressions "aboveground" and "underground" with respect to the earth's surface including the ocean surface (the so-called ground surface). Alternatively, when focusing on the sea floor, for example, it may be practical or appropriate to use "aboveground" and "underground" with the sea floor surface as the boundary.
 Examples of aboveground state information include transportation networks (roads, railways, and so on), buildings, fields, forests, and deserts. The state information of a building may include state information for each floor (provided, for example, as a floor guide). Regional boundaries such as administrative districts are also examples of aboveground state information.
 Examples of underground state information include underground shopping areas, underground facilities (water, electricity, gas, communications, railways, and so on), and archaeological remains. In the case of a building having an underground portion, the floor states of the underground portion may be classified as underground state information, or the floor state information of the underground and aboveground portions may together be treated as the state information of that building.
 Examples of name information include so-called place names such as those of mountains, rivers, seas, and addresses, as well as the names of roads, railways, buildings, stores, facilities, and the like.
 Terrain information is visualized mainly by graphics such as maps. Place state information can be visualized by diagrams, symbols, characters, and the like, and is displayed mainly at the corresponding place on a map. Name information can be visualized by characters and the like and is displayed, for example, at the corresponding place on a map; alternatively, name information need not be arranged on a map but may be displayed by itself in list form or the like.
 Map information is also, for example, weather information. Weather information is, for example, information on conditions (sunny, cloudy, rain, snow, and so on), temperature, humidity, rainfall, warnings, and the like. Weather information can be visualized by symbols, characters, and the like. For example, it may be displayed as a symbol at the corresponding place on a map, or as characters at a position unrelated to the corresponding place on the map. Alternatively, weather information may be visualized by a graphic such as a weather chart.
 Map information is also useful information such as, for example, store information, sightseeing information, and traffic congestion information. Useful information can be visualized by symbols, characters, and the like; for example, it may be displayed as a symbol at the corresponding place on a map, or as characters at a position unrelated to the corresponding place on the map.
 Map information is also, for example, route information. Route information is information on a route connecting a plurality of points (for example, a navigation route). Route information can be visualized, for example, by applying an expression format such as a thick line to the corresponding route on a map; alternatively, a symbol such as an arrow may be displayed on the corresponding route on the map.
 Map information may include only one kind of information, such as terrain information, or may include a plurality of kinds of information.
 The first display information and the second display information share a category in that both are map information. Hereinafter, the map information serving as the first display information may be called the first map information, and the map information serving as the second display information the second map information.
 The first map information and the second map information, on the other hand, differ in specific configuration. That is, the second map information is information related to the first map information but differs from the first map information in content or expression format.
 As for the relatedness of the first map information and the second map information: for example, when the first map information and the second map information include the same place on the map, they are related. In this case, that same place is actually being displayed as part of the first map information (that is, the first display information).
 The first map information and the second map information are also related when, for example, they include a specific route on the map. A specific route is a road, a railway line, or the like having the same name; it may also be a route connecting a plurality of points (for example, a navigation route). When the first map information and the second map information are related via a specific route, the portion of that specific route on the second map information side may not appear on the display surface.
 As for the difference in content between the first map information and the second map information: for example, if the covered ranges on the map differ, the content that the map information provides to the user differs.
 Also, even for the same place, a map including only terrain information and a map including both terrain information and place state information differ in the content of the map information. Likewise, even for the same place, aboveground state information and underground state information differ in content. That is, map information of different content is constituted according to the kinds and combinations of information displayed on the map. The amount of information displayed on the map may also be reduced or added to according to the scale of the map.
 Also, even for the same place, current map information and old map information differ in content.
 As for the difference in expression format between the first map information and the second map information: even for maps of the same place, the expression format of the map information differs between, for example, a line drawing, a monochrome drawing, a colored drawing, a photograph (including an aerial photograph), a planar representation, a three-dimensional representation, a bird's-eye view, a realistic representation, a simplified representation, and a deformed map (a map in which some elements, such as a navigation route, are emphasized).
 Likewise, whether place state information of the same content is displayed as symbols or as characters is also a difference in expression format; the same applies to weather information, useful information, and so on.
 図24~図27に、ウィンドウ80の表示情報を例示する。 FIGS. 24 to 27 illustrate display information of the window 80. FIG.
 図24の例では、ウィンドウ80に、地下地図の一例として地下街の地図が表示される。より具体的には、第1表示情報として地図が表示されており、その地図上でユーザは、地下街の地図を見たいと欲する場所に対してウィンドウ開操作(図24では2点タッチ長押しが例示されている)を行う。それにより、ウィンドウ開操作で指定された範囲にウィンドウ80が形成され、その範囲内の地下街の地図が、第2表示情報として、ウィンドウ80に表示される。 In the example of FIG. 24, an underground map is displayed in the window 80 as an example of an underground map. More specifically, a map is displayed as the first display information. On the map, the user can open a window for a place where he / she wants to see the map of the underground mall (in FIG. 24, two-point touch long press is performed). As illustrated). Thereby, the window 80 is formed in the range designated by the window opening operation, and the map of the underground shopping area in the range is displayed on the window 80 as the second display information.
 In FIG. 24, only the passages of the underground mall are shown as a line drawing in order to avoid complicating the drawing, but various other information (for example, store information) may also be placed on the map. An underground map is a visualization of underground state information and may be a map of underground facilities (water, electricity, gas, communications, railways, etc.), archaeological remains, and the like.
 In this way, when the second display information is an underground map, the spatial vertical relationship of the same place can be easily grasped.
 Instead of an underground map, an aerial photograph, a deformed map, or the like may be displayed as the second display information. With an aerial photograph, the relationship between how a place looks on an ordinary map and how it looks in a photograph can be easily grasped. With a deformed map, places related to the ordinary map can be easily grasped through highlighting based on a specific viewpoint (for example, a navigation route).
 In the example of FIG. 25, an old map of the window position is displayed in the window 80. Comparing FIG. 25 with the upper part of FIG. 24, it can be seen that in the past the railway did not yet run through the area and the area in front of the current station had not yet been developed. In this way, when the second display information is an old map, the temporal relationship of the same place can be easily grasped. The old map may also be displayed in an expression format such as an aerial photograph or a deformed map.
 Here, by adjusting the transparency of the upper layer (see FIGS. 22 and 23) over the range of the window 80, the map of the first display information may be made to show through within the range of the window 80. In this way, different maps of the same place are displayed superimposed, making it easy to compare the two maps.
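 As a minimal sketch of this transparency adjustment (assuming an RGBA-style compositing model with per-pixel blending; the layer buffers and the composite_window function are hypothetical, since the embodiment does not specify an implementation), the window region of the upper layer can be blended with the lower layer as follows:

```python
import numpy as np

def composite_window(upper, lower, window_rect, alpha=0.5):
    """Blend the upper layer over the lower layer, making the upper
    layer semi-transparent inside the window rectangle only.

    upper, lower: HxWx3 float arrays in [0, 1] (hypothetical layer buffers)
    window_rect:  (x, y, w, h) of the window 80 on the display surface
    alpha:        opacity of the upper layer inside the window
    """
    out = upper.copy()                      # outside the window: upper layer as-is
    x, y, w, h = window_rect
    # inside the window: the first-display-information map "shows through"
    out[y:y+h, x:x+w] = (alpha * upper[y:y+h, x:x+w]
                         + (1.0 - alpha) * lower[y:y+h, x:x+w])
    return out
```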
 In the examples of FIGS. 24 and 25 the second display information is a map, but the second display information may also be displayed by another visualization method, such as text.
 In both the examples of FIGS. 24 and 25, when the window opening operation is performed on first display information visualized as a map, information related to the place where the window 80 is located on that map is displayed as the second display information. In other words, the map of the first display information is partially replaced by the map of the second display information. The first display information and the second display information can thus be viewed with continuity of the viewpoint (position) from which the map is seen, in other words, with continuity of the map information. This reduces the user's cognitive load.
 In the example of FIG. 26, map information around the current location is displayed as the first display information by a deformed map in which route information is emphasized. As the second display information in the window 80, route information from the current location to the destination is displayed by a simplified map. For reference, FIG. 26 shows the display in the window 80 enlarged at the upper right of the figure. In this way, the user can grasp the overall picture of the way to the destination (specifically, the direction to the destination, the distance, the shape of the route, and so on).
 In addition, if useful information along the route is added to the second display information, the user can make use of that information, which is convenient. For example, traffic congestion information may be displayed instead of, or in addition to, the route information. The user can then estimate the travel time to the destination and change the itinerary as necessary: for example, switch to a route that avoids the congestion, or take a break earlier to adjust the time allocation.
 FIG. 27 shows an example in which weather information around the destination is displayed as the second display information in the window 80. In this way, before arriving at the destination, the user can plan what to do upon arriving in the area around the destination.
 Of course, geographic information around the destination may also be displayed as the second display information in the window 80. In this way, the user can plan, before arriving at the destination, what to do when approaching the destination.
 The second display information may relate to a waypoint instead of the destination. That is, the second display information can be related to any preset set location. The set location and the route may be intended for actual navigation, or merely for a route search without navigation.
 By displaying information related to the set location in the window 80, that information can be shown at a preferred position on the display surface 32 and at a preferred size. This promises improved working efficiency in drawing up and revising an itinerary.
 The second display information displayed in the window 80 is not limited to the examples given here. The second display information displayed immediately after the window 80 is formed may, for example, be set in advance, and that setting may be varied according to the content of the first display information, the usage situation of the information display device 10, and so on.
 As described above, the second display information displayed in the window 80 is information related to the first display information but differs from the first display information in content or expression format. Displaying varied information within a single screen therefore makes information recognition more efficient. The high intuitiveness of the window opening operation and the reduction in cognitive load described above also contribute to this efficiency.
 The first display information and the second display information are information stored in the storage unit 18 (see FIG. 14), corresponding to the first information 60 and the second information 62 illustrated in FIG. 14. As described above, the information in the storage unit 18 can be newly acquired and updated through the communication unit; for example, traffic congestion information can be acquired from outside the information display device 10.
 The second display information can be obtained by searching and extracting, when needed, information stored in the storage unit 18 or in a storage device external to the information display device 10 (including a server on a network). Alternatively, the first display information and the second display information may be organized in advance into a database that records their relevance to each other.
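 A sketch of this retrieval, under the assumption of a dict-like local cache standing in for the storage unit 18 and a hypothetical remote_server object for the external server, might look like this:

```python
def fetch_second_display_info(region, kind, local_store, remote_server):
    """Fetch second display information for a map region.

    region:        geographic range covered by the window (assumed key)
    kind:          e.g. "underground", "old_map", "aerial", "congestion"
    local_store:   dict-like cache standing in for the storage unit 18
    remote_server: object with a query() method standing in for a network server
    """
    key = (region, kind)
    if key in local_store:                 # information already stored locally
        return local_store[key]
    info = remote_server.query(region=region, kind=kind)  # e.g. congestion data
    local_store[key] = info                # update the storage unit for reuse
    return info
```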
 <Processing after window formation>
 FIG. 28 illustrates a processing flow S30 after the window is formed. In the example of FIG. 28, steps S31 and S32 are the same as steps S11 and S12 of FIG. 15. That is, in step S31 the input unit 14 receives a user operation, and in step S32 the control unit 16 identifies the input user operation.
 Then, in step S33, the control unit 16 identifies which of the following the user operation received in step S31 targets: the first display information outside the window 80, the second display information inside the window 80, or the window 80 itself. Specifically, the control target of the user operation is identified from the input position and type of the user operation. At that time, whether the input position of the user operation relates to the window 80 can be determined by referring to the management information held by the window management unit 56 (see FIG. 14).
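 A minimal sketch of the decision in step S33 (the handler labels, the on_frame/contains predicates, and the operation object are assumptions; the embodiment describes the decision, not its code) could be:

```python
def dispatch_operation(op, window):
    """Route a user operation to the first display info, the second
    display info, or the window itself, based on position and type.

    op:     object with .position (x, y) and .kind (e.g. "drag_on_frame")
    window: object with .contains(point) and .on_frame(point) predicates
    """
    if window.on_frame(op.position) or op.kind.startswith("window_"):
        return "window"          # window control operation -> step S36
    if window.contains(op.position):
        return "second_info"     # operation inside the window -> step S35
    return "first_info"          # operation outside the window -> step S34
```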
 <Operations outside the window>
 When the control unit 16 identifies in step S33 that the user operation was performed on the first display information outside the window 80, it controls the first display information according to the user operation in step S34. That is, it performs slide (in other words, scroll), enlargement, reduction, rotation, and similar control according to the function associated with the user operation.
 FIG. 29 illustrates the control in step S34. In FIG. 29, P, Q, and R schematically denote information drawn on the upper layer, and p, q, and r schematically denote information drawn on the lower layer. As in FIG. 22, the portion of the upper layer corresponding to the window 80 is set as a non-drawing portion (in other words, a transparent portion), so that the upper layer provides the first display information and the lower layer provides the second display information.
 In the example of FIG. 29, a slide operation is performed outside the window 80, and the content of the upper layer is updated accordingly. As a result, the content of the first display information outside the window changes. The position of the window 80 on the display surface 32 is not slid at this time.
 In particular, in the example of FIG. 29 the upper layer and the lower layer are linked, and the content of the lower layer is also updated in response to the slide operation. That is, the content of the second display information in the window 80 changes as well.
 Such linking of the display inside and outside the window 80 can be adopted, for example, in the examples of FIGS. 24 and 25 (see FIG. 30). That is, when the first display information is partially replaced by the second display information and the two retain continuity or unity of display even after the replacement, linked display is useful.
 Linked display is also applicable to other user operations such as an enlargement operation. Although FIG. 29 illustrates the case where the upper layer corresponds to the first display information outside the window 80, linked display is also applicable when the first display information corresponds to the lower layer (see FIG. 23).
 In the example of FIG. 31, by contrast, the display inside and outside the window 80 is not linked (non-linked display). In FIG. 31, α schematically denotes information drawn on the upper layer, and P, Q, and R schematically denote information drawn on the lower layer. As in FIG. 23, a portion corresponding to the window 80 is formed in the upper layer, so that the upper layer provides the second display information and the lower layer provides the first display information.
 In the example of FIG. 31, a slide operation is performed outside the window 80, and the content of the lower layer is updated accordingly. As a result, the content of the first display information outside the window changes. The content of the upper layer, by contrast, is not updated, so the second display information in the window does not change.
 Such non-linking of the display inside and outside the window 80 can be adopted, for example, in the examples of FIGS. 26 and 27. That is, when the first display information and the second display information have no continuity or unity of display, non-linked display is useful.
 Non-linked display is also applicable to other user operations such as an enlargement operation. Although FIG. 31 illustrates the case where the lower layer corresponds to the first display information outside the window 80, non-linked display is also applicable when the first display information corresponds to the upper layer (see FIG. 22).
 Whether to link the display may, for example, be set in advance, and that setting may be varied according to the visualization methods of the first and second display information, the usage situation of the information display device 10, and so on.
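 The linked behavior of FIG. 29 and the non-linked behavior of FIG. 31 might be sketched together as follows (the layer objects and their scroll() method are hypothetical stand-ins for the layer drawing described above):

```python
def handle_outside_slide(delta, first_layer, second_layer, linked):
    """Apply a slide operation performed outside the window 80.

    delta:        (dx, dy) scroll amount of the slide operation
    first_layer:  layer providing the first display information
    second_layer: layer providing the second display information
    linked:       True for FIG. 29 behavior, False for FIG. 31 behavior
    """
    first_layer.scroll(delta)        # the map outside the window always moves
    if linked:
        second_layer.scroll(delta)   # the window content follows (FIG. 29)
    # the window 80 itself stays put on the display surface either way
```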
 Returning to FIG. 28, after step S34 is executed, the processing of the information display device 10 returns to step S31.
 <Operations inside the window>
 On the other hand, when the control unit 16 identifies in step S33 that the user operation was performed on the second display information inside the window 80, it controls the second display information according to the user operation in step S35. That is, it performs slide (in other words, scroll), enlargement, reduction, rotation, and similar control according to the function associated with the user operation. Linked display inside and outside the window may be adopted in step S35 as well.
 Another example of a user operation on the inside of the window 80 is the in-window information switching operation, which instructs switching of the display information in the window 80. In response to this operation, the control unit 16 causes the window 80 to display second display information of different content or a different expression format.
 FIG. 32 shows a conceptual diagram of the switching operation for the second display information in the window 80. In the example of FIG. 32, N predetermined pieces of second display information (N is a natural number) are switched cyclically in a predetermined order by the in-window information switching operation. For example, an aerial photograph, an underground-mall map, and an old map can be switched. Alternatively, route information from the current location to the set location, weather information around the set location, geographic information around the set location, and congestion information from the current location to the set location can be switched.
 The second display information displayed immediately after the window 80 is formed may, for example, always be the same piece of second display information, or may be the piece displayed last in the previous display. The number N and types of pieces of second display information to be switched, the switching order, whether cyclic switching is on or off, and so on are set in advance; the setting may be fixed or changeable.
 A flick, a double tap, or the like is assigned as the in-window information switching operation, and the operation is accepted at any position in the window 80. Compared with, for example, pressing a specific button, the user therefore need not worry about where to perform the switching operation: the user can switch while keeping the gaze on the second display information in the window 80. This reduces the user's cognitive load. The user can also control the switching speed, in other words the switching timing, which further reduces the cognitive load.
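 A minimal sketch of this cyclic switching (the content list is an assumed example; FIG. 32 fixes only the cyclic order, not the data):

```python
class WindowContent:
    """Cycles the window 80 through N pieces of second display information."""

    def __init__(self, items):
        self.items = items       # e.g. ["aerial", "underground", "old_map"]
        self.index = 0           # piece shown right after window formation

    def on_switch_gesture(self):
        """Called on a flick or double tap anywhere inside the window."""
        self.index = (self.index + 1) % len(self.items)   # cyclic, fixed order
        return self.items[self.index]
```

 A flick or double tap anywhere in the window then simply calls on_switch_gesture(), so the user never has to locate a button.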
 The second display information to be switched may, for example, be hierarchized information. An example of hierarchized second display information is the per-floor map information making up the internal map of a building; the map information of each floor can be hierarchized according to position in the direction of gravity. For example, as shown in FIG. 33, the floors are switched cyclically in the order first floor, second floor, ..., top floor, lowest basement floor, ..., first basement floor. Alternatively, map information outlining all floors may be displayed immediately after window formation, and cyclic switching may start from the floor the user selects.
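 The floor order of FIG. 33, climbing the above-ground floors and then the basements from the bottom up, could be produced as in the following sketch (floor labels 1F..nF and B1..Bm are assumed):

```python
def floor_cycle(top_floor, basement_depth):
    """Return the cyclic switching order of FIG. 33.

    top_floor:      number of above-ground floors (1F is floor 1)
    basement_depth: number of basement floors (B1 is just below ground)
    """
    above = [f"{n}F" for n in range(1, top_floor + 1)]        # 1F, 2F, ..., top
    below = [f"B{n}" for n in range(basement_depth, 0, -1)]   # lowest ... B1
    return above + below   # then the cycle wraps back to 1F

# floor_cycle(3, 2) -> ['1F', '2F', '3F', 'B2', 'B1']
```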
 Information hierarchized according to the passage of time may also be formed from old map information about the same place. For example, the location state information of the same place 10, 20, and 30 years ago can be hierarchized along the time axis, and the transitions of the place's name information may likewise be hierarchized. The current map information may be added to the hierarchy alongside the old map information.
 Also, for example, when the underground state information concerns archaeological remains or the like, the hierarchization of ground state information and underground state information may be understood as based on position in the direction of gravity, or as based on the passage of time.
 The effects described above are also obtained when hierarchized second display information is switched. In particular, hierarchized information has stronger relevance and continuity between its pieces, so being able to view such strongly related, continuous pieces of information in the same window 80 is efficient.
 The hierarchized second display information may also be associated with an icon 90 (see FIG. 18). In particular, when the hierarchized second display information is the internal map information of the floors of a building, the window opening operation on the icon 90 and the in-window information switching operation combine to provide even higher intuitiveness.
 Returning to FIG. 28, after step S35 is executed, the processing of the information display device 10 returns to step S31.
 <Operations on the window>
 When the control unit 16 identifies in step S33 of FIG. 28 that the user operation is a window control operation for controlling the window 80 itself, it controls the window 80 in step S36 according to the control content assigned to the input window control operation. As illustrated below, the position and size of the window 80 can thus be controlled after the window 80 is formed, and the window 80 can be erased by a gesture operation.
 <Window move operation>
 The window control operation is, for example, an operation that moves the window 80. FIG. 34 shows a conceptual diagram of such a window move operation. In the example of FIG. 34, performing a drag operation while touching a predetermined portion of the window 80 (for example, the window frame) moves the window 80 in the drag direction. Because a drag operation closely resembles the everyday action of moving an object on a desk, the window 80 can be moved intuitively, realizing high operability.
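 A sketch of this move operation, reusing the hypothetical frame hit test from the dispatch sketch above and assumed x/y fields on the window:

```python
def on_drag(window, start, end):
    """Move the window 80 by the drag vector when the drag starts
    on the window frame (FIG. 34); otherwise leave it to other handlers."""
    if not window.on_frame(start):
        return False                     # not a window move operation
    dx, dy = end[0] - start[0], end[1] - start[1]
    window.x += dx                       # window follows the drag direction
    window.y += dy
    return True
```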
 <Window size change operation>
 The window control operation is also, for example, an operation that changes the size of the window 80. FIGS. 35 to 37 show conceptual diagrams of such a window size change operation. In the examples of FIGS. 35 to 37, the size of the window 80 is changed by a one-point-moving pinch operation performed while a predetermined portion of the window 80 (for example, the window frame) is touched. A pinch-out enlarges the window 80 (see FIGS. 35 to 37) and a pinch-in reduces it; that is, the pinch direction indicates enlargement or reduction.
 With this design, the window size change operation has high similarity and continuity with the display size change operations for the display information inside and outside the window. As a result, the user is kept from hesitating over the operation and the operation time is shortened; in other words, high operability is realized.
 The pinch direction also indicates the direction of enlargement or reduction, that is, the direction of deformation. Specifically, in the example of FIG. 35 a pinch-out to the left stretches the window 80 leftward; in FIG. 36 a downward pinch-out stretches it downward; and in FIG. 37 a pinch-out diagonally down and to the left stretches it leftward and downward. The deformation direction of the window 80 can thus be indicated intuitively and simply.
 FIGS. 35 and 36 illustrate cases where the moving finger of the one-point-moving pinch operation starts on the frame of the window 80, but the operation is not limited to this example. As shown in FIG. 37, the start point of the finger movement may be inside the window 80. This is possible because the fixed finger of the one-point-moving pinch operation rests on the frame of the window 80, which distinguishes the window size change operation from operations on the display information in the window 80.
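 The directional resize of FIGS. 35 to 37 might be sketched as follows (one finger fixed on the frame, one moving; the rect fields are assumptions, and the edge-selection policy is a simplification of the figures):

```python
def on_one_point_pinch(window, fixed, start, end):
    """Resize the window 80 by a one-point-moving pinch (FIGS. 35-37):
    one finger rests on the window frame, the other drags an edge."""
    if not window.on_frame(fixed):
        return False                      # not a window size change operation
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    # Stretch or shrink toward the moved side: horizontal movement
    # adjusts the left/right edge, vertical movement the top/bottom edge.
    if dx < 0:                            # leftward pinch-out (FIG. 35)
        window.x += dx; window.w -= dx    # left edge follows the finger
    else:
        window.w += dx                    # right edge follows the finger
    if dy > 0:                            # downward pinch-out (FIG. 36)
        window.h += dy                    # bottom edge follows the finger
    else:
        window.y += dy; window.h -= dy    # top edge follows the finger
    return True
```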
 <Window erase operation>
 The window control operation is also, for example, an operation that erases the window 80, in other words, an operation that ends the display of the window 80. FIGS. 38 to 40 show conceptual diagrams of such window erase operations.
 In the example of FIG. 38, performing a flick operation on the window 80 erases the window 80. Because a flick closely resembles the everyday action of flicking an object off a desk and out of sight, the window 80 can be erased intuitively, realizing high operability.
 The flick direction is not limited to the example of FIG. 38. In the example of FIG. 38, a predetermined portion of the window 80 (for example, the window frame) is the start point of the flick; however, if a flick inside the window 80 is not assigned to control of the second display information in the window (slide or the like), the start point of the flick for the window erase operation may be inside the window 80.
 In the example of FIG. 39, the window 80 is erased by a drag operation that enters the window 80 from outside and exits to the outside on a side different from the entry side, in other words, a drag operation that cuts across the window 80. Because such a drag closely resembles the everyday action of striking through an unneeded part of a document with a diagonal line, the window 80 can be erased intuitively, realizing high operability.
 The drag direction is not limited to the example of FIG. 39. Another gesture, specifically a flick, may also be adopted for the window erase operation of FIG. 39.
 In the example of FIG. 40, the window 80 is erased by a drag operation that pinches the window 80 from both sides. Because such a drag closely resembles the everyday action of closing a room's window to shut the outside scenery out of view, the window 80 can be erased intuitively, realizing high operability.
 FIG. 40 illustrates a two-point-moving pinch-in as the drag operation that pinches the window 80, but a one-point-moving pinch-in may also be adopted. The pinch-in direction is not limited to the example of FIG. 40, and the drag may end in a flick.
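 The three erase gestures of FIGS. 38 to 40 could be recognized roughly as follows (the gesture object and its crosses() helper are assumptions; only the decision structure is taken from the figures, and crosses() is a loose proxy for the sandwiching pinch of FIG. 40):

```python
def is_erase_gesture(window, gesture):
    """Classify the erase gestures of FIGS. 38-40.

    gesture: object with .kind ("flick" / "drag" / "pinch_in"),
             .start and .end points, and a .crosses(rect) helper (assumed).
    """
    if gesture.kind == "flick" and window.on_frame(gesture.start):
        return True                       # FIG. 38: flick the window away
    if gesture.kind == "drag" and gesture.crosses(window.rect):
        return True                       # FIG. 39: strike through the window
    if gesture.kind == "pinch_in" and gesture.crosses(window.rect):
        return True                       # FIG. 40: squeeze the window shut
    return False
```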
 FIGS. 34 to 38 illustrate cases where the upper layer corresponds to the first display information outside the window 80, but the window control operations are also applicable when the first display information corresponds to the lower layer (see FIG. 23).
 Returning to FIG. 28, after executing step S36, the control unit 16 determines in step S37 whether step S36 was erase control of the window 80. If step S36 was not erase control of the window 80, the processing of the information display device 10 returns to step S31. If step S36 was erase control of the window 80, the information display device 10 ends the processing flow S30 of FIG. 28 and returns to the processing flow S10 of FIG. 15.
 <Other examples of window formation>
 The following describes examples in which the control unit 16 converts the range designated by the window opening operation according to a predetermined conversion rule and forms the window in the converted range.
 For example, as illustrated in FIG. 41, when the first display information is displayed as a bird's-eye view, the shape of the window 80 may also be set according to the depression angle of that bird's-eye view. Furthermore, the second display information in the window 80 may be displayed in a bird's-eye representation with the same depression angle. Adopting the same expression format increases the continuity between the first display information and the second display information, reducing the user's cognitive load.
 Various techniques of bird's-eye representation are known, and such known techniques are used here. For example, a technique that converts an image drawn as a top view or front view into a bird's-eye representation is adopted. The bird's-eye image may be generated by the overall control unit 42, or by the first image forming unit 44 and the second image forming unit 48. Bird's-eye representation is applicable not only to figures but also to text and the like.
 More specifically, as illustrated in FIGS. 42 and 43, a viewpoint for the bird's-eye representation is set with respect to the original image, and the original image is deformed as if the parallel lines running in its vertical direction converged at that viewpoint. In FIGS. 42 and 43, windows 80 are drawn at various positions for convenience, to make the bird's-eye conversion easier to follow. The viewpoint positions of FIGS. 42 and 43 differ, and the dashed rectangle in FIG. 43 corresponds to the window 80 in FIG. 42.
 In this way, the rectangular range designated by the user through the window opening operation (see FIGS. 16 to 18) is converted into a rough trapezoid when the control unit 16 converts it into the bird's-eye representation. The user therefore need not be conscious of the bird's-eye representation of the first display information when designating the formation range of the window 80: designating an ordinary rectangular range suffices, and the control unit 16 automatically forms a window 80 that matches the bird's-eye representation.
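 One possible sketch of this rectangle-to-trapezoid conversion, using a simple vanishing-point model with an assumed parametrization (the embodiment relies on known bird's-eye techniques and fixes no formula):

```python
def to_birds_eye(rect, image_h, vx, strength=0.5):
    """Convert a user-designated rectangle into the trapezoid of FIGS. 42-43.

    rect:     (x, y, w, h) rectangle designated by the window opening operation
    image_h:  height of the original image in pixels
    vx:       x coordinate of the bird's-eye viewpoint (vanishing point)
    strength: 0..1, how far the top of the image is pulled toward vx
              (assumed parametrization)
    """
    x, y, w, h = rect
    corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]

    def pull(px, py):
        # rows nearer the top of the image converge more toward vx,
        # so the rectangle's top edge becomes the trapezoid's short edge
        t = strength * (1.0 - py / image_h)
        return (px + (vx - px) * t, py)

    return [pull(px, py) for (px, py) in corners]
```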
 The trapezoidal window 80 may also be given a deformed representation; in the example of FIG. 44, for instance, the horizontal sides of the trapezoid are curved.
 FIG. 45 shows a further example of the shape of the window 80. In outline, the first display information is visualized as a map, and the window 80 is formed to match a partition or the like in that map. In this case, the control unit 16 converts the range designated by the window opening operation according to the following first and second rules and forms the window 80 in the converted range.
 The first rule is that the periphery of the range designated by the window opening operation (the drag trajectory 70 in FIG. 45) is deformed to match partition boundaries in the map and the periphery of the map's display area (the whole display surface 32 in the example of FIG. 45).
 For example, the user-designated range is expanded as if inflating a balloon until its periphery coincides with partition boundaries in the map. Partition boundaries are roads, rivers, administrative boundaries, and the like. The first rule considers the periphery of the map's display area in addition to the partition boundaries in order to prevent the window 80 from exceeding the display area as a result of expanding the user-designated range.
 The second rule is that the user-designated range converted according to the first rule must include the range originally designated by the user at a predetermined ratio or more. The second rule sets an upper limit to prevent the window 80 from becoming too much larger than the range the user designated.
 In the example of FIG. 45, the window 80 is formed so as to contain the whole of the originally designated range. The window 80 may, however, partially recede from the originally designated range: for example, in order to find a partition boundary satisfying the second rule, a partition boundary at a position receded from the one first selected as a candidate may have to be selected anew.
 With the first and second rules, the user need not trace complicated partition boundaries: designating an ordinary rectangular range suffices, and the control unit 16 automatically forms a window 80 that matches the partition boundaries. A large partition combining several neighboring partitions can also be designated easily.
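 On a grid map whose cells carry partition IDs, the two rules might be sketched as follows; the grid representation, the greedy absorption order, and the reading of the "predetermined ratio" as bounding how far the result may outgrow the designation are all assumptions:

```python
def snap_to_partitions(user_cells, partition_of, display_cells, min_ratio=0.5):
    """Expand a user-designated cell set to whole partitions (first rule),
    bounded so it does not outgrow the designation (second rule).

    user_cells:    set of grid cells inside the drag trajectory 70
    partition_of:  dict cell -> partition id (roads/rivers as boundaries)
    display_cells: set of all cells of the map display area
    min_ratio:     the designated cells must make up at least this fraction
                   of the result (one reading of the 'predetermined ratio')
    """
    region = set(user_cells)
    # inflate like a balloon: absorb each touched partition whole,
    # largest overlap first, but never leave the display area
    partitions = sorted({partition_of[c] for c in user_cells},
                        key=lambda p: -sum(partition_of[c] == p for c in user_cells))
    for p in partitions:
        cells = {c for c in display_cells if partition_of[c] == p}
        grown = region | cells
        if len(user_cells) / len(grown) >= min_ratio:   # second rule holds
            region = grown       # this partition's boundary is adopted
    return region
```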
 <Effects>
 With the information display device 10, the adoption of the window opening operation and the related operations allows the user to adjust the position, size, content, and so on of the display information to the user's preference. The various other effects described above are also obtained.
 <Modifications>
 The above illustrates the case where the first display information is displayed over the whole display surface 32, but the invention is not limited to this example. A plurality of windows 80 may also exist on the display surface 32 at the same time.
 Further, by performing the window opening operation on the information in a window 80, a further window 80 may be formed inside the existing window 80 (nesting of windows). In this case, by regarding the second display information in the existing window 80 as new first display information and the display information in the further window 80 as new second display information, everything described above also applies to nested windows. Window nesting can be used for switching display information, for example, in the same way as the in-window information switching operation.
 The above also illustrates the case where the first display information and the second display information are map information, but the invention is not limited to this example; the first display information and the second display information may be, for example, music information.
 Music information can be composed of various pieces of information such as a song title, artist name, lyricist name, composer name, arranger name, the name of the album on which the song is recorded, release date, and publisher.
 More specifically, suppose that a list of artist names is displayed as the first display information in an expression format of text, icons, or the like. When the window opening operation is performed on an artist name in the list, the window 80 is displayed, and information on one of that artist's albums is displayed in the window 80 as the second display information. When the in-window information switching operation is performed, information on another album is displayed; at this time, the album information is switched in order of release date.
 The music information and the operations are not limited to this example; everything described above also applies when the first display information and the second display information are music information.
 The above also illustrates a contact-type touch panel as the input unit 14. A non-contact (also called three-dimensional (3D)) touch panel may instead be adopted as the input unit 14.
 With the non-contact type, the detectable region of the sensor group (in other words, the input region that can accept user input) is provided as a three-dimensional space above the input surface, and the position obtained by projecting a finger in that three-dimensional space onto the input surface is detected. Some non-contact types can also detect the distance from the input surface to the finger; with such a type, the finger position can be detected as a three-dimensional position, and the approach and retreat of the finger can also be detected. Various non-contact touch panel schemes have been developed; for example, the projected-capacitive scheme, one of the capacitive schemes, is known.
 Although a finger is given above as an example of the pointer the user uses for input, a body part other than a finger may also be used as the pointer, and a tool such as a touch pen (also called a stylus pen) may be used as the pointer.
 So-called motion sensing technology may also be used for the input unit 14. Various motion sensing schemes have been developed: for example, a scheme that detects the user's movement through a controller equipped with an acceleration sensor or the like that the user holds or wears, and a scheme that extracts feature points such as fingers from images captured by a camera and detects the user's movement from the extraction result. An input unit 14 using motion sensing technology also provides an intuitive operation environment.
 Although the combined input and display unit 20 is illustrated above, the display unit 12 and the input unit 14 may be arranged separately. Even in this case, an intuitive operation environment is provided by configuring the input unit 14 as a touch panel or the like.
 Within the scope of the invention, the embodiments may be modified or omitted as appropriate.
 10 information display device, 12 display unit, 14 input unit, 16 control unit, 18 storage unit, 20 input and display unit, 32 display surface, 34 input surface (input region), 80 window, 90 icon, S10, S30 processing flows.

Claims (23)

  1.  An information display device comprising:
     a display unit having a display surface;
     an input unit that receives user operations; and
     a control unit that, when a window opening operation instructing formation of a window on the display surface and designating a formation range of the window is performed on first display information on the display surface as the user operation, forms the window in accordance with the formation range designated by the window opening operation, and causes the window to display second display information that is related to the first display information but differs from the first display information in content or expression format.
  2.  The information display device according to claim 1, wherein the first display information is first map information visualized as a map, and the second display information is second map information related to the place where the window is located on the map.
  3.  The information display device according to claim 2, wherein the second map information includes any one of an underground map, an old map, an aerial photograph, and a deformed map.
  4.  The information display device according to claim 1, wherein, when an in-window information switching operation instructing switching of the display information in the window is performed at an arbitrary position in the window as the user operation, the control unit causes the window to display second display information of different content or a different expression format.
  5.  The information display device according to claim 1, wherein the second display information is a plurality of pieces of hierarchized display information, and wherein, when an in-window information switching operation instructing switching of the display information in the window is performed at an arbitrary position in the window as the user operation, the control unit switches the hierarchy level of the plurality of pieces of display information displayed in the window.
  6.  The information display device according to claim 5, wherein the plurality of pieces of display information are a plurality of pieces of map information hierarchized according to position in the direction of gravity.
  7.  The information display device according to claim 5, wherein the plurality of pieces of display information are a plurality of pieces of information hierarchized according to the passage of time.
  8.  The information display device according to claim 1, wherein, when the window opening operation is performed on an icon in the display area of the first display information, the control unit causes the window to display information associated with the icon as the second display information.
  9.  The information display device according to claim 8, wherein the first display information is visualized as a map, and the second display information associated with the icon is internal map information of a building existing at the position of the icon on the map.
  10.  The information display device according to claim 9, wherein the second display information is a plurality of pieces of hierarchized internal map information, and wherein, when an in-window information switching operation instructing switching of the display information in the window is performed at an arbitrary position in the window as the user operation, the control unit switches the hierarchy level of the plurality of pieces of internal map information displayed in the window.
  11.  The information display device according to claim 1, wherein the first display information is first map information, and the second display information is second map information related to a preset set location.
  12.  The information display device according to claim 11, wherein, when an in-window information switching operation instructing switching of the display information in the window is performed at an arbitrary position in the window as the user operation, the control unit causes the window to display second map information of different content or a different expression format.
  13.  The information display device according to claim 12, wherein the second map information includes any one of route information from the current location to the set location, weather information around the set location, geographic information around the set location, and traffic congestion information from the current location to the set location.
  14.  The information display device according to claim 1, wherein, when the first display information is displayed as a bird's-eye view, the control unit sets the shape of the window according to the depression angle of the bird's-eye view.
  15.  The information display device according to claim 1, wherein the first display information is visualized as a map, wherein the control unit converts the formation range designated by the window opening operation according to a predetermined conversion rule and forms the window in the converted range, and wherein the predetermined conversion rule includes: a first rule that the periphery of the formation range designated by the window opening operation is deformed to match partition boundaries in the map and the periphery of the display area of the map; and a second rule that the converted range includes the formation range designated by the window opening operation at a predetermined ratio or more.
  16.  The information display device according to claim 1, wherein, when the user operation is an operation for controlling the window, the control unit controls the window according to the control content assigned to the user operation.
  17.  The information display device according to claim 16, wherein, when the user operation is a one-point-moving pinch operation performed while a predetermined portion of the window is touched, the control unit enlarges or reduces the window according to the pinch direction.
  18.  The information display device according to claim 17, wherein the control unit enlarges or reduces the window by deforming the window in the pinch direction.
  19.  The information display device according to claim 16, wherein, when the user operation is a drag operation performed while a predetermined portion of the window is touched, the control unit moves the window in accordance with the drag operation.
  20.  The information display device according to claim 16, wherein the control unit erases the window when the user operation is a flick operation on the window.
  21.  The information display device according to claim 16, wherein the control unit erases the window when, as the user operation, a gesture operation is performed that enters the window from outside the window and exits to the outside of the window on a side different from the entry side.
  22.  The information display device according to claim 16, wherein the control unit erases the window when a gesture operation sandwiching the window is performed as the user operation.
  23.  An information display method comprising the steps of:
     (a) receiving a user operation;
     (b) identifying the user operation;
     (c) when a window opening operation instructing formation of a window on a display surface and designating a formation range of the window is performed on first display information on the display surface as the user operation, forming the window in accordance with the formation range designated by the window opening operation; and
     (d) displaying, in the window, second display information that is related to the first display information but differs from the first display information in content or expression format.
PCT/JP2012/076678 2012-10-16 2012-10-16 Information display device and information display method WO2014061096A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2014541845A JP6000367B2 (en) 2012-10-16 2012-10-16 Information display device and information display method
PCT/JP2012/076678 WO2014061096A1 (en) 2012-10-16 2012-10-16 Information display device and information display method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/076678 WO2014061096A1 (en) 2012-10-16 2012-10-16 Information display device and information display method

Publications (1)

Publication Number Publication Date
WO2014061096A1 true WO2014061096A1 (en) 2014-04-24

Family

ID=50487685

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/076678 WO2014061096A1 (en) 2012-10-16 2012-10-16 Information display device and information display method

Country Status (2)

Country Link
JP (1) JP6000367B2 (en)
WO (1) WO2014061096A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10177465A (en) * 1996-12-18 1998-06-30 Sharp Corp Information processor
AU733993B2 (en) * 1997-07-21 2001-05-31 Rovi Guides, Inc. Systems and methods for displaying and recording control interfaces
JP4182381B2 * 1999-03-31 2008-11-19 Sony Corporation Map display apparatus and method
WO2010147497A1 (en) * 2009-06-19 2010-12-23 Alcatel Lucent Gesture on touch sensitive input devices for closing a window or an application
JP2012184935A (en) * 2011-03-03 2012-09-27 Navitime Japan Co Ltd Navigation device, navigation system, navigation server, navigation method and program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04204481A (en) * 1990-11-30 1992-07-24 Hitachi Ltd Display method for map and drawing information
JPH11166836A (en) * 1997-12-03 1999-06-22 Sharp Corp Navigator device
JP2000076484A (en) * 1998-08-31 2000-03-14 Nec Corp Information display device and storage medium for recording its program
JP2011071627A (en) * 2009-09-24 2011-04-07 Mitsubishi Electric Corp Radio receiving apparatus

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015225397A * 2014-05-26 2015-12-14 COLOPL, Inc. User interface program
JP2016110362A * 2014-12-05 2016-06-20 Mitsubishi Electric Corporation Display control system and display control method
WO2018180521A1 * 2017-03-28 2018-10-04 Aisin AW Co., Ltd. Map display system and map display program
JPWO2018180521A1 * 2017-03-28 2019-11-07 Aisin AW Co., Ltd. Map display system and map display program
JP2019003530A * 2017-06-19 2019-01-10 Koei Tecmo Games Co., Ltd. User interface processing program, recording medium, and user interface processing method
JP2022510016A * 2018-12-14 2022-01-25 Guangzhou Xaircraft Technology Co., Ltd. Work route setting method and control equipment for work equipment

Also Published As

Publication number Publication date
JP6000367B2 (en) 2016-09-28
JPWO2014061096A1 (en) 2016-09-05

Similar Documents

Publication Publication Date Title
US10508926B2 (en) Providing navigation instructions while device is in locked mode
CN106201305B (en) Electronic device and control method thereof
CN105051494B (en) Mapping application with several user interfaces
KR101203271B1 (en) Advanced navigation techniques for portable devices
US20150338974A1 (en) Definition and use of node-based points, lines and routes on touch screen devices
CN104137043A (en) Method for human-computer interaction on a graphical user interface (gui)
JP6000367B2 (en) Information display device and information display method
CN104335012A (en) Voice instructions during navigation
KR20160062565A (en) Device and method for providing handwritten content
US9672793B2 (en) Map display device and map display method
TWI487881B (en) Electronic device, voice-activated method of providing navigational directions, method of providing navigational directions, and machine readable medium
TWI550568B (en) Mapping application with 3d presentation
JP5921703B2 (en) Information display device and operation control method in information display device
JP5496434B2 (en) Map display device and map display method
US11550459B2 (en) User interfaces for maps and navigation
JP5774133B2 (en) Map display device and map display method
KR20230010759A (en) User interfaces for viewing and refining the current location of an electronic device
US20230160714A1 (en) User interfaces for maps and navigation
JP6351432B2 (en) Map drawing system and map drawing method
CN104040294B (en) Map display and map-indication method
TWI533264B (en) Route display and review
JP5818917B2 (en) Map display device and map display method
CN117460926A (en) User interface for map and navigation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12886715

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014541845

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12886715

Country of ref document: EP

Kind code of ref document: A1