WO2013153568A1 - Video display device and integrated circuit - Google Patents


Info

Publication number
WO2013153568A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display device
video display
information
interpolation
Prior art date
Application number
PCT/JP2012/002472
Other languages
French (fr)
Japanese (ja)
Inventor
Gentaro Takeda (厳太朗 竹田)
Yuichi Ishikawa (石川 雄一)
Original Assignee
Panasonic Corporation (パナソニック株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Corporation
Priority to PCT/JP2012/002472 priority Critical patent/WO2013153568A1/en
Publication of WO2013153568A1 publication Critical patent/WO2013153568A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/0127 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/003 Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G 5/005 Adapting incoming signals to the display format of the display terminal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/139 Format conversion, e.g. of frame-rate or size
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G09G 2340/0407 Resolution change, inclusive of the use of different resolutions for different screen areas
    • G09G 2340/0435 Change or adaptation of the frame rate of the video stream
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2360/00 Aspects of the architecture of display systems
    • G09G 2360/02 Graphics controller able to handle multiple formats, e.g. input or output formats
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2360/00 Aspects of the architecture of display systems
    • G09G 2360/08 Power processing, i.e. workload management for processors involved in display operations, such as CPUs or GPUs
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2370/00 Aspects of data communication
    • G09G 2370/02 Networking aspects
    • G09G 2370/027 Arrangements and methods specific for the display of internet documents
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/34 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators for rolling or scrolling
    • G09G 5/346 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators for rolling or scrolling for systems having a bit-mapped display memory

Definitions

  • The present invention relates to a video display device and an integrated circuit, and more particularly to a video display device and an integrated circuit that perform frame interpolation for increasing a frame rate.
  • In a display device, when a video signal having a frame rate lower than the frame rate that the display device can display is input, frame rate conversion has conventionally been performed to increase the frame rate so that the video is displayed smoothly.
  • As a technique related to frame rate conversion for increasing the frame rate, there is a frame interpolation technique based on motion compensation: two consecutive images among the plurality of images constituting a video are compared to obtain motion information including a motion direction and magnitude, an interpolation image is generated in accordance with the calculated motion direction and magnitude, and the interpolation image is inserted between the two consecutive images (for example, see Patent Document 1).
  • However, when the video is a moving image for which motion information is difficult to calculate, such as an image displaying newspaper text or text containing small characters, a frame interpolation error may occur or the image may be blurred.
  • An aspect of the video display device is a video display device that performs frame interpolation for increasing a frame rate, comprising: a rendering processing unit that, based on an input first image signal, generates at a first frame rate a first image that can be operated by the user; a drawing control unit that, when the user performs an operation on the first image via an operation terminal, acquires operation information indicating the operation and generates control information including the operation information; and an interpolated image generating unit that generates an interpolated image of the first image based on the first image and the control information, generates, at a second frame rate higher than the first frame rate, a video display signal including the first image and the interpolated image of the first image, and outputs the video display signal to a display device.
  • According to the video display device of the present invention, it is possible to perform frame interpolation satisfactorily even for video for which motion information is difficult to calculate.
  • FIG. 1 is a schematic block diagram illustrating a schematic configuration example of a video display device and its peripheral devices.
  • FIG. 2A is a flowchart illustrating an example of the user-operation detection operation of the video display device.
  • FIG. 2B is a flowchart illustrating an example of a processing operation of the video display device.
  • FIG. 3A is a diagram illustrating an example of a composite image.
  • FIG. 3B is a diagram illustrating an example of an interpolated image of a composite image.
  • FIG. 3C is a diagram illustrating an example of a composite image.
  • FIG. 4A is a diagram illustrating an example of a composite image.
  • FIG. 4B is a diagram illustrating an example of an interpolated image of a composite image.
  • FIG. 4C is a diagram illustrating an example of a composite image.
  • FIG. 5 is a diagram illustrating an example of a composite image when there are a plurality of object images.
  • FIG. 6A is a diagram illustrating an example of a left-eye image.
  • FIG. 6B is a diagram illustrating an example of a right-eye image.
  • FIG. 7 is a block diagram illustrating a TV as an example of a display device including a video display device.
  • FIG. 8 is a block diagram showing a mobile phone as an example of the operation terminal.
  • In recent years, an operation terminal such as a smartphone or tablet is connected to a display device such as a TV or projector, and the image displayed on the display device can be operated from the operation terminal.
  • In such display devices, frame rate conversion that increases the frame rate by motion compensation has conventionally been performed.
  • Video display devices that perform frame interpolation by motion compensation include, for example, GPU (Graphics Processing Unit) that operates at a lower frame rate than the display panel, FRC (Frame Rate Converter) that generates interpolated images, and various functions of the video display device And a central processing unit (CPU) that manages the system.
  • GPU: Graphics Processing Unit
  • FRC: Frame Rate Converter
  • CPU: Central Processing Unit
  • In the case of a TV having the above-described double-speed panel, the GPU generates an object image (an example of a first image) at a low frame rate from an object signal indicating a Web page, and generates a TV image (an example of a second image) at a low frame rate from a broadcast signal.
  • the object image is, for example, an image of a net browser on which a web page is displayed.
  • the GPU generates a composite image at a low frame rate by combining the object image with the TV image.
  • the FRC generates an interpolation image by motion compensation from two temporally continuous composite images. Specifically, the FRC acquires a composite image from the GPU at a low frame rate, compares two temporally continuous composite images, and generates an interpolated image of the composite image. Further, the FRC generates a video display image by inserting the interpolated image of the generated composite image between the two composite images used for generating the interpolated image, and outputs it to the display panel. With this configuration, an image can be displayed at a frame rate higher than the frame rate of the input video signal and object signal, that is, the frame rate of the display panel.
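At the frame level, the FRC's insertion of one interpolated image between each pair of consecutive composite images can be sketched as follows. This is an illustration only, not the patent's implementation: a simple per-pixel average stands in for motion-compensated interpolation, and frames are plain lists of pixel values.

```python
def double_frame_rate(frames):
    """frames: list of frames (each a flat list of pixel values) at the
    low input frame rate. Returns a list at twice the rate, with one
    interpolated frame inserted between each consecutive pair."""
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        # Interpolated frame inserted between the two source frames;
        # a real FRC would use motion compensation here.
        out.append([(p + n) / 2 for p, n in zip(prev, nxt)])
    out.append(frames[-1])
    return out
```

For n input frames this yields 2n - 1 output frames, matching a 50 Hz to 100 Hz conversion over a bounded clip.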
  • In such frame interpolation, noise due to a frame interpolation error may occur, the display of the Web page may be blurred, or movement may appear awkward.
  • If the GPU is a high-speed GPU that operates at the same frame rate as the display device, frame interpolation by motion compensation becomes unnecessary, which solves the problems of frame interpolation errors and display blurring; however, a high-speed GPU is quite expensive and considerably increases manufacturing costs.
  • A video display device according to an aspect is a video display device that performs frame interpolation to increase a frame rate, comprising: a rendering processing unit that generates, based on an input first image signal, a first image that can be operated by a user; a drawing control unit that acquires operation information indicating the operation and generates control information including the operation information; and an interpolation image generation unit that generates an interpolated image of the first image, generates, at a second frame rate higher than the first frame rate, a video display signal including the first image and the interpolated image of the first image, and outputs the video display signal to a display device.
  • Since the video display device is configured in this way, when an operation such as moving or scrolling the first image (object image) is performed, the interpolated image can be generated based not on motion compensation but on operation information indicating the operation by the user. Since operation information is used for generating the interpolated image instead of motion compensation, frame interpolation errors can be prevented. Specifically, smooth movement of the first image and scrolling without display blur in the first image can be realized. In addition, since the video display device having the above configuration does not require a high-speed GPU, an increase in manufacturing cost can be suppressed.
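The core idea above, deriving the object's interpolated position from the user's operation vector instead of from motion estimation, can be sketched minimally. The function name and the half-step assumption (the interpolated frame sits midway between two input frames) are ours, not the patent's:

```python
def interpolated_position(pos, op_vector):
    """pos: (x, y) of the object image in the earlier input frame.
    op_vector: (dx, dy) pixels the user's operation moves the object
    per input frame, taken from the operation information.
    The interpolated frame lies halfway between the two input frames,
    so the object is shifted by half the operation vector."""
    return (pos[0] + op_vector[0] / 2, pos[1] + op_vector[1] / 2)
```

Because the shift comes directly from the known operation, no motion search is needed and no interpolation error can be introduced for the object image.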
  • The drawing processing unit may further generate, based on an input second image signal, a second image that is not a target of the user operation, and combine the second image and the first image into a composite image; the drawing control unit may output the control information and specific information for identifying the first image to the interpolation image generation unit; and the interpolation image generation unit may specify the first image from the composite image using the specific information and generate an interpolated image of the first image based on the specified first image and the control information.
  • With this configuration, when, for example, the first image is superimposed on the second image, or the second image and the first image are displayed side by side, the first image can be scrolled and moved smoothly without display blur.
  • the specific information may be configured to include a position of the first image with respect to the second image and a size of the first image.
  • control information may include an operation direction and an operation speed of the first image with respect to the second image as the operation information.
  • The drawing control unit may calculate, based on the operation information, an operation direction and an operation speed in a three-dimensional space for each of the right-eye image and the left-eye image constituting the first image, and the interpolation image generation unit may generate a right-eye interpolation image based on the right-eye image and its operation direction and speed, and a left-eye interpolation image based on the left-eye image and its operation direction and speed.
  • Since the video display device is configured in this way, even when a three-dimensional video (3D video) is displayed, smooth display without display blur is possible based on the user's operation information.
  • When the first image signal indicates an object image that is a still image, the drawing processing unit may be configured to generate the first image by drawing the object image at the first frame rate.
  • An integrated circuit according to an aspect is an integrated circuit for a video display device that performs frame interpolation for increasing a frame rate, comprising: a rendering processing unit that generates, based on an input first image signal, a first image that can be operated by a user; a drawing control unit that acquires operation information indicating the operation and generates control information including the operation information; and an interpolated image generation unit that generates an interpolated image of the first image and outputs a video display signal including the first image and the interpolated image of the first image to the display device.
  • The present invention can be realized not only as an apparatus but also as a method whose steps correspond to the processing units constituting the apparatus, as a program for causing a computer to execute those steps, as a computer-readable recording medium such as a CD-ROM on which the program is recorded, or as information, data, or a signal indicating the program. These programs, information, data, and signals may be distributed via a communication network such as the Internet.
  • Embodiment 1 The video display apparatus according to Embodiment 1 will be described with reference to FIGS. 1 to 4C.
  • the video display device 30 is a device that performs frame interpolation to increase the frame rate.
  • In the present embodiment, an example will be described in which an object image that can be operated by the user and a video image that is not an operation target of the user are simultaneously displayed on the display device 20, and the object image is operated with the operation terminal 50.
  • FIG. 1 is a block diagram illustrating a configuration example of a display device 20 including a video display device 30, a video input device 10 that is a peripheral device, and an operation terminal 50.
  • a case where the display device 20 is a TV shown in FIG. 7 will be described as an example.
  • the video input device 10 outputs a video signal STV (an example of a second image signal) that is not an operation target of the user to the display device 20.
  • the video input device 10 is a TV antenna and the video signal STV is a broadcast wave of TV broadcasting will be described as an example.
  • the video input device 10 is not limited to a TV antenna, and may be a set-top box for cable broadcasting (cable television).
  • In the present embodiment, a case will be described as an example in which the operation terminal 50 is a multi-function mobile phone having a touch panel as shown in FIG. 7, and is configured to connect to the display device 20 by an AV output cable, an HDMI cable (wired), or infrared communication (wireless).
  • the operation terminal 50 is configured to output an object signal SOB (corresponding to a first image signal) that can be operated by the user and an operation detection signal SOD to the display device 20.
  • In the present embodiment, an example will be described in which the object signal SOB is output from the mobile phone to the video display device 30.
  • the object signal SOB is a signal indicating data such as HTML indicating a Web page on the Internet and an image of a net browser for displaying the Web page.
  • the Web page is configured to include text composed of small characters.
  • the Web page data is data of the entire Web page, and may include a range not displayed on the network browser.
  • When the touch panel detects an operation on the object image by the user, the operation terminal 50 immediately outputs an operation detection signal SOD indicating the operation content to the video display device 30.
  • the operation terminal 50 is not limited to a mobile phone.
  • For example, the operation terminal 50 may be a remote controller for operating a TV, which is an example of the display device 20, a PC (Personal Computer) mouse connected to the display device 20, or a tablet or camera connected to the display device 20; any terminal that transmits the operation content intended by the user may be used.
  • When a camera is used, for example, a moving image captured by the camera is output to the video display device 30 as operation detection information, and the operation detection unit 34 of the video display device 30, described later, may be configured to analyze the moving image and identify the operation content from the user's movement.
  • In the present embodiment, an example will be described in which the display device 20 is a TV including a video display device 30 and a display panel 40.
  • the display device 20 is not limited to the TV, and may be a projector.
  • The display device 20 can simultaneously display on the display panel 40 a TV broadcast screen (corresponding to a video image or a second image) and a net browser (object image) on which a Web page is displayed.
  • the case where the video display device 30 is mounted on the display device 20 will be described as an example, but the present invention is not limited to this.
  • The video display device 30 may be configured to be connectable to the display device 20, or may be mounted on another device such as the user's operation terminal 50.
  • the display panel 40 is configured to be able to display an image at a frame rate higher than a broadcast wave of TV broadcast (for example, 29.97 fps or 59.9 fps in terrestrial digital broadcast), for example, 120 fps.
  • The video display device 30 includes a drawing processing unit 31 that operates at a first frame rate lower than that of the display panel 40, a drawing control unit 33 that controls each function of the video display device 30, an operation detection unit 34 that detects a user operation on the operation terminal 50, and an interpolated image generation unit 32 that performs frame interpolation to increase the frame rate.
  • The drawing processing unit 31 is configured by a GPU; it generates an object image that can be operated by the user from the input object signal SOB, generates a video image that is not a user operation target from the input video signal STV, and generates a composite image by combining the video image and the object image. Note that when the video signal STV or the object signal SOB indicates a still image, the rendering processing unit 31 of the present embodiment renders the still image at the first frame rate and treats it as the second image or the first image.
  • The drawing control unit 33 is composed of a CPU and manages specific information SLO for identifying an object image.
  • the specific information SLO is the position (for example, coordinates on the video image) and size of the object image on the video image.
  • the drawing control unit 33 outputs specific information SLO to the drawing processing unit 31 based on a request from the drawing processing unit 31.
  • the drawing control unit 33 acquires operation information SOI indicating the operation from the operation detection unit 34.
  • the operation information SOI here is, for example, quantification data of the operation direction and operation speed.
  • the drawing control unit 33 converts the operation direction and operation speed quantification data included in the operation information SOI into a vector amount usable by the interpolation image generation unit 32.
  • the drawing control unit 33 generates the control information SC including the specific information SLO and the operation information SOI after converting the quantified data into the vector amount, and outputs the control information SC to the interpolation image generation unit 32.
  • the interpolation image generation unit 32 is configured by FRC, and generates an interpolation image of the object image using the control information SC output from the drawing control unit 33. Furthermore, the interpolation image generation unit 32 generates an interpolation image of the video image by motion compensation. The interpolated image generating unit 32 combines the interpolated image of the video image and the interpolated image of the object image to generate an interpolated image of the combined image. The interpolated image generation unit 32 generates a video display signal including the composite image and the interpolated image of the composite image at the second frame rate, and outputs the generated video display signal to the display device.
  • When the operation detection unit 34 receives the operation detection signal SOD output from the operation terminal 50, it acquires quantified data of the operation direction and the operation speed from the operation detection signal SOD, and outputs the data to the drawing control unit 33 as operation information SOI.
  • The drawing processing unit 31, the drawing control unit 33, and the interpolated image generation unit 32 are each configured to execute processing in parallel.
  • In the present embodiment, the processing speed of the FRC is assumed to be 50 Hz and the processing speed of the display panel 40 to be 100 Hz, although the actual TV broadcast frame rate is not necessarily the same.
  • In the present embodiment, an example will be described in which the video image 41 is displayed on the entire screen of the display panel 40 and the object image 42 is displayed on a part of the display panel 40, in a layer above the video image 41.
  • the size of the object image 42 is set smaller than the video image 41 so that the visibility of the video image 41 is not significantly impaired.
  • a case where there is one object image 42 and the size of the object image 42 is constant will be described as an example.
  • The video display device 30 executes a basic operation for generating an interpolated image of a composite image based on the control information, and an object operation detection operation for detecting a user operation.
  • the basic operation is an operation that is always executed while the video signal STV and the object signal SOB are input.
  • the object operation detection operation is an operation when a user operation is detected, and is executed asynchronously with the basic operation.
  • FIG. 2A is a flowchart showing the object operation detection operation, and FIG. 2B is a flowchart showing the basic operation.
  • First, the operation detection unit 34 of the video display device 30 receives the operation detection signal SOD that is output immediately when the user operates the operation terminal 50 (mobile phone) (step S11).
  • Upon receiving the operation detection signal SOD, the operation detection unit 34 generates operation information SOI including the operation type and quantified data obtained by quantifying the operation speed and the operation direction from the operation detection signal SOD, and outputs it to the drawing control unit 33 (step S12).
  • The operation type is the type of operation on the object image; a scroll operation for scrolling the Web page displayed on the net browser and a move operation for moving the net browser on the video image 41 will be described as examples. For example, when the first contact detection position is on the frame portion of the net browser, the operation may be determined to be a moving operation, and when it is on the Web page portion excluding the frame portion, it may be determined to be a scroll operation.
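The move/scroll determination from the first contact position could be sketched as follows. The rectangle representation, names, and return values are illustrative assumptions, not the patent's interface:

```python
def classify_operation(touch, browser_rect, frame_width):
    """touch: (x, y) of the first contact detection position.
    browser_rect: (x, y, w, h) of the net browser on the screen.
    frame_width: thickness in pixels of the browser's frame portion.
    Returns 'move' for a touch on the frame portion, 'scroll' for a
    touch on the Web page portion, or None outside the browser."""
    x, y, w, h = browser_rect
    tx, ty = touch
    if not (x <= tx < x + w and y <= ty < y + h):
        return None  # outside the net browser entirely
    # Inside the inner region = Web page portion excluding the frame.
    inner = (x + frame_width <= tx < x + w - frame_width and
             y + frame_width <= ty < y + h - frame_width)
    return 'scroll' if inner else 'move'
```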
  • the quantification data is obtained from, for example, the speed of the user's operation and the direction of the operation.
  • the drawing control unit 33 converts the quantified data of the operation speed and the operation direction into a vector amount that can be used by the interpolation image generation unit 32.
  • the vector amount here is expressed using, for example, the operation direction and the pixel amount.
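Converting the quantified direction and speed into such a vector amount might look like the following sketch; the choice of degrees for direction and pixels per frame for speed is our assumption:

```python
import math

def to_vector_amount(direction_deg, speed_px):
    """Convert a quantified operation direction (degrees, 0 = +x axis)
    and operation speed (pixels per input frame) into a (dx, dy)
    vector amount usable by the interpolation image generation unit."""
    rad = math.radians(direction_deg)
    return (speed_px * math.cos(rad), speed_px * math.sin(rad))
```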
  • the drawing control unit 33 generates the control information SC including the specific information SLO and the operation information SOI after converting the quantified data into a vector amount, and outputs the control information SC to the interpolated image generation unit 32 (step S13).
  • the drawing control unit 33 updates the position of the object image 42 on the video image 41 in the specific information SLO to be managed.
  • When the detection frequency for the user operation in the operation terminal 50 or the operation detection unit 34 is lower than the frequency of the display panel 40, the drawing control unit 33 interpolates the vector amount in accordance with the frequency of the display panel 40. The vector amount is interpolated using, for example, linear interpolation or Bezier curve interpolation.
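The linear-interpolation case can be sketched directly: between two detected vector amounts, intermediate vectors are generated so that each panel frame gets its own. Names are illustrative only:

```python
def interpolate_vector_amounts(v0, v1, n_intermediate):
    """Linearly interpolate n_intermediate (dx, dy) vector amounts
    between two detected operation vectors v0 and v1, as when the
    operation detection frequency is lower than the panel frequency."""
    n = n_intermediate + 1
    return [
        (v0[0] + (v1[0] - v0[0]) * k / n,
         v0[1] + (v1[1] - v0[1]) * k / n)
        for k in range(1, n)
    ]
```

Bezier curve interpolation, also mentioned in the text, would replace the linear blend with a curve through control points but keep the same calling shape.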
  • the video display device 30 receives the video signal STV from the video input device 10 and the object signal SOB from the mobile terminal as shown in FIG. 2B (step S21). Note that the video signal STV is continuously input to the video display device 30 because it is a TV broadcast signal in this embodiment. Since the object signal SOB is data of a web page, new web page data is input to the video display device 30 when the web page is opened.
  • the drawing processing unit 31 generates an object image 42 that can be operated by the user from the object signal SOB, generates a video image 41 that is not an operation target of the user from the video signal STV, and combines the object image 42 with the video image 41. (Step S22).
  • FIGS. 3A and 3C show an example of the composite image 43 when the Web page is scrolled, and FIGS. 4A and 4C show an example of the composite image 43 when the net browser is moved.
  • the drawing processing unit 31 acquires the specific information SLO including the position and size of the object image 42 from the drawing control unit 33 for each video image 41. Next, the drawing processing unit 31 generates a composite image 43 by superimposing the object image 42 on the position on the video image 41 indicated by the specific information SLO.
  • the position of the object image 42 is a coordinate on the video image 41 where the pixel at the upper left corner of the net browser that is the object image 42 is displayed.
  • For example, when the coordinates of the pixel at the upper left corner of the video image 41 are (0, 0), the coordinates (x1, y1) of the pixel at the upper left corner of the object image 42 are expressed using the number of pixels x1 rightward in the drawing and the number of pixels y1 downward in the drawing from that pixel.
  • The position of the object image 42 is not limited to the coordinates of the pixel at the upper left corner, and may be the coordinates of another pixel of the object image 42, such as the coordinates of its center pixel. Further, the coordinates (0, 0) may be set at another position of the video image 41. Furthermore, the position of the object image 42 may be expressed using a distance r from the coordinates (0, 0) and an angle θ from the x-axis.
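The alternative polar representation mentioned above (distance r from (0, 0) and angle θ from the x-axis) is the standard Cartesian-to-polar conversion:

```python
import math

def to_polar(x, y):
    """Position of the object image as (r, theta): the distance from
    the video image's (0, 0) corner and the angle from the x-axis."""
    return (math.hypot(x, y), math.atan2(y, x))
```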
  • In the present embodiment, the size of the object image 42 is the length (number of pixels) in the x-axis direction (horizontal direction in the drawing) and the length (number of pixels) in the y-axis direction (vertical direction in the drawing) of the net browser, but the present invention is not limited to this.
  • the interpolation image generation unit 32 acquires the control information SC from the drawing control unit 33, and generates an interpolation image based on the control information SC (step S23).
  • Since the processing speed of the drawing processing unit 31 is 50 Hz and the processing speed of the display panel 40 is 100 Hz, the interpolation image generation unit 32 performs frame interpolation that generates one interpolated image between two temporally consecutive composite images. Note that the number of interpolated images generated and their insertion locations for the entire video signal STV are appropriately set based on the frame rate of the input video signal STV and the frame rate that can be displayed on the display panel 40.
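For integer rate ratios, the number of interpolated images per pair of input frames follows directly from the two rates (50 Hz to 100 Hz gives one). A one-line sketch, with names of our own choosing:

```python
def interpolated_frames_per_pair(rate_in_hz, rate_out_hz):
    """Interpolated images to insert between each pair of input frames
    when the output rate is an integer multiple of the input rate."""
    assert rate_out_hz % rate_in_hz == 0
    return rate_out_hz // rate_in_hz - 1
```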
  • control information SC includes specific information SLO of the object image 42 in each of the preceding and succeeding composite images 43.
  • When the object image 42 is operated, the control information SC includes the operation type information, the operation amount, and the quantified data of the operation direction in addition to the two pieces of specific information SLO.
  • the control information SC is acquired every time the interpolation image 42i of the object image is generated.
  • when the operation type information is not included in the control information SC, that is, when the object image 42 is not being operated, the interpolated image generation unit 32 first specifies the object image 42 in each of the two temporally continuous composite images 43, based on the position and size of the object image 42 indicated by the specific information SLO. After specifying the object image 42, the interpolated image generation unit 32 generates an interpolated image 41i of the video image by motion compensation.
  • the interpolation image generating unit 32 acquires the specified object image 42 as an interpolation image 42i of the object image from any one of the two temporally continuous composite images 43.
  • since the object image 42 is not being operated, the object image 42 in the composite image 43 and the interpolation image 42i of the object image are identical.
  • the position of the object image 42 in the interpolated image 41i of the video image is the same as its position in the video image 41.
  • the interpolated image generation unit 32 superimposes the interpolated image 42i of the object image on the interpolated image 41i of the video image to generate an interpolated image 43i of the composite image.
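The no-operation path above — interpolate only the video image, reuse the object image unchanged, and superimpose it at the same position — can be sketched as follows. Frames are modeled as 2D lists of pixel values, a per-pixel average stands in for real motion compensation, and all names are illustrative.

```python
def crop(frame, x, y, w, h):
    """Extract the object image from a composite frame (a 2D list of pixel
    values) using the position and size from the specific information."""
    return [row[x:x + w] for row in frame[y:y + h]]

def paste(frame, patch, x, y):
    """Superimpose an object image (or its interpolation) onto a frame."""
    out = [row[:] for row in frame]
    for j, prow in enumerate(patch):
        out[y + j][x:x + len(prow)] = prow
    return out

def interpolate_no_operation(prev_frame, next_frame, slo):
    """When no operation is in progress: the object image taken from either
    composite image is reused unchanged, and only the video image underneath
    is interpolated (a simple average stands in for motion compensation)."""
    x, y, w, h = slo
    video_i = [[(a + b) // 2 for a, b in zip(ra, rb)]
               for ra, rb in zip(prev_frame, next_frame)]
    obj = crop(prev_frame, x, y, w, h)  # identical in both composite images
    return paste(video_i, obj, x, y)
```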
  • when the operation information SOI is included in the control information SC and the operation type information indicates a scroll operation, the interpolated image generation unit 32 first specifies the object image 42 in each of the two temporally continuous composite images 43, based on the position and size of the object image 42 indicated by the specific information SLO. In the present embodiment, the object image 42 is specified from the composite images 43 shown in FIGS. 3A and 3C. After specifying the object image 42, the interpolated image generation unit 32 generates an interpolated image 41i of the video image by motion compensation.
  • after specifying the object image 42, the interpolated image generation unit 32 generates, based on the vector amount of the operation information SOI, an interpolation image 42i of the object image in which the Web page has been scrolled, from the object images 42 of the two temporally continuous composite images 43.
  • the interpolated image generation unit 32 scrolls the object image 42 of the previous composite image 43 shown in FIG. 3A according to the vector amount.
  • specifically, the interpolated image generation unit 32 slides the Web page upward or downward by the amount indicated by the vector amount.
  • the interpolated image generating unit 32 thus obtains, from the object image 42 of the previous composite image 43, the portion (the image indicated by the broken line in FIG. 3A) that remains in the interpolated image 42i of the object image after the scroll operation.
  • for the portion that is missing due to the slide of the image (the portion indicated by the one-dot chain line in FIG. 3C), the interpolated image generation unit 32 uses the object image 42 of the next composite image 43.
  • the position of the interpolation image 42i of the object image is the same as the position of the object image 42 in the preceding and succeeding composite images 43.
  • the specific information includes the data of the Web data display part (the part to be scrolled) and the non-scrolling part.
  • the interpolated image generating unit 32 superimposes the interpolated image 42i of the object image on the interpolated image 41i of the video image, and generates an interpolated image 43i of the composite image.
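The scroll case above combines two sources: the part of the previous object image that remains visible after the slide, and the part newly revealed, taken from the next object image. The sketch below models the object image as a list of rows and places the interpolated frame at the midpoint of the scroll; halving the vector amount is an assumption for the single-interpolated-frame case, and the function name is hypothetical.

```python
def interpolate_scroll(prev_obj, next_obj, total_dy):
    """prev_obj shows the page before the scroll and next_obj after an
    upward scroll of total_dy rows (total_dy >= 0 and even). The interpolated
    object image sits halfway, at dy = total_dy // 2: its top part is the
    previous object image slid up by dy (the broken-line part of FIG. 3A),
    and the bottom dy rows are taken from the next object image (the one-dot
    chain-line part of FIG. 3C)."""
    h = len(prev_obj)
    dy = total_dy // 2
    kept = prev_obj[dy:]                    # still visible after the slide
    fill = next_obj[h - total_dy:h - dy]    # rows newly revealed by the scroll
    return kept + fill
```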
  • the interpolation image generation unit 32 When the operation information SOI is included in the control information SC and the operation type information is information indicating the movement operation of the network browser, the interpolation image generation unit 32 firstly determines the position and size of the object image 42 indicated by the specific information SLO. Based on this, the object image 42 is specified from each of the synthesized images 43 before and after the interpolation image to be inserted. It should be noted that the position of the object image 42 on the video image 41 is the same as that of the previous composite image 43, as can be seen from FIG. 4A and FIG. This is different from the later composite image 43. After specifying the object image 42, the interpolated image generation unit 32 generates an interpolated image 41i of the video image by motion compensation.
  • after specifying the object image 42, the interpolated image generation unit 32 acquires the specified object image 42, from either of the preceding and succeeding composite images 43, as the interpolated image 42i of the object image.
  • since neither the Web page displayed on the net browser nor its display location is changed, the object image 42 in the composite image 43 and the interpolation image 42i of the object image are identical.
  • the interpolation image generation unit 32 calculates the position of the object image 42 in the interpolation image 43i of the composite image by moving the position of the object image 42 in the previous composite image 43 by the vector amount of the operation information SOI.
  • the interpolated image generation unit 32 superimposes the acquired object image 42 at the calculated position to generate the interpolated image 43i of the composite image.
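For the movement operation, only the position of the (unchanged) object image differs between the surrounding composite images. A minimal sketch of computing the interpolated position: taking the midpoint of the two positions is this sketch's assumption for the single-interpolated-frame case, and the function name is hypothetical.

```python
def interpolated_object_position(prev_pos, next_pos):
    """Position of the object image in the interpolated composite image,
    halfway between its positions in the previous and next composite images
    (integer pixel coordinates)."""
    (x0, y0), (x1, y1) = prev_pos, next_pos
    return ((x0 + x1) // 2, (y0 + y1) // 2)

# Net browser dragged from (100, 50) to (140, 90):
print(interpolated_object_position((100, 50), (140, 90)))  # (120, 70)
```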
  • after generating the interpolated image 43i of the composite image, the interpolated image generating unit 32 generates a video display signal in which the generated interpolated image 43i is inserted between the composite images 43, and outputs it to the display panel 40 (step S24).
  • in this way, the object image 42 of the preceding or succeeding composite image 43, which has no display blur, is used as-is as the interpolation image 42i of the object image, or the object images 42 of the preceding and succeeding composite images 43 are combined as-is to form the interpolation image 42i of the object image. Consequently, the interpolation image 43i of the composite image is free of display blur, and the accuracy of the interpolation image can be improved.
  • the video display device 30 of the present embodiment differs from the video display device 30 of the first embodiment in that a plurality of object images 42a to 42g can be displayed on the screen of the display panel 40. Therefore, in the present embodiment, the specific information SLO for specifying the object images 42a to 42g includes identification information of each of the object images 42a to 42g.
  • the display device 20 of the present embodiment is connected to the video input device 10 and the operation terminal 50, and includes the video display device 30 and the display panel 40, as in the first embodiment.
  • a case of a TV is described as an example.
  • the configurations of the video input device 10, the operation terminal 50, and the display panel 40 are the same as those in the first embodiment.
  • the video display device 30 includes a drawing processing unit 31 that operates at a lower frame rate than the display panel 40, a drawing control unit 33 that controls each function of the video display device 30, an operation detection unit 34 that detects a user operation on the operation terminal 50, and an interpolated image generation unit 32 that performs frame interpolation to increase the frame rate.
  • the drawing processing unit 31 is configured by a GPU as in the first embodiment. As shown in FIG. 5, the drawing processing unit 31 of the present embodiment generates a plurality of object images 42a to 42g that can be operated by the user from a plurality of input object signals SOB, and generates from the input video signal STV a video image 41 that is not an operation target of the user. The drawing processing unit 31 further synthesizes the video image 41 and the plurality of object images 42a to 42g to generate a composite image 43.
  • the object image 42a is a net browser for displaying a Web page
  • the object images 42b to 42g are content images.
  • the content image is, for example, a thumbnail image of a movie.
  • a movie guidance screen (such as an advertisement screen or a movie data sales screen) is displayed.
  • the object signal SOB and the specific information SLO include identification information for identifying the corresponding object image.
  • the drawing processing unit 31 acquires a plurality of specific information SLOs from the drawing control unit 33, and specifies the specific information SLO corresponding to each of the plurality of object images 42a to 42g using the identification information.
  • the drawing processing unit 31 identifies the position and size of each of the plurality of object images 42a to 42g based on the specific information SLO, and synthesizes the video image 41 and the plurality of object images 42a to 42g to generate a composite image 43.
  • in a desired background image 44, a display area for displaying the video image 41, a display area for displaying the net browser, and display areas for displaying a plurality of content images are set.
  • the display area of the video image 41 is set in the upper left part of the screen
  • the display area of the net browser is set in the upper right part of the screen
  • the display areas of a plurality of content images are set in the lower part of the screen.
  • the drawing control unit 33 is configured by a CPU as in the first embodiment.
  • the drawing control unit 33 of this embodiment manages specific information SLO including identification information, position, and size of the object signal SOB for each object signal SOB.
  • the drawing control unit 33 outputs specific information SLO to the drawing processing unit 31 based on a request from the drawing processing unit 31.
  • in the object operation detection operation, as in the first embodiment, when the user operates the object image 42a via the operation terminal 50, the drawing control unit 33 of the present embodiment acquires from the operation detection unit 34 the operation information SOI, which includes identification information indicating the operation-target object image 42a and quantification data indicating the operation content.
  • the drawing control unit 33 generates control information SC including the specific information SLO and the operation information SOI, and outputs the control information SC to the interpolation image generation unit 32. When a plurality of object images are selected, control information SC is generated and output for all the selected object images.
  • the interpolation image generation unit 32 is configured by an FRC; it generates interpolation images based on the plurality of object images 42a to 42g and the control information SC, and outputs to the display device a video display signal composed of the composite images 43 and the interpolated images 43i of the composite images.
  • the interpolated image generation unit 32 specifies the plurality of object images 42a to 42g in each of the preceding and succeeding composite images 43 based on the control information SC from the drawing control unit 33. After specifying the plurality of object images 42a to 42g, the interpolated image generating unit 32 generates an interpolated image 41i of the video image by motion compensation. Further, as shown in FIG. 5, the interpolated image generating unit 32 synthesizes the interpolated image 41i of the video image and the plurality of specified object images 42a to 42g with the background image 44 to generate an interpolated image 43i of the composite image.
  • when the operation information SOI is included in the control information SC, the interpolated image generation unit 32 first specifies the plurality of object images 42a to 42g in each of the composite images 43. After specifying them, it generates an interpolated image 41i of the video image by motion compensation, and then generates, for each object image to be operated, an interpolation image 42i of the object image corresponding to the operation type. The method for generating the interpolation image 42i of the object image corresponding to the operation type is the same as in the first embodiment.
  • for an object image that is not an operation target, the interpolation image generation unit 32 uses the object image of the preceding or succeeding composite image 43 as the interpolation image 42i of that object image.
  • the interpolated image generating unit 32 superimposes the interpolated image 41i of the video image and the interpolated images 42i of the object images on the background image 44 to generate an interpolated image 43i of the composite image.
  • after the interpolated image 43i of the composite image is generated, the interpolated image generating unit 32 generates a video display signal in which the generated interpolated image 43i is inserted between the composite images 43, and outputs the video display signal to the display panel 40.
  • the operation detection unit 34 receives the operation detection signal SOD output from the operation terminal 50, specifies the object image to be operated from the operation detection signal SOD, and detects the operation content. Further, the operation detection unit 34 outputs operation information SOI including identification information for specifying the object image and quantification data indicating the operation content to the drawing control unit 33.
  • when receiving the operation detection signal SOD output from the operation terminal 50, the operation detection unit 34 first specifies the object image to be operated from among the plurality of object images 42a to 42g based on the operation detection signal SOD. Specifically, for example, in the case of a mobile-phone touch panel, the object image at the first contact detection position is determined to be the object image to be operated. Furthermore, the operation detection unit 34 detects the operation type and the data obtained by quantifying the operation speed and the operation direction from the operation detection signal SOD. The detection methods for the operation type and the quantification data are the same as in the first embodiment. The operation detection unit 34 generates operation information SOI including the identification information of the operation-target object image, the operation type, and the quantification data, and outputs the operation information SOI to the drawing control unit 33.
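The hit-test in the touch-panel example above — choosing the operation-target object image from the first contact position — can be sketched as follows. The mapping of identification information to rectangles and the function name are assumptions of this sketch.

```python
def object_at(objects, touch_x, touch_y):
    """Return the identification information of the first object image whose
    rectangle (x, y, width, height) contains the first contact position,
    or None when the touch hits no object image."""
    for obj_id, (x, y, w, h) in objects.items():
        if x <= touch_x < x + w and y <= touch_y < y + h:
            return obj_id
    return None

objects = {
    "42a": (960, 0, 960, 540),   # net browser, upper right of the screen
    "42b": (0, 540, 320, 180),   # first content image, lower part
}
print(object_at(objects, 1000, 100))  # 42a
```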
  • the video display device 30 of the present embodiment differs from the video display device 30 of the second embodiment in that the display panel 40 can display a 3D image, and in that the video display device 30 generates a right-eye image and a left-eye image for displaying the 3D image.
  • as in the first and second embodiments, the display device 20 of the present embodiment is connected to the video input device 10 and the operation terminal 50; a TV configured to include the video display device 30 and the display panel 40 will be described as an example.
  • the configurations of the video input device 10, the operation terminal 50, and the display panel 40 are the same as those in the first and second embodiments.
  • the video display device 30 includes a drawing processing unit 31 that operates at a lower frame rate than the display panel 40, a drawing control unit 33 that controls each function of the video display device 30, an operation detection unit 34 that detects a user operation on the operation terminal 50, and an interpolated image generation unit 32 that performs frame interpolation to increase the frame rate.
  • the drawing processing unit 31 is configured by a GPU; it generates a plurality of object images 42a to 42g that can be operated by the user from a plurality of input object signals SOB, and generates, from the input video signal STV, a video image 41 that is not an operation target of the user.
  • two types of images, a right-eye image and a left-eye image, are generated as composite images.
  • the drawing processing unit 31 uses the specific information SLO to superimpose the video image 41 and the plurality of object images 42a to 42g on the background image 44 to generate a composite image.
  • FIG. 6A is a block diagram illustrating an example of an interpolation image 43L for the left eye
  • FIG. 6B is a block diagram illustrating an example of an interpolation image 43R for the right eye corresponding to FIG. 6A.
  • a display area for displaying the video image 41 is set in the upper left part of the desired background image 44
  • a display area for displaying a net browser is set in the upper right part
  • display areas for displaying a plurality of content images are set in the lower part.
  • the drawing control unit 33 is constituted by a CPU and manages specific information SLO including identification information, position and size of the object signal SOB for each object signal SOB.
  • the position here is a three-dimensional position.
  • when the drawing control unit 33 outputs the specific information SLO to the drawing processing unit 31 and the interpolated image generation unit 32, the position is converted into coordinates on the corresponding image and output, depending on whether the composite image and the interpolated image of the composite image to be generated are for the right eye or for the left eye.
  • the drawing control unit 33 outputs specific information SLO to the drawing processing unit 31 based on a request from the drawing processing unit 31.
  • when an object image is operated by the user via the operation terminal 50, the drawing control unit 33 acquires from the operation detection unit 34 the operation information SOI, which includes identification information and quantified data indicating the operation content.
  • the quantification data here is composed of the operation direction and operation speed in three dimensions.
  • the drawing control unit 33 generates control information SC including the specific information SLO and the operation information SOI, and outputs the control information SC to the interpolation image generation unit 32. When a plurality of object images are selected, control information SC is generated and output for all the selected object images.
  • the interpolated image generation unit 32 is configured by FRC, generates an interpolated image based on the plurality of object images 42a to 42g and the control information SC, and outputs a video display signal composed of the composite image and the interpolated image of the composite image to the display device. Output.
  • the method for generating the interpolated image of the composite image when the operation type information is not included in the control information SC is the same as in the second embodiment.
  • the method for generating the interpolated image of the composite image is the same as that in the second embodiment when moving in the plane direction.
  • a case where movement in the depth direction is detected when the operation information SOI is included in the control information SC will be described with reference to FIGS. 6A and 6B.
  • the broken line indicates the position of the net browser before movement.
  • the interpolated image generation unit 32 generates, as the left-eye interpolated image 43L, an image in which the net browser is moved to the right according to the vector amount as shown in FIG. 6A.
  • as the right-eye interpolation image 43R, an image in which the net browser is moved to the left according to the vector amount is generated, as shown in FIG. 6B.
  • the composite image generation method in this case is the same as that in the moving operation in the second embodiment.
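The opposite horizontal shifts applied to the left-eye and right-eye interpolation images can be sketched as follows. Treating the movement vector as a horizontal disparity, and the sign convention used here, are assumptions of this sketch.

```python
def stereo_positions(x, y, disparity):
    """Positions of the net browser in the left-eye interpolation image 43L
    (moved to the right, as in FIG. 6A) and the right-eye interpolation
    image 43R (moved to the left, as in FIG. 6B) for a movement in the
    depth direction."""
    left_eye = (x + disparity, y)    # shifted right for the left eye
    right_eye = (x - disparity, y)   # shifted left for the right eye
    return left_eye, right_eye

print(stereo_positions(1000, 100, 8))  # ((1008, 100), (992, 100))
```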
  • after generating the left-eye interpolation image 43L and the right-eye interpolation image 43R, the interpolation image generation unit 32 generates, for each of them, a video display signal in which the generated interpolated image of the composite image is inserted between the composite images, and outputs it to the display panel 40.
  • the operation detection unit 34 receives the operation detection signal SOD output from the operation terminal 50, specifies the object image to be operated from the operation detection signal SOD, and detects quantification data of the three-dimensional operation direction and operation speed.
  • the operation detection unit 34 generates operation information SOI including the identification information of the identified object image and the quantification data, and outputs the operation information SOI to the drawing control unit 33.
  • in Embodiments 1 to 3, the scroll operation and the movement operation have been described as examples of the operation type.
  • the present invention is not limited to this.
  • Other types such as an enlargement / reduction operation of an object image or an operation of replacing an object image when there are a plurality of object images may be used.
  • the enlargement / reduction of the object image can be handled by changing the position and size of the specific information SLO and adding change information (change amount) of the size of the object image to the operation information SOI.
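Handling enlargement/reduction by updating the position and size held in the specific information SLO can be sketched as below. Keeping the image centered while resizing is an assumption of this sketch, as are all names.

```python
def apply_resize(slo, scale):
    """Update the specific information (x, y, width, height) for an
    enlargement/reduction operation whose change amount is carried in the
    operation information. The position is adjusted so the object image
    stays centered on its original footprint (an assumption of this sketch)."""
    x, y, w, h = slo
    new_w, new_h = round(w * scale), round(h * scale)
    new_x = x + (w - new_w) // 2
    new_y = y + (h - new_h) // 2
    return (new_x, new_y, new_w, new_h)

print(apply_resize((100, 100, 200, 100), 0.5))  # (150, 125, 100, 50)
```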
  • the video image 41 may be configured not to be displayed. That is, the configuration using the video image 41 is not an essential configuration of the present invention, but has been described as a more preferable embodiment.
  • the operation type may be a scroll operation or an object image enlargement / reduction.
  • the first image has been described as an example of a net browser on which a Web page is displayed or a content image.
  • the first image is not limited thereto; it may be another image that is a user's operation target, such as an icon.
  • the second image is not limited to the TV image, and may be a still image such as a photograph or another moving image.
  • each functional block in the block diagram is typically realized as an LSI which is an integrated circuit. These may be individually made into one chip, or may be made into one chip so as to include a part or all of them.
  • the name used here is LSI, but it may also be called IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.
  • the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
  • An FPGA (Field Programmable Gate Array) or a reconfigurable processor that can reconfigure the connection and setting of circuit cells inside the LSI may be used.
  • each component may be configured by dedicated hardware, or a component that can be realized by software may be realized by executing a program.
  • the video display device according to the present invention is useful when a tablet or a smartphone is connected as an operation terminal to a display device such as a TV or a projector.

Abstract

This invention provides a video display device capable of performing frame interpolation satisfactorily even for a video for which calculating motion information is difficult. A video display device (30) that performs frame interpolation for increasing the frame rate, the video display device (30) being provided with: a drawing processor (31) for generating, using a first frame rate, a user-operable first image on the basis of a first image signal that has been inputted; a drawing controller (33) for acquiring operation information describing an operation and generating control information that includes the operation information when a user performs an operation with respect to the first image via an operation terminal (50); and an interpolation image generator (32) for generating an interpolation image of the first image on the basis of the first image and the control information, and outputting to a display device a video display signal including the first image and the interpolation image of the first image using a second frame rate that is higher than the first frame rate.

Description

Video display device and integrated circuit
The present invention relates to a video display device and a video display circuit, and more particularly to a video display device and an integrated circuit that perform frame interpolation for increasing a frame rate.
Conventionally, in a display device, when a video signal with a frame rate lower than the frame rate that the display device can display is input, frame rate conversion that increases the frame rate is performed in order to display the video smoothly.
Specifically, examples include displaying a moving image captured by a mobile phone on an ordinary TV, or displaying an ordinary TV broadcast on a TV having a double-speed panel that displays video at a higher frame rate than an ordinary TV.
Here, as a technique related to frame rate conversion for increasing the frame rate, there is a frame interpolation technique based on motion compensation: two consecutive images among the plurality of images constituting a video are compared to calculate motion information including a motion direction and its magnitude, an interpolation image is generated according to the calculated motion direction and magnitude, and the interpolation image is inserted between the two consecutive images (for example, see Patent Document 1).
JP 2008-166966 A
However, when the video is a moving image, for images for which it is difficult to calculate motion information, such as an image displaying newspaper text or an image displaying text containing small characters, the frame interpolation technique based on motion compensation may cause frame interpolation errors or blur the image.
It is an object of the present invention to provide a video display device that can perform frame interpolation satisfactorily even for video for which motion information is difficult to calculate.
In order to solve the above problem, one aspect of the video display device according to the present invention is a video display device that performs frame interpolation for increasing a frame rate, comprising: a drawing processing unit that generates, at a first frame rate, a user-operable first image based on an input first image signal; a drawing control unit that, when the user performs an operation on the first image via an operation terminal, acquires operation information indicating the operation and generates control information including the operation information; and an interpolated image generation unit that generates an interpolated image of the first image based on the first image and the control information, generates a video display signal including the first image and the interpolated image of the first image at a second frame rate higher than the first frame rate, and outputs the video display signal to a display device.
According to the video display device of the present invention, frame interpolation can be performed satisfactorily even for video for which motion information is difficult to calculate.
FIG. 1 is a schematic block diagram illustrating a schematic configuration example of a video display device and its peripheral devices.
FIG. 2A is a flowchart illustrating an example of the operation of detecting a user operation in the video display device.
FIG. 2B is a flowchart illustrating an example of the processing operation of the video display device.
FIG. 3A is a block diagram illustrating an example of a composite image.
FIG. 3B is a block diagram illustrating an example of an interpolated image of a composite image.
FIG. 3C is a block diagram illustrating an example of a composite image.
FIG. 4A is a block diagram illustrating an example of a composite image.
FIG. 4B is a block diagram illustrating an example of an interpolated image of a composite image.
FIG. 4C is a block diagram illustrating an example of a composite image.
FIG. 5 is a block diagram illustrating an example of a composite image when there are a plurality of object images.
FIG. 6A is a block diagram illustrating an example of a left-eye image.
FIG. 6B is a block diagram illustrating an example of a right-eye image.
FIG. 7 is a block diagram illustrating a TV as an example of a display device including a video display device.
FIG. 8 is a block diagram illustrating a mobile phone as an example of the operation terminal.
(Background to obtaining one embodiment of the present invention)
In recent years, TVs have displayed on the display panel, in addition to TV broadcasts, images other than TV broadcasts, for example, an Internet browser.
Thus, when displaying an image such as a net browser, newspaper text, or magazine pages on a display device such as a TV or a projector, it is conceivable to connect not only the operation terminal provided with the display device, such as a TV remote control, but also a multi-function mobile phone (for example, a smartphone) or a tablet as an operation terminal, and to operate the image displayed on the display device with that terminal.
 ここで、上述したように、倍速パネルを備えるTVなど、表示パネルが表示可能なフレームレートより低いフレームレートの映像信号が入力される表示デバイスでは、従来は、動き補償によりフレームレートを上げるフレームレート変換を行っている。 Here, as described above, in a display device that inputs a video signal having a frame rate lower than the frame rate that can be displayed on the display panel, such as a TV having a double-speed panel, a frame rate that increases the frame rate by motion compensation has been conventionally used. Conversion is in progress.
A video display device that performs frame interpolation by motion compensation includes, for example, a GPU (Graphics Processing Unit) that operates at a frame rate lower than that of the display panel, an FRC (Frame Rate Converter) that generates interpolated images, and a CPU (Central Processing Unit) that manages the various functions of the video display device.
In the case of a TV with the above-described double-speed panel, the GPU generates an object image (an example of a first image) at the low frame rate from an object signal representing a Web page, and generates a TV image (an example of a second image) at the low frame rate from a broadcast signal. The object image is, for example, an image of a net browser displaying the Web page. The GPU further combines the object image with the TV image to generate a composite image at the low frame rate.
The FRC generates an interpolated image from two temporally consecutive composite images by motion compensation. Specifically, the FRC acquires composite images from the GPU at the low frame rate, compares two temporally consecutive composite images, and generates an interpolated image of the composite image. The FRC then generates a video display image by inserting the generated interpolated image between the two composite images used to generate it, and outputs the result to the display panel. With this configuration, images can be displayed at a frame rate higher than that of the input video signal and object signal, that is, at the frame rate of the display panel.
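The insertion step performed by the FRC can be sketched as follows. This is an illustrative outline, not the patent's implementation: in particular, it substitutes a simple per-pixel average for true motion-compensated interpolation, and represents each frame as a flat list of pixel values.

```python
def interpolate(frame_a, frame_b):
    # Stand-in for motion-compensated interpolation: a real FRC would
    # estimate motion vectors between the two frames; here we simply
    # average corresponding pixel values.
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

def double_frame_rate(frames):
    # Insert one interpolated frame between each pair of consecutive
    # frames, e.g. turning a 50 Hz sequence into a ~100 Hz sequence.
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        out.append(interpolate(prev, nxt))
    out.append(frames[-1])
    return out

frames = [[0, 0], [10, 20], [20, 40]]  # three tiny 2-pixel "frames"
print(double_frame_rate(frames))
# [[0, 0], [5.0, 10.0], [10, 20], [15.0, 30.0], [20, 40]]
```

The output sequence alternates original and interpolated frames, which is exactly the structure of the video display image described above.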
However, as described above, images for which motion information is difficult to calculate, such as text displayed in a net browser, are difficult to interpolate by motion compensation, so frame interpolation errors or blurred images may result.
Specifically, when the user scrolls a Web page using the operation terminal (mobile phone), noise caused by frame interpolation errors may appear, or the displayed Web page may blur. Likewise, when the Web page is moved around the TV screen, noise, display blur, or jerky movement may occur.
If the GPU performing frame interpolation were replaced with a high-speed GPU operating at the same frame rate as the display device, frame interpolation by motion compensation would no longer be necessary, which would eliminate the frame interpolation errors and display blur; however, such a high-speed GPU is quite expensive and would considerably increase the manufacturing cost.
(Summary of the Invention)
A video display device according to one aspect of the present disclosure is a video display device that performs frame interpolation for raising a frame rate, and includes: a drawing processing unit that generates, at a first frame rate, a first image operable by a user based on an input first image signal; a drawing control unit that, when the user performs an operation on the first image via an operation terminal, acquires operation information indicating the operation and generates control information including the operation information; and an interpolated image generation unit that generates an interpolated image of the first image based on the first image and the control information, generates, at a second frame rate higher than the first frame rate, a video display signal including the first image and the interpolated image of the first image, and outputs the video display signal to a display device.
With the video display device configured in this way, when an operation such as moving or scrolling the first image (object image) is performed, the interpolated image can be generated not by motion compensation but based on operation information indicating the user's operation. Because operation information rather than motion compensation is used to generate the interpolated image, frame interpolation errors can be prevented. Specifically, smooth movement of the first image and blur-free scrolling within the first image can be realized. Moreover, since the video display device of this configuration does not require a high-speed GPU, an increase in manufacturing cost can be suppressed.
Further, for example, the drawing processing unit may further generate, based on an input second image signal, a second image that is not a target of the user's operation, and generate a composite image combining the second image and the first image; the drawing control unit may output, to the interpolated image generation unit, the control information and identification information for identifying the first image; and the interpolated image generation unit may identify the first image within the composite image using the identification information and generate the interpolated image of the first image based on the identified first image and the control information.
With this configuration, for example, when the first image is overlaid on the second image, or when the second image and the first image are displayed side by side, the first image can be scrolled or moved smoothly and without display blur.
Further, for example, the identification information may include the position of the first image relative to the second image and the size of the first image.
Further, for example, the control information may include, as the operation information, the operation direction and operation speed of the first image relative to the second image.
Further, for example, the drawing control unit may calculate, based on the operation information, an operation direction and an operation speed in three-dimensional space for each of the right-eye image and the left-eye image constituting the first image to obtain the operation information, and the interpolated image generation unit may generate a right-eye interpolated image based on the right-eye image and the operation direction and operation speed for the right-eye image, and generate a left-eye interpolated image based on the left-eye image and the operation direction and operation speed for the left-eye image.
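One way such per-eye operation vectors could be derived is to project a three-dimensional operation vector into each eye's image plane, letting motion in depth shift the two eyes' horizontal components in opposite directions by a disparity term. This is a hypothetical sketch for illustration only; the patent does not specify this projection, and the disparity factor is an assumed parameter.

```python
def per_eye_vectors(dx, dy, dz, disparity_per_depth=0.25):
    # Hypothetical projection of a 3-D operation vector (dx, dy, dz)
    # into 2-D per-eye displacement vectors. Depth motion dz shifts
    # the left and right eyes' horizontal components in opposite
    # directions; the vertical component is shared by both eyes.
    shift = disparity_per_depth * dz
    left = (dx + shift, dy)
    right = (dx - shift, dy)
    return left, right

left, right = per_eye_vectors(10.0, 0.0, 20.0)
print(left, right)  # (15.0, 0.0) (5.0, 0.0)
```

The interpolated image generation unit would then apply each eye's vector to its own image, as described above.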
With this configuration, even when three-dimensional video (3D video) is displayed, smooth, blur-free display based on the user's operation information is possible.
Further, for example, when the first image signal represents an object image that is a still image, the drawing processing unit may draw the object image at the first frame rate to generate the first image.
An integrated circuit according to one aspect of the present disclosure is an integrated circuit for a video display device that performs frame interpolation for raising a frame rate, and includes: a drawing processing unit that generates, at the frame rate, a first image operable by a user based on an input first image signal; a drawing control unit that, when the user performs an operation on the first image via an operation terminal, acquires operation information indicating the operation and generates control information including the operation information; and an interpolated image generation unit that generates an interpolated image of the first image at the frame rate based on the first image and the control information, and outputs a video display signal including the first image and the interpolated image of the first image to a display device.
The present invention can be realized not only as an apparatus but also as a method whose steps correspond to the processing units constituting the apparatus, as a program causing a computer to execute those steps, as a computer-readable recording medium such as a CD-ROM on which the program is recorded, or as information, data, or a signal representing the program. Such a program, information, data, and signal may be distributed via a communication network such as the Internet.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. Each of the embodiments described below shows one preferred specific example of the present invention. The constituent elements, their arrangement and connections, the processing, and the order of processing shown in the following embodiments are examples and are not intended to limit the present invention. Among the constituent elements in the following embodiments, those not described in the independent claims representing the broadest concept of the present invention are described as optional constituent elements that form a more preferable embodiment.
(Embodiment 1)
The video display device according to Embodiment 1 will be described with reference to FIGS. 1 to 4C.
The video display device 30 is a device that performs frame interpolation for raising a frame rate. In the present embodiment, an example is described in which an object image operable by the user and a video image that is not a target of user operation are displayed simultaneously on the display device 20, and the object image is operated via the operation terminal 50.
[Configuration of the Display Device and Peripheral Devices]
First, the configuration of the display device 20 including the video display device 30, and of its peripheral devices, will be described with reference to FIG. 1. FIG. 1 is a block diagram illustrating a configuration example of the display device 20 including the video display device 30, and of the video input device 10 and the operation terminal 50, which are peripheral devices. In the present embodiment, a case where the display device 20 is the TV shown in FIG. 7 is described as an example.
The video input device 10 outputs, to the display device 20, a video signal STV (an example of a second image signal) that is not a target of user operation. In the present embodiment, a case where the video input device 10 is a TV antenna and the video signal STV is a TV broadcast wave is described as an example. The video input device 10 is not limited to a TV antenna and may be, for example, a set-top box for cable television.
In the present embodiment, the operation terminal 50 is a multi-function mobile phone with a touch panel as shown in FIG. 8, and is described as being connected to the display device 20 by an AV output cable or an HDMI cable (wired), or by infrared communication (wireless). The operation terminal 50 is configured to output, to the display device 20, an object signal SOB (corresponding to the first image signal) operable by the user and an operation detection signal SOD. In the present embodiment, a case where the object signal SOB is output from the mobile phone to the video display device 30 is described as an example, but the object signal SOB may instead be output to the video display device 30 from another device, such as a router relaying data from an Internet line or a cable-television set-top box.
In the present embodiment, a case is described in which the object signal SOB is a signal representing data, such as HTML, of a Web page on the Internet and an image of the net browser that displays the Web page. In the present embodiment, the Web page contains text composed of small characters. Also, in the present embodiment, the Web page data is the data of the entire Web page and may include a portion that is not displayed in the net browser.
When the touch panel detects a user operation on the object image, the operation terminal 50 immediately outputs an operation detection signal SOD indicating the content of the operation to the video display device 30.
The operation terminal 50 is not limited to a mobile phone; it may be any terminal that transmits the operation content intended by the user, such as a remote control for operating the TV that is an example of the display device 20, a mouse of a PC (Personal Computer) connected to the display device 20, or a tablet or camera connected to the display device 20. In the case of a camera, for example, video captured by the camera may be output to the video display device 30 as operation detection information, and the operation detection unit 34 of the video display device 30, described later, may analyze the video to identify the operation content from the user's movement.
As shown in FIG. 1, the present embodiment describes, as an example, a case where the display device 20 is a TV comprising the video display device 30 and the display panel 40. The display device 20 is not limited to a TV and may be a projector or the like.
In the present embodiment, the display device 20 can simultaneously display, on the display panel 40, a TV broadcast screen (corresponding to the video image or second image) and a net browser displaying a Web page (the object image). Although the present embodiment describes the case where the video display device 30 is built into the display device 20, this is not limiting: the video display device 30 may be configured outside the display device 20 so as to be connectable to it, or may be built into another device such as the user's operation terminal 50.
The display panel 40 is configured to display video at a frame rate higher than that of TV broadcast waves (for example, 29.97 fps or 59.9 fps for terrestrial digital broadcasting), for example, at 120 fps.
In the present embodiment, the video display device 30 includes the drawing processing unit 31, which operates at a first frame rate lower than that of the display panel 40, the drawing control unit 33, which controls each function of the video display device 30, the operation detection unit 34, which detects user operations on the operation terminal 50, and the interpolated image generation unit 32, which performs frame interpolation for raising the frame rate.
The drawing processing unit 31 is implemented by a GPU. It generates an object image operable by the user from the input object signal SOB, generates a video image that is not a target of user operation from the input video signal STV, and combines the video image and the object image to generate a composite image. When the video signal STV or the object signal SOB represents a still image, the drawing processing unit 31 of the present embodiment draws the still image at the first frame rate to generate the second image or the first image as moving video.
The drawing control unit 33 is implemented by a CPU and manages identification information SLO for identifying the object image. In the present embodiment, the identification information SLO consists of the position of the object image on the video image (for example, coordinates on the video image) and its size. The drawing control unit 33 outputs the identification information SLO to the drawing processing unit 31 in response to a request from the drawing processing unit 31.
Furthermore, when the user operates the object image via the operation terminal 50, the drawing control unit 33 acquires, from the operation detection unit 34, operation information SOI indicating the operation. The operation information SOI here is, for example, quantified data of the operation direction and operation speed. The drawing control unit 33 converts the quantified operation direction and operation speed contained in the operation information SOI into a vector quantity usable by the interpolated image generation unit 32. The drawing control unit 33 then generates control information SC containing the identification information SLO and the operation information SOI with its quantified data converted into the vector quantity, and outputs it to the interpolated image generation unit 32.
The interpolated image generation unit 32 is implemented by an FRC and generates an interpolated image of the object image using the control information SC output from the drawing control unit 33. The interpolated image generation unit 32 also generates an interpolated image of the video image by motion compensation. It then combines the interpolated image of the video image with the interpolated image of the object image to generate an interpolated image of the composite image. The interpolated image generation unit 32 generates, at the second frame rate, a video display signal containing the composite images and the interpolated images of the composite images, and outputs the generated video display signal to the display device.
Upon receiving the operation detection signal SOD output from the operation terminal 50, the operation detection unit 34 acquires quantified data of the operation direction and operation speed from the operation detection signal SOD, and outputs this quantified data to the drawing control unit 33 as the operation information SOI.
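As an illustration of this quantification step, two timestamped touch samples are sufficient to derive an operation direction and an operation speed. The data format (pixel coordinates, seconds, degrees) is an assumption made for this sketch; the patent does not prescribe specific units.

```python
import math

def quantify_operation(p0, t0, p1, t1):
    # Derive quantified operation data from two timestamped touch
    # samples: p = (x, y) in pixels, t in seconds. Returns the
    # direction as an angle in degrees and the speed in pixels per
    # second (assumed units for illustration).
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    dt = t1 - t0
    direction = math.degrees(math.atan2(dy, dx))
    speed = math.hypot(dx, dy) / dt
    return direction, speed

direction, speed = quantify_operation((100, 200), 0.00, (100, 260), 0.02)
print(direction, speed)  # ≈ 90.0 3000.0: straight down at 3000 px/s
```

The pair (direction, speed) plays the role of the quantified data that the operation detection unit 34 passes to the drawing control unit 33.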
The drawing processing unit 31, the drawing control unit 33, and the interpolated image generation unit 32 are each configured to execute their processing in parallel.
[Operation Example of the Video Display Device 30]
Hereinafter, an operation example of the video display device 30 will be described with reference to FIGS. 2A, 2B, 3A to 3C, and 4A to 4C.
In the present embodiment, for simplicity, a case where the processing rate of the FRC is 50 Hz and that of the display panel 40 is 100 Hz is described as an example, although these values do not necessarily match an actual TV broadcast frame rate.
In the present embodiment, as shown in FIGS. 3A to 3C and 4A to 4C, a case is described in which the video image 41 is displayed over the entire screen of the display panel 40, and the object image 42 is displayed on a portion of the display panel 40 in a layer above the video image 41. The size of the object image 42 is set smaller than the video image 41 so as not to significantly impair the visibility of the video image 41. Also, in the present embodiment, a case where there is a single object image 42 of constant size is described as an example.
The video display device 30 executes a basic operation of generating interpolated images of the composite image based on the control information, and an object operation detection operation of detecting user operations. The basic operation is executed continuously while the video signal STV and the object signal SOB are being input. The object operation detection operation is performed when a user operation is detected, and is executed asynchronously with the basic operation. FIG. 2A is a flowchart showing the object operation detection operation, and FIG. 2B is a flowchart showing the basic operation.
[Object Operation Detection Operation]
In the object operation detection operation, as shown in FIG. 2A, the operation detection unit 34 of the video display device 30 receives the operation detection signal SOD, which is output immediately when the user operates the operation terminal 50 (mobile phone) (step S11).
Upon receiving the operation detection signal SOD, the operation detection unit 34 generates, from the operation detection signal SOD, operation information SOI containing the operation type and quantified data of the operation speed and operation direction, and outputs it to the drawing control unit 33 (step S12).
Here, the operation type is the kind of operation performed on the object image; a scroll operation, which scrolls the Web page displayed in the net browser, and a move operation, which moves the net browser across the video image 41, are described as examples.
The operation type may be determined, for example, as follows: if the first touch position detected on the touch panel of the mobile phone corresponds to the outer frame of the net browser, the operation is judged to be a move operation, and if the first detected touch position falls within the Web page area excluding the outer frame, the operation is judged to be a scroll operation. The quantified data is obtained, for example, from the speed and direction of the user's operation.
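The decision rule just described can be sketched as follows. The rectangle representation and the frame thickness `frame_px` are assumptions made for this illustration.

```python
def classify_touch(touch, browser_rect, frame_px=16):
    # browser_rect = (x, y, width, height) of the net browser on the
    # video image; frame_px is the assumed thickness of its outer
    # frame. Returns "move" if the first touch lands on the outer
    # frame, "scroll" if it lands on the Web page area inside the
    # frame, or None if it lands outside the browser entirely.
    tx, ty = touch
    x, y, w, h = browser_rect
    if not (x <= tx < x + w and y <= ty < y + h):
        return None  # touch outside the net browser
    inner = (x + frame_px <= tx < x + w - frame_px and
             y + frame_px <= ty < y + h - frame_px)
    return "scroll" if inner else "move"

rect = (100, 100, 400, 300)
print(classify_touch((105, 150), rect))  # move   (on the outer frame)
print(classify_touch((300, 250), rect))  # scroll (inside the page area)
print(classify_touch((50, 50), rect))    # None   (outside the browser)
```

In practice the operation detection unit 34 would combine this classification with the quantified speed and direction to form the operation information SOI.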
When the operation information SOI is output from the operation detection unit 34, the drawing control unit 33 converts the quantified data of the operation speed and operation direction into a vector quantity usable by the interpolated image generation unit 32. The vector quantity here is expressed using, for example, an operation direction and a pixel amount. The drawing control unit 33 generates control information SC containing the identification information SLO and the operation information SOI with its quantified data converted into the vector quantity, and outputs it to the interpolated image generation unit 32 (step S13). If the operation type of the operation information SOI indicates a move operation, the drawing control unit 33 updates, within the identification information SLO it manages, the position of the object image 42 on the video image 41. When the frequency at which the operation terminal 50 or the operation detection unit 34 detects user operations is lower than the frequency of the display panel 40, the drawing control unit 33 interpolates the vector quantities to match the frequency of the display panel 40. The vector quantities are interpolated using, for example, linear interpolation or Bezier curve interpolation.
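The two conversions performed by the drawing control unit 33 — turning a quantified speed and direction into a per-frame pixel vector, and linearly interpolating vectors up to the panel frequency — can be sketched as follows. Units, function names, and the use of plain linear interpolation (rather than Bezier curves) are assumptions for illustration.

```python
import math

def to_frame_vector(speed_px_per_s, direction_deg, panel_hz=100):
    # Convert a quantified operation (speed in px/s, direction in
    # degrees) into a per-frame displacement vector at the panel rate.
    step = speed_px_per_s / panel_hz
    rad = math.radians(direction_deg)
    return (step * math.cos(rad), step * math.sin(rad))

def lerp_vectors(v0, v1, n):
    # Linearly interpolate n intermediate vectors between two detected
    # vectors, for when user operations are sampled at a lower rate
    # than the display panel's frequency.
    return [(v0[0] + (v1[0] - v0[0]) * i / (n + 1),
             v0[1] + (v1[1] - v0[1]) * i / (n + 1))
            for i in range(1, n + 1)]

print(to_frame_vector(3000.0, 90.0))  # ≈ (0, 30): 30 px per frame, downward
print(lerp_vectors((0.0, 0.0), (0.0, 30.0), 2))
```

A Bezier interpolation, as the text also mentions, would replace `lerp_vectors` with a curve evaluation but serve the same purpose of filling in vectors between detections.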
[Basic Operation]
In the basic operation, as shown in FIG. 2B, the video display device 30 receives the video signal STV from the video input device 10 and the object signal SOB from the mobile terminal (step S21). In the present embodiment, the video signal STV is a TV broadcast signal and is therefore input to the video display device 30 continuously. The object signal SOB is Web page data, so new Web page data is input to the video display device 30 each time a Web page is opened.
The drawing processing unit 31 generates, from the object signal SOB, the object image 42 operable by the user, generates, from the video signal STV, the video image 41 that is not a target of user operation, and combines the object image 42 with the video image 41 (step S22).
FIGS. 3A and 3C show an example of the composite image 43 when the Web page is scrolled, and FIGS. 4A and 4C show an example of the composite image 43 when the net browser is moved.
More specifically, the drawing processing unit 31 acquires, for each video image 41, the identification information SLO containing the position and size of the object image 42 from the drawing control unit 33. The drawing processing unit 31 then generates the composite image 43 by superimposing the object image 42 at the position on the video image 41 indicated by the identification information SLO.
 In this embodiment, the position of the object image 42 is the coordinate on the video image 41 at which the upper-left corner pixel of the net browser, which is the object image 42, is displayed. With the upper-left corner pixel of the video image 41 taken as coordinate (0, 0), the coordinates (x1, y1) of the upper-left corner pixel of the object image 42 are expressed using the number of pixels x1 rightward and the number of pixels y1 downward (in the drawing) from that origin. The position of the object image 42 is not limited to the coordinates of the upper-left corner pixel; it may be the coordinates of another pixel of the object image 42, such as its center pixel. The coordinate (0, 0) may also be set at a different position on the video image 41. Furthermore, the position of the object image 42 may be expressed using a distance r from the coordinate (0, 0) and an angle θ from the x axis.
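The two position representations mentioned here, pixel offsets (x1, y1) from the upper-left origin and polar form (r, θ), are interconvertible. A minimal sketch using Python's standard library (function names are hypothetical):

```python
import math

def to_polar(x, y):
    # pixel offsets from the origin (0, 0) -> (distance r, angle theta from the x axis)
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    # inverse conversion back to pixel offsets
    return r * math.cos(theta), r * math.sin(theta)
```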
 In this embodiment, the size of the object image 42 is described as the length (number of pixels) of the net browser in the x-axis direction (horizontal in the drawing) and in the y-axis direction (vertical in the drawing), but the size is not limited to this.
 The interpolated image generation unit 32 acquires the control information SC from the drawing control unit 33 and generates interpolated images based on the control information SC (step S23).
 More specifically, since the processing speed of the drawing processing unit 31 is 50 Hz and that of the display panel 40 is 100 Hz in this embodiment, the interpolated image generation unit 32 performs frame interpolation that generates one interpolated image between every two consecutive composite images 43. The number of interpolated images generated for the video signal STV as a whole, and where they are inserted, are set appropriately from the frame rate of the input video signal STV and the frame rate that the display panel 40 can display.
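The relationship between the source rate, the panel rate, and the number of interpolated images per gap can be sketched as follows. This is a hypothetical helper that assumes the panel rate is an integer multiple of the source rate, as in the 50 Hz to 100 Hz example above:

```python
def interpolated_frames_per_gap(source_fps, panel_fps):
    """Number of interpolated images to insert between two consecutive
    source frames when upconverting source_fps to panel_fps."""
    if panel_fps % source_fps != 0:
        raise ValueError("panel rate must be an integer multiple of the source rate")
    return panel_fps // source_fps - 1
```

For 50 Hz source material on a 100 Hz panel this yields one interpolated image per gap, matching the embodiment.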
 The control information SC here includes the specific information SLO of the object image 42 in each of the preceding and following composite images 43. When the operation detection unit 34 detects a user operation on the object image 42, the control information SC further includes, in addition to the two pieces of specific information SLO, the operation information SOI containing the operation identification information and the quantified operation-amount and operation-direction data converted into a vector quantity. The control information SC is acquired each time an interpolated image 42i of the object image is generated.
 When the control information SC contains no operation type information, that is, when the object image 42 is not being operated, the interpolated image generation unit 32 first identifies the object image 42 in each of the two temporally consecutive composite images 43, based on the position and size of the object image 42 indicated by the specific information SLO. After identifying the object image 42, the interpolated image generation unit 32 generates an interpolated image 41i of the video image by motion compensation.
 Further, after identifying the object image 42, the interpolated image generation unit 32 takes the identified object image 42 from either of the two temporally consecutive composite images 43 as the interpolated image 42i of the object image. Since the object image 42 is not being operated here, the object image 42 in the composite image 43 and the interpolated image 42i of the object image are identical. The position of the object image 42 in the interpolated image 41i of the video image is also the same as its position in the video image 41.
 The interpolated image generation unit 32 superimposes the interpolated image 42i of the object image on the interpolated image 41i of the video image to generate an interpolated image 43i of the composite image.
 When the control information SC contains the operation information SOI and the operation type information indicates a scroll operation, the interpolated image generation unit 32 first identifies the object image 42 in each of the two temporally consecutive composite images 43, based on the position and size indicated by the specific information SLO. In this embodiment, the object image 42 is identified in the composite images 43 shown in FIGS. 3A and 3C. After identifying the object image 42, the interpolated image generation unit 32 generates an interpolated image 41i of the video image by motion compensation.
 Further, as shown in FIG. 3B, after identifying the object image 42, the interpolated image generation unit 32 generates, from the object images 42 of the two temporally consecutive composite images 43, an interpolated image 42i of the object image in which the Web page has been scrolled according to the vector quantity in the operation information SOI.
 Specifically, the interpolated image generation unit 32, for example, scrolls the object image 42 of the immediately preceding composite image 43, shown in FIG. 3A, according to the vector quantity. In other words, the interpolated image generation unit 32 slides the Web page upward or downward by the amount indicated by the vector quantity. Through this scroll, the interpolated image generation unit 32 obtains, from the object image 42 of the preceding composite image 43, the portion to be included in the interpolated image 42i of the object image (the portion indicated by the broken line in FIG. 3A). For the portion left missing by sliding the image, the interpolated image generation unit 32 uses the object image 42 of the following composite image 43 (the portion indicated by the dash-dot line in FIG. 3C). Because this is a scroll operation, the position of the interpolated image 42i of the object image is the same as the position of the object image 42 in the preceding and following composite images 43. When the object image contains a portion that does not scroll, such as the UI (User Interface) of the net browser, it is also desirable to include in the specific information both the display portion of the Web data (the portion to be scrolled) and the non-scrolling portion, and either to generate the interpolated image by scrolling only the display portion of the Web page by the method described above, or to generate the interpolated image by treating only the display portion of the Web page as the object image.
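The scroll case can be illustrated with images modeled as lists of pixel rows: rows still visible after the slide come from the previous object image, and the newly exposed rows come from the next one. All names here are hypothetical, and the sketch assumes a simple upward content scroll with no non-scrolling UI portion:

```python
def scroll_interpolate(prev_obj, next_obj, total_dy, dy):
    """Build the interpolated object image for an upward content scroll.
    prev_obj, next_obj: object images as lists of pixel rows.
    total_dy: scroll amount (rows) between the two composite images.
    dy: scroll amount of the interpolated image (0 < dy < total_dy)."""
    h = len(prev_obj)
    kept = prev_obj[dy:]                                  # portion taken from the previous image
    exposed = next_obj[h - total_dy : h - total_dy + dy]  # portion taken from the next image
    return kept + exposed
```

With a total scroll of 2 rows and an interpolated scroll of 1 row, the interpolated image shows the page exactly halfway between the two frames.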
 As shown in FIG. 3B, the interpolated image generation unit 32 superimposes the interpolated image 42i of the object image on the interpolated image 41i of the video image to generate an interpolated image 43i of the composite image.
 When the control information SC contains the operation information SOI and the operation type information indicates a move operation of the net browser, the interpolated image generation unit 32 first identifies the object image 42 in each of the composite images 43 before and after the interpolated image to be inserted, based on the position and size indicated by the specific information SLO. Since the net browser, which is the object image 42, is being moved, the position of the object image 42 on the video image 41 differs between the preceding composite image 43 and the following composite image 43, as can be seen from FIGS. 4A and 4C. After identifying the object image 42, the interpolated image generation unit 32 generates an interpolated image 41i of the video image by motion compensation.
 After identifying the object image 42, the interpolated image generation unit 32 takes the identified object image 42 from either the preceding or following composite image 43 as the interpolated image 42i of the object image. Moving the net browser changes neither the Web page displayed in it nor the displayed portion of the Web page, so the object image 42 in the composite image 43 and the interpolated image 42i of the object image are identical.
 Further, in this embodiment, the interpolated image generation unit 32 calculates, as the position of the object image 42 in the interpolated image 43i of the composite image, the position obtained by moving the position of the object image 42 in the preceding composite image 43 by the vector quantity in the operation information SOI.
 As shown in FIG. 4B, the interpolated image generation unit 32 superimposes the acquired object image 42 at the calculated position of the object image 42 in the interpolated image 43i of the composite image, thereby generating the interpolated image 43i of the composite image.
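A minimal sketch of the move case, again with hypothetical names and images modeled as lists of pixel rows: the interpolated position is the previous position offset by the operation vector, and the unchanged object image is then pasted at that position:

```python
def interpolated_position(prev_pos, vector):
    # position of the object image in the interpolated composite image:
    # the previous position offset by the operation vector (x, y)
    return (prev_pos[0] + vector[0], prev_pos[1] + vector[1])

def composite_at(video_interp, obj, pos):
    """Superimpose object image `obj` onto the interpolated video image
    at pos = (x, y); both images are lists of pixel rows."""
    x, y = pos
    out = [row[:] for row in video_interp]   # copy so the input is untouched
    for j, obj_row in enumerate(obj):
        out[y + j][x:x + len(obj_row)] = obj_row
    return out
```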
 After generating the interpolated image 43i of the composite image, the interpolated image generation unit 32 generates a video display signal in which the generated interpolated image 43i is inserted between the composite images 43, and outputs it to the display panel 40 (step S24).
 As described above, in this embodiment, the object image 42 of the preceding or following composite image 43, which is free of display blur, is used as-is as the interpolated image 42i of the object image, or the object images 42 of the preceding and following composite images 43, both free of display blur, are combined as-is and used as the interpolated image 42i of the object image. The interpolated image 43i of the composite image is therefore an image free of display blur, making it possible to improve the accuracy of the interpolated image.
 (Embodiment 2)
 A video display device 30 according to Embodiment 2 will be described with reference to FIGS. 1 and 5.
 The video display device 30 of this embodiment differs from the video display device 30 of Embodiment 1 in that it is configured to display a plurality of object images 42a to 42g on the screen of the display panel 40. For this reason, in this embodiment the specific information SLO for identifying the object images 42a to 42g includes identification information of the object images 42a to 42g.
 As shown in FIG. 1, the display device 20 of this embodiment is described, as an example, as a TV that is connected to the video input device 10 and the operation terminal 50 and that includes the video display device 30 and the display panel 40, as in Embodiment 1. The configurations of the video input device 10, the operation terminal 50, and the display panel 40 are the same as in Embodiment 1.
 As in Embodiment 1, the video display device 30 includes a drawing processing unit 31 that operates at a lower frame rate than the display panel 40, a drawing control unit 33 that controls each function of the video display device 30, an operation detection unit 34 that detects user operations at the operation terminal 50, and an interpolated image generation unit 32 that performs frame interpolation to raise the frame rate.
 As in Embodiment 1, the drawing processing unit 31 is implemented as a GPU. As shown in FIG. 5, the drawing processing unit 31 of this embodiment generates, from a plurality of input object signals SOB, a plurality of object images 42a to 42g that the user can operate, and generates, from the input video signal STV, a video image 41 that is not a target of user operations. The drawing processing unit 31 further combines the video image 41 and the plurality of object images 42a to 42g to generate a composite image 43. This embodiment is described using an example in which the object image 42a is a net browser for displaying Web pages and the object images 42b to 42g are content images. A content image is, for example, a thumbnail image of a movie; selecting the content image displays a movie guidance screen (such as an advertising screen or a screen for selling the movie data). Since there are a plurality of object images 42a to 42g in this embodiment, the object signal SOB and the specific information SLO carry identification information for identifying the corresponding object image.
 The drawing processing unit 31 acquires a plurality of pieces of specific information SLO from the drawing control unit 33 and uses the identification information to determine the specific information SLO corresponding to each of the object images 42a to 42g. For each of the object images 42a to 42g, the drawing processing unit 31 determines its position and size based on the specific information SLO, and combines the video image 41 and the object images 42a to 42g to generate the composite image 43.
 In the display example shown in FIG. 5, a display area for the video image 41, a display area for the net browser, and a display area for a plurality of content images are laid out on a desired background image 44. The display area of the video image 41 is set in the upper-left part of the screen, the display area of the net browser in the upper-right part, and the display areas of the content images in the lower part.
 As in Embodiment 1, the drawing control unit 33 is implemented as a CPU. The drawing control unit 33 of this embodiment manages, for each object signal SOB, the specific information SLO including the identification information, position, and size of the object signal SOB.
 In the basic operation, the drawing control unit 33 outputs the specific information SLO to the drawing processing unit 31 in response to a request from the drawing processing unit 31.
 In the object operation detection operation, as in Embodiment 1, when the user operates the object image 42a via the operation terminal 50, the drawing control unit 33 of this embodiment acquires from the operation detection unit 34 the operation information SOI, which includes identification information indicating the object image 42a being operated and quantified data indicating the operation content. The drawing control unit 33 generates control information SC including the specific information SLO and the operation information SOI, and outputs it to the interpolated image generation unit 32. When a plurality of object images are selected, control information SC is generated and output for every selected object image.
 As in Embodiment 1, the interpolated image generation unit 32 is implemented as an FRC; it generates interpolated images based on the plurality of object images 42a to 42g and the control information SC, and outputs to the display device a video display signal consisting of the composite images 43 and the interpolated images 43i of the composite images.
 When the control information SC contains no operation type information, the interpolated image generation unit 32 identifies the plurality of object images 42a to 42g in each of the preceding and following composite images 43, based on the control information SC from the drawing control unit 33. After identifying the object images 42a to 42g, the interpolated image generation unit 32 generates an interpolated image 41i of the video image by motion compensation. Further, as shown in FIG. 5, the interpolated image generation unit 32 combines the interpolated image 41i of the video image and the identified object images 42a to 42g with the background image 44 to generate an interpolated image 43i of the composite image.
 When the control information SC contains the operation information SOI, the interpolated image generation unit 32 first identifies the plurality of object images 42a to 42g in each of the composite images 43. After identifying the object images 42a to 42g, the interpolated image generation unit 32 generates an interpolated image 41i of the video image by motion compensation. It then generates, for each object image being operated, an interpolated image 42i of the object image according to the operation type; the method for generating the interpolated image 42i according to the operation type is the same as in Embodiment 1. For each object image that is not being operated, the interpolated image generation unit 32 uses the object image of the preceding or following composite image 43 as the interpolated image 42i of the object image. The interpolated image generation unit 32 superimposes the interpolated image 41i of the video image and the interpolated images 42i of the object images on the background image 44 to generate the interpolated image 43i of the composite image.
 After generating the interpolated image 43i of the composite image, the interpolated image generation unit 32 generates a video display signal in which the generated interpolated image 43i is inserted between the composite images 43, and outputs it to the display panel 40.
 The operation detection unit 34 receives the operation detection signal SOD output from the operation terminal 50, identifies the object image being operated from the operation detection signal SOD, and detects the operation content. The operation detection unit 34 then outputs to the drawing control unit 33 the operation information SOI, which includes identification information identifying the object image and quantified data indicating the operation content.
 On receiving the operation detection signal SOD output from the operation terminal 50, the operation detection unit 34 first identifies, based on the operation detection signal SOD, the object image being operated among the plurality of object images 42a to 42g. Specifically, in the case of the touch panel of a mobile phone, for example, the object image at the first detected contact position is determined to be the object image being operated. The operation detection unit 34 further detects, from the operation detection signal SOD, the operation type and data quantifying the operation speed and operation direction. The detection methods for the operation type and the quantified data are the same as in Embodiment 1. The operation detection unit 34 generates the operation information SOI, which includes the identification information of the object image being operated, the operation type, and the quantified data, and outputs it to the drawing control unit 33.
 (Embodiment 3)
 A video display device 30 according to Embodiment 3 will be described with reference to FIGS. 1, 6A, and 6B.
 The video display device 30 of this embodiment differs from the video display device 30 of Embodiment 2 in that the display panel 40 can display three-dimensional images, and the video display signal is generated as a right-eye image and a left-eye image for displaying a three-dimensional image.
 As shown in FIG. 1, the display device 20 of this embodiment is described, as an example, as a TV that is connected to the video input device 10 and the operation terminal 50 and that includes the video display device 30 and the display panel 40, as in Embodiments 1 and 2. The configurations of the video input device 10, the operation terminal 50, and the display panel 40 are the same as in Embodiments 1 and 2.
 As in Embodiment 1, the video display device 30 includes a drawing processing unit 31 that operates at a lower frame rate than the display panel 40, a drawing control unit 33 that controls each function of the video display device 30, an operation detection unit 34 that detects user operations at the operation terminal 50, and an interpolated image generation unit 32 that performs frame interpolation to raise the frame rate.
 The drawing processing unit 31 is implemented as a GPU; it generates, from a plurality of input object signals SOB, a plurality of object images 42a to 42g that the user can operate, and generates, from the input video signal STV, a video image 41 that is not a target of user operations. To display 3D images, two types of composite image are generated: a right-eye image and a left-eye image. Further, using the specific information SLO, the drawing processing unit 31 superimposes the video image 41 and the plurality of object images 42a to 42g on the background image 44 to generate the composite images.
 FIG. 6A is a block diagram showing an example of a left-eye interpolated image 43L, and FIG. 6B is a block diagram showing an example of the corresponding right-eye interpolated image 43R. In FIGS. 6A and 6B, a display area for the video image 41 is set in the upper-left part of a desired background image 44, a display area for the net browser in the upper-right part, and display areas for a plurality of content images in the lower part.
 The drawing control unit 33 is implemented as a CPU and manages, for each object signal SOB, the specific information SLO including the identification information, position, and size of the object signal SOB. The position here is a three-dimensional position. When outputting the specific information SLO to the drawing processing unit 31 and the interpolated image generation unit 32, the drawing control unit 33 converts the position into coordinates on the corresponding image, depending on whether the composite image and the interpolated image of the composite image to be generated are for the right eye or the left eye, before outputting it.
 In the basic operation, the drawing control unit 33 outputs the specific information SLO to the drawing processing unit 31 in response to a request from the drawing processing unit 31.
 In the object operation detection operation, when the user operates an object image via the operation terminal 50, the drawing control unit 33 acquires from the operation detection unit 34 the operation information SOI, which includes identification information and quantified data indicating the operation content. The quantified data here consists of a three-dimensional operation direction and operation speed. The drawing control unit 33 generates control information SC including the specific information SLO and the operation information SOI, and outputs it to the interpolated image generation unit 32. When a plurality of object images are selected, control information SC is generated and output for every selected object image.
 The interpolated image generation unit 32 is implemented as an FRC; it generates interpolated images based on the plurality of object images 42a to 42g and the control information SC, and outputs to the display device a video display signal consisting of the composite images and the interpolated images of the composite images.
 When the control information SC contains no operation type information, the method for generating the interpolated image of the composite image is the same as in Embodiment 2.
 When the control information SC contains the operation information SOI and the movement is in the plane direction, the method for generating the interpolated image of the composite image is also the same as in Embodiment 2.
 The case where the control information SC contains operation information SOI and a movement in the depth direction is detected is described with reference to FIGS. 6A and 6B, taking as an example a net browser moved toward the near side. FIG. 6A is a diagram illustrating an example of the interpolated image 43L for the left eye, and FIG. 6B is a diagram illustrating an example of the interpolated image 43R for the right eye. In FIGS. 6A and 6B, the broken line indicates the position of the net browser before the movement.
 In this case, as shown in FIG. 6A, the interpolated image generation unit 32 generates, as the interpolated image 43L for the left eye, an image in which the net browser is moved to the right by an amount corresponding to the vector. Similarly, as shown in FIG. 6B, it generates, as the interpolated image 43R for the right eye, an image in which the net browser is moved to the left by an amount corresponding to the vector. The composite image in this case is generated in the same way as for the movement operation in Embodiment 2.
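 The opposite horizontal shifts applied to the two eye images can be sketched as follows. This is a minimal illustration, not taken from the specification: the scale factor `px_per_unit` relating depth movement to on-screen disparity is an assumption, and a positive depth value is taken to mean movement toward the viewer.

```python
def depth_shift_offsets(vector_z, px_per_unit=4):
    """Horizontal pixel offsets for the left-eye and right-eye
    interpolated images when an object moves in the depth direction.

    Moving toward the viewer (vector_z > 0) increases parallax: the
    object shifts right in the left-eye image (Fig. 6A) and left in
    the right-eye image (Fig. 6B).
    """
    disparity = int(vector_z * px_per_unit)  # assumed linear depth-to-disparity mapping
    return +disparity, -disparity  # (left-eye offset, right-eye offset)
```

A movement away from the viewer would simply flip both signs, pushing the object behind the screen plane.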
 After generating the interpolated image 43L for the left eye and the interpolated image 43R for the right eye, the interpolated image generation unit 32 generates, for each of the left-eye and right-eye sequences, a video display signal in which the generated interpolated images of the composite images are inserted between the composite images, and outputs it to the display panel 40.
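 The insertion of interpolated frames between composite frames can be sketched as below. This is a simplified model of what the FRC produces for one eye's sequence — one interpolated frame between each pair of composite frames, doubling the effective frame rate — with frames treated as opaque objects; it is an illustration, not the specification's implementation.

```python
def build_display_signal(composite_frames, interpolated_frames):
    """Interleave each interpolated frame between the pair of composite
    frames it was derived from, doubling the effective frame rate
    (e.g. 60 Hz in, 120 Hz out)."""
    signal = []
    for i, frame in enumerate(composite_frames):
        signal.append(frame)
        if i < len(interpolated_frames):  # no interpolated frame after the last composite
            signal.append(interpolated_frames[i])
    return signal
```

For N composite frames the output contains 2N-1 frames, since there is no frame pair after the last composite frame to interpolate between.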
 The operation detection unit 34 receives the operation detection signal SOD output from the operation terminal 50, identifies from it the object image being operated, and detects quantified data expressing the three-dimensional operation direction and operation speed. The operation detection unit 34 then generates operation information SOI containing the identification information of the identified object image and the quantified data, and outputs it to the drawing control unit 33.
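 One way the operation detection signal could be reduced to the quantified data (a three-dimensional direction and a speed) is sketched below. The two sampled pointer positions and the sampling interval are assumed inputs for illustration; the specification does not prescribe this calculation.

```python
import math

def quantify_operation(p_start, p_end, dt):
    """Quantify a pointer movement as a unit direction vector in 3-D
    space and a scalar speed, from two sampled positions and the
    elapsed time dt (seconds)."""
    delta = [e - s for s, e in zip(p_start, p_end)]
    distance = math.sqrt(sum(d * d for d in delta))
    if distance == 0:
        return (0.0, 0.0, 0.0), 0.0  # no movement detected
    direction = tuple(d / distance for d in delta)
    return direction, distance / dt
```

The resulting (direction, speed) pair, together with the identifier of the object under the pointer, would form the operation information SOI described above.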
 This configuration also supports movement in the depth direction of a 3D image.
 (Other Embodiments)
 (1) In Embodiments 1 to 3, a scroll operation and a movement operation were described as examples of the operation type, but the present invention is not limited to these. Other types are possible, such as an enlargement/reduction operation on an object image, or an operation that swaps object images when there are a plurality of them. Enlargement/reduction of an object image can be handled by changing the position and size in the specifying information SLO and adding size-change information (the amount of change) for the object image to the operation information SOI.
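 The stated handling of enlargement/reduction — updating the position and size held in the specifying information SLO — can be sketched as follows. The (x, y, width, height) tuple layout and the choice to keep the object's center fixed while scaling are illustrative assumptions.

```python
def apply_scale(slo, scale):
    """Update an (x, y, width, height) specifying-information entry
    for a zoom operation, keeping the object's center fixed."""
    x, y, w, h = slo
    new_w, new_h = w * scale, h * scale
    # Shift the origin so the center stays where it was before scaling.
    return (x - (new_w - w) / 2, y - (new_h - h) / 2, new_w, new_h)
```

The interpolated image generation unit could then apply intermediate scale factors between the old and new SLO to produce the interpolated frames of a zoom.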
 (2) In Embodiments 1 to 3, the case where the video image 41 and the object images are displayed simultaneously was described, but the video image 41 need not be displayed. That is, using the video image 41 is not an essential feature of the present invention, although it was described as a more desirable embodiment. When the video image 41 is not displayed, possible operation types include scrolling and enlargement/reduction of an object image.
 (3) In Embodiments 1 to 3, the first image was described as, for example, a net browser displaying a Web page or a content image, but it is not limited to these; it may be any other image that can be operated by the user, such as an icon.
 Likewise, the second image is not limited to a TV image and may be a still image such as a photograph, or another moving image.
 (4) In Embodiments 1 to 3, each functional block in the block diagram (FIG. 1) is typically realized as an LSI, which is an integrated circuit. The blocks may be integrated into individual chips, or some or all of them may be integrated into a single chip. Although the term LSI is used here, the circuit may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
 The method of circuit integration is not limited to LSI; a dedicated circuit or a general-purpose processor may also be used. An FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
 Furthermore, if integrated-circuit technology that replaces LSI emerges through advances in semiconductor technology or a derived technology, the functional blocks may naturally be integrated using that technology. Application of biotechnology is one possibility.
 In Embodiments 1 to 3, each component may be implemented in dedicated hardware, or components that can be realized in software may be realized by executing a program.
 Although embodiments of the present invention have been described above with reference to the drawings, the invention is not limited to these embodiments. Various modifications and variations can be made to the embodiments described above within the scope of, or a scope equivalent to, the present invention.
 The video display device according to the present invention is useful when a tablet or a smartphone is connected as an operation terminal to a display device such as a TV or a projector.
Reference Signs List
10 Video input device
20 Display device
30 Video display device
31 Drawing processing unit
32 Interpolated image generation unit
33 Drawing control unit
34 Operation detection unit
40 Display panel
41 Video image
41i Interpolated image of the video image
42, 42a, 42b, 42c, 42d, 42e, 42f, 42g Object image
42i Interpolated image of an object image
43 Composite image
43i Interpolated image of the composite image
43L Interpolated image for the left eye
43R Interpolated image for the right eye
44 Background image
50 Operation terminal

Claims (7)

  1.  A video display device that performs frame interpolation to increase a frame rate, comprising:
     a drawing processing unit that generates, based on an input first image signal, a first image operable by a user at a first frame rate;
     a drawing control unit that, when the user performs an operation on the first image via an operation terminal, acquires operation information indicating the operation and generates control information including the operation information; and
     an interpolated image generation unit that generates an interpolated image of the first image based on the first image and the control information, generates a video display signal including the first image and the interpolated image of the first image at a second frame rate higher than the first frame rate, and outputs the video display signal to a display device.
  2.  The video display device according to claim 1, wherein
     the drawing processing unit further generates, based on an input second image signal, a second image that is not a target of the user's operation, and generates a composite image in which the second image and the first image are combined,
     the drawing control unit outputs the control information and specifying information for specifying the first image to the interpolated image generation unit, and
     the interpolated image generation unit specifies the first image within the composite image using the specifying information, and generates the interpolated image of the first image based on the specified first image and the control information.
  3.  The video display device according to claim 2, wherein the specifying information includes the position of the first image relative to the second image and the size of the first image.
  4.  The video display device according to claim 2 or 3, wherein the control information includes, as the operation information, an operation direction and an operation speed of the first image relative to the second image.
  5.  The video display device according to any one of claims 1 to 4, wherein
     the drawing control unit obtains the operation information by calculating, based on the operation, an operation direction and an operation speed in three-dimensional space for each of a right-eye image and a left-eye image constituting the first image, and
     the interpolated image generation unit generates a right-eye interpolated image based on the right-eye image and the operation direction and operation speed for the right-eye image, and generates a left-eye interpolated image based on the left-eye image and the operation direction and operation speed for the left-eye image.
  6.  The video display device according to any one of claims 1 to 5, wherein, when the first image signal indicates an object image that is a still image, the drawing processing unit generates the first image by drawing the object image at the first frame rate.
  7.  An integrated circuit for a video display device that performs frame interpolation to increase a frame rate, comprising:
     a drawing processing unit that generates, based on an input first image signal, a first image operable by a user at the frame rate;
     a drawing control unit that, when the user performs an operation on the first image via an operation terminal, acquires operation information indicating the operation and generates control information including the operation information; and
     an interpolated image generation unit that generates an interpolated image of the first image at the frame rate based on the first image and the control information, and outputs a video display signal including the first image and the interpolated image of the first image to a display device.
PCT/JP2012/002472 2012-04-09 2012-04-09 Video display device and integrated circuit WO2013153568A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/002472 WO2013153568A1 (en) 2012-04-09 2012-04-09 Video display device and integrated circuit


Publications (1)

Publication Number Publication Date
WO2013153568A1 true WO2013153568A1 (en) 2013-10-17

Family

ID=49327184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/002472 WO2013153568A1 (en) 2012-04-09 2012-04-09 Video display device and integrated circuit

Country Status (1)

Country Link
WO (1) WO2013153568A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004120757A (en) * 2002-09-24 2004-04-15 Matsushita Electric Ind Co Ltd Method for processing picture signal and picture processing unit
JP2009267612A (en) * 2008-04-23 2009-11-12 Canon Inc Image processor, and image processing method
JP2010010778A (en) * 2008-06-24 2010-01-14 Canon Inc Video processing apparatus and method for controlling the video processing apparatus
JP2010199674A (en) * 2009-02-23 2010-09-09 Canon Inc Image display system, image display apparatus, and control method for image display apparatus
JP2011082899A (en) * 2009-10-09 2011-04-21 Canon Inc Image processing apparatus and control method thereof


Similar Documents

Publication Publication Date Title
JP6167703B2 (en) Display control device, program, and recording medium
US9619861B2 (en) Apparatus and method for improving quality of enlarged image
CN109831662B (en) Real-time picture projection method and device of AR (augmented reality) glasses screen, controller and medium
CN109792561B (en) Image display apparatus and method of operating the same
US8418063B2 (en) Aiding device in creation of content involving image display according to scenario and aiding method therein
CN110770785B (en) Screen sharing for display in VR
WO2010150554A1 (en) Stereoscopic image display device
US8922622B2 (en) Image processing device, image processing method, and program
CN110569013B (en) Image display method and device based on display screen
JP5899503B2 (en) Drawing apparatus and method
JP2015039052A (en) Image processing apparatus, image processing method, and image processing program
CN106919376B (en) Dynamic picture transmission method, server device and user device
CN110892361B (en) Display apparatus, control method of display apparatus, and computer program product thereof
WO2013153568A1 (en) Video display device and integrated circuit
US20120162198A1 (en) Information Processor, Information Processing Method, and Computer Program Product
CN105338261B (en) A kind of method and device of transmission picture relevant information
JP5409245B2 (en) Image processing apparatus and control method thereof
JP6443505B2 (en) Program, display control apparatus, and display control method
KR102246904B1 (en) Image display apparatus
JP2008262392A (en) Image processor, image processing method and image processing program
JP2008199097A (en) Image processing apparatus, image processing method, program, recording medium, portable terminal, and receiver
JP5937871B2 (en) Stereoscopic image display device, stereoscopic image display method, and stereoscopic image display program
JP2023012941A (en) Receiving device and program
CN114285958A (en) Image processing circuit, image processing method, and electronic device
JP5389083B2 (en) Image processing apparatus, image encoding system, and image decoding system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12873991

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12873991

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP