WO2018123202A1 - Moving-image processing device, display device, moving-image processing method, and control program - Google Patents

Moving-image processing device, display device, moving-image processing method, and control program

Info

Publication number
WO2018123202A1
Authority
WO
WIPO (PCT)
Prior art keywords
moving image
texture
unit
motion vector
image processing
Prior art date
Application number
PCT/JP2017/036763
Other languages
French (fr)
Japanese (ja)
Inventor
直大 北城
Original Assignee
Sharp Kabushiki Kaisha
Priority date
Filing date
Publication date
Application filed by Sharp Kabushiki Kaisha
Publication of WO2018123202A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/223 Analysis of motion using block-matching
    • G06T7/231 Analysis of motion using block-matching using full search
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region

Definitions

  • the following disclosure relates to a moving image processing apparatus and the like.
  • Patent Literature 1 discloses a technique for imparting a predetermined texture to a predetermined area in a moving image.
  • Specifically, Patent Literature 1 discloses an image texture manipulation method for dynamically manipulating an image area (second image area) that gives a transparent-layer texture.
  • However, with the technique of Patent Literature 1, it is not possible to perform moving image processing according to the texture of the individual objects represented in a moving image.
  • One aspect of the present disclosure has been made in view of the above problem, and an object thereof is to realize a moving image processing apparatus or the like capable of enhancing the texture of an object represented in a moving image.
  • A moving image processing apparatus according to an aspect of the present disclosure includes a texture determination unit that determines the texture of an object represented in a moving image by analyzing motion vectors of the moving image, and a moving image processing unit that processes the moving image according to the determination result of the texture determination unit.
  • A moving image processing method according to an aspect of the present disclosure includes a texture determination step of determining the texture of an object represented in a moving image by analyzing motion vectors of the moving image, and a moving image processing step of processing the moving image according to the determination result in the texture determination step.
  • the moving image processing apparatus has an effect that the texture of the object expressed in the moving image can be enhanced.
  • FIG. 1 is a functional block diagram illustrating the configuration of the main part of the display device according to Embodiment 1.
  • FIG. 2 is a schematic diagram for explaining motion vectors.
  • FIG. 3 is a diagram showing an example of the second motion vector set.
  • FIGS. 4(a) and 4(b) are diagrams for explaining the relationship between the viscosity of a liquid and its texture.
  • FIG. 5 is a diagram showing an example of an HMM.
  • FIGS. 6(a) and 6(b) are diagrams for explaining the moving image processing in the display device of FIG. 1.
  • FIG. 7 is a functional block diagram schematically showing the configuration of the signal processing unit according to Embodiment 2 and its periphery.
  • FIG. 1 is a functional block diagram illustrating a configuration of a main part of the display device 1.
  • the display device 1 includes a signal processing unit 10 (moving image processing device), a receiving unit 60, a decoding unit 61, a display unit 70, and a storage unit 90.
  • the display device 1 may be a television or a PC (Personal Computer).
  • the display device 1 may be a portable information terminal such as a multifunction mobile phone (smartphone) or a tablet.
  • the signal processing unit 10 processes a moving image (input moving image) and outputs the processed moving image (output moving image) to the display unit 70.
  • the display unit 70 displays a moving image.
  • the display unit 70 may be, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display.
  • the signal processing unit 10 is provided as a part of a control unit (not shown) that comprehensively controls each unit of the display device 1.
  • the function of the control unit may be realized by a CPU (Central Processing Unit) executing a program stored in the storage unit 90.
  • the function of each part of the signal processing unit 10 will be described in detail later.
  • the storage unit 90 stores various programs executed by the signal processing unit 10 and data used by the programs.
  • the storage unit 90 stores a feature pattern model DB (DataBase) 91.
  • the feature pattern model DB 91 is a DB in which various feature pattern models (described later) are stored.
  • the receiving unit 60 receives broadcast waves (radio waves).
  • the decoding unit 61 acquires compressed moving image data (moving image data compressed by a predetermined encoding method) included in the broadcast wave received by the receiving unit 60. Subsequently, the decoding unit 61 acquires an input moving image (input video signal) by decoding the compressed moving image data. Then, the decoding unit 61 supplies the acquired input moving image to the signal processing unit 10 (more specifically, the first correction unit 11 described later).
  • an input moving image (moving image input to the signal processing unit 10) is also referred to as a moving image A.
  • the moving image A is a moving image to be processed in the signal processing unit 10.
  • the moving image A may have a resolution of 4K2K (resolution of 3840 horizontal pixels ⁇ 2160 vertical pixels).
  • the resolution of each moving image described in the first embodiment is not limited to the above, and may be set as appropriate.
  • the receiving unit 60 and the decoding unit 61 may be provided as an integrated functional unit.
  • a known tuner can be used as the receiving unit 60 and the decoding unit 61.
  • the signal processing unit 10 may acquire the moving image A from the storage unit 90.
  • the signal processing unit 10 may acquire the moving image A from an external device (for example, a digital movie camera) connected to the display device 1.
  • the signal processing unit 10 processes the moving image A supplied from the receiving unit 60 to generate an output moving image (output video signal). Then, the signal processing unit 10 (more specifically, a texture correction unit 14 described later) supplies the output moving image to the display unit 70. According to this configuration, the output moving image can be displayed on the display unit 70.
  • a display control unit (not shown) that controls the operation of the display unit 70 may be provided in the signal processing unit 10 or may be provided in the display unit 70 itself.
  • the signal processing unit 10 includes a first correction unit 11, a frame rate conversion unit 12, a texture detection unit 13 (texture determination unit), and a texture correction unit 14 (moving image processing unit).
  • the texture detection unit 13, the texture correction unit 14, and the feature pattern model DB 91 are main parts of the moving image processing apparatus according to an aspect of the present disclosure.
  • the texture detection unit 13, the texture correction unit 14, and the feature pattern model DB 91 may be collectively referred to as “texture processing unit”.
  • the texture processing unit is indicated by a dotted line for convenience of explanation.
  • the first correction unit 11 processes the moving image A described above.
  • the moving image after processing in the first correction unit 11 is also referred to as a moving image B.
  • the process in the first correction unit 11 may be a known image quality correction process.
  • the first correction unit 11 may perform scaling (resolution change) on the moving image A.
  • the resolution of the moving image displayed on the display unit 70 can be converted into a resolution according to the performance specifications of the display unit 70.
  • the first correction unit 11 is not an essential component of the signal processing unit 10, as shown in FIG. 7 and subsequent figures. For example, if the resolution of the moving image A already conforms to the performance specifications of the display unit 70, the first correction unit 11 does not need to generate the moving image B (convert the resolution).
  • the first correction unit 11 may set image quality parameters of the moving image A (e.g., parameters indicating the degree of brightness, contrast, color density, peaking, outline enhancement, and the like) according to a user operation. In this case, the first correction unit 11 processes the moving image A using the set image quality parameters. For example, when the image quality parameters are arbitrarily selected by the user according to the user's usage mode, the first correction unit 11 may be operated as described above.
  • the first correction unit 11 supplies the moving image B to the frame rate conversion unit 12 (more specifically, each of the interpolation image generation unit 121 and the motion vector calculation unit 122 described below).
  • the moving image A may be supplied from the decoding unit 61 to the frame rate conversion unit 12.
  • the frame rate conversion unit 12 includes an interpolated image generation unit 121 and a motion vector calculation unit 122.
  • the interpolated image generation unit 121 performs processing for increasing the frame rate of the moving image B. Specifically, the interpolated image generation unit 121 extracts each of a plurality of frames constituting the moving image B from the moving image B. Each frame extracted by the interpolated image generation unit 121 may be stored, for example, in a frame memory (not shown).
  • the frame memory may be provided in the frame rate conversion unit 12 or may be provided outside the frame rate conversion unit 12.
  • the interpolated image generation unit 121 generates an interpolation frame (intermediate frame) based on the frame using a known algorithm. For example, the interpolated image generation unit 121 may generate an interpolation frame using a motion vector described below. Then, the interpolated image generation unit 121 increases the frame rate of the moving image B by inserting interpolation frames into the moving image B at predetermined frame intervals.
  • the processed moving image in the interpolated image generation unit 121 is also referred to as a moving image C.
  • the frame rate conversion unit 12 may increase the frame rate of the moving image B by a factor of two. For example, when the frame rate of the moving image B is 60 fps (frames per second), the interpolated image generating unit 121 generates a moving image C having a frame rate of 120 fps. Then, the interpolated image generation unit 121 supplies the moving image C to the texture correction unit 14 (more specifically, the second correction unit 142 described below).
  • the conversion rate of the frame rate in the frame rate conversion unit 12 is not limited to the above, and may be set as appropriate. Further, the frame rate of each moving image described in the first embodiment is not limited to the above.
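  • To make the frame rate conversion concrete, the following is a minimal sketch that doubles a frame rate by inserting one intermediate frame between each pair of frames. It uses plain frame averaging as a stand-in for the motion-compensated interpolation a real converter would use; the function name and frame sizes are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def double_frame_rate(frames):
    """Roughly double the frame rate (e.g., 60 fps -> 120 fps) by
    inserting one intermediate frame between each consecutive pair.
    `frames` is a list of H x W x 3 uint8 arrays."""
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        # Naive intermediate frame: per-pixel average of the neighbours.
        # A real converter would use motion-compensated interpolation.
        out.append(((prev.astype(np.uint16) + nxt) // 2).astype(np.uint8))
    out.append(frames[-1])
    return out

# 5 input frames become 9 output frames (interpolated frames in between).
frames = [np.full((8, 8, 3), i * 30, dtype=np.uint8) for i in range(5)]
print(len(double_frame_rate(frames)))  # -> 9
```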
  • By providing the interpolated image generation unit 121, the frame rate of the moving image displayed on the display unit 70 can be converted into one according to the performance specifications of the display unit 70.
  • the interpolated image generation unit 121 is not an essential component of the signal processing unit 10, as illustrated in FIG. 7 and subsequent figures. For example, if the frame rate of the moving image B (moving image A) already conforms to the performance specifications of the display unit 70, the interpolated image generation unit 121 need not generate the moving image C (convert the frame rate).
  • the motion vector calculation unit 122 calculates (detects) a motion vector by analyzing the moving image B (more specifically, each frame of the moving image B stored in the frame memory). A known algorithm may be used to calculate the motion vector in the motion vector calculation unit 122.
  • when the interpolated image generation unit 121 is omitted from the frame rate conversion unit 12, the function of extracting each frame from the moving image B may be given to the motion vector calculation unit 122. Further, as shown in FIG. 8 and later figures, the motion vector calculation unit 122 can also be excluded from the signal processing unit 10. That is, it should be noted that the frame rate conversion unit 12 is likewise not an essential component of the signal processing unit 10.
  • a motion vector is a vector indicating the positional displacement between a block (more specifically, a virtual object located in the block) in one frame (e.g., a reference frame) and the corresponding block in another frame subsequent to that frame.
  • a motion vector is a vector indicating to which position a block in one frame has moved in another subsequent frame.
  • the motion vector is used as an index indicating the movement amount of the block.
  • FIG. 2 is a schematic diagram for explaining a motion vector.
  • each frame included in the moving image B is uniformly divided into blocks having a horizontal length (resolution) a and a vertical length b.
  • The horizontal pixel count of the moving image B is represented as H, and the vertical pixel count as V.
  • each frame is divided into (H/a) parts in the horizontal direction and (V/b) parts in the vertical direction. That is, each frame is divided into (H/a) × (V/b) blocks. Note that the values of a, b, H, and V may be set arbitrarily.
  • one of the blocks in FIG. 2 is represented as a block (x, y).
  • x and y are indices (numbers) indicating horizontal and vertical positions in each frame, respectively.
  • Let the block located at the upper left in FIG. 2 be block (0, 0). The horizontal block number increases by 1 from left to right, and the vertical block number increases by 1 from top to bottom. Therefore, 0 ≤ x ≤ H/a − 1 and 0 ≤ y ≤ V/b − 1.
  • the motion vector of the block (x, y) is represented as MV (x, y).
  • the motion vector calculation unit 122 calculates a motion vector for each block in FIG.
  • a set of motion vectors calculated by the motion vector calculation unit 122 for each block of one frame is referred to as a first motion vector set.
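  • As an illustration of how such a first motion vector set could be computed, here is a minimal sketch assuming SAD-based full-search block matching (one known algorithm, consistent with the G06T7/231 classification above); the function names, block size, and search range are illustrative assumptions.

```python
import numpy as np

def motion_vector(ref, nxt, x, y, a=16, b=16, search=8):
    """Full-search block matching: find where block (x, y) of frame `ref`
    moved to in the following frame `nxt`, minimising the sum of
    absolute differences (SAD). Frames are 2-D grayscale arrays."""
    block = ref[y * b:(y + 1) * b, x * a:(x + 1) * a].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            top, left = y * b + dy, x * a + dx
            if top < 0 or left < 0 or top + b > nxt.shape[0] or left + a > nxt.shape[1]:
                continue  # candidate block would fall outside the frame
            sad = np.abs(block - nxt[top:top + b, left:left + a]).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv

def first_motion_vector_set(ref, nxt, a=16, b=16):
    """MV(x, y) for all (H/a) x (V/b) blocks: the first motion vector set."""
    V, H = ref.shape
    return [[motion_vector(ref, nxt, x, y, a, b)
             for x in range(H // a)] for y in range(V // b)]
```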
  • the motion vector calculation unit 122 supplies the first motion vector set described above to the interpolation image generation unit 121 and the texture detection unit 13 (more specifically, the extraction unit 131 described below).
  • The texture detection unit 13 includes an extraction unit 131 and a collation unit 132. As described below, the texture detection unit 13 analyzes the motion vectors of a moving image (for example, the moving image B) to determine the texture of each object represented in the moving image (more specifically, of the surface of each object). Then, the texture detection unit 13 supplies the determination result (a texture ID described later) to the texture correction unit 14. Hereinafter, each part of the texture detection unit 13 is described concretely.
  • the extraction unit 131 extracts (acquires) a part (subset) of the first motion vector set described above.
  • this subset is referred to as a second motion vector set.
  • FIG. 3 is a diagram illustrating an example of the second motion vector set.
  • the extraction unit 131 may extract, as a partial area, an area composed of blocks (m, n) to (m + A − 1, n + B − 1) in each frame.
  • the values of m, n, A, and B may be arbitrary values as long as the partial area is set so as not to deviate spatially from each frame.
  • the extraction unit 131 acquires a motion vector of each block in the partial area. That is, the extraction unit 131 acquires, as the second motion vector set, the motion vector set of each block in the partial area in the first motion vector set. Then, the extraction unit 131 supplies the second motion vector set to the collation unit 132.
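  • A minimal sketch of this extraction step, assuming the first motion vector set is laid out as in the previous sketch (indexed first_set[y][x]); the helper name is hypothetical.

```python
def second_motion_vector_set(first_set, m, n, A, B):
    """Extract the motion vectors of the A x B partial area of blocks
    (m, n) .. (m + A - 1, n + B - 1), with first_set indexed as
    first_set[y][x] (the layout produced by the previous sketch)."""
    return [row[m:m + A] for row in first_set[n:n + B]]

# A 3 x 2 partial area anchored at block (1, 0).
first_set = [[(x, y) for x in range(6)] for y in range(4)]
print(second_motion_vector_set(first_set, m=1, n=0, A=3, B=2))
```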
  • the collation unit 132 compares (collates) the second motion vector set (motion vector) with various feature pattern models included in the feature pattern model DB 91 described above.
  • The “feature pattern model” may be a model representing the motion vectors (more specifically, a set of motion vectors) that express the texture of an object represented in a moving image, or a model related to such motion vectors.
  • The feature pattern model is a set of motion vectors derived (set) by performing learning (automatic learning) in advance on second motion vector sets with a pattern recognition technique.
  • the feature pattern model may be set in advance in the display device 1.
  • the matching unit 132 can determine the texture of an object based on the feature pattern model.
  • In Embodiment 1, a case where the collation unit 132 also has a function as a functional unit (pattern setting unit) for setting the feature pattern model is illustrated.
  • the pattern setting unit may be provided as a functional unit separate from the matching unit 132.
  • the pattern setting unit performs the above learning using a known algorithm and sets a feature pattern model.
  • the pattern setting unit stores the feature pattern model in the feature pattern model DB 91.
  • a feature pattern model corresponding to the texture of each object can be stored in the feature pattern model DB 91.
  • The “texture” in this specification means a sensation perceived by a person (user, viewer) and, among such sensations, one perceived through dynamic change, such as a feeling of glossiness or of material.
  • Such sensations are expressed by onomatopoeic or mimetic words such as “sarasara” (smooth, free-flowing), “neba-neba” (sticky), “yura-yura” (swaying), “fuwa-fuwa” (fluffy), and “kira-kira” (glittering).
  • the texture in this specification does not necessarily have to directly specify the material of the object (eg, metal or paper).
  • the texture in this specification may simulate visual texture in a general sense.
  • FIG. 3 described above schematically shows an example of the second motion vector set when the object represented in the moving image is a liquid having high viscosity (eg, oil).
  • the motion vector included in the second motion vector set is one of indices indicating the liquid flow velocity.
  • The flow velocity generally depends on the viscosity of the fluid. Therefore, the magnitude of the viscosity of a liquid can be distinguished from its motion vectors, and hence differences in texture according to viscosity can also be distinguished from the motion vectors.
  • FIG. 4(a) shows a liquid having a low viscosity (for example, water). When a person sees a low-viscosity liquid flowing (moving), the liquid generally flows at a relatively high speed, and thus it is common for the person to perceive a “sarasara” (smooth, free-flowing) sensation.
  • On the other hand, FIG. 4(b) shows a liquid having a high viscosity (e.g., oil). When a person sees a high-viscosity liquid flowing, the liquid generally flows at a relatively low speed, and thus it is common for the person to perceive a “neba-neba” (sticky) sensation.
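  • As a toy illustration of this idea (not the patent's collation method), the sketch below merely thresholds the mean motion vector magnitude, i.e., the flow speed index the passage describes, to separate the two impressions; the threshold and sample fields are invented.

```python
import math

def liquid_texture_guess(mv_set, threshold=4.0):
    """Toy heuristic: a high mean flow speed suggests a low-viscosity
    ('sarasara') liquid, a low one a viscous ('neba-neba') liquid."""
    magnitudes = [math.hypot(dx, dy) for row in mv_set for dx, dy in row]
    mean_speed = sum(magnitudes) / len(magnitudes)
    return "sarasara (low viscosity)" if mean_speed > threshold \
        else "neba-neba (high viscosity)"

water = [[(6, 1), (7, 0)], [(6, 2), (8, 1)]]  # fast, water-like field
oil = [[(1, 0), (1, 1)], [(0, 1), (1, 0)]]    # slow, oil-like field
print(liquid_texture_guess(water), "/", liquid_texture_guess(oil))
```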
  • the liquid generally moves smoothly (is easily deformed) in a natural state as compared with a solid. That is, the movement pattern differs greatly between liquid and solid. Even if it is a solid, the movement pattern varies depending on, for example, the difference in rigidity. From this, it can be said that each object has a peculiar movement pattern according to the texture.
  • the surface of the object reflects the light, so that a luminance distribution of reflected light is formed on the surface.
  • a person perceives the glossiness (sensation of “glitter”) of an object by the luminance distribution.
  • the person perceives the glossiness of the object more specifically by the movement (change) of the reflected light (specular reflection component or diffuse reflection component) accompanying the change of the viewpoint or the movement of the object.
  • a motion vector indicating the pattern of reflected light movement can also be used as one index indicating the glossiness of the object.
  • the motion vector of the moving image can be used as one of indices indicating the texture of the object (particularly the surface of the object) expressed in the moving image.
  • The inventor of the present application arrived at the novel technical idea (insight) of “discriminating (detecting, estimating) the texture of an object represented in a moving image based on the motion vectors of the moving image, and performing moving image processing according to the discrimination result”.
  • Each component of the texture processing unit described above (for example, the texture detection unit 13 and the texture correction unit 14 included in the signal processing unit 10) was conceived based on this technical idea.
  • the collation unit 132 calculates (evaluates) the relevance (matching degree) of the second motion vector set (motion vector) to the feature pattern model. That is, the collation unit 132 acquires information indicating how much the texture of the object indicated by the second motion vector set matches the texture of the object indicated by the feature pattern model.
  • As a result of this collation, the collation unit 132 acquires a texture ID (texture discrimination information).
  • The texture ID may be understood as information indicating the result of the collation unit 132 determining the texture of each object represented in the moving image (e.g., the moving image B).
  • the texture detection unit 13 supplies the texture ID to the texture correction unit 14 (more specifically, the parameter setting unit 141 described below).
  • the collation unit 132 may acquire the texture ID using a known pattern recognition method for a two-dimensional vector set (vector sequence). Specific examples of the pattern recognition method include the following three.
  • (First method) A correlation function between the second motion vector set and the feature pattern model is calculated. That is, the collation unit 132 is caused to calculate the correlation function φ(x′, y′) between MVin and MVdatabase.
  • the matching unit 132 may calculate the value of the correlation function as the texture ID.
  • Here, MVin represents the second motion vector set (observation sequence), and MVdatabase represents the feature pattern model.
  • In the first method, the texture ID can be calculated by simpler computation than in the second and third methods described below. For this reason, the texture ID can be calculated even when hardware resources are relatively limited (for example, when the processing performance of the processor is relatively low). Therefore, the signal processing unit 10 can be realized with a simple configuration.
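  • The patent does not fix the exact form of the correlation function, so the following sketch is one plausible reading: a normalized cross-correlation between the observed vector field MVin and a stored model MVdatabase, flattened to arrays. Names and sample data are illustrative.

```python
import numpy as np

def correlation_score(mv_in, mv_database):
    """Normalised cross-correlation between an observed motion vector set
    (MVin) and a stored feature pattern model (MVdatabase), both given
    as (B, A, 2) arrays of (dx, dy) per block. The score lies in
    [-1, 1] and can serve as a simple matching degree / texture ID."""
    a = np.asarray(mv_in, dtype=np.float64).ravel()
    b = np.asarray(mv_database, dtype=np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

pattern = np.tile([3.0, 0.0], (4, 4, 1))     # stored "steady flow" model
observed = pattern + np.random.normal(0, 0.3, pattern.shape)
print(correlation_score(observed, pattern))  # close to 1.0 for a match
```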
  • (Second method) A probability model such as an HMM (Hidden Markov Model) is adopted. FIG. 5 is a diagram illustrating an example of the HMM. In FIG. 5, t is a symbol representing time, a_ij is a state transition probability, and b_ij is an output probability.
  • The collation unit 132 is caused to calculate the probability P(Y) that the HMM outputs the observation sequence Y (= MVin). The collation unit 132 may then take the value of the probability P(Y) as the texture ID.
  • the texture ID can be calculated with higher accuracy than in the first method described above.
  • In the second method, the texture ID can be appropriately calculated even when non-local deformation (that is, deformation with continuous expansion and contraction) occurs in an object (when the motion vector distribution includes expansion and contraction). For example, when such deformation is expected, the second method may be adopted. On the other hand, in the second method, an appropriate probability model needs to be set in advance by the designer of the display device 1.
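  • To make the second method concrete, here is a minimal forward-algorithm sketch computing P(Y), the probability that an HMM emits an observation sequence Y. The states and the a_ij, b_ij values are invented placeholders, and a real system would first have to quantize motion vectors into discrete symbols.

```python
import numpy as np

def forward_probability(pi, A, B, obs):
    """P(Y): probability that the HMM defined by (pi, A, B) outputs the
    observation sequence `obs`, computed with the forward algorithm.
    pi: (S,) initial state probabilities; A: (S, S) state transition
    probabilities a_ij; B: (S, K) output probabilities; obs: symbol ids."""
    alpha = pi * B[:, obs[0]]
    for symbol in obs[1:]:
        alpha = (alpha @ A) * B[:, symbol]
    return float(alpha.sum())

# Invented 2-state model over 3 quantized motion vector symbols
# (e.g., vector magnitudes binned into "slow" / "medium" / "fast").
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])
Y = [0, 1, 2, 2, 1]
print(forward_probability(pi, A, B, Y))  # usable as a texture ID score
```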
  • (Third method) Deep learning technology is adopted. For example, using a neural network such as a CNN (Convolutional Neural Network), the pattern setting unit is caused to learn a feature pattern model in advance. Then, the collation unit 132 is caused to compare the above-described observation sequence MVin with the feature pattern model. In this case, the collation unit 132 may output the comparison result as the texture ID.
  • the texture ID can be calculated with higher accuracy than in the second method described above.
  • If the feature pattern model is learned by the pattern setting unit using sufficient hardware resources, the texture ID can be expected to be calculated with particularly high accuracy.
  • Further, the feature pattern model can be obtained by appropriate learning even if the designer of the display device 1 does not specify in advance the individual features of the motion vectors according to the type of texture (for example, glossiness). Therefore, it is expected that texture IDs corresponding to a wide range of textures can be acquired by a more flexible method.
  • On the other hand, the third method requires particularly abundant hardware resources to be prepared.
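  • As a sketch of the third method's shape, assuming PyTorch is available: a small CNN that takes a 2-channel (dx, dy) motion vector field and outputs a score per texture class. The architecture, class set, and sizes are invented for illustration, not the patent's network.

```python
import torch
import torch.nn as nn

class TextureCNN(nn.Module):
    """Tiny CNN over a motion vector field: input (N, 2, B, A) with the
    two channels holding dx and dy per block, output one score per
    texture class (e.g., sarasara / neba-neba / kira-kira)."""
    def __init__(self, num_textures=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one 32-d descriptor
        )
        self.classifier = nn.Linear(32, num_textures)

    def forward(self, mv_field):
        return self.classifier(self.features(mv_field).flatten(1))

model = TextureCNN()
mv_field = torch.randn(1, 2, 8, 8)          # one 8 x 8-block vector field
texture_id = model(mv_field).argmax(dim=1)  # index of best-matching texture
print(texture_id.item())
```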
  • the texture correction unit 14 includes a parameter setting unit 141 and a second correction unit 142 (correction unit).
  • the parameter setting unit 141 selects (sets) an image quality parameter based on the texture ID acquired from the matching unit 132.
  • In the parameter setting unit 141, a table of image quality parameters associated with texture IDs is set in advance.
  • the parameter setting unit 141 refers to the table and selects an image quality parameter corresponding to the texture ID. Then, the parameter setting unit 141 supplies the selected image quality parameter to the second correction unit 142.
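  • A minimal sketch of such a preset table and lookup; the texture IDs and parameter names are hypothetical placeholders, since the patent does not enumerate them.

```python
# Hypothetical preset table mapping texture IDs to image quality parameters.
PARAM_TABLE = {
    "kirakira": {"contrast": 0.20, "brightness": 0.10, "hdr_expand": True},
    "sarasara": {"contrast": 0.05, "sharpness": 0.15, "hdr_expand": False},
    "fuwafuwa": {"contrast": -0.05, "contour_soften": 0.30, "hdr_expand": False},
}
NEUTRAL_PARAMS = {"contrast": 0.0, "hdr_expand": False}

def select_params(texture_id):
    """Parameter setting unit: look the texture ID up in the preset
    table, falling back to neutral parameters for unknown textures."""
    return PARAM_TABLE.get(texture_id, NEUTRAL_PARAMS)

print(select_params("kirakira"))
```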
  • The second correction unit 142 may be a functional unit that performs known image quality correction processing, similarly to the first correction unit 11 described above. However, the second correction unit 142 may also perform image quality correction processing by a novel, not yet publicly known method (image quality correction processing mainly aimed at reproducing texture).
  • the moving image C is supplied from the interpolated image generation unit 121 to the second correction unit 142.
  • the second correction unit 142 processes the moving image C using the image quality parameter selected by the parameter setting unit 141.
  • the moving image after processing in the second correction unit 142 is also referred to as a moving image D (output moving image).
  • the second correction unit 142 supplies the moving image D to the display unit 70.
  • the second correction unit 142 can perform processing according to the texture of the object expressed in the moving image C using the image quality parameter selected by the parameter setting unit 141.
  • By the image quality correction processing, it is possible, for example, to enhance the glossiness or texture of an object.
  • the second correction unit 142 can generate the moving image D with a higher texture. Therefore, it is possible to provide the user with the moving image D in which the texture is more emphasized (expressed more effectively).
  • the second correction unit 142 may process the moving image C using an image quality parameter corresponding to the texture ID of the object that occupies the most part in the moving image C. That is, the second correction unit 142 may process the moving image C based on one texture ID (main texture ID).
  • the second correction unit 142 may process the moving image C using image quality parameters corresponding to the texture IDs of the plurality of objects.
  • As an example, the second correction unit 142 may use an image quality parameter corresponding to the texture ID of the object that occupies the largest part of each of the partial regions shown in FIG. 3. In this case, the second correction unit 142 processes each area of the moving image C corresponding to each partial area (see the sketch below).
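  • A minimal sketch of this per-region variant, using a simple contrast gain around mid-gray as a stand-in for the actual correction; the region boxes and gains are invented placeholders for the texture detection result.

```python
import numpy as np

# Hypothetical outcome of texture detection per partial region:
# (top, left, height, width) box -> contrast gain for that texture.
REGION_GAINS = {
    (0, 0, 128, 128): 1.20,    # glossy object: boost contrast
    (128, 0, 128, 128): 0.95,  # fluffy object: soften slightly
}

def correct_regions(frame, region_gains=REGION_GAINS):
    """Apply, to each area of the frame, the parameter selected for the
    texture that dominates the corresponding partial region. A simple
    contrast gain around mid-gray stands in for the real correction."""
    out = frame.astype(np.float32)
    for (top, left, h, w), gain in region_gains.items():
        patch = out[top:top + h, left:left + w]
        out[top:top + h, left:left + w] = (patch - 128.0) * gain + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
print(correct_regions(frame).shape)
```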
  • the parameter setting unit 141 may also supply the selected image quality parameter to the first correction unit 11 described above.
  • the first correction unit 11 can perform the same image quality correction process as that of the second correction unit 142 on the moving image A described above.
  • each of the first correction unit 11 and the second correction unit 142 can perform processing according to the texture of the object, so that the texture can be more effectively emphasized.
  • the second correction unit 142 may further perform processing for enhancing the texture pattern of the object expressed in the moving image C.
  • the second correction unit 142 may store in advance a table indicating the correspondence relationship between the texture ID and the texture pattern.
  • In this case, the second correction unit 142 may impart a texture pattern to an object according to the texture ID.
  • the second correction unit 142 may enhance the texture pattern of the object by performing a filter process.
  • The second correction unit 142 may also perform special processing, such as HDR (High Dynamic Range) expansion, on an object according to the texture ID.
  • the brightness of the object can be partially enhanced, so that a predetermined texture (eg, glossiness) of the object can be enhanced.
  • the second correction unit 142 may perform a process of distorting the object and the area around the object according to the texture ID. Also by this processing, the texture of the object can be enhanced. As described above, the second correction unit 142 may be configured to perform an arbitrary process for enhancing the texture of an object according to the texture ID.
  • the signal processing unit 10 in the display device 1 includes a texture detection unit 13 and a texture correction unit 14 (components of the above-described texture processing unit).
  • the texture detecting unit 13 determines the texture of each object expressed in the moving image by analyzing the motion vector of the moving image. Then, the texture detection unit 13 supplies the texture correction unit 14 with the texture ID indicating the determination result.
  • the texture correction unit 14 processes the moving image based on the texture ID (according to the determination result of the texture detection unit 13). That is, the moving image can be processed by the texture correction unit 14 so as to more effectively express the texture of the object. Therefore, according to the signal processing unit 10, it is possible to improve the texture of the object expressed in the moving image.
  • According to the signal processing unit 10, even if (i) the resolution of the moving image is not necessarily sufficiently high, or (ii) the moving image has deteriorated during decoding in the decoding unit 61, the texture of an object can be expressed effectively. That is, a moving image that sufficiently expresses the texture of an object can be provided with a simpler configuration than before.
  • FIGS. 6(a) and 6(b) are diagrams for explaining the moving image processing in the texture correction unit 14. Specifically, FIGS. 6(a) and 6(b) each show an object before and after the moving image processing is performed.
  • FIG. 6(a) illustrates a case where moving image processing for enhancing the glossiness of an object (e.g., contrast adjustment, brightness adjustment, HDR expansion) is performed by the texture correction unit 14. In the case of FIG. 6(a), the motion vector is used in the processing of the texture detection unit 13 as one of the indices indicating the glossiness of the object.
  • FIG. 6(b) illustrates a case where moving image processing (e.g., contour correction) is performed by the texture correction unit 14 to enhance the “fuwa-fuwa” (fluffy) feeling of the object (a material feeling expressing lightness). In the case of FIG. 6(b), the motion vector is used in the processing of the texture detection unit 13 as one of the indices indicating that fluffy feeling.
  • As described above, the moving image processing method according to an aspect of the present disclosure includes: (i) a texture determination step of determining the texture of an object represented in a moving image by analyzing motion vectors of the moving image; and (ii) a moving image processing step of processing the moving image according to the determination result in the texture determination step.
  • FIG. 7 is a functional block diagram schematically showing the configuration of the signal processing unit 20 (moving image processing apparatus) and its periphery according to the second embodiment.
  • the display device of Embodiment 2 is referred to as a display device 2.
  • Portions not shown in FIG. 7 are the same as those of the display device 1 of FIG. 1. This also applies to each embodiment described below.
  • the signal processing unit 20 has a configuration in which the first correction unit 11 and the interpolated image generation unit 121 are excluded from the signal processing unit 10 of the first embodiment.
  • the above-described moving image A (input moving image) is supplied from the decoding unit 61 to each of the motion vector calculation unit 122 and the texture correction unit 14.
  • the motion vector calculation unit 122 supplies the set of motion vectors of the moving image A to the extraction unit 131 of the texture detection unit 13 as the first motion vector set. Similar to the first embodiment, the extraction unit 131 extracts a second motion vector set from the first motion vector set. As in the first embodiment, the collation unit 132 compares the second motion vector set with various feature pattern models included in the feature pattern model DB 91. Since the specific processing of the texture detection unit 13 is the same, the description thereof is omitted.
  • the texture correction unit 14 processes the moving image A based on the texture ID acquired from the matching unit 132 of the texture detection unit 13. That is, the texture correction unit 14 processes the moving image A to generate a moving image D (output moving image) and supplies the moving image D to the display unit 70.
  • That is, in the signal processing unit 20, the image quality correction processing by the first correction unit 11 performed before the texture is determined, and the frame rate conversion by the interpolated image generation unit 121, are omitted. Accordingly, the configuration of the moving image processing apparatus can be simplified as compared with Embodiment 1.
  • FIG. 8 is a functional block diagram schematically showing the configuration of the signal processing unit 30 (moving image processing apparatus) and its periphery according to the third embodiment. Note that the display device of Embodiment 3 is referred to as a display device 3.
  • the decoding unit 61 acquires the compressed moving image data from the receiving unit 60.
  • In Embodiment 3, the compressed moving image data includes information indicating motion vectors (hereinafter referred to as “motion vector information”). MPEG-4 can be cited as an example of a format of compressed moving image data that includes such motion vector information.
  • the signal processing unit 30 has a configuration in which the motion vector calculation unit 122 is excluded from the signal processing unit 20 of the second embodiment. That is, in the signal processing unit 30, the configuration of the moving image processing apparatus is further simplified as compared with the second embodiment described above.
  • the moving image A (input moving image) is supplied from the decoding unit 61 to the texture correction unit 14.
  • the extraction unit 131 acquires motion vector information included in the above-described compressed moving image data from the decoding unit 61.
  • The extraction unit 131 acquires the set of motion vectors indicated by the motion vector information as the first motion vector set, and extracts a second motion vector set from it. Then, as in each of the above-described embodiments, the collation unit 132 compares the second motion vector set with the various feature pattern models contained in the feature pattern model DB 91.
  • the texture detection unit 13 in the third embodiment performs the same processing as in each of the above-described embodiments using the motion vector information included in the compressed moving image data. That is, the texture detection unit 13 according to the third embodiment analyzes a motion vector included in advance in the compressed moving image data.
  • As described above, when motion vector information is included in the compressed moving image data, the moving image processing apparatus according to an aspect of the present disclosure can omit the process of calculating motion vectors. The configuration of the moving image processing apparatus is therefore further simplified.
  • FIG. 9 is a functional block diagram schematically showing the configuration of the signal processing unit 30v (moving image processing apparatus) and its periphery according to the present modification. Note that the display device of this modification is referred to as a display device 3v.
  • In FIG. 9, a signal processing unit 30v is illustrated as an example of a variation of the signal processing unit 30 of Embodiment 3, for convenience of explanation.
  • the configuration of the present modification may be applied to the above-described first and second embodiments and the later-described fourth embodiment.
  • auxiliary information may be further input to the texture detection unit 13 (more specifically, each of the extraction unit 131 and the collation unit 132).
  • Here, the auxiliary information means information, other than the motion vectors, that is included in the moving image. Examples include information indicating object boundaries, colors, texture patterns, and the like.
  • the texture detection unit 13 may further determine the texture of the object expressed in the moving image by further using auxiliary information in addition to the motion vector information (motion vector). According to this configuration, the texture can be determined by further considering the shape and color of the object indicated by the auxiliary information, and it is expected that the texture can be determined with higher accuracy.
  • FIG. 10 is a functional block diagram schematically illustrating the configuration of the signal processing unit 40 (moving image processing apparatus) and its periphery according to the fourth embodiment. Note that the display device of Embodiment 4 is referred to as a display device 4.
  • In FIG. 10, the signal processing unit 40 is illustrated as an example of a variation of the signal processing unit 30 of Embodiment 3, for convenience of explanation.
  • the configuration of the fourth embodiment may be applied to any of the above-described embodiments and modifications.
  • In each of the above-described embodiments, the texture of an object is determined using the motion vectors calculated for one frame (e.g., a reference frame) included in a moving image. That is, when the current frame is the Nth frame (N is an integer equal to or greater than 2), the texture of the object is determined using only the motion vectors in the Nth frame. However, the texture of an object may be determined by further using the motion vectors in past frames. That is, the texture of the object may be determined using the motion vector history in past frames.
  • the fourth embodiment exemplifies a configuration in which a motion vector history 92 described below is added to the above-described texture processing unit.
  • the signal processing unit 40 of the fourth embodiment shows a configuration in which the motion vector history 92 is further stored in the storage unit 90 in the signal processing unit 30 of the third embodiment.
  • The motion vector history 92 stores the sets of motion vectors in the first to (N−1)th frames. That is, the motion vector history 92 stores a history of the sets of motion vectors (first motion vector sets) in past frames.
  • Specifically, the decoding unit 61 stores, for each frame of a moving image, motion vector information indicating the set of motion vectors described above in the motion vector history 92. That is, the motion vector history 92 stores motion vector information indicating the sets of motion vectors in the first to (N−1)th frames (hereinafter also referred to as “past motion vector information”).
  • the extraction unit 131 acquires the first motion vector set of the current (Nth) frame as in the third embodiment. Specifically, the extraction unit 131 acquires, from the decoding unit 61, motion vector information indicating a set of motion vectors in the current frame as a first motion vector set.
  • the extraction unit 131 further acquires past motion vector information included in the motion vector history 92.
  • According to this configuration, the texture detection unit 13 can determine the texture of an object by using the motion vector history in past frames in addition to the motion vectors in the current frame.
  • the main elements that characterize the texture of an object include the pattern of movement of the object or the pattern of movement of reflected light. Therefore, by paying attention to the motion vector history in the past frame, the temporal transition of each pattern can be further considered. Therefore, it is expected that the texture can be discriminated with higher accuracy.
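  • A minimal sketch of such a history buffer, assuming a fixed depth of retained frames; the class and method names are hypothetical.

```python
from collections import deque

class MotionVectorHistory:
    """Sketch of the motion vector history 92: keeps the first motion
    vector sets of the last `depth` frames so that discrimination can
    use frames 1 .. N-1 in addition to the current (Nth) frame."""
    def __init__(self, depth=8):
        self._sets = deque(maxlen=depth)  # oldest sets fall off automatically

    def push(self, first_mv_set):
        self._sets.append(first_mv_set)

    def past_sets(self):
        return list(self._sets)

history = MotionVectorHistory(depth=3)
for n in range(1, 6):                # frames 1 .. 5
    history.push([[(n, 0)]])         # dummy one-block vector set per frame
print(history.past_sets())           # only the 3 most recent sets survive
```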
  • The control blocks of the display devices 1 to 4 (particularly the signal processing units 10 to 40) may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
  • In the latter case, the display devices 1 to 4 include a CPU that executes the instructions of a program, which is software realizing each function, a ROM (Read Only Memory) or storage device (referred to as a “recording medium”) in which the program and various data are recorded so as to be readable by a computer (or the CPU), a RAM (Random Access Memory) into which the program is loaded, and the like. The object of one aspect of the present disclosure is achieved when the computer (or CPU) reads the program from the recording medium and executes it.
  • As the recording medium, a “non-transitory tangible medium” such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
  • The program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program.
  • one aspect of the present disclosure can also be realized in the form of a data signal embedded in a carrier wave in which the program is embodied by electronic transmission.
  • A moving image processing apparatus (signal processing unit 10) according to Aspect 1 of the present disclosure includes a texture determination unit (texture detection unit 13) that determines the texture of an object represented in a moving image by analyzing motion vectors of the moving image, and a moving image processing unit (texture correction unit 14) that processes the moving image according to the determination result of the texture determination unit.
  • The inventor of the present application arrived at the novel technical idea of “determining the texture of an object represented in a moving image based on the motion vectors of the moving image and performing moving image processing according to the determination result”. The above configuration was conceived based on this technical idea.
  • the texture determination unit can determine the texture of the object expressed in the moving image by analyzing the motion vector of the moving image. Then, the moving image processing unit performs moving image processing according to the determination result. That is, it is possible to cause the moving image processing unit to perform moving image processing so as to more effectively express the texture of the object. Therefore, it is possible to enhance the texture of the object expressed in the moving image.
  • In the moving image processing apparatus according to Aspect 2 of the present disclosure, in Aspect 1, a feature pattern model, which is a model representing the motion vectors that express the texture or a model related to those motion vectors, is preferably set in advance, and the texture determination unit preferably determines the texture by comparing the motion vectors of the moving image with the feature pattern model.
  • In the moving image processing apparatus according to Aspect 3 of the present disclosure, in Aspect 2, the texture determination unit may generate, as the determination result, texture discrimination information indicating the degree to which the motion vectors of the moving image match the feature pattern model, by comparing the motion vectors of the moving image with the feature pattern model.
  • In the moving image processing apparatus according to Aspect 4 of the present disclosure, in any one of Aspects 1 to 3, the texture determination unit preferably analyzes motion vectors included in advance in compressed moving image data, which is data obtained by compressing the moving image.
  • the configuration of the moving image processing apparatus can be simplified.
  • A display device according to Aspect 5 of the present disclosure preferably includes the moving image processing apparatus according to any one of Aspects 1 to 4.
  • A moving image processing method according to Aspect 6 of the present disclosure includes a texture determination step of determining the texture of an object represented in a moving image by analyzing motion vectors of the moving image, and a moving image processing step of processing the moving image according to the determination result in the texture determination step.
  • The moving image processing apparatus according to each aspect of the present disclosure may be realized by a computer. In this case, a control program for the moving image processing apparatus that realizes the moving image processing apparatus by the computer by causing the computer to operate as each unit (software element) included in the moving image processing apparatus, and a computer-readable recording medium on which that control program is recorded, also fall within the scope of one aspect of the present disclosure.
  • Reference Signs List: 1, 2, 3, 3v, 4 display device; 10, 20, 30, 30v, 40 signal processing unit (moving image processing device); 13 texture detection unit (texture determination unit); 14 texture correction unit (moving image processing unit)

Abstract

The texture of an object represented in a moving-image is heightened. A signal processing unit (10) is provided with a texture detection unit (13) for analyzing the motion vector of a moving-image and thereby determining the texture of an object represented in the moving-image, and a texture correction unit (14) for performing a moving-image process that corresponds to the determination result of the texture detection unit (13).

Description

Moving image processing device, display device, moving image processing method, and control program
The following disclosure relates to a moving image processing apparatus and the like.
Conventionally, various moving image processing techniques have been proposed in order to provide a person (user, viewer) with moving images that look equivalent to the real thing. As an example, Patent Literature 1 discloses a technique for imparting a predetermined texture to a predetermined area in a moving image. Specifically, Patent Literature 1 discloses an image texture manipulation method for dynamically manipulating an image area (second image area) that gives a transparent-layer texture.
Japanese Unexamined Patent Application Publication No. 2016-126641 (published July 11, 2016)
However, with the invention disclosed in Patent Literature 1, it is not possible to perform moving image processing according to the texture of the individual objects represented in a moving image.
One aspect of the present disclosure has been made in view of the above problem, and an object thereof is to realize a moving image processing apparatus or the like capable of enhancing the texture of an object represented in a moving image.
In order to solve the above problem, a moving image processing apparatus according to an aspect of the present disclosure includes a texture determination unit that determines the texture of an object represented in a moving image by analyzing motion vectors of the moving image, and a moving image processing unit that processes the moving image according to the determination result of the texture determination unit.
In order to solve the above problem, a moving image processing method according to an aspect of the present disclosure includes a texture determination step of determining the texture of an object represented in a moving image by analyzing motion vectors of the moving image, and a moving image processing step of processing the moving image according to the determination result in the texture determination step.
The moving image processing apparatus according to one aspect of the present disclosure has the effect that the texture of an object represented in a moving image can be enhanced.
The same effect is also achieved by the moving image processing method according to an aspect of the present disclosure.
FIG. 1 is a functional block diagram illustrating the configuration of the main part of the display device according to Embodiment 1.
FIG. 2 is a schematic diagram for explaining motion vectors.
FIG. 3 is a diagram showing an example of the second motion vector set.
FIGS. 4(a) and 4(b) are diagrams for explaining the relationship between the viscosity of a liquid and its texture.
FIG. 5 is a diagram showing an example of an HMM.
FIGS. 6(a) and 6(b) are diagrams for explaining moving image processing in the display device of FIG. 1.
FIG. 7 is a functional block diagram schematically showing the configuration of the signal processing unit according to Embodiment 2 and its periphery.
FIG. 8 is a functional block diagram schematically showing the configuration of the signal processing unit according to Embodiment 3 and its periphery.
FIG. 9 is a functional block diagram schematically showing the configuration of the signal processing unit according to a modification and its periphery.
FIG. 10 is a functional block diagram schematically showing the configuration of the signal processing unit according to Embodiment 4 and its periphery.
[Embodiment 1]
Hereinafter, Embodiment 1 will be described in detail with reference to FIGS. 1 to 6. First, an overview of the display device 1 according to Embodiment 1 will be described with reference to FIG. 1. FIG. 1 is a functional block diagram illustrating the configuration of the main part of the display device 1.
(Overview of the display device 1)
The display device 1 includes a signal processing unit 10 (moving image processing device), a receiving unit 60, a decoding unit 61, a display unit 70, and a storage unit 90. As an example, the display device 1 may be a television or a PC (Personal Computer). Alternatively, the display device 1 may be a portable information terminal such as a multifunction mobile phone (smartphone) or a tablet.
As described below, in the display device 1, the signal processing unit 10 processes a moving image (input moving image) and outputs the processed moving image (output moving image) to the display unit 70. The display unit 70 displays the moving image. The display unit 70 may be, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display.
The signal processing unit 10 is provided as a part of a control unit (not shown) that comprehensively controls each unit of the display device 1. The function of the control unit may be realized by a CPU (Central Processing Unit) executing a program stored in the storage unit 90. The function of each part of the signal processing unit 10 will be described in detail later.
The storage unit 90 stores various programs executed by the signal processing unit 10 and data used by those programs. As an example, the storage unit 90 stores a feature pattern model DB (DataBase) 91. The feature pattern model DB 91 is a DB in which various feature pattern models (described later) are stored.
In Embodiment 1, the receiving unit 60 receives broadcast waves (radio waves). The decoding unit 61 acquires the compressed moving image data (moving image data compressed by a predetermined encoding method) included in the broadcast waves received by the receiving unit 60. Subsequently, the decoding unit 61 acquires an input moving image (input video signal) by decoding the compressed moving image data. The decoding unit 61 then supplies the acquired input moving image to the signal processing unit 10 (more specifically, to the first correction unit 11 described later).
Hereinafter, for simplicity, the input moving image (the moving image input to the signal processing unit 10) is also referred to as the moving image A. The moving image A is the moving image to be processed by the signal processing unit 10. As an example, the moving image A may have a resolution of 4K2K (3840 horizontal pixels × 2160 vertical pixels). However, the resolution of each moving image described in Embodiment 1 is not limited to the above and may be set as appropriate.
Note that the receiving unit 60 and the decoding unit 61 may be provided as an integrated functional unit. For example, a known tuner can be used as the receiving unit 60 and the decoding unit 61.
If moving image A is stored in the storage unit 90 in advance, the signal processing unit 10 may acquire moving image A from the storage unit 90. Alternatively, the signal processing unit 10 may acquire moving image A from an external device (e.g., a digital movie camera) connected to the display device 1.
As described below, the signal processing unit 10 processes moving image A supplied from the receiving unit 60 and generates an output moving image (an output video signal). The signal processing unit 10 (more specifically, a texture correction unit 14 described later) then supplies the output moving image to the display unit 70, which can thus display it. A display control unit (not shown) that controls the operation of the display unit 70 may be provided in the signal processing unit 10 or in the display unit 70 itself.
(Signal processing unit 10)
Next, the specific configuration of the signal processing unit 10 will be described. As shown in FIG. 1, the signal processing unit 10 includes a first correction unit 11, a frame rate conversion unit 12, a texture detection unit 13 (texture determination unit), and a texture correction unit 14 (moving image processing unit).
As described below, the texture detection unit 13, the texture correction unit 14, and the feature pattern model DB 91 are the main parts of the moving image processing apparatus according to one aspect of the present disclosure. They may be collectively referred to as the "texture processing unit". In FIG. 1 and in each subsequent drawing, the texture processing unit is indicated by a dotted line for convenience of explanation.
The first correction unit 11 processes the moving image A described above. Hereinafter, the moving image after processing by the first correction unit 11 is also referred to as moving image B. The processing in the first correction unit 11 may be a known image quality correction process. As an example, the first correction unit 11 may apply scaling (a resolution change) to moving image A. In this case, the resolution of the moving image displayed on the display unit 70 can be converted to one matching the performance specifications of the display unit 70.
Note, however, that the first correction unit 11 is not an essential component of the signal processing unit 10, as shown in FIG. 7 and other figures described later. For example, if the resolution of moving image A already matches the performance specifications of the display unit 70, there is no need for the first correction unit 11 to generate moving image B (to convert the resolution).
The first correction unit 11 may also set the image quality parameters of moving image A (e.g., parameters indicating brightness, contrast, color depth, peaking, the degree of edge enhancement, and the like) according to a user operation. In this case, the first correction unit 11 processes moving image A using the set image quality parameters. The first correction unit 11 may be operated in this way, for example, when the image quality parameters are to be freely selected by the user according to how the device is used.
The first correction unit 11 supplies moving image B to the frame rate conversion unit 12 (more specifically, to each of the interpolated image generation unit 121 and the motion vector calculation unit 122 described below). When the first correction unit 11 is omitted from the signal processing unit 10, moving image A may be supplied from the decoding unit 61 to the frame rate conversion unit 12.
The frame rate conversion unit 12 includes an interpolated image generation unit 121 and a motion vector calculation unit 122. The interpolated image generation unit 121 performs processing for increasing the frame rate of moving image B. Specifically, the interpolated image generation unit 121 extracts each of the frames constituting moving image B. Each extracted frame may be stored, for example, in a frame memory (not shown), which may be provided inside or outside the frame rate conversion unit 12.
The interpolated image generation unit 121 then generates interpolation frames (intermediate frames) from the extracted frames using a known algorithm. For example, the interpolated image generation unit 121 may generate the interpolation frames using the motion vectors described below. The interpolated image generation unit 121 increases the frame rate of moving image B by inserting an interpolation frame at every predetermined frame interval.
Hereinafter, the moving image after processing by the interpolated image generation unit 121 is also referred to as moving image C. As an example, the frame rate conversion unit 12 may double the frame rate of moving image B. For example, when the frame rate of moving image B is 60 fps (frames per second), the interpolated image generation unit 121 generates a moving image C with a frame rate of 120 fps. The interpolated image generation unit 121 then supplies moving image C to the texture correction unit 14 (more specifically, to the second correction unit 142 described below).
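As a concrete illustration of this frame rate doubling, the following Python sketch inserts one interpolated frame between each pair of source frames. It is a minimal sketch under stated assumptions: a simple per-pixel average stands in for the motion-compensated interpolation that the interpolated image generation unit 121 would actually perform, and the function name is hypothetical.

import numpy as np

def double_frame_rate(frames):
    # Insert one intermediate frame between every pair of consecutive
    # frames, turning N frames at 60 fps into 2N-1 frames at ~120 fps.
    # The average below is a placeholder for motion-compensated
    # interpolation using the motion vectors described later.
    doubled = []
    for prev_frame, next_frame in zip(frames, frames[1:]):
        doubled.append(prev_frame)
        mid = (prev_frame.astype(np.float32) + next_frame.astype(np.float32)) / 2.0
        doubled.append(mid.astype(prev_frame.dtype))
    doubled.append(frames[-1])
    return doubled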
Note that the frame rate conversion factor in the frame rate conversion unit 12 is not limited to the above and may be set as appropriate. Likewise, the frame rate of each moving image described in Embodiment 1 is not limited to the above.
Providing the interpolated image generation unit 121 allows the frame rate of the moving image displayed on the display unit 70 to be converted to one matching the performance specifications of the display unit 70. Note, however, that the interpolated image generation unit 121 is not an essential component of the signal processing unit 10, as shown in FIG. 7 and other figures described later. For example, if the frame rate of moving image B (moving image A) already matches the performance specifications of the display unit 70, there is no need for the interpolated image generation unit 121 to generate moving image C (to convert the frame rate).
The motion vector calculation unit 122 calculates (detects) motion vectors by analyzing moving image B (more specifically, each frame of moving image B stored in the frame memory). A known algorithm may be used for the motion vector calculation in the motion vector calculation unit 122.
When the interpolated image generation unit 121 is omitted from the frame rate conversion unit 12, the function of extracting each frame from moving image B may be given to the motion vector calculation unit 122. Furthermore, as shown in FIG. 8 and other figures described later, the motion vector calculation unit 122 can also be omitted from the signal processing unit 10. In other words, note that the frame rate conversion unit 12 itself is not an essential component of the signal processing unit 10.
Next, motion vectors will be explained. First, consider the case where each frame constituting a moving image (e.g., moving image B) is spatially divided into a plurality of blocks (regions). A motion vector is a vector indicating the positional shift between a block (more specifically, a virtual object located within the block) in one frame (e.g., a reference frame) and the corresponding block in another frame following that frame (e.g., the frame after the reference frame).
In other words, a motion vector indicates to which position a block in one frame has moved in a subsequent frame. The motion vector is used as an index of the amount of movement of that block.
FIG. 2 is a schematic diagram for explaining motion vectors. As shown in FIG. 2, each frame of moving image B is uniformly divided into blocks of horizontal length a and vertical length b (in pixels). Here, the number of horizontal pixels of moving image B is denoted H, and the number of vertical pixels is denoted V.
In this case, each frame is divided into (H/a) blocks horizontally and (V/b) blocks vertically, that is, into (H/a) × (V/b) blocks in total. The values of a, b, H, and V may each be set arbitrarily.
Here, one of the blocks in FIG. 2 is denoted block (x, y), where x and y are indices (numbers) indicating the horizontal and vertical position within the frame, respectively. The block at the upper left in FIG. 2 is block (0, 0). In FIG. 2, (i) the horizontal block number increases by one from left to right, and (ii) the vertical block number increases by one from top to bottom. Therefore, 0 ≦ x ≦ H/a-1 and 0 ≦ y ≦ V/b-1.
As shown in FIG. 2, the motion vector of block (x, y) is denoted MV(x, y). The motion vector calculation unit 122 calculates a motion vector for each block in FIG. 2.
Hereinafter, the set of motion vectors calculated by the motion vector calculation unit 122 for all the blocks of one frame is referred to as the first motion vector set. The first motion vector set is expressed as the following set MVSet1:

MVSet1 = {MV(0,0), MV(1,0), …, MV(H/a-1,0), MV(0,1), MV(1,1), …, MV(H/a-1,1), …, MV(H/a-1,V/b-1)}
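As an illustration, the following Python sketch computes such a first motion vector set with full-search block matching over two consecutive grayscale frames. It is a toy stand-in for the known algorithms referred to above; the block size and search range are illustrative assumptions.

import numpy as np

def first_motion_vector_set(frame0, frame1, a=16, b=16, search=8):
    # Divide the frame into (H/a) x (V/b) blocks and, for each block,
    # find the displacement (dx, dy) in the next frame that minimizes
    # the sum of absolute differences (full search).
    V, H = frame0.shape
    mvset1 = np.zeros((V // b, H // a, 2), dtype=np.int32)
    for y in range(V // b):
        for x in range(H // a):
            block = frame0[y * b:(y + 1) * b, x * a:(x + 1) * a].astype(np.int32)
            best_vec, best_err = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ty, tx = y * b + dy, x * a + dx
                    if 0 <= ty <= V - b and 0 <= tx <= H - a:
                        cand = frame1[ty:ty + b, tx:tx + a].astype(np.int32)
                        err = np.abs(block - cand).sum()
                        if best_err is None or err < best_err:
                            best_err, best_vec = err, (dx, dy)
            mvset1[y, x] = best_vec
    return mvset1  # mvset1[y, x] corresponds to MV(x, y)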
The motion vector calculation unit 122 supplies the first motion vector set described above to the interpolated image generation unit 121 and to the texture detection unit 13 (more specifically, to the extraction unit 131 described below).
The texture detection unit 13 includes an extraction unit 131 and a matching unit 132. As described below, the texture detection unit 13 analyzes the motion vectors of a moving image (e.g., moving image B) to determine the texture of each object represented in that moving image (more specifically, of the image of each object present in the moving image). The texture detection unit 13 then supplies the determination result (a texture ID, described later) to the texture correction unit 14. Each part of the texture detection unit 13 is described in detail below.
The extraction unit 131 extracts (acquires) a part (a subset) of the first motion vector set described above. Hereinafter, this subset is referred to as the second motion vector set.
FIG. 3 is a diagram illustrating an example of the second motion vector set. As illustrated in FIG. 3, the extraction unit 131 may extract, as a partial region of each frame, the region composed of blocks (m, n) to (m+A-1, n+B-1). The values of m, n, A, and B may be arbitrary, as long as they are set so that the partial region does not spatially extend beyond the frame.
The extraction unit 131 then acquires the motion vector of each block within the partial region. That is, of the first motion vector set, the extraction unit 131 acquires the set of motion vectors of the blocks in the partial region as the second motion vector set, and supplies the second motion vector set to the matching unit 132.
In the example of FIG. 3, the second motion vector set is expressed as the following set MVSet2:

MVSet2 = {MV(m,n), MV(m+1,n), …, MV(m+A-1,n), MV(m,n+1), MV(m+1,n+1), …, MV(m+A-1,n+1), …, MV(m+A-1,n+B-1)}
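Given the array produced by the block-matching sketch above, extracting the second motion vector set is a simple slice over the partial region. A minimal sketch, assuming the same (rows, cols, 2) layout:

def second_motion_vector_set(mvset1, m, n, A, B):
    # Blocks (m, n) .. (m+A-1, n+B-1): rows index the vertical block
    # number y, columns the horizontal block number x.
    rows, cols = mvset1.shape[:2]
    assert 0 <= m and m + A <= cols and 0 <= n and n + B <= rows
    return mvset1[n:n + B, m:m + A]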
The matching unit 132 compares (matches) the second motion vector set (the motion vectors) against the various feature pattern models contained in the feature pattern model DB 91 described above. Here, a "feature pattern model" may be a model directly representing the motion vectors (more specifically, a set of motion vectors) that express the texture of an object represented in a moving image. Alternatively, the feature pattern model may be a model related to such texture-expressing motion vectors (a set of motion vectors).
As an example, a feature pattern model is a set of motion vectors derived (set) by prior learning (automatic learning) on second motion vector sets with a pattern recognition technique. The feature pattern models may be set in the display device 1 in advance. As described below, the matching unit 132 can determine the texture of an object based on the feature pattern models.
In Embodiment 1, for convenience of explanation, the matching unit 132 also serves as the functional unit that sets the feature pattern models (the pattern setting unit). However, the pattern setting unit may be provided as a functional unit separate from the matching unit 132.
The pattern setting unit performs the above learning using a known algorithm, sets a feature pattern model, and stores it in the feature pattern model DB 91. By performing this processing on various moving images (moving images in which different objects were shot), a feature pattern model corresponding to the texture of each object can be stored in the feature pattern model DB 91.
The "texture" referred to in this specification is a sensation perceived by a person (a user or viewer), and among such sensations means, in particular, a sense of gloss or of material quality that is perceived through dynamic change.
As an example, such sensations are expressed by onomatopoeic or mimetic words such as "sarasara" (smoothly flowing), "nebaneba" (sticky), "yurayura" (swaying), "fuwafuwa" (fluffy), and "kirakira" (sparkling).
Thus, the texture referred to in this specification need not directly specify the material of an object (e.g., metal or paper). It may, however, simulate visual texture in the general sense.
FIG. 3, described above, schematically shows an example of the second motion vector set in the case where the object represented in the moving image is a highly viscous liquid (e.g., oil). The motion vectors in the second motion vector set serve as one index of the flow velocity of the liquid, and the flow velocity generally depends on the viscosity of the fluid. The magnitude of the liquid's viscosity can therefore be distinguished from the motion vectors, and hence so can differences in texture corresponding to viscosity.
FIGS. 4(a) and 4(b) are diagrams for explaining the relationship between the viscosity and the texture of a liquid. FIG. 4(a) shows a liquid of low viscosity (e.g., water). When a person sees a low-viscosity liquid flowing (moving), the liquid flows at a comparatively high velocity, so the person typically perceives a smoothly flowing ("sarasara") sensation.
FIG. 4(b), on the other hand, shows a liquid of high viscosity (e.g., oil). When a person sees a high-viscosity liquid flowing, the liquid flows at a comparatively low velocity, so the person typically perceives a sticky ("nebaneba") sensation.
Furthermore, in its natural state a liquid generally moves more smoothly overall (deforms more easily) than a solid. That is, the movement patterns of liquids and solids differ greatly. Even among solids, the movement pattern differs depending on, for example, differences in rigidity. It can therefore be said that each object has a movement pattern characteristic of its texture.
When light strikes an object, the object's surface reflects it, forming a luminance distribution of reflected light on the surface. A person perceives the glossiness of the object (a sparkling, "kirakira" sensation) from this luminance distribution. In addition, a person perceives the glossiness of the object still more concretely from the movement (change) of the reflected light (the specular reflection component or the diffuse reflection component) that accompanies a change of viewpoint, movement of the object, and the like.
It follows that the movement pattern of the reflected light is also characteristic of the texture of each object. Motion vectors representing the movement pattern of the reflected light can therefore be used as one index of the glossiness of an object.
In this way, the motion vectors of a moving image can be used as one index of the texture of an object (in particular, of the object's surface) represented in that moving image. Based on the above, the inventor of the present application arrived at the novel technical idea (insight) of determining (detecting, estimating) the texture of an object represented in a moving image based on the motion vectors of the moving image, and performing moving image processing according to the determination result. The components of the texture processing unit described above (for example, the texture detection unit 13 and the texture correction unit 14 included in the signal processing unit 10) were conceived based on this technical idea.
The matching unit 132 calculates (evaluates) the relevance (degree of match) of the second motion vector set (the motion vectors) to a feature pattern model. That is, the matching unit 132 obtains information indicating how closely the texture of the object indicated by the second motion vector set matches the texture of the object indicated by the feature pattern model.
Hereinafter, this information is also referred to as the texture ID (texture determination information). The texture ID may be understood as information indicating the result of the matching unit 132 determining the texture of each object represented in the moving image (e.g., moving image B). The texture detection unit 13 supplies the texture ID to the texture correction unit 14 (more specifically, to the parameter setting unit 141 described below).
The matching unit 132 may obtain the texture ID using a known pattern recognition technique for two-dimensional vector sets (vector sequences). Three specific examples of such pattern recognition techniques are given below.
(First method)
A correlation function between the second motion vector set and the feature pattern model is computed. For example, the matching unit 132 is made to compute the correlation function ΦMVin,MVdatabase(x′, y′) of equation (1), here written in the standard cross-correlation form:

ΦMVin,MVdatabase(x′, y′) = Σx Σy MVin(x, y) · MVdatabase(x + x′, y + y′)   …(1)

The matching unit 132 may calculate the value of this correlation function as the texture ID. Here, MVin denotes the second motion vector set (the observation series), and MVdatabase denotes the feature pattern model.
Compared with the second and third methods described below, the first method can calculate the texture ID with simple computation. The texture ID can therefore be calculated even when hardware resources are relatively limited (e.g., when the processing performance of the processor is relatively low), and the signal processing unit 10 can be realized with a simple configuration.
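A minimal sketch of the first method, assuming the cross-correlation form of equation (1) given above and a hypothetical dictionary of feature pattern models; the peak correlation against each model serves as its degree of match:

import numpy as np

def correlation_map(mv_in, mv_db):
    # Phi(x', y'): sum over all blocks of the dot product between the
    # observed vectors and the model vectors shifted by (x', y').
    rows, cols, _ = mv_db.shape
    phi = np.zeros((rows, cols), dtype=np.float64)
    for yp in range(rows):
        for xp in range(cols):
            shifted = np.roll(mv_db, shift=(yp, xp), axis=(0, 1))
            phi[yp, xp] = float(np.sum(mv_in * shifted))
    return phi

def texture_id(mv_in, pattern_db):
    # pattern_db: hypothetical mapping of texture label -> model field.
    scores = {label: correlation_map(mv_in, model).max()
              for label, model in pattern_db.items()}
    return max(scores, key=scores.get)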
(Second method)
A matching technique using a probabilistic model is adopted. As the probabilistic model, for example, an HMM (Hidden Markov Model) may be used. FIG. 5 is a diagram illustrating an example of an HMM.
In the HMM of FIG. 5, the latent variables are expressed as x(t) = {x1, x2, x3} and the observed values as y(t) = {y1, y2, y3, y4}, where t denotes time. aij is a state transition probability and bij is an output probability. When the second method is used, the pattern setting unit sets the values of aij and bij in advance by prior learning and stores them in the feature pattern model DB 91.
The matching unit 132 is then made to calculate the probability P(Y) based on equation (2), which expresses P(Y) as the sum of the probabilities over the latent state series X = x1, x2, x3:

P(Y) = ΣX P(Y, X)   …(2)

The matching unit 132 may calculate the value of this probability P(Y) as the texture ID. Here, P(Y) is the probability that the observation series MVin = Y = y(1), y(2), y(3), y(4) is obtained.
Compared with the first method described above, the second method can calculate the texture ID with higher accuracy. For example, the texture ID can be calculated appropriately even when the object undergoes non-local deformation (that is, deformation with continuous stretching and shrinking), in which case the distribution of motion vectors stretches and shrinks as well. The second method may be adopted when hardware resources are relatively abundant (e.g., when the processing performance of the processor is relatively high). When the second method is adopted, an appropriate probabilistic model needs to be set in advance by the designer of the display device 1.
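A minimal sketch of the second method: the forward algorithm computes P(Y) of equation (2) by summing over all latent state series. The initial state distribution pi and the quantization of motion vectors into observation symbols are assumptions not spelled out above.

import numpy as np

def hmm_likelihood(a, b, pi, observations):
    # a[i, j]: state transition probability a_ij
    # b[i, k]: output probability of observation symbol k in state i
    # pi[i]:   initial state distribution (an assumption)
    # observations: symbol indices y(1)..y(T), e.g. quantized motion vectors
    alpha = pi * b[:, observations[0]]
    for y_t in observations[1:]:
        alpha = (alpha @ a) * b[:, y_t]
    return alpha.sum()  # P(Y), summed over all latent state series X

# Toy example with 3 latent states and 4 observation symbols.
a = np.full((3, 3), 1.0 / 3.0)
b = np.full((3, 4), 1.0 / 4.0)
pi = np.full(3, 1.0 / 3.0)
p_y = hmm_likelihood(a, b, pi, [0, 2, 1, 3])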
(Third method)
Deep learning technology is adopted. For example, the pattern setting unit is made to learn the feature pattern models in advance using a neural network such as a CNN (Convolutional Neural Network). The matching unit 132 is then made to compare the above observation series MVin with the learned feature pattern models, and may output the comparison result as the texture ID.
Compared with the second method described above, the third method can calculate the texture ID with still higher accuracy. In particular, when the pattern setting unit is made to learn the feature pattern models using ample hardware resources, especially accurate calculation of the texture ID can be expected.
Moreover, with the third method, feature pattern models can be obtained through appropriate learning even if the designer of the display device 1 does not specify in advance the individual motion vector features corresponding to each type of texture (e.g., glossiness). It is therefore expected that texture IDs covering a wide range of textures can be obtained in a more flexible manner. However, adopting the third method requires particularly abundant hardware resources.
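A minimal sketch of the third method using PyTorch: a small CNN takes the two-channel (dx, dy) motion vector field of the partial region and outputs a score per texture class. The architecture, layer sizes, and class count are illustrative assumptions, not taken from this disclosure.

import torch
import torch.nn as nn

class TexturePatternCNN(nn.Module):
    # Input: (batch, 2, B, A) motion vector field of the partial region.
    # Output: one score per texture class; the argmax (or the score
    # vector itself) can play the role of the texture ID.
    def __init__(self, num_textures=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_textures)

    def forward(self, mv_field):
        x = self.features(mv_field).flatten(1)
        return self.classifier(x)

# Example: TexturePatternCNN()(torch.randn(1, 2, 16, 16)) yields a (1, 8) score tensor.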
Next, the texture correction unit 14 will be described. The texture correction unit 14 includes a parameter setting unit 141 and a second correction unit 142 (correction unit). The parameter setting unit 141 selects (sets) image quality parameters based on the texture ID acquired from the matching unit 132.
Specifically, a table of image quality parameters associated with texture IDs is set in the parameter setting unit 141 in advance. The parameter setting unit 141 refers to this table, selects the image quality parameters corresponding to the texture ID, and supplies the selected image quality parameters to the second correction unit 142.
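A minimal sketch of the table lookup in the parameter setting unit 141; the texture labels, parameter names, and values below are hypothetical design choices, not values from this disclosure:

PARAMETER_TABLE = {
    # texture ID -> image quality parameters (all values illustrative)
    "glossy":  {"contrast": 1.3, "brightness": 1.1, "peaking": 0.8},
    "fluffy":  {"contrast": 0.9, "brightness": 1.0, "peaking": 0.2},
    "viscous": {"contrast": 1.1, "brightness": 0.95, "peaking": 0.4},
}

DEFAULT_PARAMETERS = {"contrast": 1.0, "brightness": 1.0, "peaking": 0.0}

def select_parameters(texture_id):
    # Look up the image quality parameters for the determined texture,
    # falling back to neutral parameters for unknown texture IDs.
    return PARAMETER_TABLE.get(texture_id, DEFAULT_PARAMETERS)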
Like the first correction unit 11 described above, the second correction unit 142 may be a functional unit that performs known image quality correction processing. However, the second correction unit 142 may also perform image quality correction processing by a new, not publicly known method (image quality correction processing whose main purpose is texture reproduction).
As described above, moving image C is supplied to the second correction unit 142 from the interpolated image generation unit 121. The second correction unit 142 processes moving image C using the image quality parameters selected by the parameter setting unit 141. Hereinafter, the moving image after processing by the second correction unit 142 is also referred to as moving image D (the output moving image). The second correction unit 142 supplies moving image D to the display unit 70.
With this configuration, the second correction unit 142 can perform processing matched to the texture of the object represented in moving image C, using the image quality parameters selected by the parameter setting unit 141. This image quality correction can, for example, emphasize the glossiness or material quality of the object. The second correction unit 142 can thus generate a moving image D with enhanced texture, and a moving image D in which the texture is more strongly emphasized (more effectively expressed) can be provided to the user.
The second correction unit 142 may process moving image C using the image quality parameters corresponding to the texture ID of the object occupying the largest part of moving image C. That is, the second correction unit 142 may process moving image C based on a single texture ID (the main texture ID).
Alternatively, the second correction unit 142 may process moving image C using image quality parameters corresponding to the texture IDs of a plurality of objects. For example, for each of the partial regions of FIG. 3 described above, the second correction unit 142 may use the image quality parameters corresponding to the texture ID of the object occupying the largest part of that partial region. In this case, the second correction unit 142 processes each area of moving image C corresponding to each partial region.
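A minimal sketch of this per-region variant of the second correction unit 142, reusing select_parameters from the sketch above; simple gain adjustments stand in for the actual image quality correction, and the block geometry is an assumption:

import numpy as np

def correct_frame(frame, region_texture_ids, block_h, block_w):
    # region_texture_ids: {(row, col): texture_id} for each partial region.
    out = frame.astype(np.float32)
    for (row, col), tid in region_texture_ids.items():
        p = select_parameters(tid)
        area = out[row * block_h:(row + 1) * block_h,
                   col * block_w:(col + 1) * block_w]
        # Contrast as gain around the mean, then a brightness gain.
        mean = area.mean()
        area[:] = (area - mean) * p["contrast"] + mean
        area *= p["brightness"]
    return np.clip(out, 0.0, 255.0).astype(np.uint8)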
The parameter setting unit 141 may also supply the selected image quality parameters to the first correction unit 11 described above. In this case, the first correction unit 11 can apply the same image quality correction processing as the second correction unit 142 to the moving image A described above. That is, since each of the first correction unit 11 and the second correction unit 142 can perform processing matched to the texture of the object, the texture can be emphasized more effectively.
The second correction unit 142 may further perform processing to emphasize the texture pattern of an object represented in moving image C. In this case, a table indicating the correspondence between texture IDs and texture patterns may be stored in the second correction unit 142 in advance. The second correction unit 142 may apply a texture pattern to the object according to the texture ID, or may emphasize the texture pattern of the object by filtering.
The second correction unit 142 may also apply special processing, such as HDR (High Dynamic Range) expansion, to the object according to the texture ID. HDR expansion can locally emphasize the brightness of the object and thereby heighten a given texture of the object (e.g., glossiness).
The second correction unit 142 may also perform processing that distorts the object and the region around it according to the texture ID; this processing, too, can heighten the texture of the object. As described above, the second correction unit 142 need only be configured to perform any processing that heightens the texture of the object according to the texture ID.
(Effects)
The signal processing unit 10 of the display device 1 is provided with the texture detection unit 13 and the texture correction unit 14 (components of the texture processing unit described above). The texture detection unit 13 analyzes the motion vectors of a moving image to determine the texture of each object represented in that moving image, and supplies the texture ID indicating the determination result to the texture correction unit 14.
The texture correction unit 14 then processes the moving image based on the texture ID (according to the determination result of the texture detection unit 13). That is, the texture correction unit 14 can be made to process the moving image so as to express the texture of the object more effectively. The signal processing unit 10 therefore makes it possible to heighten the texture of an object represented in a moving image.
Conventionally, in order to express the texture of an object represented in a moving image satisfactorily, the resolution of the moving image had to be made very high (e.g., 8K-level resolution). Moreover, even when the resolution of the moving image is very high, if the compressed moving image data was generated by lossy compression, the moving image degrades when decoded by the decoding unit 61, and this degradation lowers the expressiveness of texture in the moving image. Thus, it has conventionally not been easy to express texture effectively in a moving image.
The signal processing unit 10, on the other hand, can express the texture of an object effectively even when (i) the resolution of the moving image is not necessarily sufficiently high, or (ii) the moving image is degraded when decoded by the decoding unit 61. In other words, a moving image that can satisfactorily express the texture of an object can be provided with a simpler configuration than before.
FIGS. 6(a) and 6(b) are diagrams for explaining the moving image processing in the texture correction unit 14. Specifically, FIGS. 6(a) and 6(b) each show an object before the moving image processing and the same object after the moving image processing.
FIG. 6(a) illustrates a case where the moving image processing in the texture correction unit 14 heightens the glossiness of the object (e.g., contrast adjustment, brightness adjustment, HDR expansion). In the case of FIG. 6(a), it can be seen that, in the processing in the texture detection unit 13, the motion vectors are used as one index of the glossiness of the object.
FIG. 6(b) illustrates a case where the moving image processing in the texture correction unit 14 (e.g., contour correction) heightens the fluffy ("fuwafuwa") sensation of the object (a material quality expressing lightness). In the case of FIG. 6(b), it can be seen that, in the processing in the texture detection unit 13, the motion vectors are used as one index of the fluffy sensation of the object.
The moving image processing method according to one aspect of the present disclosure can be expressed as follows. That is, the moving image processing method includes (i) a texture determination step of determining the texture of an object represented in a moving image by analyzing the motion vectors of the moving image, and (ii) a moving image processing step of processing the moving image according to the determination result of the texture determination step.
[Embodiment 2]
Embodiment 2 is described below with reference to FIG. 7. For convenience of explanation, members having the same functions as those described in Embodiment 1 are given the same reference signs, and their description is omitted.
FIG. 7 is a functional block diagram schematically showing the configuration of the signal processing unit 20 (moving image processing apparatus) of Embodiment 2 and its periphery. The display device of Embodiment 2 is referred to as display device 2. The portions omitted from FIG. 7 are the same as in the display device 1 of FIG. 1 described above, so their description is omitted. The same applies to each of the embodiments described below.
As shown in FIG. 7, the signal processing unit 20 has a configuration in which the first correction unit 11 and the interpolated image generation unit 121 are omitted from the signal processing unit 10 of Embodiment 1. In the signal processing unit 20, the above-described moving image A (the input moving image) is supplied from the decoding unit 61 to each of the motion vector calculation unit 122 and the texture correction unit 14.
Accordingly, in the signal processing unit 20, the motion vector calculation unit 122 supplies the set of motion vectors of moving image A to the extraction unit 131 of the texture detection unit 13 as the first motion vector set. As in Embodiment 1, the extraction unit 131 extracts the second motion vector set from the first motion vector set, and the matching unit 132 compares the second motion vector set against the various feature pattern models contained in the feature pattern model DB 91. Since the specific processing of the texture detection unit 13 is the same as in Embodiment 1, its description is omitted.
The texture correction unit 14 then processes moving image A based on the texture ID acquired from the matching unit 132 of the texture detection unit 13. That is, the texture correction unit 14 processes moving image A to generate moving image D (the output moving image) and supplies moving image D to the display unit 70.
As the signal processing unit 20 of Embodiment 2 shows, in the moving image processing apparatus according to one aspect of the present disclosure, some of the components not included in the texture processing unit described above may be omitted. For example, compared with the signal processing unit 10 of Embodiment 1, the signal processing unit 20 omits the first correction unit 11 (image quality correction before the texture is determined) and the interpolated image generation unit 121 (frame rate conversion). The signal processing unit 20 thus allows the configuration of the moving image processing apparatus to be simplified compared with Embodiment 1.
[Embodiment 3]
Embodiment 3 is described below with reference to FIG. 8. FIG. 8 is a functional block diagram schematically showing the configuration of the signal processing unit 30 (moving image processing apparatus) of Embodiment 3 and its periphery. The display device of Embodiment 3 is referred to as display device 3.
As described above, the decoding unit 61 acquires compressed moving image data from the receiving unit 60. Embodiment 3 considers the case where information indicating the motion vectors used for compression (motion vector information) is included in the compressed moving image data in advance. MPEG4 can be cited as an example of a compressed moving image data format containing such motion vector information.
As shown in FIG. 8, the signal processing unit 30 has a configuration in which the motion vector calculation unit 122 is omitted from the signal processing unit 20 of Embodiment 2. That is, in the signal processing unit 30, the configuration of the moving image processing apparatus is simplified still further compared with Embodiment 2 described above.
In the signal processing unit 30, the above-described moving image A (the input moving image) is supplied from the decoding unit 61 to the texture correction unit 14. In the texture detection unit 13 of Embodiment 3, the extraction unit 131 acquires the motion vector information contained in the above-described compressed moving image data from the decoding unit 61.
Specifically, the extraction unit 131 acquires the set of motion vectors indicated by the motion vector information as the first motion vector set, and extracts the second motion vector set from that motion vector information (the first motion vector set). The matching unit 132 then compares the second motion vector set against the various feature pattern models contained in the feature pattern model DB 91, as in each of the embodiments described above.
In this way, the texture detection unit 13 in Embodiment 3 performs the same processing as in each of the embodiments described above, using the motion vector information contained in the compressed moving image data. That is, the texture detection unit 13 in Embodiment 3 analyzes the motion vectors included in the compressed moving image data in advance.
As described above, when motion vector information is included in the compressed moving image data, the moving image processing apparatus according to one aspect of the present disclosure can omit the processing for calculating motion vectors. The configuration of the moving image processing apparatus therefore becomes even simpler.
[Modification]
A modification of the present disclosure is described below with reference to FIG. 9. FIG. 9 is a functional block diagram schematically showing the configuration of the signal processing unit 30v (moving image processing apparatus) of this modification and its periphery. The display device of this modification is referred to as display device 3v.
In FIG. 9, for convenience of explanation, the signal processing unit 30v is illustrated as one variation of the signal processing unit 30 of Embodiment 3. However, the configuration of this modification may also be applied to Embodiments 1 and 2 described above and to Embodiment 4 described later.
As shown in FIG. 9, auxiliary information may additionally be input to the texture detection unit 13 (more specifically, to each of the extraction unit 131 and the matching unit 132). Auxiliary information means information other than the motion vectors contained in the moving image. Examples of such auxiliary information include information indicating object boundaries, colors, texture patterns, and the like.
In the signal processing unit 30v, the texture detection unit 13 may determine the texture of an object represented in the moving image by further using the auxiliary information in addition to the motion vector information (the motion vectors). With this configuration, the texture can be determined with further consideration of the shape, color, and the like of the object indicated by the auxiliary information, so it is expected that the texture can be determined with higher accuracy.
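A minimal sketch of how the auxiliary information might be combined with the motion vectors into a single observation for matching; the feature layout and key names are illustrative assumptions:

import numpy as np

def build_observation(mv_region, auxiliary):
    # auxiliary: hypothetical dict with optional entries such as
    # "boundary" (edge map), "color" (mean color), "pattern" (texture
    # descriptor); each is flattened and appended to the vector field.
    parts = [np.asarray(mv_region, dtype=np.float32).reshape(-1)]
    for key in ("boundary", "color", "pattern"):
        if key in auxiliary:
            parts.append(np.asarray(auxiliary[key], dtype=np.float32).reshape(-1))
    return np.concatenate(parts)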
[Embodiment 4]
Embodiment 4 is described below with reference to FIG. 10. FIG. 10 is a functional block diagram schematically showing the configuration of the signal processing unit 40 (moving image processing apparatus) of Embodiment 4 and its periphery. The display device of Embodiment 4 is referred to as display device 4.
In FIG. 10, for convenience of explanation, the signal processing unit 40 is illustrated as one variation of the signal processing unit 30 of Embodiment 3. However, the configuration of Embodiment 4 may be applied to any of the embodiments and the modification described above.
Each of the embodiments described above illustrates the case where the texture of an object is determined using the motion vectors calculated for one frame (e.g., a reference frame) of the moving image. For example, when the frame subject to texture determination (the current frame) is the Nth frame (N being an integer of 2 or more), the texture of the object was determined using only the motion vectors of the Nth frame.
However, the texture of an object may also be determined by further using a total of (N-1) sets of motion vectors, from the motion vectors of the first frame (the first frame of the moving image) to the motion vectors of the (N-1)th frame (the frame immediately preceding the current frame). That is, the texture of an object may be determined by further using the history of motion vectors in past frames.
As shown in FIG. 10, Embodiment 4 illustrates a configuration in which a motion vector history 92, described below, is added to the texture processing unit described above. As an example, the signal processing unit 40 of Embodiment 4 has a configuration in which the motion vector history 92 is additionally stored in the storage unit 90 of the signal processing unit 30 of Embodiment 3.
The motion vector history 92 stores the set of motion vectors of each of the first to (N-1)th frames. That is, the motion vector history 92 stores the history of the sets of motion vectors (the first motion vector sets) of past frames.
In Embodiment 4, the decoding unit 61 stores, for each frame of the moving image, the motion vector information indicating the above-described set of motion vectors in the motion vector history 92. That is, the motion vector history 92 stores the motion vector information indicating the sets of motion vectors of the first to (N-1)th frames (hereinafter also referred to as the "past motion vector information").
In the texture detection unit 13 of Embodiment 4, the extraction unit 131 acquires the first motion vector set of the current (Nth) frame, as in Embodiment 3 described above. Specifically, the extraction unit 131 acquires, from the decoding unit 61, the motion vector information indicating the set of motion vectors of the current frame as the first motion vector set.
In addition, the extraction unit 131 further acquires the past motion vector information contained in the motion vector history 92. With this configuration, the texture detection unit 13 can determine the texture of an object by further using the history of motion vectors in past frames, in addition to the motion vectors of the current frame.
As described above, the main elements characterizing the texture of an object are the movement pattern of the object and the movement pattern of the reflected light. By focusing on the history of motion vectors in past frames, the temporal transition of these patterns can additionally be taken into account. It is therefore expected that the texture can be determined with higher accuracy.
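A minimal sketch of the motion vector history 92 as a bounded buffer of past first motion vector sets; the fixed capacity is an implementation assumption:

from collections import deque

class MotionVectorHistory:
    # Holds the first motion vector sets of frames 1 .. N-1 so that the
    # texture detection unit 13 can consider their temporal transition
    # together with the current (Nth) frame's motion vectors.
    def __init__(self, capacity=120):
        self._sets = deque(maxlen=capacity)

    def push(self, mvset1):
        self._sets.append(mvset1)

    def sequence(self, current_mvset1):
        # Past sets in temporal order, followed by the current set.
        return list(self._sets) + [current_mvset1]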
[Software implementation example]
The control blocks of the display devices 1 to 4 (in particular, the signal processing units 10 to 40) may be realized by logic circuits (hardware) formed on an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
 In the latter case, the display devices 1 to 4 include a CPU that executes the instructions of a program, which is software realizing each function; a ROM (Read Only Memory) or storage device (referred to as a "recording medium") on which the program and various data are recorded so as to be readable by a computer (or CPU); a RAM (Random Access Memory) into which the program is loaded; and the like. The object of one aspect of the present disclosure is achieved when the computer (or CPU) reads the program from the recording medium and executes it. As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. The program may also be supplied to the computer via any transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program. One aspect of the present disclosure can also be realized in the form of a data signal, embedded in a carrier wave, in which the program is embodied by electronic transmission.
 [Summary]
 A moving image processing apparatus (signal processing unit 10) according to aspect 1 of the present disclosure includes a texture determination unit (13) that determines the texture of an object expressed in a moving image by analyzing a motion vector of the moving image, and a moving image processing unit (texture correction unit 14) that processes the moving image according to the determination result of the texture determination unit.
 As described above, the inventor of the present application newly arrived at the novel technical idea of "determining, based on the motion vector of a moving image, the texture of an object expressed in that moving image, and performing moving image processing according to the determination result". The above configuration was conceived on the basis of this technical idea.
 With this configuration, the texture determination unit can determine the texture of the object expressed in the moving image by analyzing the motion vector of the moving image, and the moving image processing unit performs moving image processing according to the determination result. That is, the moving image processing unit can be made to process the moving image so as to express the texture of the object more effectively. It is therefore possible to enhance the texture of the object expressed in the moving image.
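 To make the two-stage structure of aspect 1 concrete, the following is a minimal sketch under assumed names: `classify` stands in for the texture determination unit 13, `enhance_texture` for the texture correction unit 14, and the "liquid" label with its gain-based correction is a purely illustrative assumption, not the disclosed implementation.

```python
import numpy as np

def enhance_texture(frame, label, gain=1.2):
    """Hypothetical stand-in for the texture correction unit 14: apply a
    label-dependent contrast gain around the frame mean (8-bit range)."""
    g = gain if label == "liquid" else 1.0
    mean = frame.mean()
    return np.clip(mean + g * (frame - mean), 0, 255)

def process_frame(frame, mv_set, classify):
    """Two-stage pipeline of aspect 1: (1) determine the texture from the
    frame's motion vectors, (2) process the frame according to the result."""
    label = classify(mv_set)  # e.g. "liquid", "rigid", ... (assumed labels)
    return enhance_texture(frame, label)
```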
 In the moving image processing apparatus according to aspect 2 of the present disclosure, in aspect 1, a feature pattern model, which is a model representing a motion vector expressing the texture or a model related to such a motion vector, is preferably set in advance, and the texture determination unit preferably determines the texture by matching the motion vector of the moving image against the feature pattern model.
 With the above configuration, the texture can be determined on the basis of a preset feature pattern model.
 In the moving image processing apparatus according to aspect 3 of the present disclosure, in aspect 2, the texture determination unit may, by matching the motion vector of the moving image against the feature pattern model, generate texture determination information indicating the degree of coincidence of the motion vector of the moving image with the feature pattern model, as the determination result.
 With the above configuration, the texture can be evaluated by the degree of coincidence (texture determination information) of the motion vector of the moving image with the feature pattern model.
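 The disclosure does not fix a formula for the degree of coincidence; as one plausible choice, the sketch below uses cosine similarity between the frame's motion vector field and the feature pattern model, both assumed to be H x W x 2 arrays of identical shape. A score near 1 would then indicate that the observed motion closely matches the modeled texture.

```python
import numpy as np

def coincidence(mv_set, pattern_model):
    """One possible degree-of-coincidence measure: cosine similarity
    between the flattened motion vector field and the pattern model."""
    a = np.asarray(mv_set, dtype=np.float32).ravel()
    b = np.asarray(pattern_model, dtype=np.float32).ravel()
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(a @ b) / denom if denom else 0.0
```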
 In the moving image processing apparatus according to aspect 4 of the present disclosure, in any one of aspects 1 to 3, the texture determination unit preferably analyzes a motion vector already contained in compressed moving image data, that is, data obtained by compressing the moving image.
 With the above configuration, no process for computing motion vectors is needed in the moving image processing apparatus, so the configuration of the moving image processing apparatus can be simplified.
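 As an illustration of aspect 4, the sketch below reuses motion vectors that the bitstream already carries instead of running any block-matching search. The `decoder` object and its `decode()` method are hypothetical stand-ins for a codec interface that exports its motion vectors (for example, as per-frame side data); `coincidence` refers to the sketch above.

```python
def texture_flags_from_bitstream(decoder, pattern_model, threshold=0.5):
    """Yield (frame, is_match) pairs using codec-supplied motion vectors;
    no motion estimation is performed in the apparatus itself, which is
    the simplification aspect 4 points at."""
    for frame, mv_set in decoder.decode():  # hypothetical decoder API
        score = coincidence(mv_set, pattern_model)
        yield frame, score >= threshold
```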
 A display device according to aspect 5 of the present disclosure preferably includes the moving image processing apparatus according to any one of aspects 1 to 4.
 The above configuration provides the same effects as the moving image processing apparatus according to one aspect of the present disclosure.
 A moving image processing method according to aspect 6 of the present disclosure includes a texture determination step of determining the texture of an object expressed in a moving image by analyzing a motion vector of the moving image, and a moving image processing step of processing the moving image according to the determination result in the texture determination step.
 The above configuration provides the same effects as the moving image processing apparatus according to one aspect of the present disclosure.
 The moving image processing apparatus according to each aspect of the present disclosure may be realized by a computer. In this case, a control program for the moving image processing apparatus that realizes the moving image processing apparatus on the computer by causing the computer to operate as each unit (software element) of the moving image processing apparatus, and a computer-readable recording medium on which the control program is recorded, also fall within the scope of one aspect of the present disclosure.
 [Additional Notes]
 One aspect of the present disclosure is not limited to the embodiments described above; various modifications are possible within the scope of the claims, and embodiments obtained by appropriately combining technical means disclosed in different embodiments are also included in the technical scope of one aspect of the present disclosure. Furthermore, new technical features can be formed by combining the technical means disclosed in the respective embodiments.
 (Cross-Reference to Related Applications)
 This application claims the benefit of priority to Japanese Patent Application No. 2016-256284 filed on December 28, 2016, the entire contents of which are incorporated herein by reference.
 1, 2, 3, 3v, 4   Display device
 10, 20, 30, 30v, 40   Signal processing unit (moving image processing apparatus)
 13   Texture detection unit (texture determination unit)
 14   Texture correction unit (moving image processing unit)

Claims (7)

  1.  A moving image processing apparatus comprising:
     a texture determination unit that determines a texture of an object expressed in a moving image by analyzing a motion vector of the moving image; and
     a moving image processing unit that processes the moving image according to a determination result of the texture determination unit.
  2.  The moving image processing apparatus according to claim 1, wherein
     a feature pattern model, which is a model representing a motion vector expressing the texture or a model related to the motion vector expressing the texture, is set in advance, and
     the texture determination unit determines the texture by matching the motion vector of the moving image against the feature pattern model.
  3.  The moving image processing apparatus according to claim 2, wherein
     the texture determination unit generates, as the determination result, texture determination information indicating a degree of coincidence of the motion vector of the moving image with the feature pattern model, by matching the motion vector of the moving image against the feature pattern model.
  4.  The moving image processing apparatus according to any one of claims 1 to 3, wherein the texture determination unit analyzes a motion vector contained in advance in compressed moving image data, which is data obtained by compressing the moving image.
  5.  A display device comprising the moving image processing apparatus according to any one of claims 1 to 4.
  6.  A moving image processing method comprising:
     a texture determination step of determining a texture of an object expressed in a moving image by analyzing a motion vector of the moving image; and
     a moving image processing step of processing the moving image according to a determination result in the texture determination step.
  7.  A control program for causing a computer to function as the moving image processing apparatus according to claim 1, the control program causing the computer to function as the texture determination unit and the moving image processing unit.
PCT/JP2017/036763 2016-12-28 2017-10-11 Moving-image processing device, display device, moving-image processing method, and control program WO2018123202A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016256284 2016-12-28
JP2016-256284 2016-12-28

Publications (1)

Publication Number Publication Date
WO2018123202A1 true WO2018123202A1 (en) 2018-07-05

Family

ID=62707956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/036763 WO2018123202A1 (en) 2016-12-28 2017-10-11 Moving-image processing device, display device, moving-image processing method, and control program

Country Status (1)

Country Link
WO (1) WO2018123202A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004261272A (en) * 2003-02-28 2004-09-24 Oki Electric Ind Co Ltd Cenesthetic device, motion signal generation method and program
JP2008257382A (en) * 2007-04-03 2008-10-23 Nippon Telegr & Teleph Corp <Ntt> Movement detection device, movement detection method and movement detection program
JP2012104018A (en) * 2010-11-12 2012-05-31 Hitachi Kokusai Electric Inc Image processing device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KAWABE, TAKAHIRO ET AL.: "Science and control of texture recognition - Exploration of motion information in images that brings a liquid texture", NTT GIJUTU JOURNAL, vol. 26, no. 9, 1 September 2014 (2014-09-01), pages 27 - 31 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020022362A1 (en) * 2018-07-24 2020-01-30 国立研究開発法人国立精神・神経医療研究センター Motion detection device, feature detection device, fluid detection device, motion detection system, motion detection method, program, and recording medium
JP2021010109A (en) * 2019-07-01 2021-01-28 日本放送協会 Frame rate conversion model learning device, frame rate conversion device, and programs thereof
JP7274367B2 (en) 2019-07-01 2023-05-16 日本放送協会 Frame rate conversion model learning device and frame rate conversion device, and their programs

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17887880; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 17887880; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: JP)