WO2014125842A1 - Image processing apparatus, image processing method, computer program for image processing, and information recording medium storing the computer program for image processing - Google Patents
- Publication number: WO2014125842A1 (PCT/JP2014/000827)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- boundary line
- depth value
- region
- image frame
- Prior art date
Classifications
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
- H04N13/264—Image signal generators with monoscopic-to-stereoscopic image conversion using the relative movement of objects in two video frames or fields
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06T7/579—Depth or shape recovery from multiple images from motion
- G06T2200/04—Indexing scheme for image data processing or generation involving 3D image data
- G06T2207/10016—Video; Image sequence
- G06T2207/10024—Color image
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20036—Morphological image processing
- G06T2207/20096—Interactive definition of curve of interest
- G06T2207/20101—Interactive definition of point of interest, landmark or seed
- G06T2207/30196—Human being; Person
Definitions
- The present invention relates to an image processing apparatus, in particular an apparatus for performing the image processing used when converting 2D video to 3D video, to an image processing method using the apparatus, to a computer program for image processing that can be read and executed by the apparatus, and to an information recording medium storing the program.
- 3D video exploits the binocular parallax that arises because the human eyes are roughly 65 mm apart, so that the images perceived by the left and right eyes differ. Since the left eye and the right eye each see a different 2D image, when these images are transmitted from the ganglion cells on the retina through the optic nerve to the brain center, the brain fuses them and recognizes them as a single 3D image.
- 3D imaging technology records two 2D images through two camera lenses, treats them as a left-eye image and a right-eye image, and produces a stereoscopic effect by presenting each image to the corresponding eye.
- However, a stereo camera equipped with two camera lenses is very expensive, and shooting with one raises many practical difficulties: how to arrange the stereo cameras, how to adjust the distance, angle, and focus between the cameras, and the geometric and color mismatches caused by the camera arrangement.
- Alternatively, 3D video can be generated from the original 2D video by shifting each object by the binocular parallax corresponding to predetermined depth information. That is, converting 2D video into 3D video requires a process that generates a depth map for the 2D video.
- A depth map indicates the three-dimensional distance to each object in the 2D video and can be expressed as a grayscale value between 0 and 255 for each pixel; the larger the depth value (meaning a brighter color), the closer the point is to the viewing position.
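- As an illustration of how such a depth map drives the conversion, the sketch below synthesizes a naive left/right pair by shifting pixels horizontally in proportion to their 0-255 depth value; it omits occlusion handling and hole filling, and max_shift is an assumed parameter.

```python
import numpy as np

def make_stereo_pair(image, depth, max_shift=8):
    """Shift each pixel horizontally in proportion to its 0-255 depth
    value (brighter = nearer = larger parallax) to fake a stereo pair."""
    h, w = depth.shape
    left, right = np.zeros_like(image), np.zeros_like(image)
    shift = (depth.astype(np.int32) * max_shift) // 255
    xs = np.arange(w)
    for y in range(h):
        lx = np.clip(xs + shift[y], 0, w - 1)  # nearer pixels move right in the left view
        rx = np.clip(xs - shift[y], 0, w - 1)  # and left in the right view
        left[y, lx] = image[y]
        right[y, rx] = image[y]
    return left, right
```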
- Accurate depth map generation is the work of a professional engineer who, while visually checking the overlap between objects and between objects and the background, traces object outlines and interiors at roughly pixel precision and divides the image into the regions to be stereoscopically rendered along the outlines of those regions.
- A watershed algorithm is known as one of the region division methods used for extracting a target region from an image.
- This algorithm treats grayscale information (such as brightness) in the image as altitude in a terrain and divides the image as if the terrain were flooded with water, forming boundaries where the water collected in different basins meets.
- With it, an object in each frame (image) constituting a 2D video can likewise be divided into a large number of regions (see, for example, Patent Documents 1 and 2).
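- For reference, a minimal watershed run with OpenCV looks as follows; the file name is an assumed placeholder, and the crude automatic seeding deliberately produces the kind of overdivision discussed below.

```python
import cv2
import numpy as np

img = cv2.imread("frame_0001.png")             # one frame of the 2D video (assumed path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Crude automatic seeds: connected components of an Otsu threshold.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, markers = cv2.connectedComponents(binary)

markers = cv2.watershed(img, markers)          # grows regions; boundaries become -1
img[markers == -1] = (0, 0, 255)               # paint the dividing lines red
```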
- However, the above prior art has the following problems.
- Because the quality of a hand-made depth map changes with the skill of the operator, the quality of the final 3D video tends to fluctuate.
- Region division using the watershed algorithm, on the other hand, is advantageous in cost compared with manual work by a specialist engineer, because the regions are divided automatically by software.
- However, when the watershed algorithm is used, the image is divided excessively (overdivision) for the purpose of generating 3D video.
- As a method of suppressing this overdivision, it is known, for example, to add a region integration step to the watershed algorithm and treat adjacent regions as the same region, so that no watershed boundary forms between them, as long as their color difference is within a certain threshold (see the sketch below).
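- Such a region integration step might look like the following sketch, assuming a label image from the watershed and an H x W x 3 color image are available; the threshold and the use of fixed per-region mean colors are simplifications.

```python
import numpy as np

def merge_similar_regions(labels, image, thresh=20.0):
    """Merge adjacent regions whose mean colors differ by less than
    `thresh`, so no watershed boundary is kept between near-identical areas."""
    ids = [int(i) for i in np.unique(labels) if i > 0]
    means = {i: image[labels == i].mean(axis=0) for i in ids}  # mean color per region
    parent = {i: i for i in ids}

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Collect pairs of labels that touch horizontally or vertically.
    pairs = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        touch = (a != b) & (a > 0) & (b > 0)
        pairs |= {(int(x), int(y)) for x, y in zip(a[touch], b[touch])}

    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb and np.linalg.norm(means[ra] - means[rb]) < thresh:
            parent[rb] = ra           # treat the two regions as one

    merged = labels.copy()
    for i in ids:
        merged[labels == i] = find(i)
    return merged
```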
- Patent Document 2 discloses a method using an anisotropic diffusion filter: in each image frame of the 2D video, smoothing of different strengths is applied in the tangential and normal directions of the edges between objects, between an object and the background image, or between regions within the same object, thereby removing noise without destroying edge shapes, erasing patterns unnecessary for region division, and suppressing overdivision (an illustrative diffusion filter follows).
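- Patent Document 2's exact filter is not reproduced here, but the classic Perona-Malik diffusion below illustrates the principle of smoothing that weakens across strong edges; all parameter values are assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
    """Edge-preserving smoothing: diffuse freely in flat areas, barely at
    all across edges, so textures that cause overdivision fade while the
    object outlines survive."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Conduction coefficient g = exp(-(|grad u|/kappa)^2): ~0 at strong edges.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```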
- Even with these overdivision suppression methods, however, it is difficult to generate 3D video frames that give a natural stereoscopic effect.
- An object of the present invention is to solve the above problems, that is, to realize image processing that enables the generation of 3D video with a natural stereoscopic feel, simply and with little variation in quality.
- The present inventor succeeded in developing a completely new overdivision suppression method that uses part of the conventionally known watershed algorithm.
- Overdivision of regions when using the watershed algorithm has long been regarded as a problem, and methods to suppress it exist.
- However, such conventional overdivision suppression methods are suited to processing still images, such as images of living organs captured by an endoscope, and are not suited to processing that converts a 2D moving image into a 3D moving image.
- This is because a still image of the former kind must render the fine unevenness of the living organ with a certain accuracy while suppressing overdivision.
- An image processing apparatus according to the present invention comprises at least: image frame reading means for reading one or more image frames out of a plurality of image frames constituting a moving image; region boundary line information receiving means for receiving information on region boundary lines in the read image frame; region dividing means for expanding divided regions starting from predetermined points on the region boundary lines and dividing the inside and outside of the region boundary lines by division lines connecting points of approximately equal brightness; opening processing means for leaving, among the division lines, the first division line existing between two region boundary lines and opening the second division lines other than the first division line; separation means for separating the image into units of the region surrounded by the first division line; and first depth value giving means for giving the region surrounded by the first division line a depth value representing the degree of perspective of the region.
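- A minimal sketch of how these means might chain together with OpenCV is shown below; the file name and the two polylines stand in for an operator's hand-drawn region boundary lines, and the single depth value is an operator-chosen example, so this illustrates the claimed flow rather than the patent's actual implementation.

```python
import cv2
import numpy as np

frame = cv2.imread("frame_0001.png")                                      # assumed path
outer = np.array([[40, 40], [300, 40], [300, 400], [40, 400]], np.int32)  # assumed outer boundary line
inner = np.array([[90, 90], [250, 90], [250, 350], [90, 350]], np.int32)  # assumed inner boundary line

# Band between the two region boundary lines; the object outline lies inside it.
band = np.zeros(frame.shape[:2], np.uint8)
cv2.fillPoly(band, [outer], 255)
cv2.fillPoly(band, [inner], 0)

# Starting points on the boundary lines become watershed seeds (labels 1 and 2).
seeds = np.zeros(frame.shape[:2], np.uint8)
cv2.polylines(seeds, [outer], True, 1)
cv2.polylines(seeds, [inner], True, 2)
markers = cv2.watershed(frame, seeds.astype(np.int32))   # division lines become -1

# Opening step: keep only the division line inside the band (the first
# division line tracing the object outline); discard the others.
first_division = (markers == -1) & (band > 0)

# Give the region enclosed by the first division line a depth value
# (0-255 grey level, larger = nearer); 200 is an operator-chosen example.
depth_map = np.zeros(frame.shape[:2], np.uint8)
depth_map[markers == 2] = 200
```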
- The image processing apparatus further includes starting point generation means for generating a plurality of starting points on the region boundary lines.
- In the image processing apparatus, the region boundary line information receiving means may further be means for receiving information on region boundary lines instructed for objects other than the background, and the apparatus may further comprise: background boundary line information receiving means for receiving information on background boundary lines instructed for the background; second depth value giving means for giving a region surrounded by a background boundary line a depth value representing the degree of perspective of the region; and object/background synthesis means for synthesizing the objects given depth values by the first depth value giving means with the background given depth values by the second depth value giving means.
- The image processing apparatus may further comprise: location point presence/absence determining means for determining whether the image frame contains a plurality of objects and/or in-background regions constituting the background together with a location point indicating the positions of those objects and/or in-background regions; position specifying means for specifying, when a location point exists, the position of a predetermined part of each of the plurality of objects and/or in-background regions; and depth value determining means for determining a depth value representing the degree of perspective of each object and/or in-background region based on the position specified by the position specifying means. In this case, the first depth value giving means and/or the second depth value giving means gives the determined depth values to the objects and/or in-background regions.
- An image processing apparatus according to another aspect of the present invention is capable of automatically generating, from a first image frame in which region boundary lines have already been generated among a plurality of image frames constituting a moving image, region boundary lines in a second image frame that exists later than the first image frame in time series. The apparatus comprises: first feature point receiving means for receiving the coordinate values of first feature points existing within the region boundary lines of the first image frame; second feature point specifying means for specifying, in the second image frame, the coordinate values of second feature points corresponding to the coordinate values of the first feature points; and region boundary line automatic generation means for automatically generating, in the second image frame, new region boundary lines corresponding to the region boundary lines of the first image frame based on the movement information from the first feature points to the second feature points.
- The apparatus may further comprise second feature point searching means for searching for the second feature points corresponding to the coordinate values of the first feature points based on the degree of approximation of at least one of pixel color and brightness.
- The image processing apparatus may further comprise: image frame number designation accepting means for accepting a designation of the number of image frames to be processed by the region boundary line automatic generation means; and frame number determining means for determining whether the number of image frames for which new region boundary lines have been automatically generated has reached the designated number. The first feature point receiving means, the second feature point specifying means, and the region boundary line automatic generation means are made to repeat their processing until the frame number determining means determines that the designated number of image frames has been reached.
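- A hedged sketch of such feature point tracking with pyramidal Lucas-Kanade optical flow follows; the file names and boundary coordinates are assumptions, and translating the boundary line by the average feature motion is a simplification of the claimed per-point movement information.

```python
import cv2
import numpy as np

prev = cv2.cvtColor(cv2.imread("frame_0001.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_0002.png"), cv2.COLOR_BGR2GRAY)
boundary = np.array([[50, 60], [220, 60], [220, 300], [50, 300]], np.float32)  # assumed region boundary line

# First feature points: corners inside the boundary line of the first frame.
mask = np.zeros(prev.shape, np.uint8)
cv2.fillPoly(mask, [boundary.astype(np.int32)], 255)
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01,
                             minDistance=7, mask=mask)

# Second feature points: where those points moved to in the second frame.
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

good = status.ravel() == 1
motion = (p1[good] - p0[good]).reshape(-1, 2).mean(axis=0)  # average movement information
new_boundary = boundary + motion   # region boundary line auto-generated for frame 2
```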
- An image processing apparatus according to yet another aspect of the present invention is capable of automatically generating, from a first image frame in which a depth value has been given to a region surrounded by a first division line among a plurality of image frames constituting a moving image, depth values in the corresponding region of a second image frame that exists later than the first image frame in time series. The apparatus further comprises: pixel depth value giving means for giving the depth value of the region surrounded by the first division line to one or more first pixels existing in that region in the first image frame; pixel movement position tracking means for tracking to which pixels in the second image frame the first pixels have moved; and depth value automatic generation means for automatically generating, in the region of the second image frame formed by the second pixels to which the first pixels have moved, the depth value given by the pixel depth value giving means.
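- The depth propagation can be sketched with dense optical flow as follows; the file names are assumptions, and the forward warp with no handling of occlusions or holes is a deliberate simplification.

```python
import cv2
import numpy as np

prev = cv2.cvtColor(cv2.imread("frame_0001.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_0002.png"), cv2.COLOR_BGR2GRAY)
depth1 = cv2.imread("depth_0001.png", cv2.IMREAD_GRAYSCALE)   # assumed first-frame depth map

# Track where every first-frame pixel moves, then write its depth there.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
h, w = depth1.shape
ys, xs = np.mgrid[0:h, 0:w]
nx = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
ny = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)

depth2 = np.zeros_like(depth1)
depth2[ny, nx] = depth1        # depth value carried to the second pixels' positions
```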
- An image processing method according to the present invention is executed using the image processing apparatus of the broadest concept described above and includes at least: an image frame reading step of reading one or more image frames out of a plurality of image frames constituting a moving image; a region boundary line information receiving step of receiving information on region boundary lines in the read image frame; a region dividing step of expanding divided regions starting from predetermined points on the region boundary lines and dividing the inside and outside of the region boundary lines by division lines connecting points of approximately equal brightness; an opening processing step of leaving, among the division lines, the first division line existing between two region boundary lines and opening the second division lines other than the first division line; a separation step of separating the image into units of the region surrounded by the first division line; and a first depth value giving step of giving the region surrounded by the first division line a depth value representing the degree of perspective of the region.
- The image processing method may further include a starting point generation step of generating a plurality of starting points on the region boundary lines.
- In the image processing method, the region boundary line information receiving step may further be a step of receiving information on region boundary lines instructed for objects other than the background, and the method may further include at least: a background boundary line information receiving step of receiving information on background boundary lines instructed for the background; a second depth value giving step of giving a region surrounded by a background boundary line a depth value representing the degree of perspective of the region; and an object/background synthesis step of synthesizing the objects given depth values by the first depth value giving step with the background given depth values by the second depth value giving step.
- The image processing method may further include: a location point presence/absence determination step of determining whether the image frame contains a plurality of objects and/or in-background regions constituting the background together with a location point indicating the positions of those objects and/or in-background regions; a position specifying step of specifying, when a location point exists, the position of a predetermined part of each of the plurality of objects and/or in-background regions; and a depth value determination step of determining a depth value representing the degree of perspective of each object and/or in-background region based on the position specified in the position specifying step. In this case, the first depth value giving step and/or the second depth value giving step gives the determined depth values to the objects and/or in-background regions.
- An image processing method according to another aspect of the present invention is capable of automatically generating, from a first image frame in which region boundary lines have already been generated among a plurality of image frames constituting a moving image, region boundary lines in a second image frame that exists later than the first image frame in time series. The method includes: a first feature point receiving step of receiving the coordinate values of first feature points existing within the region boundary lines of the first image frame; a second feature point specifying step of specifying, in the second image frame, the coordinate values of second feature points corresponding to the coordinate values of the first feature points; and a region boundary line automatic generation step of automatically generating, in the second image frame, new region boundary lines corresponding to the region boundary lines of the first image frame based on the movement information from the first feature points to the second feature points.
- The method may further include a second feature point search step of searching for the second feature points corresponding to the coordinate values of the first feature points based on the degree of approximation of at least one of pixel color and brightness.
- The image processing method may further include: an image frame number designation receiving step of accepting a designation of the number of image frames to be processed by the region boundary line automatic generation step; and a frame number determination step of determining whether the number of image frames for which new region boundary lines have been automatically generated has reached the designated number. The first feature point receiving step, the second feature point specifying step, and the region boundary line automatic generation step are repeated until the frame number determination step determines that the designated number of image frames has been reached.
- An image processing method according to yet another aspect of the present invention is capable of automatically generating, from a first image frame in which a depth value has been given to a region surrounded by a first division line among a plurality of image frames constituting a moving image, depth values in the corresponding region of a second image frame that exists later than the first image frame in time series. The method further includes: a pixel depth value giving step of giving the depth value of the region surrounded by the first division line to one or more first pixels existing in that region in the first image frame; a pixel movement position tracking step of tracking to which pixels in the second image frame the first pixels have moved; and a depth value automatic generation step of automatically generating, in the region of the second image frame formed by the second pixels to which the first pixels have moved, the depth value given in the pixel depth value giving step.
- A computer program for image processing according to the present invention is a computer program read and executed by a computer, and causes the computer to execute the functions of: image frame reading means for reading one or more image frames out of a plurality of image frames constituting a moving image; region boundary line information receiving means for receiving information on region boundary lines in the read image frame; region dividing means for expanding divided regions starting from predetermined points on the region boundary lines and dividing the inside and outside of the region boundary lines by division lines connecting points of approximately equal brightness; opening processing means for leaving, among the division lines, the first division line existing between two region boundary lines and opening the second division lines other than the first division line; separation means for separating the image into units of the region surrounded by the first division line; and first depth value giving means for giving the region surrounded by the first division line a depth value representing the degree of perspective of the region.
- The computer program may further cause the computer to execute the function of starting point generation means for generating a plurality of starting points on the region boundary lines.
- The computer program may use the region boundary line information receiving means as means for receiving information on region boundary lines instructed for objects other than the background, and may further cause the computer to execute the functions of: background boundary line information receiving means for receiving information on background boundary lines instructed for the background; second depth value giving means for giving a region surrounded by a background boundary line a depth value representing the degree of perspective of the region; and object/background synthesis means for synthesizing the objects given depth values by the first depth value giving means with the background given depth values by the second depth value giving means.
- The computer program may further cause the computer to execute the functions of: location point presence/absence determining means for determining whether the image frame contains a plurality of objects and/or in-background regions constituting the background together with a location point indicating the positions of those objects and/or in-background regions; position specifying means for specifying, when a location point exists, the position of a predetermined part of each of the plurality of objects and/or in-background regions; and depth value determining means for determining a depth value representing the degree of perspective of each object and/or in-background region based on the position specified by the position specifying means. In this case, the function of the first depth value giving means and/or the second depth value giving means gives the determined depth values to the objects and/or in-background regions.
- A computer program for image processing according to another aspect is capable of automatically generating, from a first image frame in which region boundary lines have already been generated among a plurality of image frames constituting a moving image, region boundary lines in a second image frame that exists later than the first image frame in time series. It causes the computer to execute the functions of: first feature point receiving means for receiving the coordinate values of first feature points existing within the region boundary lines of the first image frame; second feature point specifying means for specifying, in the second image frame, the coordinate values of second feature points corresponding to the coordinate values of the first feature points; and region boundary line automatic generation means for automatically generating, in the second image frame, new region boundary lines corresponding to the region boundary lines of the first image frame based on the movement information from the first feature points to the second feature points.
- The computer program may further cause the computer to execute, prior to the processing of the second feature point specifying means, the function of second feature point searching means for searching for the second feature points corresponding to the coordinate values of the first feature points based on the degree of approximation of at least one of pixel color and brightness.
- The computer program may further cause the computer to execute the functions of: image frame number designation accepting means for accepting a designation of the number of image frames to be processed by the region boundary line automatic generation means; and frame number determining means for determining whether the number of image frames for which new region boundary lines have been automatically generated has reached the designated number. The first feature point receiving means, the second feature point specifying means, and the region boundary line automatic generation means are made to repeat their processing until the frame number determining means determines that the designated number of image frames has been reached.
- A computer program for image processing according to yet another aspect enables the computer to automatically generate, from a first image frame in which a depth value has been given to a region surrounded by a first division line among a plurality of image frames constituting a moving image, depth values in the corresponding region of a second image frame that exists later than the first image frame in time series. It causes the computer to further execute the functions of: pixel depth value giving means for giving the depth value of the region surrounded by the first division line to one or more first pixels existing in that region in the first image frame; pixel movement position tracking means for tracking to which pixels in the second image frame the first pixels have moved; and depth value automatic generation means for automatically generating, in the region of the second image frame formed by the second pixels to which the first pixels have moved, the depth value given by the pixel depth value giving means.
- An information recording medium according to the present invention stores a computer program for image processing that is read and executed by a computer and that causes the computer to execute the functions of: image frame reading means for reading one or more image frames out of a plurality of image frames constituting a moving image; region boundary line information receiving means for receiving information on region boundary lines in the read image frame; region dividing means for expanding divided regions starting from predetermined points on the region boundary lines and dividing the inside and outside of the region boundary lines by division lines connecting points of approximately equal brightness; opening processing means for leaving, among the division lines, the first division line existing between two region boundary lines and opening the second division lines other than the first division line; separation means for separating the image into units of the region surrounded by the first division line; and first depth value giving means for giving the region surrounded by the first division line a depth value representing the degree of perspective of the region.
- The stored computer program may further cause the computer to execute the function of starting point generation means for generating a plurality of starting points on the region boundary lines.
- In the stored computer program, the region boundary line information receiving means may be means for receiving information on region boundary lines instructed for objects other than the background, and the program may further cause the computer to execute the functions of: background boundary line information receiving means for receiving information on background boundary lines instructed for the background in the image frame; second depth value giving means for giving a region surrounded by a background boundary line a depth value representing the degree of perspective of the region; and object/background synthesis means for synthesizing the objects given depth values by the first depth value giving means with the background given depth values by the second depth value giving means.
- The stored computer program may further cause the computer to execute the functions of: location point presence/absence determining means for determining whether the image frame contains a plurality of objects and/or in-background regions constituting the background together with a location point indicating the positions of those objects and/or in-background regions; position specifying means for specifying, when a location point exists, the position of a predetermined part of each of the plurality of objects and/or in-background regions; and depth value determining means for determining a depth value representing the degree of perspective of each object and/or in-background region based on the position specified by the position specifying means. The function of the first depth value giving means and/or the second depth value giving means then gives the determined depth values to the objects and/or in-background regions.
- An information recording medium according to another aspect stores a computer program for image processing capable of automatically generating, from a first image frame in which region boundary lines have already been generated among a plurality of image frames constituting a moving image, region boundary lines in a second image frame that exists later than the first image frame in time series. The program causes the computer to execute the functions of: first feature point receiving means for receiving the coordinate values of first feature points existing within the region boundary lines of the first image frame; second feature point specifying means for specifying, in the second image frame, the coordinate values of second feature points corresponding to the coordinate values of the first feature points; and region boundary line automatic generation means for automatically generating, in the second image frame, new region boundary lines corresponding to the region boundary lines of the first image frame based on the movement information from the first feature points to the second feature points.
- The stored computer program may further cause the computer to execute, prior to the processing of the second feature point specifying means, the function of second feature point searching means for searching for the second feature points corresponding to the coordinate values of the first feature points based on the degree of approximation of at least one of pixel color and brightness.
- The stored computer program may further cause the computer to execute the functions of: image frame number designation accepting means for accepting a designation of the number of image frames to be processed by the region boundary line automatic generation means; and frame number determining means for determining whether the number of image frames for which new region boundary lines have been automatically generated has reached the designated number, the first feature point receiving means, the second feature point specifying means, and the region boundary line automatic generation means being made to repeat their processing until the frame number determining means determines that the designated number of image frames has been reached.
- An information recording medium according to yet another aspect stores a computer program for image processing that enables the computer to automatically generate, from a first image frame in which a depth value has been given to a region surrounded by a first division line among a plurality of image frames constituting a moving image, depth values in the corresponding region of a second image frame that exists later than the first image frame in time series. The program causes the computer to further execute the functions of: pixel depth value giving means for giving the depth value of the region surrounded by the first division line to one or more first pixels existing in that region in the first image frame; pixel movement position tracking means for tracking to which pixels in the second image frame the first pixels have moved; and depth value automatic generation means for automatically generating, in the region of the second image frame formed by the second pixels to which the first pixels have moved, the depth value given by the pixel depth value giving means.
- FIG. 1 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
- FIG. 2 shows an example of one image frame constituting a moving image.
- FIG. 3 shows a state in which the operator of the image processing apparatus of FIG. 1 has drawn region boundary lines on the image frame of FIG. 2.
- FIG. 4 shows a state in which region division has been performed on the image frame of FIG. 3.
- FIG. 5 shows a state in which the separation process and the first depth value giving process have been performed on the image frame of FIG. 4.
- FIG. 6 is a flowchart of the image processing for 3D video performed on the image frames constituting a moving image by the image processing apparatus of FIG. 1.
- FIG. 7 shows the flow of an application example executed by the image processing apparatus of FIG. 1.
- FIG. 8 shows an example of an image suitable for explaining the automatic assignment of depth values executed by the image processing apparatus of FIG. 1.
- FIG. 9 shows a flowchart of a preferred process for automatically assigning depth values by the image processing apparatus of FIG. 1.
- FIG. 10 is a schematic diagram of an image processing apparatus according to the second embodiment of the present invention.
- FIG. 11 is a diagram illustrating an example of processing using the image processing apparatus of FIG. 10.
- FIG. 12 shows a diagram following FIG. 11.
- FIG. 13 is a diagram for explaining in detail an image processing method using the image processing apparatus of FIG. 10.
- FIG. 14 is a flowchart for explaining the processing flow of the image processing method using the image processing apparatus of FIG. 10.
- FIG. 15 is a schematic diagram of an image processing apparatus according to the third embodiment of the present invention.
- FIG. 16 is a diagram illustrating an example of processing using the image processing apparatus of FIG. 15.
- FIG. 17 is a flowchart for explaining the processing flow of the image processing method using the image processing apparatus of FIG. 15.
- Reference numerals:
- 1 Image processing apparatus
- 15 Image frame reading unit (image frame reading means)
- 16 Region boundary line information receiving unit (region boundary line information receiving means)
- 17 Region dividing unit (region dividing means)
- 18 Opening processing unit (opening processing means)
- 19 Separation unit (separation means)
- 20 First depth value giving unit (first depth value giving means)
- 21 Starting point generation unit (starting point generation means)
- 22 Background boundary line information receiving unit (background boundary line information receiving means)
- 23 Second depth value giving unit (second depth value giving means)
- 24 Object/background separation unit (object/background separation means)
- 25 Object/background synthesis unit (object/background synthesis means)
- 26 Location point presence/absence determination unit (location point presence/absence determining means)
- 27 Position specifying unit of a predetermined object part (position specifying means of a predetermined object part)
- 28 Depth value determination unit (depth value determining means)
- Image (image frame or frame)
- Background
- 42 Singer object
- 43-45 Person objects
- 51-59 Region boundary lines
- Starting point
- 70 First division line (one of the division lines)
- Second division line (one of the division lines)
- 80 Image (image frame or frame)
- Background
- 82-86 Objects
- 82a-86a Predetermined parts
- Location point
- A Head (region)
- B Arm (region)
- C Chest (region)
- D-F Regions
- 100, 200 Image processing apparatus (computer)
- Selection data accepting unit (including image frame number designation accepting means)
- First feature point receiving unit (first feature point receiving means)
- Second feature point searching unit (second feature point searching means)
- Second feature point specifying unit (second feature point specifying means)
- FIG. 1 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
- The image processing apparatus 1 includes an input unit 10, a storage unit 11, an external memory loading unit 12, a communication unit 13, an interface 14, an image frame reading unit 15, a region boundary line information receiving unit 16, a region dividing unit 17, an opening processing unit 18, a separation unit 19, a first depth value giving unit 20, a starting point generation unit 21, a background boundary line information receiving unit 22, a second depth value giving unit 23, an object/background separation unit 24, an object/background synthesis unit 25, a location point presence/absence determination unit 26, a position specifying unit 27 of a predetermined object part, and a depth value determination unit 28. These components are shown separately according to their functions and do not necessarily correspond to physically separate hardware.
- the input unit 10 is a part where an operator who operates the image processing apparatus 1 performs various inputs, and includes a keyboard, a pointing device, a touch operation panel using a touch sensor, and the like.
- The storage unit 11 is an area for storing various kinds of information, such as the image processing computer programs according to all of the embodiments including this one, and is composed of various storage means such as a read-only ROM, a readable/writable RAM, an EEPROM, or a hard disk. For example, the moving image is stored in the storage unit 11.
- The external memory loading unit 12 is the part into which a portable information recording medium 30, such as a CD-R, a USB (flash) memory, an MD, or a flexible disk, is inserted or connected to the image processing apparatus 1, serving as the entry point for the information stored on the information recording medium 30.
- the information recording medium 30 can store the computer program for image processing according to the embodiment of the present invention.
- the information recording medium 30 may store a moving image.
- the communication unit 13 is a part that performs wireless or wired communication with the outside of the image processing apparatus 1 and receives information from the outside or transmits information to the outside. If the communication unit 13 is a part that performs wireless communication, an antenna or the like is also included.
- The interface 14 is a part serving as a connection port to the outside of the image processing apparatus 1 and includes, in addition to a physical connection port for a communication line typified by an optical fiber, a light receiving unit such as an infrared receiver. For example, when the moving image is stored on an external server, the communication unit 13 may download the moving image data through the interface 14 via the Internet and store it in the storage unit 11.
- The communication unit 13, the image frame reading unit 15, the region boundary line information receiving unit 16, the region dividing unit 17, the opening processing unit 18, the separation unit 19, the first depth value giving unit 20, the starting point generation unit 21, the background boundary line information receiving unit 22, the second depth value giving unit 23, the object/background separation unit 24, the object/background synthesis unit 25, the location point presence/absence determination unit 26, the position specifying unit 27 of the predetermined object part, and the depth value determination unit 28 are constituted, in part or in whole, by a processing device such as a CPU or an MPU.
- The image frame reading unit 15 is a part that functions as image frame reading means for reading one or more image frames out of the plurality of image frames constituting a moving image.
- An image frame, also referred to simply as an "image" or a "frame", means each still image constituting the moving image.
- The moving image is, as an example, a 2D video, but the term is interpreted broadly to include 3D video.
- A moving image is composed of frames displayed in succession, for example at a rate of one frame every 30 msec.
- The image frame reading unit 15 may read at one time, for example, all the frames displayed over a 10-second interval, or may read only a single frame.
- The frames read by the image frame reading unit 15 are stored within the unit itself when it serves as both a processing device such as a CPU and a storage device such as a RAM; when the image frame reading unit 15 has only the function of a processing device such as a CPU, the read frames may be stored in the storage unit 11.
- When the image frame reading unit 15 executes the reading process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30.
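- In practice, the reading step can be as simple as the following sketch, where "sample.mp4" is an assumed path and the 300-frame batch corresponds to roughly 10 seconds of an approximately 30 fps video.

```python
import cv2

# Read a batch of frames (or just one) out of the 2D video.
cap = cv2.VideoCapture("sample.mp4")
frames = []
while len(frames) < 300:          # ~10 s of a ~30 fps video; use 1 for a single frame
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()
```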
- The region boundary line information receiving unit 16 is a part that functions as region boundary line information receiving means for receiving information on region boundary lines in the image frame read by the image frame reading unit 15.
- A region boundary line is a line that the operator, who uses the image processing apparatus 1 to create images for 3D video, draws outside and inside the outline of an object (for example, a person or a thing) in the frame, of a predetermined region within the object, and/or of a predetermined region of the background (for example, a cloud if the background is the sky, or a shaft of light entering the water if the background is underwater). Such a line is a set of dots (points).
- The information on a region boundary line is preferably the coordinates of the dots constituting the region boundary line, and the region boundary line information receiving unit 16 accepts the coordinates of those dots.
- The reception by the region boundary line information receiving unit 16 may be triggered by a special operation by the operator or may be executed automatically at regular intervals while the line is being drawn.
- The region boundary line information receiving unit 16 stores the received information internally when it serves as both a processing device such as a CPU and a storage device such as a RAM; when it has only the function of a processing device such as a CPU, the received information may be stored in the storage unit 11.
- When the region boundary line information receiving unit 16 executes the receiving process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30.
- The region dividing unit 17 is a part that functions as region dividing means that expands divided regions starting from predetermined points on the region boundary lines and divides the inside and outside of the region boundary lines by division lines connecting points of approximately equal brightness.
- The starting points may be all the points constituting a region boundary line, or points set at predetermined intervals along it.
- The region dividing unit 17 preferably creates the division lines while expanding the regions from the starting points based on a watershed algorithm, taking the regions on both sides of each region boundary line as the division targets.
- The region dividing unit 17 is constituted by a processing device such as a CPU.
- When the region dividing unit 17 executes the region division based on the watershed algorithm, it preferably does so while reading the image processing computer program implementing the watershed algorithm stored in the storage unit 11 or on the information recording medium 30.
- The opening processing unit 18 is a part that functions as opening processing means that leaves, among the division lines, the first division line existing between two region boundary lines and opens the second division lines other than the first division line. Before the opening processing, every division line is a closed line.
- In other words, the opening processing unit 18 leaves the first division line between the two region boundary lines and opens the other division lines (the second division lines). As a result, only the first division line, which traces the outline of the object lying between the two region boundary lines drawn by the operator, remains closed, and the remaining division lines are opened.
- The opening processing unit 18 is constituted by a processing device such as a CPU. When the opening processing unit 18 performs the opening processing, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30.
- The separation unit 19 is a part that functions as separation means for separating the image into units of the region surrounded by the first division line. That is, the separation unit 19 executes the process of dividing an object in each frame into several regions in order to create 3D video, so that one object can have a plurality of regions with different depth information.
- The separation unit 19 is constituted by a processing device such as a CPU. When the separation unit 19 executes the separation process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30.
- The separation unit 19 preferably also has the function of separating the regions divided by the background boundary lines (described in detail later) traced along the boundaries within the background.
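- A minimal sketch of this separation step, assuming a boolean mask of the kept first division lines is already available:

```python
import cv2
import numpy as np

def separate_regions(first_division_mask):
    """Label each area enclosed by the kept (first) division lines as its
    own region unit, so every unit can receive its own depth value."""
    interior = (~first_division_mask).astype(np.uint8)  # pixels off the division lines
    n_labels, labels = cv2.connectedComponents(interior)
    return n_labels, labels          # labels: one integer id per separated region
```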
- The first depth value giving unit 20 is a part that functions as first depth value giving means for giving the region surrounded by the first division line a depth value representing the degree of perspective of the region. When depth values are given on a brightness basis, the larger the value, the closer the region appears to the viewer.
- The depth value may be given manually by the operator judging from the characteristics of the object, or automatically when a perspective reference exists on the background side, as described later.
- The first depth value giving unit 20 also has the function of giving an object its depth value when the depth value representing the degree of perspective of the object has been determined based on the position of a predetermined part of the object. The first depth value giving unit 20 is constituted by a processing device such as a CPU. When the first depth value giving unit 20 executes the giving process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30.
- The starting point generation unit 21 is a part that functions as starting point generation means for generating a plurality of starting points on the region boundary lines.
- A starting point is a point from which the region division starts.
- A starting point may be an arbitrary point on a region boundary line or a point separated from such an arbitrary point by a predetermined distance, the arbitrary point preferably being selected by the starting point generation unit 21 itself.
- The generation method is not limited to the above; for example, corner points on the region boundary line may be used as starting points (one plausible interval-based scheme is sketched below).
- When the starting point generation unit 21 executes the starting point generation process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30. The starting point generation unit 21 is not an essential part and may be omitted; in that case, the region division is performed starting, for example, from all the points constituting the region boundary lines.
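- One plausible way to generate starting points at regular intervals along a region boundary line is sketched below; the spacing value is an assumption.

```python
import numpy as np

def starting_points(boundary, spacing=10.0):
    """Place starting points at roughly equal arc-length intervals along a
    region boundary polyline (an N x 2 array of dot coordinates)."""
    seg = np.diff(boundary, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])   # accumulated length at each dot
    targets = np.arange(0.0, cum[-1], spacing)
    # Linear interpolation of x and y against accumulated length.
    xs = np.interp(targets, cum, boundary[:, 0])
    ys = np.interp(targets, cum, boundary[:, 1])
    return np.stack([xs, ys], axis=1)
```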
- The background boundary line information receiving unit 22 is a part that functions as background boundary line information receiving means for receiving information on background boundary lines instructed by the operator for the background in the image frame.
- A background boundary line is a line the operator draws by tracing the outlines of the several regions constituting the background in order to divide the background into those regions. Such a line is a set of dots (points), so the information on a background boundary line is, for example, the coordinates of the dots constituting it.
- The background boundary line information receiving unit 22 receives the coordinates of the dots constituting the line.
- The reception by the background boundary line information receiving unit 22 may be triggered by a special operation by the operator or may be executed automatically at regular intervals while the line is being drawn.
- The background boundary line information receiving unit 22 stores the received information internally when it serves as both a processing device such as a CPU and a storage device such as a RAM; when it has only the function of a processing device such as a CPU, the received information may be stored in the storage unit 11.
- When the background boundary line information receiving unit 22 executes the receiving process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30.
- The background boundary line information receiving unit 22 is not an essential part and may be omitted; for example, it is unnecessary when the background need not be divided into a plurality of regions.
- The second depth value giving unit 23 is a part that functions as second depth value giving means for giving the region surrounded by a background boundary line a depth value representing the degree of perspective of the region.
- The depth value may be given manually by the operator judging from the characteristics of the region in the background, or automatically when a perspective reference exists on the background side, as described later. That is, when the depth value representing the degree of perspective of an in-background region has been determined based on the position of a predetermined part of that region, the second depth value giving unit 23 gives the in-background region its depth value.
- The second depth value giving unit 23 is constituted by a processing device such as a CPU. When the second depth value giving unit 23 executes the giving process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30.
- The second depth value giving unit 23 is not an essential part and may be omitted.
- The object/background separation unit 24 is a part that functions as object/background separation means for separating objects other than the background in the image frame from the background.
- The object/background separation unit 24 is effective when the technique for separating each object and the plurality of regions within it (technique A) differs from the technique for separating the background and the plurality of regions within it (technique B).
- The object/background separation unit 24 preferably separates the processed image from the background after the process of separating the objects and the regions inside them has been performed using technique A.
- Alternatively, the operator may indicate the boundary between an object and the background, and the object/background separation unit 24 may then separate the background from the image frame based on that indication.
- When the object/background separation unit 24 executes the separation process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30.
- The object/background synthesis unit 25 is a part that functions as object/background synthesis means for synthesizing the objects given depth values by the first depth value giving unit 20 with the background (including the regions within it) given depth values by the second depth value giving unit 23.
- The object/background synthesis unit 25 is likewise effective when the technique for separating each object and the regions within it (technique A) differs from the technique for separating the background and the regions within it (technique B).
- For example, the object/background synthesis unit 25 synthesizes the objects, separated together with their internal regions using technique A, with the background, separated together with its internal regions using technique B in a parallel process.
- When the object/background synthesis unit 25 executes the synthesis process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or on the information recording medium 30.
- The object/background synthesis unit 25 is not necessarily required when no 3D image processing is needed for the background and only objects other than the background require it.
- The function of the object/background separation unit 24 may be integrated into the object/background synthesis unit 25. Alternatively, the object/background separation unit 24 may be omitted: two identical image frames are prepared by copying or the like, the separation process for objects and their internal regions (the separation process by technique A) is performed on one image frame, the separation process for the background and its internal regions (the separation process by technique B) is performed on the other, and the object/background synthesis unit 25 then synthesizes the resulting objects and background.
- The location point presence/absence determining unit 26 functions as location point presence/absence determining means that determines whether the image frame contains a location point indicating the positions of the plurality of objects and/or of the in-background regions constituting the background.
- An in-background region is a region that constitutes the background and is distinct from the objects.
- A location point refers to a point, line, or surface that serves as a reference for the degree of perspective of two or more objects or in-background regions. Various location points are possible; a boundary line between a wall and a floor in the background is a suitable example.
- The presence or absence of a location point may be determined automatically, for example based on lightness differences in the image; alternatively, when the operator designates a location point, the location point presence/absence determining unit 26 may determine that the location point exists. When the location point presence/absence determining unit 26 executes the determination process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or the information recording medium 30.
- The object predetermined part position specifying unit 27 functions as position specifying means that, when a location point exists, specifies the position of a predetermined part of each of the plurality of objects and/or in-background regions.
- The predetermined part is not particularly limited; it refers to, for example, an outermost portion such as the top, bottom, or side of an object or in-background region.
- The predetermined part may vary depending on the type of image, the objects in the image, the type of in-background region, and the like. For this reason, the operator of the image processing apparatus 1 preferably determines the predetermined part manually.
- Suppose, for example, that the operator designates the toes of each object as the predetermined part.
- In that case, the object predetermined part position specifying unit 27 calculates the distance from the location point to the toes of each object, thereby specifying the position of the toes.
- When the object predetermined part position specifying unit 27 executes the position specifying process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or the information recording medium 30.
- The depth value determination unit 28 functions as depth value determination means that determines a depth value indicating the degree of perspective of each object and/or in-background region based on the position specified by the object predetermined part position specifying unit 27. In the above example, once the position of each person's toes is determined, the degree of perspective from the location point can be calculated. When depth values are expressed in the range of 0 to 255, the depth value determination unit 28 assigns a numerical value within that range as the depth value based on the front-to-back relationship of the persons and their positions relative to the location point. The relationship between the position of the predetermined part and the depth value is preferably set in advance, either by a calculation formula or by allocation using a table. When the depth value determination unit 28 executes the depth value determination process, it preferably does so while reading the image processing computer program stored in the storage unit 11 or the information recording medium 30.
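- Since the formula or table mapping the predetermined part's position to a depth value is left to the implementer, the following is a minimal sketch assuming a simple linear mapping from the distance below the location point to the 0-255 range; the function name and the clamping behavior are illustrative assumptions, not taken from the source.

```python
# Minimal sketch: map the distance from a location point (e.g. the
# floor/wall boundary) to an 8-bit depth value. Assumption: a larger
# distance below the boundary means "closer to the viewer", hence a
# larger depth value; the linear formula is illustrative.

def depth_from_location_point(distance: float, max_distance: float) -> int:
    """Return a depth value in [0, 255] for a predetermined part
    (e.g. a person's toes) lying `distance` pixels below the location
    point; `max_distance` is the largest such distance in the frame."""
    if max_distance <= 0:
        return 0
    ratio = min(max(distance / max_distance, 0.0), 1.0)
    return round(255 * ratio)

# Example: toes of five persons at these distances (pixels) below the
# floor/wall boundary; the farthest person gets the smallest value.
distances = [40.0, 75.0, 120.0, 180.0, 220.0]
depths = [depth_from_location_point(d, max(distances)) for d in distances]
```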
- FIG. 2 shows an example of one image frame constituting a moving image.
- FIG. 3 shows a state in which the operator of the image processing apparatus of FIG. 1 has drawn region boundary lines on the image frame of FIG. 2.
- FIG. 4 shows a state in which region division has been performed on the image frame of FIG. 3.
- FIG. 5 shows a state in which the separation process and the first depth value giving process have been performed on the image frame of FIG. 4.
- FIG. 6 is a flowchart of the image processing for 3D video performed on the image frames constituting a moving image by the image processing apparatus of FIG. 1.
- Step 101 Image frame reading step
- the image processing apparatus 1 reads one or more image frames constituting a moving image from the storage unit 11 by the function of the image frame reading unit 15.
- the image frame is, for example, an image 40 as shown in FIG.
- the image 40 mainly includes a background 41, a singer 42, and other persons 43, 44, and 45.
- Against the background 41, a singer 42 and persons 43, 44, and 45 are present, in order from the position closest to the viewer side of the image 40 (the perspective relationship among the persons 43, 44, and 45 may be unknown).
- Step 102 Region boundary line information receiving step
- In this example, the operator of the image processing apparatus 1 intends to give the singer 42 and the persons 43, 44, and 45 a sense of perspective relative to the background 41, and also to give a sense of perspective to a plurality of parts within the object that is the singer 42.
- To this end, the operator divides the singer 42 into three regions (head A, arm B, and chest C) and treats each of the persons 43, 44, and 45 as a single region (D, E, and F, respectively), drawing region boundary lines 51, 52, 53, 54, 55, 56, 57, 58, and 59 (see FIG. 3).
- When the operator draws these lines on the image 40, the region boundary line information receiving unit 16 of the image processing apparatus 1 receives the coordinate data of the dots constituting the region boundary lines 51 to 59.
- The region boundary line 51 is a line traced outside the contour 50 of the singer 42 and the person 44.
- At the same time, the region boundary line 51 also traces part of the outside of the contour of the chest C.
- The region boundary line 52 is a line traced inside the contour of the head A, from the head of the singer 42 to the shoulder.
- At the same time, the region boundary line 52 also traces part of the outside of the contour of the arm B and part of the outside of the contour of the person 44.
- The region boundary line 53 is a line traced inside the contour of the arm B.
- The region boundary line 54 is a line traced inside the contour of the chest C.
- The region boundary line 55 is a line traced inside the contour of the person 44.
- The region boundary line 56 is a line traced outside the contour 50 of the person 43.
- The region boundary line 57 is a line traced inside the contour 50 of the person 43.
- The region boundary line 58 is a line traced outside the contour 50 of the person 45.
- The region boundary line 59 is a line traced inside the contour 50 of the person 45.
- Step 103 Start point generation step
- The starting point generation unit 21 of the image processing apparatus 1 generates, at predetermined intervals on the region boundary lines 51 to 59, starting points 60 from which the region division processing starts.
- Here, "predetermined intervals" is interpreted broadly to include not only equal spacings but also varying spacings.
- In FIG. 3, for ease of understanding, starting points 60 are shown only on part of the region boundary lines 51 and 52; in practice, starting points 60 are generated over the entire length of all the region boundary lines 51 to 59.
- The starting points 60 may be generated by having the starting point generation unit 21 select an arbitrary point on each region boundary line 51 and so on and then generate the other starting points 60 one after another with that point as the reference.
- Alternatively, the operator may select an arbitrary point on each region boundary line 51 and so on, and the starting point generation unit 21 may generate the other starting points 60 one after another with that point as the reference.
- Thus, the method of generating the starting points 60 is not limited to one type; various methods can be employed.
- Step 104 Region division step
- The region dividing unit 17 of the image processing apparatus 1 divides the inside and outside of each region boundary line 51 and so on with division lines that connect points of approximately equal lightness, expanding the divided regions outward from the starting points 60.
- FIG. 4 shows a state in which many closed division lines (thin white lines) 70 and 75 have been formed in the image 40.
- In FIG. 4, the region boundary lines 51 and so on are shown in black, substantially the same color as the background, so as not to be confused with the division lines 70 and 75.
- The division lines 70 and 75 can suitably be formed by a watershed algorithm.
- In topographic terms, the division lines 70 and 75 correspond to contour lines: they are lines connecting pixels of equal lightness. As can be seen from FIG. 4, the division lines 70 and 75 are formed with very high precision and therefore express the unevenness of the screen in fine detail.
- From the viewpoint of producing 3D video, however, such division is over-segmentation. The subsequent processing therefore corrects the over-segmentation based on the region boundary lines 51 to 59 drawn by the operator.
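- As a concrete illustration of this division step, the sketch below uses OpenCV's marker-based watershed; seeding one marker per starting point is an assumption for illustration, since the patent specifies only that division starts from the starting points generated on the operator-drawn region boundary lines.

```python
# Minimal sketch of the over-segmenting region division using
# OpenCV's watershed. Seed placement (one integer label per starting
# point) is an illustrative assumption.
import cv2
import numpy as np

def watershed_division(image_bgr: np.ndarray,
                       seeds: list[tuple[int, int]]) -> np.ndarray:
    """Run watershed from the given (x, y) seed points and return a
    label map in which each divided region has its own integer id;
    watershed boundaries are marked with -1."""
    markers = np.zeros(image_bgr.shape[:2], dtype=np.int32)
    for label, (x, y) in enumerate(seeds, start=1):
        markers[y, x] = label
    cv2.watershed(image_bgr, markers)
    return markers
```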
- Step 105 Opening processing step
- The opening processing unit 18 of the image processing apparatus 1 performs a process that opens the closed division lines 75 (also called second division lines 75) in all areas except those sandwiched between two region boundary lines such as 51 and 52. As a result, only the division lines 70 (also called first division lines) in the areas sandwiched between two region boundary lines, that is, the areas containing the contours 50 of the head A, arm B, and chest C of the singer 42 and the contours 50 of the persons 43, 44, and 45, remain closed. This opening process is for using only the regions enclosed by the closed division lines 70 for 3D video, while excluding the other areas (those enclosed by the division lines 75) from the 3D processing targets.
- Step 106 Separation step by closed line
- The separation unit 19 of the image processing apparatus 1 separates the objects into units of the regions enclosed by the first division lines 70.
- the singer 42 is separated into three regions: a head A (also referred to as region A), an arm B (also referred to as region B), and a chest C (also referred to as region C).
- the persons 43, 44, and 45 become a region D, a region E, and a region F, respectively, and are separated from the background 41.
- FIG. 5 shows only the regions A and B among the separated regions A to F in white.
- Step 107 First depth value giving step
- The first depth value assigning unit 20 of the image processing apparatus 1 assigns a depth value representing the degree of perspective of each region to the regions A to F enclosed by the first division lines 70.
- The depth value is not particularly limited as long as it is a numerical value quantifying the degree of perspective in the depth direction of the screen of the image 40.
- For example, the depth value can be represented by a numerical value from 0 to 255 based on grayscale information (preferably lightness). It is preferable to assign depth values to the regions A to F so that the value increases from the back of the image 40 toward the front.
- FIG. 5 shows a state in which depth values have been given to the regions A to F.
- Since the depth value of the region B is the largest, the arm B is closest to the front in the image 40.
- Since the depth values of the regions E and F are the smallest, the persons 44 and 45 are at the back.
- In this example, these depth values are input manually by the operator while viewing the image 40, and the first depth value assigning unit 20 of the image processing apparatus 1, having received the input, assigns them.
- However, the depth value may also be assigned automatically, without manual input by the operator; details are described later.
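- Because each region receives a single grayscale value in 0-255, a depth map for the frame can be painted region by region. The sketch below assumes each separated region is available as a boolean mask; the masks and the concrete values are illustrative assumptions.

```python
# Minimal sketch: paint one depth value per region into an 8-bit
# depth map (0 = farthest, 255 = nearest). Region masks and values
# are illustrative assumptions.
import numpy as np

def build_depth_map(shape: tuple[int, int],
                    regions: dict[str, np.ndarray],
                    depths: dict[str, int]) -> np.ndarray:
    depth_map = np.zeros(shape, dtype=np.uint8)
    for name, mask in regions.items():
        depth_map[mask] = depths[name]
    return depth_map

# Example: arm B nearest, persons E and F farthest.
h, w = 480, 640
regions = {name: np.zeros((h, w), dtype=bool) for name in "ABCDEF"}
depths = {"A": 200, "B": 230, "C": 190, "D": 120, "E": 60, "F": 60}
depth_map = build_depth_map((h, w), regions, depths)
```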
- FIG. 7 shows the flow of an application example executed by the image processing apparatus of FIG. 1.
- Step 102A (S102A) in FIG. 7 corresponds to the above-described step 102 (S102), but is limited in that it accepts region boundary line data only for objects (such as persons) other than the background.
- After the processing up to step 107, the object/background separation unit 24 in the image processing apparatus 1 separates the objects from the surrounding background and removes the background (step 108).
- Step 201 Image frame reading step
- In parallel with steps 101 to 108 above, the image processing apparatus 1 reads, by the function of the image frame reading unit 15, the same image frame as the one read in step 101 from the storage unit 11.
- Step 202 Background boundary line information receiving step
- In the read image frame, the background boundary line information receiving unit 22 in the image processing apparatus 1 receives, from the operator, information on the background boundary lines indicated for the background.
- A background boundary line is a line traced around the background and around any region that can be identified with the background (an in-background region).
- When the operator draws such lines, the background boundary line information receiving unit 22 receives the information on the background boundary lines.
- Step 203 Separation step by closed background boundary line
- the separation unit 19 recognizes and separates a plurality of regions divided by the closed background boundary line.
- By the subsequent processing, the background thus becomes a plurality of regions having different depths.
- Step 204 Second depth value giving step
- The second depth value assigning unit 23 assigns, to each region enclosed by a background boundary line, a depth value representing the degree of perspective of that region.
- The depth value is not particularly limited as long as it is a numerical value quantifying the degree of perspective in the depth direction of the screen. For example, it can be represented by a numerical value from 0 to 255 based on grayscale information (preferably lightness).
- In this example, the operator inputs the depth value manually while viewing the background, and the second depth value assigning unit 23 of the image processing apparatus 1, having received the input, assigns it.
- However, the depth value may also be assigned automatically, without manual input by the operator.
- Step 205 Object separation / removal step
- The object/background separation unit 24 in the image processing apparatus 1 separates the background that has undergone the processing up to step 204 from the objects other than the background, and removes the objects.
- Step 206 Object / background synthesis step
- The object/background synthesis unit 25 synthesizes the objects after step 108 and the background after step 205 to reconstruct a single image frame.
- FIG. 8 shows an example of an image frame suitable for explaining the automatic assignment of depth values executed by the image processing apparatus of FIG. 1.
- FIG. 9 shows a flowchart of a preferred process for automatically assigning depth values by the image processing apparatus of FIG. 1.
- Step 301 Location point presence/absence determination step
- The location point presence/absence determining unit 26 determines the presence or absence of a location point based, for example, on the criterion of whether there is a portion of the background where the lightness changes rapidly. As a modification, the operator may first designate a location point, and the location point presence/absence determining unit 26 may then determine the presence or absence of the designated location point.
- Here, an example in which a background 81 and objects (humans in this example) 82 to 86 exist in the image 80 of FIG. 8 will be described.
- The background 81 is composed of a floor and a wall, and in this example there is a lightness level difference at the boundary between them.
- The location point presence/absence determining unit 26 recognizes this level difference as the location point 87 and determines that the location point 87 exists. In the modification described above, the operator views the image 80 and designates the floor/wall boundary as the location point 87, and the location point presence/absence determining unit 26 determines, based on this designation, that the location point 87 exists.
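- In the simplest reading of the "rapid lightness change" criterion, a horizontal floor/wall boundary can be found as the image row with the sharpest jump in mean lightness. The sketch below is a minimal, assumed implementation of that criterion only; real footage would need a more robust detector.

```python
# Minimal sketch: locate a horizontal location point (candidate
# floor/wall boundary) as the row with the largest jump in mean
# lightness between adjacent rows.
import numpy as np

def find_horizontal_location_point(gray: np.ndarray) -> int:
    row_means = gray.astype(np.float32).mean(axis=1)
    jumps = np.abs(np.diff(row_means))
    return int(np.argmax(jumps))
```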
- Step 302 Step of specifying the position of the predetermined object part
- The object predetermined part position specifying unit 27 calculates the distances L1 to L5 from the location point 87 to the toes 82a to 86a of the objects 82 to 86, thereby specifying the positions of the predetermined parts.
- Step 303 Depth value determination step
- The depth value determination unit 28 determines the depth value based on the position specified in step 302. In the example shown in FIG. 8, the depth value is determined on the assumption that the shorter the distance from the location point 87 to the toes 82a to 86a of the objects 82 to 86, the deeper into the screen (farther from the viewer) the object is.
- Step 304 Depth value giving step
- the first depth value assigning unit 20 assigns each depth value determined in step 303 to each of the objects 82 to 86.
- Similarly, the second depth value assigning unit 23 assigns a depth value to each in-background region.
- Step 305 Manual depth value acceptance determination step
- In this step, it is determined whether the first depth value assigning unit 20 and/or the second depth value assigning unit 23 has received a manually input depth value from the operator.
- If so, the first depth value assigning unit 20 and/or the second depth value assigning unit 23 proceeds to step 304 and assigns the depth value to each object and/or in-background region.
- If not, the process returns to step 301.
- An embodiment of the computer program for image processing according to the present invention is a program read and executed by the image processing apparatus 1 (referred to as a computer). The program causes the computer to execute the functions of: the image frame reading unit 15, which reads one or more of the image frames constituting a moving image; the region boundary line information receiving unit 16, which receives region boundary line information in the read image frame; the region dividing unit 17, which, taking predetermined points on the region boundary lines as starting points, expands the divided regions and divides the inside and outside of the region boundary lines with division lines connecting points of approximately equal lightness; the opening processing unit 18, which leaves the first division lines lying between two region boundary lines and opens the second division lines other than the first division lines among the division lines; the separation unit 19, which separates the image into units of the regions enclosed by the first division lines; and the first depth value assigning unit 20, which assigns, to each region enclosed by a first division line, a depth value representing the degree of perspective of that region.
- the computer program is stored in the information recording medium 30 and can be distributed independently of the computer.
- Alternatively, the computer program may be stored in a server; the server is accessed from the computer via a line such as the Internet, the program is downloaded and executed by the computer, and the computer thereby functions as the image processing apparatus 1. The same applies to the computer programs described later.
- the embodiment of the computer program for image processing according to the present invention may be a program that causes a computer to further execute the function of the starting point generation unit 21 that generates a plurality of starting points on the region boundary line.
- Further, the embodiment of the computer program for image processing may be a program in which the region boundary line information receiving unit 16 is configured to receive information on region boundary lines indicated for objects other than the background, and which causes the computer to further execute the functions of: the background boundary line information receiving unit 22, which receives information on background boundary lines indicated for the background; the second depth value assigning unit 23, which assigns, to each region enclosed by a background boundary line, a depth value representing the degree of perspective of that region; and the object/background synthesis unit 25, which synthesizes the objects to which depth values have been given by the first depth value assigning unit 20 with the background to which depth values have been given by the second depth value assigning unit 23.
- Further, the program may cause the computer to execute the functions of: the location point presence/absence determining unit 26, which determines whether the image frame contains a location point indicating the positions of a plurality of objects and/or of the in-background regions constituting the background; the object predetermined part position specifying unit 27, which, when the location point exists, specifies the positions of predetermined parts of the plurality of objects and/or in-background regions; the depth value determination unit 28, which determines depth values representing the degree of perspective of the objects and/or in-background regions based on the specified positions; and the first depth value assigning unit 20 and/or the second depth value assigning unit 23, which assign the determined depth values to the objects and/or in-background regions.
- The present invention is not limited to the image processing apparatus, image processing method, image processing computer program, and information recording medium storing the computer program according to the above-described embodiment, and can be implemented with various modifications.
- For example, the background boundary line information receiving unit 22, the second depth value assigning unit 23, the object/background separation unit 24, and the object/background synthesis unit 25 are not essential components of the image processing apparatus 1 and need not be provided. In that case, when 3D processing is required for the background and the in-background regions, image processing similar to that for the objects can be performed on them. Likewise, the location point presence/absence determining unit 26, the object predetermined part position specifying unit 27, and the depth value determination unit 28 are not essential components and need not be provided in the image processing apparatus 1; in that case, the depth values given to the objects and in-background regions can be set manually by the operator.
- Manual input by the operator for the components 16 to 28 and the steps they execute need only be performed on some of the still images constituting the moving image (also called key frames); it is not necessary to perform it for all still images.
- FIG. 10 is a schematic diagram of an image processing apparatus according to the second embodiment of the present invention.
- The image processing apparatus 100 includes the input unit 10, storage unit 11, external memory loading unit 12, communication unit 13, interface 14, image frame reading unit 15, region boundary line information receiving unit 16, region dividing unit 17, opening processing unit 18, separation unit 19, first depth value assigning unit 20, starting point generation unit 21, background boundary line information receiving unit 22, and second depth value assigning unit 23 provided in the first embodiment.
- In addition to the above components, the image processing apparatus 100 further includes a selection data receiving unit 150, a first feature point receiving unit 151, a second feature point search unit 152, a second feature point specifying unit 153, a movement position extraction unit 154, a region boundary line coordinate value calculation unit 155, a region boundary line automatic generation unit 156, and a first frame number determination unit 157.
- Of these, the second feature point search unit 152 is not an essential component and need not be provided.
- Likewise, the first frame number determination unit 157 is not an essential component and need not be provided. As described in the first embodiment, the above components are classified by function and do not necessarily represent physically separate hardware.
- The image processing apparatus 100 is a device that can automatically generate region boundary lines in a second image frame that exists later in time series than a first image frame, among the plurality of image frames constituting a moving image, in which region boundary lines have already been generated.
- The selection data receiving unit 150 is a part that receives a signal designating the portion enclosed by a region boundary line after the user generates the region boundary line (also referred to as a "roto brush") for an object (for example, a person or a part of a person such as an arm) in the first image frame. Optionally, the selection data receiving unit 150 also serves as a part that receives designation of the number of frames for which the region boundary line automatic generation function is to be executed.
- the first feature point accepting unit 151 is a part that functions as first feature point accepting means for accepting the coordinate value of the first feature point existing in the region boundary line of the first image frame.
- Prior to the processing of the second feature point specifying unit 153 described later, the second feature point search unit 152 functions as second feature point search means that searches for the second feature point corresponding to the coordinate value of the first feature point, based on the degree of approximation of at least one of the color and lightness of the pixels.
- The search for the second feature point looks for the pixel that most closely approximates, in at least one of color and lightness, the one or more pixels within the range of the first feature point. Searching on both color and lightness is more accurate, but depending on the situation the second feature point can be searched for on color alone or lightness alone to speed up the search.
- When the first feature point is composed of a plurality of pixels, the information on those pixels is combined to search for the second feature point closest to the first feature point.
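- One plausible way to realize this combined multi-pixel comparison is template matching over the patch of pixels forming the first feature point. The sketch below assumes OpenCV template matching over BGR color values as a stand-in for the color/lightness approximation; the patch size and matching metric are illustrative assumptions.

```python
# Minimal sketch of the second feature point search via template
# matching; TM_SQDIFF_NORMED means the lowest score is the best match.
import cv2
import numpy as np

def search_second_feature_point(frame2_bgr: np.ndarray,
                                patch_bgr: np.ndarray) -> tuple[int, int]:
    """Return the (x, y) top-left position in the second frame whose
    pixels best approximate the first feature point's patch."""
    scores = cv2.matchTemplate(frame2_bgr, patch_bgr, cv2.TM_SQDIFF_NORMED)
    _, _, min_loc, _ = cv2.minMaxLoc(scores)
    return min_loc
```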
- the second feature point specifying unit 153 is a part that functions as second feature point specifying means for specifying the coordinate value of the second feature point corresponding to the coordinate value of the first feature point in the second image frame.
- the second feature point specifying unit 153 determines the second feature point based on the search by the second feature point searching unit 152.
- The movement position extraction unit 154 is a part that extracts position information (preferably coordinates, although a distance and direction other than coordinates may be adopted) of the one or more pixels constituting the second feature point.
- The function of the movement position extraction unit 154 may be given to the second feature point specifying unit 153 so that the movement position extraction unit 154 need not be provided separately.
- The region boundary line coordinate value calculation unit 155 is a part that calculates the coordinates of the plurality of pixels constituting a new region boundary line in the second image frame (that is, a region boundary line slightly changed from the region boundary line in the first image frame) by adding the position information extracted by the movement position extraction unit 154 to the coordinates of the plurality of pixels constituting the region boundary line in the first image frame.
- The region boundary line automatic generation unit 156 may have the function of the region boundary line coordinate value calculation unit 155 so that the latter need not be provided separately.
- Based on the movement information from the first feature point to the second feature point, the region boundary line automatic generation unit 156 functions as region boundary line automatic generation means that automatically generates, in the second image frame, a new region boundary line corresponding to the region boundary line of the first image frame.
- The first frame number determination unit 157 functions as frame number determination means that, when the selection data receiving unit 150 has received a designation of the number of frames for which the region boundary line automatic generation function is to be executed, determines whether the number of image frames that have undergone the automatic generation of new region boundary lines has reached the designated number. Until the first frame number determination unit 157 determines that the designated number of image frames has been reached, at least the first feature point receiving unit 151, the second feature point specifying unit 153, and the region boundary line automatic generation unit 156 execute their respective processes.
- FIG. 11 is a diagram illustrating an example of processing using the image processing apparatus of FIG. 10.
- FIG. 12 shows a diagram following FIG. 11.
- 11A of FIG. 11 shows the state before a region boundary line (roto brush) is generated for the chemistry teacher (one of the objects) 161 in a screen example 160.
- 11B of FIG. 11 shows the state after the roto brush has been executed.
- When performing the roto brush, the user first checks the check box 162 in the screen example 160, then manually draws the roto brush 163 around the outside of the chemistry teacher 161.
- Next, when the user designates the key of the tracking window 165, another screen 170 shown in FIG. 12 is displayed.
- On the screen 170, there is an area 171 for designating the number of image frames for which the roto brush is to be generated automatically. For example, if "3" is input to the area 171, automatic generation of the roto brush is executed up to the third image frame, counting from the currently operated screen example 160.
- Then, the user designates a first feature point (also referred to as a "region of interest") 171.
- In the next image frame, the image processing apparatus 100 searches for a second feature point approximating the first feature point 171. This function is described in detail below.
- FIG. 13 is a diagram for explaining in detail an image processing method using the image processing apparatus of FIG. 10.
- FIG. 14 is a flowchart for explaining the processing flow of the image processing method using the image processing apparatus of FIG. 10.
- The image processing method of this embodiment makes it possible to automatically generate, in a second image frame existing later in time series than a first image frame in which region boundary lines have already been generated among the plurality of image frames constituting a moving image, region boundary lines corresponding to those of the first image frame.
- FIG. 14 shows that a plurality of steps are performed between step 102 (S102) and step 103 (S103) of FIG. 6.
- The first of these steps receives selection data for the tracking-target frame. More specifically, this step includes an image frame number designation receiving step that accepts designation of the number of image frames for which the processing of the region boundary line automatic generation step (described in detail later) is to be executed; it corresponds to the step of receiving the numerical value input in the area 171 of FIG. 12.
- Step 1021 First feature point receiving step
- The first feature point receiving step is a step of receiving the coordinate value of a first feature point (also referred to as a "region of interest") existing within a region boundary line of the first image frame.
- In FIG. 13, frame 1 is a frame on which the user has manually performed the roto brush: roto brushes 183 and 184 (that is, the region boundary lines 183 and 184) have already been drawn outside and inside the contour of the object 181.
- 13B of FIG. 13 shows frame 2, which is displayed immediately after frame 1 in time series. In frame 2 there is an object 181a (indicated by a thin solid line in 13B) corresponding to the object 181 of frame 1 (indicated by a dotted line in 13B) after it has moved slightly to the right.
- Step 1022 Second feature point search step
- The second feature point search step is a step that, prior to the processing of the second feature point specifying step described later, searches for a second feature point corresponding to the coordinate value of the first feature point (the nose 182 in FIG. 13), based on the degree of approximation of at least one of the color and lightness of the pixels.
- Step 1023 Second feature point specifying step
- The second feature point specifying step is a step of specifying, in the second image frame (corresponding to frame 2 in 13B of FIG. 13), the coordinate value of the second feature point corresponding to the coordinate value of the first feature point.
- In frame 2, the nose 182 serving as the first feature point has moved in the direction of arrow A.
- The second feature point specifying unit 153 specifies the nose 182a in frame 2 as the second feature point.
- Step 1024 Movement position extraction step
- the movement position extraction unit 154 extracts the coordinates of one or more pixels constituting the second feature point (nose 182a).
- Step 1025 New region boundary line coordinate value calculation step
- The new region boundary line coordinate value calculation step calculates the coordinate values of the plurality of pixels constituting the new region boundary lines 183a and 184a in the second image frame (frame 2, the next frame) by adding the position information extracted in the movement position extraction step to the coordinates of the plurality of pixels constituting the region boundary lines 183 and 184 in the first image frame (frame 1, the key frame).
- That is, the direction and distance of the movement from the nose 182 to the nose 182a are added to the coordinates of the pixels constituting the region boundary lines 183 and 184, and the coordinate values of the pixels constituting the new region boundary lines 183a and 184a are calculated.
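- As a minimal sketch, if the movement information is taken to be a single (dx, dy) displacement from the first feature point to the second feature point applied uniformly to every boundary pixel, the calculation reduces to a translation; the uniform displacement is an illustrative simplification of the step above.

```python
# Minimal sketch of the new region boundary line coordinate
# calculation: translate every boundary pixel by the feature point
# displacement.
import numpy as np

def shift_boundary(boundary_xy: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """boundary_xy is an (N, 2) array of (x, y) boundary pixel
    coordinates; returns the coordinates of the new boundary."""
    return boundary_xy + np.array([dx, dy])

# Example: the nose moved 6 px right and 1 px down between frames.
boundary = np.array([[100.0, 50.0], [101.0, 50.0], [102.0, 51.0]])
new_boundary = shift_boundary(boundary, dx=6.0, dy=1.0)
```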
- Step 1026 Region boundary line automatic generation step
- Based on the movement information from the first feature point (nose 182) to the second feature point (nose 182a), the region boundary line automatic generation step automatically generates, in the second image frame (frame 2), new region boundary lines 183a and 184a corresponding to the region boundary lines 183 and 184 of the first image frame (frame 1); it is a step of connecting the pixels having the new coordinates calculated in the new region boundary line coordinate value calculation step.
- Step 1027 Frame number discrimination step
- The frame number determination step is a step of determining whether the number of image frames that have undergone the automatic generation of new region boundary lines has reached the designated number of image frames. If the designated number has not been reached, the process returns to step 1021 (S1021), the key frame is switched to frame 2, and the same processing from step 1021 (S1021) onward is performed. In this case, since the coordinates of the nose 182a as the second feature point have already been specified, the first feature point receiving unit 151 accepts the nose 182a as the first feature point in step 1021 (S1021) without waiting for a new designation from the user.
- the processing of step 1022 (S1022) to step 1027 (S1027) is executed for the next frame (frame 3 in 13C of FIG. 13).
- When step 1024 (S1024) is included in step 1023 (S1023) and step 1025 (S1025) is included in step 1026 (S1026), the process can proceed from step 1023 (S1023) directly to step 1026 (S1026). Therefore, until the frame number determination step determines that the designated number of image frames has been reached, at least the first feature point receiving step, the second feature point specifying step, and the region boundary line automatic generation step are executed.
- When it is determined in step 1027 that the processing for the designated number of frames is complete, the process proceeds to step 103 (S103).
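- Tying the pieces together, a driver loop over the designated number of frames might look like the sketch below. It assumes the hypothetical helpers search_second_feature_point() and shift_boundary() from the earlier sketches; n_frames plays the role of the number entered in the area 171.

```python
# Illustrative driver for steps 1021-1027: track the feature patch
# frame to frame and carry the region boundary line along with it.
import numpy as np

def propagate_roto_brush(frames_bgr, boundary_xy, patch_xy, patch_size, n_frames):
    boundaries = [boundary_xy]
    px, py = patch_xy
    for i in range(min(n_frames, len(frames_bgr) - 1)):
        patch = frames_bgr[i][py:py + patch_size, px:px + patch_size]
        nx, ny = search_second_feature_point(frames_bgr[i + 1], patch)
        boundaries.append(shift_boundary(boundaries[-1], nx - px, ny - py))
        # the second feature point becomes the next frame's first feature point
        px, py = nx, ny
    return boundaries
```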
- An embodiment of the computer program for image processing according to the present invention is a program read and executed by the image processing apparatus 100 (referred to as a computer). The program causes the computer to execute the functions of: the image frame reading unit 15, which reads one or more of the image frames constituting a moving image; the region boundary line information receiving unit 16, which receives region boundary line information in the read image frame; the region dividing unit 17, which, taking predetermined points on the region boundary lines as starting points, expands the divided regions and divides the inside and outside of the region boundary lines with division lines connecting points of approximately equal lightness; the opening processing unit 18, which leaves the first division lines lying between two region boundary lines and opens the second division lines other than the first division lines among the division lines; the separation unit 19, which separates the image into units of the regions enclosed by the first division lines; the first depth value assigning unit 20, which assigns, to each region enclosed by a first division line, a depth value representing the degree of perspective of that region; the first feature point receiving unit 151; the second feature point specifying unit 153; and the region boundary line automatic generation unit 156.
- the computer program is stored in the information recording medium 30 and can be distributed independently of the computer.
- Alternatively, the computer program may be stored in a server; the server is accessed from the computer via a line such as the Internet, the program is downloaded and executed by the computer, and the computer thereby functions as the image processing apparatus 100. The same applies to the computer programs described later.
- Further, the program may cause the computer (image processing apparatus 100) to additionally execute the function of the second feature point search unit 152, which, prior to the processing of the second feature point specifying unit 153, searches for the second feature point corresponding to the coordinate value of the first feature point based on the degree of approximation of at least one of the color and lightness of the pixels.
- Further, the program may cause the computer to additionally execute the functions of: the selection data receiving unit 150, serving as image frame number designation receiving means that accepts designation of the number of image frames to be processed by the region boundary line automatic generation unit 156; and the first frame number determination unit 157, which determines whether the number of image frames that have undergone the automatic generation of new region boundary lines has reached the designated number. Until the first frame number determination unit 157 determines that the designated number of image frames has been reached, at least the first feature point receiving unit 151, the second feature point specifying unit 153, and the region boundary line automatic generation unit 156 are caused to execute their respective processes.
- The present invention is not limited to the image processing apparatus, image processing method, image processing computer program, and information recording medium storing the computer program according to the above-described embodiment, and can be implemented with various modifications.
- For example, the second feature point search unit 152 serving as the second feature point search means may search for the second feature point corresponding to the coordinate value of the first feature point based not on the degree of approximation of the color and lightness of the pixels but on other pixel information such as color shading.
- The selection data receiving unit 150 need not function as image frame number designation receiving means; in that case, the first frame number determination unit 157 functioning as the frame number determination means may be omitted. However, even when the selection data receiving unit 150 does not function as image frame number designation receiving means, the first frame number determination unit 157 may still be provided so that the processing terminates after a predetermined number of image frames.
- FIG. 15 is a schematic diagram of an image processing apparatus according to the third embodiment of the present invention.
- The image processing apparatus 200 includes the input unit 10, storage unit 11, external memory loading unit 12, communication unit 13, interface 14, image frame reading unit 15, region boundary line information receiving unit 16, region dividing unit 17, opening processing unit 18, separation unit 19, first depth value assigning unit 20, starting point generation unit 21, background boundary line information receiving unit 22, and second depth value assigning unit 23 provided in the first embodiment.
- In addition to the above components, the image processing apparatus 200 further includes a condition receiving unit 191, a pixel depth value assigning unit 192, a pixel movement position tracking unit 193, a depth value automatic generation unit 194, and a second frame number determination unit 195.
- Optionally, the image processing apparatus 200 may also be provided with the selection data receiving unit 150, first feature point receiving unit 151, second feature point search unit 152, second feature point specifying unit 153, movement position extraction unit 154, region boundary line coordinate value calculation unit 155, region boundary line automatic generation unit 156, and first frame number determination unit 157 of the image processing apparatus 100 according to the second embodiment.
- In FIG. 15, the components 150 to 157 peculiar to the second embodiment are indicated by regions surrounded by dotted lines.
- As before, the above components are classified by function and do not necessarily represent physically separate hardware.
- The image processing apparatus 200 is a device that can automatically generate a depth value in a region of a second image frame corresponding to a region enclosed by a first division line in a first image frame to which a depth value has already been assigned, the second image frame existing later in time series than the first image frame among the plurality of image frames constituting a moving image.
- The condition receiving unit 191 is a part that receives the conditions for automatically generating depth values, and also serves as a part that receives designation of the number of frames for which the depth value automatic generation function is to be executed.
- The pixel depth value assigning unit 192 functions as pixel depth value assigning means that assigns, to each of the one or more first pixels existing in a region enclosed by a first division line in the first image frame, the depth value given to that region. For example, if there are 100 pixels in the region enclosed by the first division line, the pixel depth value assigning unit 192 assigns the region's depth value to each of those 100 pixels.
- The pixel movement position tracking unit 193 functions as pixel movement position tracking means that tracks to which pixel in the second image frame each first pixel has moved.
- In the above example, the pixel movement position tracking unit 193 searches individually for the pixel in the second image frame (the second pixel) to which each of the 100 first pixels has moved. The search looks, for each first pixel, for the second pixel that most closely approximates it in at least one of color and lightness. Searching on both color and lightness is more accurate, but depending on the situation the search can be performed on color alone or lightness alone to speed it up.
- The depth value automatic generation unit 194 functions as depth value automatic generation means that automatically generates, for the region in the second image frame constituted by the second pixels to which the first pixels have moved, the depth value assigned by the pixel depth value assigning unit. For example, if 99 second pixels are found for the 100 first pixels of the first image frame, the same depth value as that of the first pixels is generated for those 99 second pixels; that is, the same depth value is given to the region constituted by the 99 second pixels.
- As a result, for the frames following a key frame (for example, the first image frame) in time series, depth values can be assigned automatically, without manually performing at least the operations of drawing region boundary lines and assigning depth values.
- When the condition receiving unit 191 has received a designation of the number of frames for which the depth value automatic generation function is to be executed, the second frame number determination unit 195 is a part that determines whether the number of image frames that have undergone the automatic generation of depth values has reached the designated number. Until the second frame number determination unit 195 determines that the designated number of image frames has been reached, at least the pixel depth value assigning unit 192, the pixel movement position tracking unit 193, and the depth value automatic generation unit 194 execute their respective processes.
- FIG. 16 is a diagram illustrating an example of processing using the image processing apparatus of FIG. 15.
- FIG. 17 is a flowchart for explaining the processing flow of the image processing method using the image processing apparatus of FIG. 15.
- The image processing method of this embodiment can automatically generate a depth value in a region of a second image frame corresponding to a region enclosed by a first division line in a first image frame to which a depth value has already been assigned, the second image frame existing later in time series than the first image frame among the plurality of image frames constituting a moving image.
- FIG. 17 shows a plurality of steps performed after step 106 (S106) of FIG. 6. Note that the processing up to step 106 (S106) in FIG. 6 and the processing of step 107 (S107) in FIG. 17 are performed not on all image frames constituting the moving image but on one or more image frames called key frames; the processing from step 1070 in FIG. 17 onward can be performed on the image frames after a key frame or on the image frames between key frames.
- 16A of FIG. 16 shows an image frame 202 containing a clenched hand 201.
- In the next image frame, the clenched hand 201 moves to the area 203 indicated by dots (see the movement within the frame 204).
- Each pixel constituting the image of the clenched hand 201 can be tracked to determine to which pixel in the next image frame it has moved, and can thus be identified as having moved to the area 203.
- 16B of FIG. 16 shows a situation in which the cylinder 205 drawn in frame 1 has moved to the position of the cylinder 206 (diagonally to the lower right) in the next frame (frame 2).
- A person judging by eye can easily guess that the cylinder 205 has moved to the cylinder 206, but a computer cannot make that guess without clues. Therefore, an optical flow algorithm is used to determine to which pixel in frame 2 each pixel constituting the cylinder 205 in frame 1 has moved.
- This algorithm examines the pixels in frame 2 on the basis of pixel information such as the color or lightness of each pixel constituting the cylinder 205 and selects the pixel closest to that information. With this program, it is possible to determine where in frame 2 the cylinder 205 has moved.
- In this way, the region after the movement of the cylinder 205 is identified as the cylinder 206. Since a depth value has been given to each pixel constituting the cylinder 205, the same depth value can be given to each pixel constituting the cylinder 206; as a result, a depth value is assigned to the cylinder 206 automatically. For example, if the cylinder 205 is composed of 10 divided regions, the cylinder 206 also has 10 divided regions, and each divided region of the cylinder 206 can be given the same depth value as the corresponding divided region of the cylinder 205.
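- The sketch below illustrates this depth propagation with dense optical flow, assuming OpenCV's Farneback algorithm as the optical flow step; scattering each key-frame pixel's depth to its flowed position is an illustrative simplification (a production implementation would also handle occlusions and holes).

```python
# Minimal sketch: carry the key frame's per-pixel depth values into
# the next frame by following dense optical flow (Farneback).
import cv2
import numpy as np

def propagate_depth(depth1: np.ndarray,
                    gray1: np.ndarray,
                    gray2: np.ndarray) -> np.ndarray:
    flow = cv2.calcOpticalFlowFarneback(gray1, gray2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    new_x = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
    new_y = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
    depth2 = np.zeros_like(depth1)
    depth2[new_y, new_x] = depth1  # each pixel carries its depth along
    return depth2
```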
- The advantage of the image processing method using the image processing apparatus 200 according to the third embodiment is that region division processing within or between objects need not be performed for the image frames after the key frame. As long as the key frame has been divided into regions and given depth values, the positions after movement can be searched for pixel by pixel in the image frames after the key frame. For this reason, in the example of 16B, it is not necessary to trace the region boundaries again in frame 2.
- Step 107 First depth value giving step
- This step assigns a depth value to each region (each divided region of an object) in frame 1, which serves as the key frame.
- Here, the depth value is assigned based on judgment by the human eye, in the manner described in the first embodiment.
- Step 1070 Condition reception step
- This step is the above-described process executed by the condition receiving unit 191.
- Step 1071 Pixel depth value assignment step
- This step is executed by the pixel depth value assigning unit 192: each of the one or more first pixels existing in a region enclosed by a first division line in the first image frame is assigned the depth value given to that region.
- Step 1072 Pixel movement position tracking step
- This step is executed by the pixel movement position tracking unit 193, which tracks to which pixel in the second image frame each first pixel has moved.
- Step 1073 Post-movement pixel depth value giving step
- This step assigns, to each new pixel found in the second image frame, the depth value of the corresponding pixel in the first image frame.
- This step can be executed by the depth value automatic generation unit 194; alternatively, a separate component (a post-movement pixel depth value assigning unit) may be provided to execute it.
- Step 1074 Depth value automatic generation step
- This step is executed by the depth value automatic generation unit 194: for the region in the second image frame constituted by the second pixels to which the first pixels have moved, the depth value assigned in the pixel depth value assignment step is generated automatically.
- Alternatively, the same depth value can be given to the region constituted by the second pixels based on the depth values given to the second pixels in the post-movement pixel depth value giving step. That is, it does not matter whether the depth value automatic generation unit 194 and the depth value automatic generation step it executes operate based on the depth values assigned to the first pixels or on the depth values assigned to the second pixels.
- Step 1075 Second frame number determination step
- The second frame number determination step determines whether the number of image frames that have undergone the automatic generation of depth values has reached the designated number of image frames. If the designated number has not been reached, the process returns to step 1071 (S1071), the key frame is switched from the previous image frame to the next image frame in time series, and the same processing from step 1071 (S1071) onward is performed. In this case, since the pixel depth value assignment of step 1071 (S1071) has already been performed in the previous pass, the pixel depth value assigning unit 192 accepts the previously designated depth values as they are. Thereafter, the processing from step 1072 (S1072) to step 1075 (S1075) is executed for the next image frame.
- This series of processing is performed automatically until the designated number of frames has been processed. Therefore, until the second frame number determination step determines that the designated number of image frames has been reached, at least the pixel depth value assignment step, the pixel movement position tracking step, the post-movement pixel depth value giving step (which may be included in the following depth value automatic generation step), and the depth value automatic generation step are executed.
- When it is determined in step 1075 that the processing for the designated number of frames is complete, the processing ends.
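- A driver loop for this repetition might look like the sketch below; it assumes the hypothetical propagate_depth() helper from the earlier sketch, with n_frames standing in for the designated number of image frames.

```python
# Illustrative driver for steps 1071-1075: propagate the key frame's
# depth map across the designated number of successive frames.
def propagate_over_frames(depth_key, gray_frames, n_frames):
    depths = [depth_key]
    for i in range(min(n_frames, len(gray_frames) - 1)):
        depths.append(propagate_depth(depths[-1],
                                      gray_frames[i], gray_frames[i + 1]))
    return depths
```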
- An embodiment of the computer program for image processing according to the present invention is a program read and executed by the image processing apparatus 200 (referred to as a computer). The program causes the computer to execute the functions of: the image frame reading unit 15, which reads one or more of the image frames constituting a moving image; the region boundary line information receiving unit 16, which receives region boundary line information in the read image frame; the region dividing unit 17, which, taking predetermined points on the region boundary lines as starting points, expands the divided regions and divides the inside and outside of the region boundary lines with division lines connecting points of approximately equal lightness; the opening processing unit 18, which leaves the first division lines lying between two region boundary lines and opens the second division lines other than the first division lines among the division lines; the separation unit 19, which separates the image into units of the regions enclosed by the first division lines; the first depth value assigning unit 20, which assigns, to each region enclosed by a first division line, a depth value representing the degree of perspective of that region; the pixel depth value assigning unit 192; the pixel movement position tracking unit 193; and the depth value automatic generation unit 194.
- The computer program for image processing may further cause the computer to execute the functions of the condition receiving unit 191, the second frame number determination unit 195, and a post-movement pixel depth value assigning unit that executes the post-movement pixel depth value giving step.
- the computer program is stored in the information recording medium 30 and can be distributed independently of the computer.
- Alternatively, the computer program may be stored in a server; the server is accessed from the computer via a line such as the Internet, the program is downloaded and executed by the computer, and the computer thereby functions as the image processing apparatus 200.
- The present invention is not limited to the image processing apparatus, image processing method, image processing computer program, and information recording medium storing the computer program according to the above-described embodiment, and can be implemented with various modifications.
- For example, the pixel movement position tracking unit 193 serving as the pixel movement position tracking means, and the pixel movement position tracking step it executes, may perform tracking based not on the color and lightness of the pixels but on other pixel information such as color shading.
- The components of the image processing apparatuses 1, 100, and 200 in all the embodiments, including this one, may be combined in any way unless impossible. Further, the order of the steps executed by the components may be changed in any way unless impossible. For example, in the second embodiment, step S1027 (see FIG. 14) may be moved after step S107 so that the assignment of depth values is completed frame by frame before the process returns to step S1021.
- the present invention can be used for production of 3D video.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
Description
15 Image frame reading unit (image frame reading means)
16 Region boundary line information receiving unit (region boundary line information receiving means)
17 Region dividing unit (region dividing means)
18 Opening processing unit (opening processing means)
19 Separation unit (separation means)
20 First depth value assigning unit (first depth value assigning means)
21 Start point generating unit (start point generating means)
22 Background boundary line information receiving unit (background boundary line information receiving means)
23 Second depth value assigning unit (second depth value assigning means)
25 Object/background combining unit (object/background combining means)
26 Location point presence determining unit (location point presence determining means)
27 Object predetermined part position specifying unit (object predetermined part position specifying means)
28 Depth value determining unit (depth value determining means)
40 Image (image frame or frame)
41 Background
42 Singer (object)
43-45 Persons (objects)
51-59 Region boundary lines
60 Start point
70 First division line (one of the division lines)
75 Second division line (one of the division lines)
80 Image (image frame or frame)
81 Background
82-86 Objects
82a-86a Predetermined parts
87 Location point
A Head (region)
B Arm (region)
C Chest (region)
D-F Regions
100, 200 Image processing apparatus (computer)
150 Selection data receiving unit (including image frame count designation receiving means)
151 First feature point receiving unit (first feature point receiving means)
152 Second feature point searching unit (second feature point searching means)
153 Second feature point specifying unit (second feature point specifying means)
156 Region boundary line automatic generating unit (region boundary line automatic generating means)
157 First frame count determining unit (frame count determining means)
182 Nose (example of a first feature point)
182a Nose (example of a second feature point)
183, 184 Region boundary lines
183a, 184a (New) region boundary lines
183b, 184b (New) region boundary lines
192 Pixel depth value allocating unit (pixel depth value allocating means)
193 Pixel movement position tracking unit (pixel movement position tracking means)
194 Automatic depth value generating unit (automatic depth value generating means)
1. Image processing apparatus and image processing method
Using the function of the image frame reading unit 15, the image processing apparatus 1 reads one or more image frames constituting a moving image from the storage unit 11. An image frame is, for example, an image 40 as shown in FIG. 2. The image 40 mainly contains a background 41, a singer 42, and other persons 43, 44, 45. Against the background 41, in order from the position nearest the viewer of the image 40, there are the singer 42 and the persons 43, 44, 45 (the depth relations among the persons 43, 44, 45 may be unknown).
In this example, the operator of the image processing apparatus 1 performs processing that gives the singer 42 and the persons 43, 44, 45 a sense of depth relative to the background 41, and further gives a sense of depth to a plurality of parts within the object that is the singer 42. To this end, the operator divides the singer 42 into three regions, head A, arm B, and chest C, treats the persons 43, 44, 45 as single regions D, E, F respectively, and draws region boundary lines 51, 52, 53, 54, 55, 56, 57, 58, 59 (see FIG. 3). When the operator draws these region boundary lines 51-59 on the image 40, the region boundary line information receiving unit 16 of the image processing apparatus 1 receives the coordinate data of the dots constituting the region boundary lines 51-59. The region boundary line 51 traces the outside of the contour 50 of the singer 42 and the person 44; at the same time, it also traces part of the outside of the contour of the chest C. The region boundary line 52 traces the inside of the contour of the head A from the singer's head to the shoulders; at the same time, it also traces part of the outside of the contour of the arm B and part of the outside of the contour of the person 44. The region boundary line 53 traces the inside of the contour of the arm B. The region boundary line 54 traces the inside of the contour of the chest C. The region boundary line 55 traces the inside of the contour of the person 44. The region boundary line 56 traces the outside of the contour 50 of the person 43. The region boundary line 57 traces the inside of the contour 50 of the person 43. The region boundary line 58 traces the outside of the contour 50 of the person 45. The region boundary line 59 traces the inside of the contour 50 of the person 45.
The start point generating unit 21 of the image processing apparatus 1 generates, at predetermined intervals along the region boundary lines 51-59, start points 60 from which the region division processing begins. Here, "predetermined intervals" is to be read broadly, covering both equal and unequal spacing. In FIG. 3, for ease of understanding, the start points 60 are shown only on parts of the region boundary lines 51 and 52, but the start points 60 are generated over the entire length of all the region boundary lines 51-59. The start points 60 may be generated by having the start point generating unit 21 select an arbitrary point on each region boundary line 51 etc. and generate the other start points 60 one after another relative to that point; alternatively, the operator may select an arbitrary point on each region boundary line 51 etc., and the start point generating unit 21 may then generate the other start points 60 one after another relative to it. The method of generating the start points 60 is thus not limited to one type, and various methods can be adopted.
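As an illustration only (the patent fixes no single generation method), the following sketch samples start points at a fixed arc-length spacing along a boundary polyline given as operator-drawn dot coordinates. The function name, the NumPy representation, and the spacing value are assumptions introduced here, not the patent's implementation.

```python
import numpy as np

def generate_start_points(boundary, spacing=10.0):
    """Sample start points every `spacing` pixels of arc length along a
    boundary polyline given as an (N, 2) array of dot coordinates."""
    boundary = np.asarray(boundary, dtype=float)
    seg = np.diff(boundary, axis=0)                    # segment vectors
    seg_len = np.hypot(seg[:, 0], seg[:, 1])           # segment lengths
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])  # cumulative arc length
    targets = np.arange(0.0, arc[-1], spacing)         # equally spaced positions
    # Interpolate x and y independently against arc length.
    xs = np.interp(targets, arc, boundary[:, 0])
    ys = np.interp(targets, arc, boundary[:, 1])
    return np.stack([xs, ys], axis=1)

# Example: a square boundary line traced clockwise.
square = [(0, 0), (100, 0), (100, 100), (0, 100), (0, 0)]
print(generate_start_points(square, spacing=25.0))
```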
The region dividing unit 17 of the image processing apparatus 1 divides the inside and outside of the region boundary lines with division lines connecting points of similar lightness, expanding the divided regions from the start points 60 toward the inside and outside of each region boundary line 51 etc. FIG. 4 shows the image 40 with many closed division lines (thin white lines) 70, 75 formed on it. To avoid confusion with the division lines 70, 75, the region boundary lines 51 etc. are drawn in FIG. 4 in black, roughly the same as the background color. The division lines 70, 75 can suitably be formed by a watershed algorithm. In topographic terms, the division lines 70, 75 correspond to contour lines: lines connecting pixels of equal lightness. As is clear from FIG. 4, the division lines 70, 75 are formed with very high precision and so express the unevenness of the picture in fine detail. From the standpoint of producing 3D video, however, such division is over-segmentation. For this reason, the subsequent processing corrects the over-segmentation based on the region boundary lines 51-59 drawn by the operator.
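The watershed algorithm named above is widely available; a minimal sketch using OpenCV's implementation follows, assuming the seeds are the start points 60 given as (x, y) pixel coordinates. The single-pixel seeding scheme and the toy image are illustrative assumptions, not the patent's implementation.

```python
import cv2
import numpy as np

def watershed_division_lines(image_bgr, start_points):
    """Run watershed from integer-labelled seed points and return a mask
    of the division lines (the watershed ridges between grown regions)."""
    markers = np.zeros(image_bgr.shape[:2], dtype=np.int32)
    for label, (x, y) in enumerate(start_points, start=1):
        markers[int(y), int(x)] = label     # one label per seed point
    cv2.watershed(image_bgr, markers)       # ridge pixels become -1 in place
    return (markers == -1).astype(np.uint8) * 255

# Toy example: a dark object on a light ground, with three seeds.
img = np.full((120, 120, 3), 200, np.uint8)
cv2.circle(img, (60, 60), 25, (40, 40, 40), -1)
lines = watershed_division_lines(img, [(60, 60), (10, 10), (110, 110)])
```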
The opening processing unit 18 of the image processing apparatus 1 performs a process of opening the closed division lines 75 (also called second division lines 75) in regions other than the regions sandwiched between two region boundary lines 51, 52, etc. As a result, only the division lines 70 (also called first division lines) in the regions sandwiched between two region boundary lines, that is, the regions containing the contours 50 of the head A, arm B, and chest C of the singer 42 and the contours 50 of the persons 43, 44, 45, remain closed. This opening process serves to use only the regions enclosed by the closed division lines 70 for 3D video and to exclude the other regions (regions enclosed by the division lines 75) from the 3D processing targets.
The separation unit 19 of the image processing apparatus 1 separates objects into the units of the regions enclosed by the first division lines 70. As a result, the singer 42 is separated into three regions: head A (also called region A), arm B (also called region B), and chest C (also called region C). The persons 43, 44, 45 become regions D, E, and F respectively and are separated from the background 41. FIG. 5 shows only regions A and B, of the separated regions A-F, painted white.
The first depth value assigning unit 20 of the image processing apparatus 1 assigns the regions A-F enclosed by the first division lines 70 depth values representing the degree of perspective of each region. The depth value is not particularly limited as long as it is a numerical value quantifying the degree of perspective in the depth direction of the picture of the image 40. For example, the depth value can be expressed as a number from 0 to 255, referenced to grayscale information (preferably lightness). It is preferable to assign the depth values so that they increase from the back of the image 40 toward the front. FIG. 5 shows the state in which depth values have been assigned to the regions A-F. Since region B has the largest depth value, the arm B is nearest the front in the image 40; conversely, regions E and F have the smallest depth values, so the persons 44 and 45 are farthest back. In this embodiment, the operator looks at the image 40 and enters these depth values manually, and the first depth value assigning unit 20 of the image processing apparatus 1 assigns the entered values. However, the depth values may be assigned automatically rather than by manual operator input, as described in detail later.
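A minimal sketch of this 0-255 representation, assuming the separated regions A-F are stored as an integer label image: each region is filled with an operator-chosen 8-bit depth value, larger values lying nearer the viewer. The label layout and the depth values below are invented purely for illustration.

```python
import numpy as np

def build_depth_map(labels, depth_by_region, background_depth=0):
    """labels: (H, W) int array, one label per separated region (A-F).
    depth_by_region: dict mapping region label -> depth value in 0..255,
    with larger values meaning nearer the viewer."""
    depth = np.full(labels.shape, background_depth, dtype=np.uint8)
    for label, value in depth_by_region.items():
        depth[labels == label] = value
    return depth

labels = np.zeros((4, 6), dtype=np.int32)
labels[1:3, 1:3] = 1   # e.g. region B (the arm, nearest the front)
labels[1:3, 4:6] = 2   # e.g. region E (a person far at the back)
depth_map = build_depth_map(labels, {1: 230, 2: 40})
```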
In parallel with steps 101-108 above, the image processing apparatus 1 uses the function of the image frame reading unit 15 to read from the storage unit 11 the same image frame as the one read in step 101.
Next, the background boundary line information receiving unit 22 in the image processing apparatus 1 receives, within the read image frame, information on a background boundary line designated by the operator for the background. The background boundary line is a line with which the operator traces around the background and regions that can be identified with the background (in-background regions). For example, consider a background containing a cloud, with an airplane in front of it: if the airplane is the object, the cloud corresponds to an in-background region that can be identified with the background. When the operator draws a background boundary line around the cloud, the background boundary line information receiving unit 22 receives the information on that background boundary line.
The separation unit 19 recognizes and separates the plural regions divided by the closed background boundary lines. Through the subsequent processing, the background becomes a plurality of regions of different depths.
The second depth value assigning unit 23 assigns the region enclosed by the background boundary line a depth value representing the degree of perspective of that region. The depth value is not particularly limited as long as it is a numerical value quantifying the degree of perspective in the depth direction of the picture. For example, the depth value can be expressed as a number from 0 to 255, referenced to grayscale information (preferably lightness), and it is preferable to assign the depth values so that they increase from the back of the picture toward the front. In this embodiment, the operator looks at the state of the background and enters the depth value manually, and the second depth value assigning unit 23 of the image processing apparatus 1 assigns the entered value. However, the depth value may be assigned automatically rather than by manual operator input.
When the processing of step 204 is complete, the object/background separation unit 24 in the image processing apparatus 1 separates the background processed up to step 204 from the objects other than that background, and removes the objects.
Finally, the object/background combining unit 25 combines the objects after step 108 with the background after step 205 to re-form one image frame.
The location point presence determining unit 26 determines the presence or absence of a location point based, for example, on whether there is a portion of the background where the lightness changes sharply. As a variation, after the operator designates a location point, the location point presence determining unit 26 may determine the presence or absence of the designated location point. Consider the image 80 shown in FIG. 8, containing a background 81 and objects (in this example, humans) 82-86. The background 81 consists of a floor and a wall, and in this example there is a step in lightness at the boundary between them. The location point presence determining unit 26 recognizes this step as a location point 87 and determines that a location point 87 exists. As a variation, the operator may look at the image 80 and designate the boundary between the floor and the wall as the location point 87, and based on this designation the location point presence determining unit 26 may determine that the location point 87 exists.
If a location point 87 exists in step 301, the object predetermined part position specifying unit 27 then calculates the distances L1-L5 between the feet (= predetermined parts) 82a-86a of the objects 82-86 and the location point 87, specifying the positions of the predetermined parts.
Next, the depth value determining unit 28 determines depth values based on the positions specified in step 302. In the example shown in FIG. 8, the shorter the distance from the location point 87 to the feet 82a-86a of each object 82-86, the farther back from the screen the object is taken to be, and the depth values are determined accordingly.
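As one way to realize this rule, the sketch below maps each distance L1-L5 to a depth value by linear scaling, so that a shorter distance to the location point 87 yields a smaller (farther) depth value. The patent fixes only the monotonic relation; the linear scaling and the value range are assumptions made here.

```python
def depths_from_location_point(distances, d_min=0, d_max=255):
    """distances: distances L1..L5 from the location point 87 to each
    object's feet. Shorter distance -> farther back -> smaller depth.
    Linear scaling is an illustrative choice, not the patent's rule."""
    lo, hi = min(distances), max(distances)
    span = (hi - lo) or 1.0                   # avoid division by zero
    return [round(d_min + (d - lo) / span * (d_max - d_min))
            for d in distances]

# Example distances from the floor/wall boundary to the feet 82a-86a.
print(depths_from_location_point([12.0, 48.0, 90.0, 30.0, 66.0]))
```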
Next, the first depth value assigning unit 20 assigns each depth value determined in step 303 to each object 82-86. When depth values need to be assigned to a plurality of in-background regions instead of the objects 82-86, the second depth value assigning unit 23 assigns a depth value to each in-background region.
If the determination in step 301 finds that no location point exists, the first depth value assigning unit 20 and/or the second depth value assigning unit 23 determine whether a depth value has been received manually from the operator. If one has been received, the first depth value assigning unit 20 and/or the second depth value assigning unit 23 proceed to step 304 and assign the depth value to each object and/or each in-background region. If not, the process returns to step 301.
The present invention is not limited to the image processing apparatus, image processing method, image processing computer program, and information recording medium storing the computer program according to the embodiment described above, and can be implemented with various modifications.
Next, a second embodiment of the present invention will be described with reference to the drawings.
In this embodiment, this step receives selection data for the target tracking frames. More specifically, this step includes an image frame count designation receiving step of receiving a designation of the number of image frames on which the processing of the region boundary line automatic generation step, described in detail below, is to be executed, and corresponds to the step of receiving the numerical value entered in the area 171 in FIG. 12.
The first feature point receiving step receives the coordinate values of a first feature point (also called a "region of interest") existing within a region boundary line of the first image frame. In 13A of FIG. 13, an object (person) 181 exists in frame 1, which serves as the key frame. Here, the nose 182 is given as an example of the first feature point. Frame 1 is a frame on which the user has applied the roto brush manually; roto brush strokes 183, 184 (that is, region boundary lines 183 and 184) have already been drawn outside and inside the contour of the object 181. In 13B of FIG. 13, frame 2, displayed chronologically immediately after frame 1, is shown. Frame 2 contains an object 181a (drawn with a thin solid line in frame 2 of 13B) in which the object 181 of frame 1 (drawn with a dotted line in frame 2 of 13B) has moved slightly to the right.
The second feature point search step is a step of searching, prior to the processing of the second feature point specifying step described later, for the second feature point corresponding to the coordinate values of the first feature point (the nose 182 in FIG. 13) based on the degree of similarity of at least one of pixel color and lightness.
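One conventional realization of such a color/lightness similarity search is normalized template matching; the sketch below uses OpenCV's matchTemplate as an illustrative stand-in for the second feature point searching unit 152. The patch size, and the assumption that the feature point lies at least `half` pixels from the image border, are invented here.

```python
import cv2
import numpy as np

def search_second_feature_point(frame1, frame2, pt, half=8):
    """Search frame2 for the patch of frame1 centred on the first
    feature point `pt` (x, y), scoring colour similarity by normalized
    cross-correlation. Returns the best-matching centre in frame2."""
    x, y = pt
    patch = frame1[y - half:y + half + 1, x - half:x + half + 1]
    score = cv2.matchTemplate(frame2, patch, cv2.TM_CCORR_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(score)   # location of the best score
    return (top_left[0] + half, top_left[1] + half)
```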
The second feature point specifying step specifies, in the second image frame (corresponding to frame 2 in 13B of FIG. 13), the coordinate values of the second feature point corresponding to the coordinate values of the first feature point. In frame 2, the nose 182 as the first feature point has moved in the direction of arrow A. The second feature point specifying unit 153 therefore specifies the nose 182a in frame 2 as the second feature point.
The movement position extraction step is a step in which the movement position extraction unit 154 extracts the coordinates of the one or more pixels constituting the second feature point (nose 182a).
The new region boundary line coordinate value calculation step adds the position information extracted in the movement position extraction step to the coordinates of the plural pixels constituting the region boundary lines 183, 184 in the first image frame (frame 1 as the key frame), thereby calculating the coordinate values of the plural pixels constituting the new region boundary lines 183a, 184a in the second image frame (frame 2, the next frame). In this step, the direction and distance by which the nose 182 moves to the nose 182a are added to the coordinates of each pixel constituting the region boundary lines 183, 184, yielding the coordinate values of each pixel constituting the new region boundary lines 183a, 184a.
The region boundary line automatic generation step automatically generates, in the second image frame (frame 2), new region boundary lines 183a, 184a corresponding to the region boundary lines 183, 184 of the first image frame (frame 1), based on the movement information from the first feature point (nose 182) to the second feature point (nose 182a); it connects the pixels having the new coordinate values obtained in the new region boundary line coordinate value calculation step.
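Putting the last few steps together, the following sketch assumes each region boundary line is stored as an array of pixel coordinates: the motion vector from the first feature point (nose 182) to the second feature point (nose 182a) is added to every boundary pixel to yield the new boundary line. The array representation and function name are assumptions for illustration.

```python
import numpy as np

def auto_generate_boundary(boundary_px, first_pt, second_pt):
    """boundary_px: (N, 2) array of pixel coordinates of a region
    boundary line in the key frame. Adds the feature point's motion
    vector (e.g. nose 182 -> nose 182a) to every boundary pixel."""
    motion = np.asarray(second_pt, float) - np.asarray(first_pt, float)
    return np.asarray(boundary_px, float) + motion  # new boundary line

line_183 = np.array([[10, 10], [11, 10], [12, 11]])
line_183a = auto_generate_boundary(line_183, first_pt=(50, 40), second_pt=(54, 40))
```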
The frame count determination step determines whether the number of image frames on which the process of automatically generating new region boundary lines has been executed has reached the designated number of image frames. If the designated number of frames has not been reached, the process moves to step 1021 (S1021), the key frame is switched to frame 2, and the same processing from step 1021 (S1021) onward is performed. In this case, since the coordinates of the nose 182a as the second feature point have already been specified, the first feature point receiving unit 151 accepts the nose 182a as the first feature point for the designation in step 1021 (S1021) without waiting for a new designation from the user. The processing of steps 1022 (S1022) to 1027 (S1027) is then executed on the next frame (frame 3 in 13C of FIG. 13).
The present invention is not limited to the image processing apparatus, image processing method, image processing computer program, and information recording medium storing the computer program according to the embodiment described above, and can be implemented with various modifications.
Next, a third embodiment of the present invention will be described with reference to the drawings.
In this embodiment, this step assigns a depth value to each region (divided region of an object) in frame 1, the key frame. In this step, depth values are assigned by human judgment, in the manner described for the first embodiment.
This step is the processing described above, executed by the condition receiving unit 191.
This step is executed by the pixel depth value allocating unit 192 and allocates, to the one or more first pixels existing in the region enclosed by the first division line in the first image frame, the depth value assigned to that region.
This step is executed by the pixel movement position tracking unit 193 and tracks to which pixel in the second image frame each first pixel has moved.
This step assigns, to each new pixel found in the second image frame, the depth value of the corresponding pixel in the first image frame. This step can be executed by the automatic depth value generating unit 194, but a component separate from the automatic depth value generating unit 194 (a post-movement pixel depth value assigning unit) may be provided and the step executed by that unit.
This step is executed by the automatic depth value generating unit 194 and automatically generates, for the region in the second image frame composed of the second pixels to which the first pixels have moved, the depth value allocated in the pixel depth value allocation step described above. Alternatively, in this step, the same depth value can be assigned to the region composed of the second pixels based on the depth values of the second pixels assigned through the post-movement pixel depth value assignment step described above. That is, it does not matter whether the automatic depth value generating unit 194, and the automatic depth value generation step it executes, operate based on the depth values allocated to the first pixels or on the depth values assigned to the second pixels.
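One conventional way to sketch this tracking and depth propagation is dense optical flow, with each first pixel's depth value carried along its flow vector into the second image frame. Farneback flow is an illustrative choice here; the patent does not fix the tracking method, and treating depth 0 as "unassigned" is an assumption made only for this sketch.

```python
import cv2
import numpy as np

def propagate_depth(gray1, gray2, depth1):
    """Track pixels from frame 1 to frame 2 with dense optical flow and
    carry each first pixel's depth value to its moved position."""
    flow = cv2.calcOpticalFlowFarneback(gray1, gray2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray1.shape
    depth2 = np.zeros_like(depth1)
    ys, xs = np.nonzero(depth1)            # first pixels holding a depth value
    for y, x in zip(ys, xs):
        dx, dy = flow[y, x]
        nx, ny = int(round(x + dx)), int(round(y + dy))
        if 0 <= nx < w and 0 <= ny < h:
            depth2[ny, nx] = depth1[y, x]  # post-movement pixel depth value
    return depth2
```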
The second frame count determination step determines whether the number of image frames on which the process of automatically generating depth values has been executed has reached the designated number of image frames. If the designated number of frames has not been reached, the process moves to step 1071 (S1071), the key frame is switched from the previous image frame to the chronologically next image frame, and the same processing from step 1071 (S1071) onward is performed. In this case, since the allocation of pixel depth values in step 1071 (S1071) has already been performed in the previous processing, the pixel depth value allocating unit 192 accepts the depth values specified in the previous processing as they are. The processing of steps 1072 (S1072) to 1075 (S1075) is then executed on that next image frame.
The present invention is not limited to the image processing apparatus, image processing method, image processing computer program, and information recording medium storing the computer program according to the embodiment described above, and can be implemented with various modifications.
Claims (20)
- Image frame reading means for reading one or more image frames out of a plurality of image frames constituting a moving image;
region boundary line information receiving means for receiving information on region boundary lines within the read image frame;
region dividing means for expanding divided regions starting from predetermined points on the region boundary lines and dividing the inside and outside of the region boundary lines by division lines connecting points of similar lightness;
opening processing means for leaving, among the division lines, a first division line existing between two of the region boundary lines, and opening second division lines other than the first division line;
separation means for separating into units of regions enclosed by the first division line; and
first depth value assigning means for assigning, to a region enclosed by the first division line, a depth value representing the degree of perspective of that region;
an image processing apparatus comprising at least the above means. - The image processing apparatus according to claim 1, further comprising start point generating means for generating a plurality of the start points on the region boundary lines.
- The region boundary line information receiving means is means for receiving information on region boundary lines designated for objects other than a background, and the apparatus comprises:
background boundary line information receiving means for receiving, within the image frame, information on a background boundary line designated for the background;
second depth value assigning means for assigning, to a region enclosed by the background boundary line, a depth value representing the degree of perspective of that region; and
object/background combining means for combining the object given a depth value by the first depth value assigning means with the background given a depth value by the second depth value assigning means;
the image processing apparatus according to claim 1 or claim 2, comprising at least the above. - Location point presence determining means for determining that there are a plurality of objects and/or in-background regions constituting a background within the image frame and that a location point indicating the positions of the objects and/or the positions of the in-background regions exists in the background;
object predetermined part position specifying means for specifying, when the location point exists, the positions of predetermined parts of the plurality of objects and/or the in-background regions constituting the background; and
depth value determining means for determining, based on the positions specified by the object predetermined part position specifying means, depth values representing the degree of perspective of the objects and/or the in-background regions;
the apparatus further comprising the above,
wherein the first depth value assigning means and/or the second depth value assigning means assign the depth values to the objects and/or the in-background regions; the image processing apparatus according to any one of claims 1 to 3. - An image processing apparatus enabling the region boundary lines to be automatically generated within a second image frame that exists chronologically after a first image frame for which the region boundary lines have already been generated, among a plurality of image frames constituting a moving image, the apparatus further comprising:
first feature point receiving means for receiving coordinate values of a first feature point existing within the region boundary line of the first image frame;
second feature point specifying means for specifying, in the second image frame, coordinate values of a second feature point corresponding to the coordinate values of the first feature point; and
region boundary line automatic generating means for automatically generating, in the second image frame, new region boundary lines corresponding to the region boundary lines of the first image frame based on movement information from the first feature point to the second feature point;
the image processing apparatus according to any one of claims 1 to 4, further comprising the above. - The image processing apparatus according to claim 5, further comprising second feature point searching means for searching, prior to the processing of the second feature point specifying means, for the second feature point corresponding to the coordinate values of the first feature point based on the degree of similarity of at least one of pixel color and lightness.
- Image frame count designation receiving means for receiving a designation of the number of image frames on which processing by the region boundary line automatic generating means is to be executed; and
frame count determining means for determining whether the number of image frames on which the process of automatically generating the new region boundary lines has been executed has reached the designated number of image frames;
the apparatus further comprising the above,
wherein the first feature point receiving means, the second feature point specifying means, and the region boundary line automatic generating means execute their respective processes until the frame count determining means determines that the designated number of image frames has been reached; the image processing apparatus according to claim 5 or claim 6. - An image processing apparatus enabling the depth value to be automatically generated in a region corresponding to the region enclosed by the first division line within a second image frame that exists chronologically after a first image frame in which the depth value has already been assigned to the region enclosed by the first division line, among a plurality of image frames constituting a moving image, the apparatus further comprising:
pixel depth value allocating means for allocating, to one or more first pixels existing in the region enclosed by the first division line in the first image frame, the depth value assigned to the region enclosed by the first division line;
pixel movement position tracking means for tracking to which pixels in the second image frame the first pixels have moved; and
automatic depth value generating means for automatically generating, for a region in the second image frame composed of second pixels to which the first pixels have moved, the depth value allocated by the pixel depth value allocating means;
the image processing apparatus according to any one of claims 1 to 7. - An image processing method executed using the image processing apparatus according to claim 1, the method executing at least:
an image frame reading step of reading one or more image frames out of a plurality of image frames constituting a moving image;
a region boundary line information receiving step of receiving information on region boundary lines within the read image frame;
a region dividing step of expanding divided regions starting from predetermined points on the region boundary lines and dividing the inside and outside of the region boundary lines by division lines connecting points of similar lightness;
an opening processing step of leaving, among the division lines, a first division line existing between two of the region boundary lines, and opening second division lines other than the first division line;
a separation step of separating into units of regions enclosed by the first division line; and
a first depth value assigning step of assigning, to a region enclosed by the first division line, a depth value representing the degree of perspective of that region;
an image processing method executing at least the above steps. - A computer program read into and executed by a computer, the program causing
the computer to execute the functions of:
image frame reading means for reading one or more image frames out of a plurality of image frames constituting a moving image;
region boundary line information receiving means for receiving information on region boundary lines within the read image frame;
region dividing means for expanding divided regions starting from predetermined points on the region boundary lines and dividing the inside and outside of the region boundary lines by division lines connecting points of similar lightness;
opening processing means for leaving, among the division lines, a first division line existing between two of the region boundary lines, and opening second division lines other than the first division line;
separation means for separating into units of regions enclosed by the first division line; and
first depth value assigning means for assigning, to a region enclosed by the first division line, a depth value representing the degree of perspective of that region;
a computer program for image processing causing the functions of each of the above means to be executed. - The computer program for image processing according to claim 10, further causing the computer to execute the function of start point generating means for generating a plurality of the start points on the region boundary lines.
- The region boundary line information receiving means is means for receiving information on region boundary lines designated for objects other than a background, the program causing
the computer to execute the functions of:
background boundary line information receiving means for receiving, within the image frame, information on a background boundary line designated for the background;
second depth value assigning means for assigning, to a region enclosed by the background boundary line, a depth value representing the degree of perspective of that region; and
object/background combining means for combining the object given a depth value by the first depth value assigning means with the background given a depth value by the second depth value assigning means;
the computer program for image processing according to claim 10 or claim 11, causing each of the above functions to be executed. - Causing the computer to execute the functions of:
location point presence determining means for determining that there are a plurality of objects and/or in-background regions constituting a background within the image frame and that a location point indicating the positions of the objects and/or the positions of the in-background regions exists in the background;
object predetermined part position specifying means for specifying, when the location point exists, the positions of predetermined parts of the plurality of objects and/or the in-background regions constituting the background; and
depth value determining means for determining, based on the positions specified by the object predetermined part position specifying means, depth values representing the degree of perspective of the objects and/or the in-background regions;
causing each of the above functions to be executed,
wherein the first depth value assigning means and/or the second depth value assigning means assign the depth values to the objects and/or the in-background regions; the computer program for image processing according to any one of claims 10 to 12. - A computer program for image processing enabling the region boundary lines to be automatically generated within a second image frame that exists chronologically after a first image frame for which the region boundary lines have already been generated, among a plurality of image frames constituting a moving image, the program causing
the computer to execute the functions of:
first feature point receiving means for receiving coordinate values of a first feature point existing within the region boundary line of the first image frame;
second feature point specifying means for specifying, in the second image frame, coordinate values of a second feature point corresponding to the coordinate values of the first feature point; and
region boundary line automatic generating means for automatically generating, in the second image frame, new region boundary lines corresponding to the region boundary lines of the first image frame based on movement information from the first feature point to the second feature point;
the computer program for image processing according to any one of claims 10 to 13, causing each of the above functions to be executed. - Further causing
the computer to execute the function of second feature point searching means for searching, prior to the processing of the second feature point specifying means, for the second feature point corresponding to the coordinate values of the first feature point based on the degree of similarity of at least one of pixel color and lightness; the computer program for image processing according to claim 14. - Causing
the computer to further execute the functions of:
image frame count designation receiving means for receiving a designation of the number of image frames on which processing by the region boundary line automatic generating means is to be executed; and
frame count determining means for determining whether the number of image frames on which the process of automatically generating the new region boundary lines has been executed has reached the designated number of image frames;
and causing the first feature point receiving means, the second feature point specifying means, and the region boundary line automatic generating means to execute their respective processes until the frame count determining means determines that the designated number of image frames has been reached; the computer program for image processing according to claim 14 or claim 15. - A computer program for image processing enabling the depth value to be automatically generated in a region corresponding to the region enclosed by the first division line within a second image frame that exists chronologically after a first image frame in which the depth value has already been assigned to the region enclosed by the first division line, among a plurality of image frames constituting a moving image, the program causing
the computer to execute the functions of:
pixel depth value allocating means for allocating, to one or more first pixels existing in the region enclosed by the first division line in the first image frame, the depth value assigned to the region enclosed by the first division line;
pixel movement position tracking means for tracking to which pixels in the second image frame the first pixels have moved; and
automatic depth value generating means for automatically generating, for a region in the second image frame composed of second pixels to which the first pixels have moved, the depth value allocated by the pixel depth value allocating means;
the computer program for image processing according to any one of claims 10 to 16, causing each of the above functions to be executed. - An information recording medium storing a computer program read into and executed by a computer, the program causing
the computer to execute the functions of:
image frame reading means for reading one or more image frames out of a plurality of image frames constituting a moving image;
region boundary line information receiving means for receiving information on region boundary lines within the read image frame;
region dividing means for expanding divided regions starting from predetermined points on the region boundary lines and dividing the inside and outside of the region boundary lines by division lines connecting points of similar lightness;
opening processing means for leaving, among the division lines, a first division line existing between two of the region boundary lines, and opening second division lines other than the first division line;
separation means for separating into units of regions enclosed by the first division line; and
first depth value assigning means for assigning, to a region enclosed by the first division line, a depth value representing the degree of perspective of that region;
an information recording medium storing a computer program for image processing that causes the functions of each of the above means to be executed. - An information recording medium storing a computer program for image processing enabling the region boundary lines to be automatically generated within a second image frame that exists chronologically after a first image frame for which the region boundary lines have already been generated, among a plurality of image frames constituting a moving image, the program causing
the computer to execute the functions of:
first feature point receiving means for receiving coordinate values of a first feature point existing within the region boundary line of the first image frame;
second feature point specifying means for specifying, in the second image frame, coordinate values of a second feature point corresponding to the coordinate values of the first feature point; and
region boundary line automatic generating means for automatically generating, in the second image frame, new region boundary lines corresponding to the region boundary lines of the first image frame based on movement information from the first feature point to the second feature point;
the information recording medium storing the computer program for image processing according to claim 18, causing each of the above functions to be executed. - An information recording medium storing a computer program for image processing enabling the depth value to be automatically generated in a region corresponding to the region enclosed by the first division line within a second image frame that exists chronologically after a first image frame in which the depth value has already been assigned to the region enclosed by the first division line, among a plurality of image frames constituting a moving image, the program causing
the computer to execute the functions of:
pixel depth value allocating means for allocating, to one or more first pixels existing in the region enclosed by the first division line in the first image frame, the depth value assigned to the region enclosed by the first division line;
pixel movement position tracking means for tracking to which pixels in the second image frame the first pixels have moved; and
automatic depth value generating means for automatically generating, for a region in the second image frame composed of second pixels to which the first pixels have moved, the depth value allocated by the pixel depth value allocating means;
the information recording medium storing the computer program for image processing according to claim 18 or claim 19, causing each of the above functions to be executed.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201480021836.8A CN105359518B (zh) | 2013-02-18 | 2014-02-18 | 图像处理装置和图像处理方法 |
JP2015500155A JP6439214B2 (ja) | 2013-02-18 | 2014-02-18 | 画像処理装置、画像処理方法、画像処理用コンピュータプログラムおよび画像処理用コンピュータプログラムを格納した情報記録媒体 |
US14/827,714 US9723295B2 (en) | 2013-02-18 | 2015-08-17 | Image processing device, image processing method, image processing computer program, and information recording medium whereupon image processing computer program is stored |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-028615 | 2013-02-18 | ||
JP2013028615 | 2013-02-18 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/827,714 Continuation US9723295B2 (en) | 2013-02-18 | 2015-08-17 | Image processing device, image processing method, image processing computer program, and information recording medium whereupon image processing computer program is stored |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014125842A1 true WO2014125842A1 (ja) | 2014-08-21 |
Family
ID=51353853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/000827 WO2014125842A1 (ja) | 2013-02-18 | 2014-02-18 | 画像処理装置、画像処理方法、画像処理用コンピュータプログラムおよび画像処理用コンピュータプログラムを格納した情報記録媒体 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9723295B2 (ja) |
JP (1) | JP6439214B2 (ja) |
CN (1) | CN105359518B (ja) |
WO (1) | WO2014125842A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2537142A (en) * | 2015-04-09 | 2016-10-12 | Nokia Technologies Oy | An arrangement for image segmentation |
US20230162306A1 (en) * | 2015-02-06 | 2023-05-25 | Sunrun, Inc. | Systems and methods for generating permit sets |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10578758B2 (en) * | 2015-03-19 | 2020-03-03 | Exxonmobil Upstream Research Company | Sequence pattern characterization |
US10269136B2 (en) * | 2015-04-29 | 2019-04-23 | Hewlett-Packard Development Company, L.P. | System and method for processing depth images which capture an interaction of an object relative to an interaction plane |
US10552966B2 (en) * | 2016-03-07 | 2020-02-04 | Intel Corporation | Quantification of parallax motion |
CN107545595B (zh) * | 2017-08-16 | 2021-05-28 | 歌尔光学科技有限公司 | 一种vr场景处理方法及vr设备 |
CN109819675B (zh) * | 2017-09-12 | 2023-08-25 | 松下知识产权经营株式会社 | 图像生成装置以及图像生成方法 |
CN109672873B (zh) * | 2017-10-13 | 2021-06-29 | 中强光电股份有限公司 | 光场显示设备及其光场影像的显示方法 |
JP2019168479A (ja) * | 2018-03-22 | 2019-10-03 | キヤノン株式会社 | 制御装置、撮像装置、制御方法、プログラム、および、記憶媒体 |
KR20200095873A (ko) * | 2019-02-01 | 2020-08-11 | 한국전자통신연구원 | 인물 영역 추출 방법, 이를 이용하는 영상 처리 장치 및 인물 영역 추출 시스템 |
CN110675425B (zh) * | 2019-08-22 | 2020-12-15 | 腾讯科技(深圳)有限公司 | 一种视频边框识别方法、装置、设备及介质 |
CN112487424A (zh) * | 2020-11-18 | 2021-03-12 | 重庆第二师范学院 | 一种计算机处理系统及计算机处理方法 |
CN113223019B (zh) * | 2021-05-21 | 2024-03-26 | 深圳乐居智能电子有限公司 | 一种清扫区域的分区方法、装置及清扫设备 |
CN113989717A (zh) * | 2021-10-29 | 2022-01-28 | 北京字节跳动网络技术有限公司 | 视频图像处理方法、装置、电子设备及存储介质 |
CN114903439A (zh) * | 2022-05-09 | 2022-08-16 | 李妍寸心 | 一种辅助整形美容注射填充物用的血管显影装置 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008277932A (ja) * | 2007-04-26 | 2008-11-13 | Hitachi Ltd | 画像符号化方法及びその装置 |
JP2011223566A (ja) * | 2010-04-12 | 2011-11-04 | Samsung Electronics Co Ltd | 画像変換装置及びこれを含む立体画像表示装置 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006080239A1 (ja) | 2005-01-31 | 2006-08-03 | Olympus Corporation | 画像処理装置、顕微鏡システム、及び領域特定プログラム |
EP2199981A1 (en) * | 2008-12-09 | 2010-06-23 | Koninklijke Philips Electronics N.V. | Image segmentation |
WO2011081623A1 (en) * | 2009-12-29 | 2011-07-07 | Shenzhen Tcl New Technology Ltd. | Personalizing 3dtv viewing experience |
WO2011138472A1 (es) * | 2010-05-07 | 2011-11-10 | Telefonica, S.A. | Método de generación de mapas de profundidad para conversión de imágenes animadas 2d en 3d |
CN102404594B (zh) * | 2011-10-31 | 2014-02-12 | 庞志勇 | 基于图像边缘信息的2d转3d的方法 |
CN102883174B (zh) * | 2012-10-10 | 2015-03-11 | 彩虹集团公司 | 一种2d转3d的方法 |
-
2014
- 2014-02-18 CN CN201480021836.8A patent/CN105359518B/zh not_active Expired - Fee Related
- 2014-02-18 JP JP2015500155A patent/JP6439214B2/ja not_active Expired - Fee Related
- 2014-02-18 WO PCT/JP2014/000827 patent/WO2014125842A1/ja active Application Filing
-
2015
- 2015-08-17 US US14/827,714 patent/US9723295B2/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN105359518B (zh) | 2017-03-08 |
JP6439214B2 (ja) | 2018-12-19 |
JPWO2014125842A1 (ja) | 2017-02-02 |
CN105359518A (zh) | 2016-02-24 |
US9723295B2 (en) | 2017-08-01 |
US20160100152A1 (en) | 2016-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6439214B2 (ja) | 画像処理装置、画像処理方法、画像処理用コンピュータプログラムおよび画像処理用コンピュータプログラムを格納した情報記録媒体 | |
JP4966431B2 (ja) | 画像処理装置 | |
US9710912B2 (en) | Method and apparatus for obtaining 3D face model using portable camera | |
US8340422B2 (en) | Generation of depth map for an image | |
US10636156B2 (en) | Apparatus and method for analyzing three-dimensional information of image based on single camera and computer-readable medium storing program for analyzing three-dimensional information of image | |
KR101556992B1 (ko) | 얼굴 성형 시뮬레이션을 이용한 3차원 스캔 시스템 | |
CN111480183B (zh) | 用于产生透视效果的光场图像渲染方法和系统 | |
US20200258309A1 (en) | Live in-camera overlays | |
WO2013054462A1 (ja) | ユーザーインタフェース制御装置、ユーザーインタフェース制御方法、コンピュータプログラム、及び集積回路 | |
KR20080076611A (ko) | 모델링 방법 및 장치 | |
JP2024100835A (ja) | 拡張現実マップキュレーション | |
Ward et al. | Depth director: A system for adding depth to movies | |
KR101165017B1 (ko) | 3차원 아바타 생성 시스템 및 방법 | |
KR102110459B1 (ko) | 3차원 이미지 생성 방법 및 장치 | |
US9208606B2 (en) | System, method, and computer program product for extruding a model through a two-dimensional scene | |
JP2023172882A (ja) | 三次元表現方法及び表現装置 | |
WO2020197655A1 (en) | Action classification based on manipulated object movement | |
CN107016730A (zh) | 一种虚拟现实与真实场景融合的装置 | |
US20140098246A1 (en) | Method, Apparatus and Computer-Readable Recording Medium for Refocusing Photographed Image | |
KR101071911B1 (ko) | 3차원 입체 영상 생성 방법 | |
JP2020513123A (ja) | イメージに動的エフェクトを適用する方法および装置 | |
KR20160081841A (ko) | 2차원 디지털 이미지 기반의 3차원 디지털 이미지 오브젝트 자동 추출 시스템 및 추출 방법 | |
KR101121979B1 (ko) | 입체 영상 변환 방법 및 입체 영상 변환 장치 | |
JP7078564B2 (ja) | 画像処理装置及びプログラム | |
KR20130003992A (ko) | 3차원 영상 데이터 생성 방법 및 이를 위한 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201480021836.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14752014 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015500155 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14752014 Country of ref document: EP Kind code of ref document: A1 |