KR101759799B1 - Method for providing 3d image - Google Patents

Method for providing 3d image

Info

Publication number
KR101759799B1
Authority
KR
South Korea
Prior art keywords
image
point
dimensional image
information
background
Prior art date
Application number
KR1020150188791A
Other languages
Korean (ko)
Other versions
KR20170078965A (en)
Inventor
박재범
Original Assignee
(주)태원이노베이션
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)태원이노베이션
Priority to KR1020150188791A
Publication of KR20170078965A
Application granted
Publication of KR101759799B1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a method of providing a three-dimensional image. A method of providing a three-dimensional image according to an embodiment of the present invention includes: inputting image information composed of a series of frames of a rotating object; a background removal step of removing the background other than the object from the image information; and generating a three-dimensional image of the object using the image information from which the background has been removed.

Description

{METHOD FOR PROVIDING 3D IMAGE}

The present invention relates to a method and system for providing a three-dimensional image, and more particularly to a method and system for providing a three-dimensional image of an object using images taken while rotating the object through various angles.

The traditional way of advertising commercial products is to present images and descriptions of goods in newspapers, on TV, or on the Internet.

However, with the advertisement method described above, only the images that the advertiser has photographed can be shown to the consumer, so a consumer viewing the advertisement cannot examine the product from the viewpoint he or she desires.

Also, when the purchased product turned out to differ from the advertisement the consumer had viewed, purchase satisfaction fell and the product was sometimes returned.

Accordingly, to solve the problems described above, there is a need for a method that allows a consumer to examine a product from various viewpoints while rotating it as desired, with a description of the product provided for each viewpoint.

Korean Patent Publication No. 10-2010-0013671 (published Feb. 10, 2010); Korean Patent Publication No. 10-2012-0048888 (published May 16, 2012)

It is an object of the present invention to provide a three-dimensional image generation system and a three-dimensional image providing method.

Another object of the present invention is to provide a three-dimensional image generation system capable of photographing an object while rotating it through various angles.

It is still another object of the present invention to provide a three-dimensional image generation system that makes it easy to align the rotation axis with the center point of a target object.

It is still another object of the present invention to provide a three-dimensional image providing method which continuously displays a plurality of images, taken while rotating a target object through various angles, at the speed and angle the user desires, so that they appear as a three-dimensional image.

It is still another object of the present invention to provide a three-dimensional image providing method which can remove a background from a plurality of images taken while rotating an object at various angles and provide only a three-dimensional image of the object.

It is still another object of the present invention to provide a method of providing a three-dimensional image capable of providing a tag that follows the rotating object.

These and other objects of the present invention can be achieved by a method and system for providing a three-dimensional image according to the present invention.

According to an embodiment of the present invention, there is provided a three-dimensional image providing method comprising: inputting image information composed of a series of frames of a rotating object; a background removal step of removing the background other than the object from the image information; and generating a three-dimensional image of the object using the image information from which the background has been removed.

The background removal step includes a sharpening step of sharpening the original image; a binarization step of converting the sharpened image into a monochrome image and binarizing it; an outline detection step of detecting an outline of the object in the binarized image; and a background separation step of separating the background and the object from the original image using the detected outline.

A blur processing step of blurring the original image may be performed before the sharpening step; in the sharpening step, a sharp image can then be formed by increasing the contrast and hue difference of the blurred image.

An inner filling step may be performed between the binarization step and the outline detection step, in which an outline is estimated in the binarized image and the empty space inside the estimated outline is filled.

The method may further include an editing step of generating metadata including a tag for outputting, on a display, a comment about a specific point of the object specified by the user; an explanatory point marking the point at which the tag is connected to the specific point; a comment line connecting the comment and the explanatory point; and movement information for moving these together with the movement of the object.

The movement information may be a parametric equation of a sphere derived from the coordinates of the specific point specified by the user and the center point of the object.

The method may further include a display step of outputting one frame of the three-dimensional image to the display and changing the currently output frame to another frame according to the user's control.

In the display step, the explanatory point may be displayed within the diameter D connecting the left end (LE) and the right end (RE) between which the explanatory point lies in the three-dimensional image information.

In the display step, the explanatory point may be displayed along the parabola connecting the left point (LP), the right point (RP), and the relay point (TP) between which the explanatory point lies in the three-dimensional image information.

In the display step, when the user drags the screen left or right, the explanatory point is displayed at the coordinate position obtained by rotating the explanatory point coordinates in the full spatial coordinate system by a longitude angle, and when the user drags the screen up or down, it is displayed at the position obtained by rotating by a latitude angle.

The present invention also includes a three-dimensional image providing application, stored on a medium, for executing the method of providing a three-dimensional image according to an embodiment of the present invention on a smart device such as a smartphone or a computer.

The present invention also provides a three-dimensional image generation system comprising: a turntable which can rotate a target object through 360 degrees; a camera mount for mounting a camera that photographs the object; a control unit for controlling the turntable; and a communication unit for transmitting control signals between the control unit and the turntable and between the control unit and the camera, wherein the camera mount is rotated so that the camera can revolve around the object, and the position of the rotation axis is adjustable so that it corresponds to the center point of the object.

According to the present invention, it is possible to photograph an object while rotating it through various angles and to remove the background from the captured images, thereby providing a three-dimensional image generation system and a three-dimensional image providing method that present only a three-dimensional image of the object and can provide a tag that follows the rotating object.

1 is a system configuration diagram of a three-dimensional image generation system according to an embodiment of the present invention.
2 is a view showing a turntable and a camera mount of a three-dimensional image generation system according to an embodiment of the present invention.
3 is an exemplary diagram illustrating a method of shooting an image of a target object using a three-dimensional image generation system according to an embodiment of the present invention.
4 is a view showing an exemplary image taken by a three-dimensional image generation system according to an embodiment of the present invention.
5 is a flowchart of a method for providing a three-dimensional image according to an embodiment of the present invention.
FIG. 6 is a flowchart illustrating a background removing step in a method of providing a three-dimensional image according to an exemplary embodiment of the present invention.
7 is a view showing an image changed according to the image sharpening step.
8 is a view showing an image changed according to the image binarization step.
9 is a view showing an image changed according to the outline detection step.
10 is a view showing an image changed according to the background separating step.
11 is a view showing the tag setting and image display steps.
12 is a diagram illustrating an exemplary explanatory point moving method of a method for providing a three-dimensional image according to an embodiment of the present invention.
13 is a diagram illustrating another exemplary explanatory point moving method of the method for providing a three-dimensional image according to an embodiment of the present invention.
FIG. 14 is a diagram illustrating another exemplary explanatory point moving method of the three-dimensional image providing method according to the embodiment of the present invention.

Hereinafter, a three-dimensional image generating system and a three-dimensional image providing method according to the present invention will be described in detail with reference to the accompanying drawings.

In the following description, only parts necessary for understanding a three-dimensional image generating system and a three-dimensional image providing method according to an embodiment of the present invention will be described, and descriptions of other parts may be omitted so as not to disturb the gist of the present invention.

In addition, terms and words used in the following description and claims should not be construed as limited to their ordinary or dictionary meanings, but are to be construed as meanings and concepts consistent with the technical idea of the present invention.

Throughout the specification, when an element is referred to as "comprising" another element, this means that it can include other elements as well, unless specifically stated otherwise. Also, terms such as "part" and "module" in the specification mean a unit that processes at least one function or operation and may be implemented by hardware, software, or a combination of hardware and software.

The three-dimensional image generation system according to an embodiment of the present invention continuously displays, at the speed and angle the user desires, a series of images of an object photographed by a photographing device (hereinafter, "camera") such as a smartphone or a digital camera, so that the user can rotate the three-dimensional image of the target object.

A system configuration diagram of a three-dimensional image generation system according to an embodiment of the present invention is shown in FIG.

Referring to FIG. 1, a three-dimensional image generating system according to an embodiment of the present invention includes a turntable unit 10, a camera mount unit 20, a control unit 30, and a communication unit 40.

As shown in FIG. 2, the turntable unit 10 is the place on which the user puts the object to be photographed, and it rotates through 360 degrees according to a signal from the control unit.

The camera mount unit 20 is the part on which the camera for photographing the target object placed on the turntable unit 10 is mounted. The position of the mounted camera can be adjusted in the direction in which the camera moves toward or away from the turntable unit 10 (hereinafter, "direction A") and in the direction B orthogonal to direction A, so that the camera can be aimed at the center of the object and the position of the rotation axis O can be adjusted.

The adjustment in direction A moves the camera toward or away from the target object according to the object's size, the adjustment in direction B aims the camera at the center point of the target object (the center of the sphere when the object is regarded as a sphere), and the adjustment in direction C aligns the rotation axis O of the camera mount unit with the center point of the object.

In this way, when the rotation axis O of the camera and of the mount that rotates it is aligned with the center point of the object, the three-dimensional image provided later can give the user the feeling of observing the object while revolving around an object fixed at its center point.

Therefore, it is preferable that, before photographing the object, the user adjusts the camera mount unit so that both its rotation axis O and the camera point toward the center point of the object.

The control unit 30 is a device that controls the turntable unit 10 and/or the camera mount unit 20 so that the camera 200 can photograph the rotating object. The control method used by the control unit for photographing the object is described in more detail below.

The communication unit 40 serves to transmit a control signal to the control unit 30 so as to control the turntable unit 10 and / or the camera mount unit 20.

The communication unit also transmits signals between the camera 200 and the control unit 30. For this purpose, the communication unit can use wireless communication such as Bluetooth for communication with a smartphone and infrared communication for communication with a digital camera.

In addition, the communication unit has a protocol for each camera photographing method (for example, still-photograph shooting and video shooting), each divided into a reception protocol and a transmission protocol, and it reports that the rotation of the turntable is completed through the transmission protocol.

A method of photographing a series of images of a target object to be photographed by a three-dimensional image generating system according to an embodiment of the present invention will now be described in more detail.

The three-dimensional image generation system according to an exemplary embodiment of the present invention provides buttons such as a mode selection button, a start button, a shooting button, and the like, and recognizes and processes information input by the user using the buttons.

The mode selection button selects a mode such as the camera photographing mode, the camera video photographing mode, or the manual mode, and the control performed by the control unit changes as follows depending on the selected mode.

When the camera photographing mode is selected, the control unit 30 rotates the turntable by a preset rotation angle (for example, 10 degrees) according to an input signal from the start button and instructs the communication unit 40 to send a photographing transmission signal, issuing a total of 36 command signals as one set, so that the camera photographs the object at 10-degree intervals. Here, 10 degrees is an exemplary preset angle; it may be set smaller to provide a more precise three-dimensional image or larger to increase data processing speed.
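
As a rough sketch of this control flow (not code from the patent), the loop below drives a hypothetical `turntable`/`camera` interface in Python; the method names and the settle delay are assumptions for illustration only.

```python
import time

STEP_DEG = 10   # preset rotation angle; smaller = finer 3D image, larger = faster processing


def camera_photographing_mode(turntable, camera, step_deg=STEP_DEG):
    """Rotate the turntable in fixed steps and trigger one photo per step.

    `turntable.rotate(deg)` and `camera.shoot()` are hypothetical stand-ins for the
    command signals the control unit sends through the communication unit.
    """
    frames = []
    for _ in range(360 // step_deg):   # 36 command signals for a 10-degree step
        frames.append(camera.shoot())  # photographing transmission signal
        turntable.rotate(step_deg)     # rotate by the preset angle
        time.sleep(0.2)                # let the table settle before the next shot
    return frames
```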

When the camera video photographing mode is selected, the control unit 30 sends a photographing transmission signal to the camera through the communication unit 40 according to an input signal from the start button and rotates the turntable through 360 degrees, so that the camera captures a video of the object.

The manual mode lets the user control shooting and rotation of the turntable directly: when the user presses the shooting button, the control unit sends the shooting transmission signal through the communication unit, and when the user presses the start button, the control unit rotates the turntable by 10 degrees.

Also, as shown in FIG. 3, the angle of the camera mount 20 is adjusted to a predetermined angle between 0 and 90 degrees with respect to the ground, and the process of rotating the turntable through 360 degrees described above is repeated at each angle.

That is, when the camera is set to shoot at 10-degree intervals in the camera photographing mode and the camera mount 20 is moved in 15-degree steps as shown in FIG. 3, a total of 254 frames of images are captured.

In this way, as shown in FIG. 4, a plurality of frame images of the target object can be prepared by the camera photographing mode described above.

Hereinafter, a method of generating and displaying a three-dimensional image using such image information will be described in detail.

The method of providing a three-dimensional image according to an exemplary embodiment of the present invention is described below with reference to block diagrams. The method is not limited to the order of these blocks; some blocks may occur in a different order than shown and described herein or concurrently with other blocks, and various other branches, flow paths, and blocks may be implemented. Also, not all illustrated blocks may be required to implement the methods described herein.

FIG. 5 is a flowchart of a method for providing a three-dimensional image according to an embodiment of the present invention.

Referring to FIG. 5, a method of providing a three-dimensional image according to an exemplary embodiment of the present invention includes an image information input step (S100), a background removal step (S200), a three-dimensional image generation step (S300), and an image display step (S400).

In addition, the 3D image providing application according to an embodiment of the present invention causes a smart device to execute the image information input step (S100), the background removal step (S200), the three-dimensional image generation step (S300), and the image display step (S400).

First, the image information input step S100 is a step in which image information composed of a series of frames for a rotating object is input.

The image information composed of a series of frames for the object to be rotated may be a series of images of the object photographed by the three-dimensional image generation system according to the embodiment of the present invention described above.

The image information input step may include an image capturing step of capturing an image of a target object, a step of extracting a plurality of images from the moving image of the target object, or a step of receiving already generated image information.
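
For the video-based input path, frame sampling could be sketched with OpenCV as below; the file name and sampling interval are arbitrary assumptions, not values from the patent.

```python
import cv2

def extract_frames(video_path, every_n=10):
    """Sample every n-th frame from a rotation video as the input image series."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames

# e.g. frames = extract_frames("rotating_object.mp4", every_n=10)
```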

The background removal step (S200) is a step of removing a background other than the object from the input image information.

Since the images generated by the three-dimensional image generation system according to the embodiment of the present invention are taken while the camera rotates around the object, displaying them directly does not give the user the feeling of watching a rotating object from a fixed position; instead, it feels as if the user is moving around the object.

Accordingly, the method of providing a three-dimensional image according to an embodiment of the present invention removes the background other than the object from the input image information, giving the user the effect of observing the object as if spinning a globe.

The background removal step will be described in more detail with reference to FIGS. 6 to 10.

Referring to FIG. 6, the background removal step (S200) includes an image sharpening step (S210), an image binarization step (S220), an outline detection step (S230), and a background separation step (S240).

The image sharpening step makes the image more distinct than the photographed original so that the object and the background can be separated.

In order to make the image clear, as shown in FIG. 7, a blurring process is performed on the entire original image to slightly blur the image, and then a clear image can be obtained by increasing contrast and color difference.
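
A minimal sketch of this blur-then-sharpen idea, using an unsharp-mask style contrast boost in OpenCV (the kernel size and weights are assumptions; the patent does not specify them):

```python
import cv2

def sharpen(original_bgr):
    """Slightly blur the whole image, then boost contrast against the blurred copy."""
    blurred = cv2.GaussianBlur(original_bgr, (5, 5), 0)           # overall slight blur
    sharp = cv2.addWeighted(original_bgr, 1.8, blurred, -0.8, 0)  # emphasise edges and colour differences
    return sharp
```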

Next, in the image binarization step (S220), the image clarified in the image sharpening step is converted into a monochrome image and binarized (see Fig. 8).

Next, in the outline detection step (S230), the outline of the object is detected from the binarized image. Here, as shown in FIG. 8, the binarized image may contain portions inside the object's outline that were binarized to the same color as the region outside it; if these portions are left as they are, holes appear in the final background-removed image of the object.

In order to prevent such a hole phenomenon, it is more preferable to estimate the outline in the binarized image as shown in FIG. 9, to fill the empty space in the estimated outline, and then to detect the outline of the object.
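
The binarization and inner-filling steps might be realized as follows; Otsu thresholding and a morphological closing are one plausible choice, used here purely as an illustration rather than the patent's prescribed technique.

```python
import cv2
import numpy as np

def binarize_and_fill(sharp_bgr):
    """Convert to monochrome, binarize, and fill gaps inside the estimated outline."""
    gray = cv2.cvtColor(sharp_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((15, 15), np.uint8)                        # assumed structuring-element size
    filled = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # close holes so the object is one solid region
    return filled
```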

Next, in the background separation step (S240), the background and the object are separated from the original image using the detected outline. In other words, as shown in FIG. 10, the region inside the detected outline corresponds to the object, so it is overlaid on the original image and everything outside the detected outline is deleted, removing the background from the original image and leaving only the image of the object.
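
Continuing the same sketch, the outline detection and background separation steps could use the filled binary image as a mask; taking the largest contour as the object is an assumption, not a rule stated in the patent.

```python
import cv2
import numpy as np

def separate_background(original_bgr, filled_binary):
    """Detect the object's outer outline and keep only the original pixels inside it."""
    # OpenCV 4.x: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(filled_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    outline = max(contours, key=cv2.contourArea)                # assume the largest contour is the object
    mask = np.zeros(filled_binary.shape, np.uint8)
    cv2.drawContours(mask, [outline], -1, 255, thickness=cv2.FILLED)
    cutout = cv2.bitwise_and(original_bgr, original_bgr, mask=mask)  # background pixels become black
    return cutout, mask
```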

Referring again to FIG. 5, in the method of providing a three-dimensional image according to an exemplary embodiment of the present invention, the series of background-removed images is generated as a three-dimensional image (S300). Here, "generation" does not mean converting a two-dimensional image into a three-dimensional image, but rather storing a plurality of continuously captured two-dimensional images of one object together as the three-dimensional image of that object.

The series of images thus stored is displayed sequentially according to the user's input in the image display step (S400), giving the user the feeling of viewing a three-dimensional image of the target object.

That is, when the user drags a touched finger across the smartphone screen using the touch function as shown in FIG. 11, the three-dimensional image providing application according to the embodiment of the present invention successively displays the series of images of the target object on the screen.

At this time, a predetermined number of frames is output in sequence according to the distance the touched finger moves, that is, the drag distance, and the speed at which consecutive frames are output is controlled according to the drag speed, giving the same feeling as watching the object while turning it by hand.
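
A minimal sketch of mapping the drag distance to the output frame; the pixels-per-frame constant is an arbitrary assumption, and the drag speed would similarly scale the playback rate.

```python
PIXELS_PER_FRAME = 12   # assumed horizontal drag distance that advances the view by one frame

def frame_after_drag(current_frame, drag_dx_px, total_frames):
    """Advance or rewind through the stored frames according to the horizontal drag distance."""
    step = int(drag_dx_px / PIXELS_PER_FRAME)
    return (current_frame + step) % total_frames   # wrap around for a full 360-degree rotation

# e.g. dragging 120 px to the right from frame 0 of 36 frames shows frame 10
```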

The three-dimensional image providing method and application according to an embodiment of the present invention can be very useful when the user has to make a purchase decision without directly seeing the object to be purchased, such as in Internet shopping.

In other words, until now the consumer had to decide on a purchase by looking at a few images that the seller had taken at angles and positions chosen to make the product look good. By contrast, when the method and application for providing three-dimensional images according to an embodiment of the present invention are used, consumers can observe the product from any position and direction they desire, which can significantly reduce the rate of returns after the product is received.

Further, the method and the application for providing a three-dimensional image according to an embodiment of the present invention may further include an editing step for further enhancing the function of the e-catalog.

The editing step is a step of editing the generated image information.

In editing the image information, the present invention utilizes metadata so that the image can be annotated and the annotations easily modified or deleted later. Metadata, also referred to as attribute information, is data separately assigned to content according to a certain rule.

In the present invention, the metadata includes tag and movement information.

The tag (T) denotes a comment output together with the image of the object. The tag (T) includes a text box (T1) into which information about the product is entered, an explanatory point (T3) marking the point indicated by the information, and an explanatory line (T2) connecting the text box and the explanatory point.

As described above, because the photographed object is rotated from frame to frame of the image information, when information about the target object is added to the image information using a tag, the explanatory point (T3) and the explanatory line (T2) also need to move in accordance with the rotation of the object.

One way to keep the explanatory point (T3) and the explanatory line (T2) aligned with the rotating object is for the user to set the position of the explanatory point (T3) in every frame of the image information. However, specifying the position directly for every frame is time-consuming and cumbersome, so a way to avoid this is needed.

FIG. 12 shows an explanatory point specifying method using the diameter of the object.

The application according to an embodiment of the present invention examines all frames of the object stored in the image information, defines the distance from the left end (LE) to the right end (RE) between which the explanatory point lies as the diameter (D) of the explanatory point, and generates movement information so that the explanatory point (T3) moves within the range of this diameter according to the angular velocity of the object.

However, with the diameter-based method of specifying the explanatory point described above, the position of the explanatory point can be predicted precisely when the object is cylindrical and the image information is photographed from the front; in other cases, the predicted position may be shifted.
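
The diameter-based movement can be modelled as circular motion seen edge-on, which is why it works best for a roughly cylindrical object photographed from the front; the formula below is a sketch under that assumption, not an equation stated in the patent.

```python
import math

def explanatory_point_x(center_x, diameter, angle_deg, phase_deg=0.0):
    """Horizontal screen position of the explanatory point for a given rotation angle,
    assuming it rides on a circle of diameter D and oscillates between LE and RE."""
    theta = math.radians(angle_deg + phase_deg)
    return center_x + (diameter / 2.0) * math.sin(theta)

# e.g. with 10-degree steps, frame k uses angle_deg = 10 * k
```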

FIG. 13 shows an explanatory point designation method using three-point designation.

When a left point (LP), a right point (RP), and a relay point (TP) between them are specified in the image information for an explanatory point on the rotating object, the application may generate a parabola connecting the three points to derive a trajectory (R) of the explanatory point, and movement information may be generated so that the explanatory point moves along this trajectory according to the angular velocity of the object.
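
One way to derive the trajectory R from the three user-specified points is an exact quadratic fit; the sketch below assumes screen coordinates (x, y), and the example values are made up for illustration.

```python
import numpy as np

def trajectory_from_three_points(lp, tp, rp):
    """Fit a parabola y = a*x**2 + b*x + c through the left (LP), relay (TP) and
    right (RP) points and return a function giving y for any x on the trajectory R."""
    xs = np.array([lp[0], tp[0], rp[0]], dtype=float)
    ys = np.array([lp[1], tp[1], rp[1]], dtype=float)
    a, b, c = np.polyfit(xs, ys, 2)          # three points determine the parabola exactly
    return lambda x: a * x * x + b * x + c

# e.g. traj = trajectory_from_three_points((40, 120), (160, 80), (280, 125)); traj(200)
```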

Although FIGS. 12 and 13 illustrate the movement of the explanatory point when the user moves the screen left and right, it can be understood that the same applies to the case where the user moves the screen up and down.

Further, to move the explanatory point more precisely, when the user touches the explanatory point at which the tag is connected to the displayed object, the parametric equation of a sphere is calculated using the coordinates of the current explanatory point and the coordinates of the object's center point in the image, as shown in FIG. 14. Then, when the user drags left or right, the coordinates of the explanatory point are obtained by rotating the explanatory point coordinates in the full spatial coordinate system by a longitude angle; when the user drags up or down, they are obtained by rotating by a latitude angle; and the explanatory point is then displayed at the corresponding position.

At this time, if the calculated coordinates of the explanatory point fall at an angle that is not visible, the display can be controlled so that the explanatory point is not shown on the screen, and only the explanatory points that can be displayed on the screen, together with the tags connected to them, are displayed.
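
A sketch of the parametric-sphere idea described above, assuming the viewer looks along the +z axis and that the drag has already been converted into longitude and latitude angles; the visibility test for hiding points that rotate to the far side is included, but the exact formulation is an assumption rather than the patent's implementation.

```python
import math

def rotate_explanatory_point(point, center, d_lon_deg, d_lat_deg):
    """Rotate the explanatory point about the object's center on the sphere through it:
    longitude for left/right drags, latitude for up/down drags."""
    px, py, pz = (point[i] - center[i] for i in range(3))
    r = math.sqrt(px * px + py * py + pz * pz)
    lon = math.atan2(px, pz) + math.radians(d_lon_deg)                # angle around the vertical axis
    lat = math.asin(max(-1.0, min(1.0, py / r))) + math.radians(d_lat_deg)  # elevation angle
    x = center[0] + r * math.cos(lat) * math.sin(lon)
    y = center[1] + r * math.sin(lat)
    z = center[2] + r * math.cos(lat) * math.cos(lon)
    visible = (z - center[2]) > 0                                     # hide the point on the far side
    return (x, y, z), visible
```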

The apparatus for generating three-dimensional images and the method for providing three-dimensional images according to the present invention have been described in detail with reference to the accompanying drawings. It will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as hereinafter claimed.

10: turntable part 20: camera mount part
30: control unit 40: communication unit
100: three-dimensional image generation system 200: camera

Claims (10)

An image information input step of inputting image information composed of a series of frames of a rotating object;
A background removing step of removing a background other than the object from the image information;
Generating a three-dimensional image of the object using the image information from which the background is removed; And
An editing step of generating metadata including a tag (a text box into which a description of a characteristic point of the object is entered, an explanatory point representing the characteristic point, and an explanatory line connecting the explanatory point and the text box) and movement information for moving the explanatory point on a display on which the series of frames is output,
The editing step includes:
Generating the movement information using a parametric equation of a sphere derived from the coordinates of the feature point and the center point of the target object when one piece of position information about the feature point is input,
Generating the movement information as a straight line connecting the two feature points when two pieces of position information about the feature point are input, and
Generating the movement information as a parabola connecting the three feature points when three pieces of position information about the feature point are input.
The method according to claim 1,
The background removal step
A sharpening step of sharpening the original image;
A binarization step of converting the sharpened image into a monochrome image and binarizing it;
An outline detection step of detecting an outline of the object in the binarized image; And
A background separating step of separating the background and the object from the original image using the detected outline;
And providing the three-dimensional image.
3. The method of claim 2,
Further comprising a blur processing step of blurring the original image by performing blur processing before the sharpening step,
Wherein the sharpening step increases the contrast and hue difference of the blurred image to produce a clear image.
3. The method of claim 2,
Between the binarization step and the outline detection step
And an inner filling step of estimating an outline in the binarized image and then filling an empty space in the estimated outline.
delete
delete
delete
The method according to claim 1,
Further comprising a display step of outputting one of the three-dimensional images to a display and changing a currently output frame to another frame according to a user's control,
The display step
displays, when the movement information is generated by the parametric equation of the sphere, the explanatory point at the coordinate position obtained by rotating the explanatory point coordinates in the full spatial coordinate system by a longitude angle when the user drags the screen left or right, and at the coordinate position obtained by rotating by a latitude angle when the user drags the screen up or down,
displays the explanatory point along the straight line when the movement information is generated as a straight line and the user moves the screen left or right, and
displays the explanatory point along the parabola when the movement information is generated as a parabola and the user moves the screen left or right.
A three-dimensional image providing application stored on a recording medium for executing the method for providing a three-dimensional image according to any one of claims 1 to 8 on a smart device.

delete
KR1020150188791A 2015-12-29 2015-12-29 Method for providing 3d image KR101759799B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150188791A KR101759799B1 (en) 2015-12-29 2015-12-29 Method for providing 3d image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150188791A KR101759799B1 (en) 2015-12-29 2015-12-29 Method for providing 3d image

Publications (2)

Publication Number Publication Date
KR20170078965A KR20170078965A (en) 2017-07-10
KR101759799B1 true KR101759799B1 (en) 2017-08-01

Family

ID=59355168

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150188791A KR101759799B1 (en) 2015-12-29 2015-12-29 Method for providing 3d image

Country Status (1)

Country Link
KR (1) KR101759799B1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102129458B1 (en) * 2017-11-22 2020-07-08 한국전자통신연구원 Method for reconstructing three dimension information of object and apparatus for the same
KR102220237B1 (en) * 2019-04-17 2021-02-25 주식회사 태산솔루젼스 3D Modularization and Method of CT Image Information for the Restoration of Cultural Heritage
KR102418735B1 (en) * 2020-08-14 2022-07-11 주식회사 글로벌코딩연구소 Crops dealing method based on big data in smart farm and the system thereof
KR102517887B1 (en) * 2022-11-22 2023-04-04 (주)동광사우 Milti-angle shooting device for vision recognition system by deep learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001257878A (en) * 2000-03-09 2001-09-21 Oki Electric Ind Co Ltd Image processor
JP2015007952A (en) * 2013-06-24 2015-01-15 由田新技股▲ふん▼有限公司 Device and method to detect movement of face to create signal, and computer readable storage medium

Also Published As

Publication number Publication date
KR20170078965A (en) 2017-07-10

Similar Documents

Publication Publication Date Title
US10586395B2 (en) Remote object detection and local tracking using visual odometry
US11257233B2 (en) Volumetric depth video recording and playback
CN110022470B (en) Method and system for training object detection algorithm using composite image and storage medium
US9591237B2 (en) Automated generation of panning shots
KR101759799B1 (en) Method for providing 3d image
KR101590256B1 (en) 3d image creating method using video photographed with smart device
EP3097689B1 (en) Multi-view display control for channel selection
EP3008888B1 (en) Imaging system and imaging method, and program
EP3451285B1 (en) Distance measurement device for motion picture camera focus applications
US11288871B2 (en) Web-based remote assistance system with context and content-aware 3D hand gesture visualization
KR101556158B1 (en) The social service system based on real image using smart fitting apparatus
CN105474070A (en) Head mounted display device and method for controlling the same
KR102450236B1 (en) Electronic apparatus, method for controlling thereof and the computer readable recording medium
KR101703013B1 (en) 3d scanner and 3d scanning method
KR101643917B1 (en) The smart fitting apparatus based real image
US20230153897A1 (en) Integrating a product model into a user supplied image
JPWO2017022291A1 (en) Information processing device
US10860166B2 (en) Electronic apparatus and image processing method for generating a depth adjusted image file
US20130033487A1 (en) Image transforming device and method
JP2015231114A (en) Video display device
KR20180070082A (en) Vr contents generating system
EP3386204A1 (en) Device and method for managing remotely displayed contents by augmented reality
KR20230016781A (en) A method of producing environmental contents using AR/VR technology related to metabuses
CN103295023A (en) Method and device for displaying augmented reality information
KR20180113944A (en) Vr contents generating system

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment
X701 Decision to grant (after re-examination)