WO2018199351A1 - Method and device for generating an image file comprising sensor data as metadata - Google Patents

Method and device for generating an image file comprising sensor data as metadata

Info

Publication number
WO2018199351A1
WO2018199351A1
Authority
WO
WIPO (PCT)
Prior art keywords
image file
generating
metadata
sensor
background
Prior art date
Application number
PCT/KR2017/004401
Other languages
English (en)
Korean (ko)
Inventor
최상호
정성엽
Original Assignee
라인 가부시키가이샤
Priority date
Filing date
Publication date
Application filed by 라인 가부시키가이샤
Priority to PCT/KR2017/004401
Publication of WO2018199351A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera

Definitions

  • the present disclosure relates to a method and an apparatus for generating an image file according to a shooting command, and more particularly, to a method and an apparatus for generating an image file that includes sensor data as metadata.
  • Image files conventionally contain, as metadata, information related to shooting conditions such as the type of camera used to take the picture, the shutter speed, the ISO value, and the aperture value. However, to generate content that goes beyond this, such as content including dynamic effects, it was necessary to receive additional content, for example additional sensor information or additional image files, separately.
  • Likewise, a method of generating content including dynamic effects by combining images with audio or combining images with video also provides only a limited experience to viewers of the content.
  • Korean Patent No. 10-1513999 (published April 21, 2015) describes a method of displaying a two-dimensional image as a three-dimensional image by separating a selected first object from a plurality of objects included in two-dimensional image data and generating data relating to the arrangement of the first object at a first position and of a background at a second position.
  • According to one aspect, sensor data associated with the object may be obtained from the sensor unit and added as metadata to the image file generated by photographing the object with the camera, thereby providing a way to create a desired image file.
  • According to another aspect, metadata, namely the sensor data acquired through the sensor unit, may be extracted from an image file in which an object was photographed, and desired content including dynamic effects may be generated based on the extracted metadata.
  • a method for generating an image file in response to a photographing command for an object through a user terminal comprising: acquiring sensor data associated with the object from at least one sensor in response to the photographing command
  • the object is at least one of a subject and a background around the subject, and the sensor data includes information required to generate a dynamic effect associated with at least one of the subject and the background;
  • generating the desired image file by adding the sensor data as metadata to an image file generated by photographing the object by the camera in response to the photographing command.
  • the sensor data may include at least one of information about a positional relationship between the user terminal and the object and information about a color of the object.
  • the sensor may include at least one of a position sensor, a gyro sensor, a distance sensor, an illuminance sensor, a depth sensor, a motion tracking sensor, and a shape identification unit.
  • the sensor data may include at least one of: information about a viewpoint of the camera; information about a distance between the user terminal and the ground obtained from the position sensor; an inclination of the user terminal at the time of generating the desired image file, measured by the gyro sensor; a distance from the user terminal to at least one measurement point of the object, measured by the distance sensor; an illuminance value at the time of generating the desired image file, measured by the illuminance sensor; a depth value of the measurement point, measured by the depth sensor; motion tracking related data of the object, obtained by the motion tracking sensor; and information about the shape of the object, identified by the shape identification unit.
  • the dynamic effect may be a Virtual Reality (VR) effect or an Augmented Reality (AR) effect, a three-dimensional effect, or a motion effect.
  • the method for generating the image file may further include generating a desired content including the dynamic effect based on the metadata included in the desired image file.
  • the generating of the desired content may include: building a virtual space based on the metadata; and synthesizing the virtual space with a user-photographed or user-selected target object.
  • the virtual space may correspond to a target object background around the target object.
  • the target object background may have a spatial similarity to the background.
  • the target object background may have spatial similarity to the background by including the same at least one environment element associated with the background.
  • the environmental element may include a color of at least a part of the background, an object included in the background, a shape of the object, a type of the object, or a positional relationship of the subject with respect to at least a part of the background.
  • the building of the virtual space may include: determining a location in the virtual space of at least one thing to be placed in the virtual space, based on the metadata; and generating the virtual space by arranging or rendering the object based on the determined location.
  • the building of the virtual space may include: determining a shape of an object to be disposed in the virtual space, based on the information about the shape of the object included in the metadata; and generating the virtual space by arranging or rendering the object based on the determined shape of the object.
  • the object to be disposed or rendered may be the same object as the object or another object having the same shape as the object.
  • the method of generating the image file may further include building a database based on the metadata included in the desired image file.
  • the metadata may be stored in the desired image file in the Exchangeable image file format (Exif).
  • an electronic device that generates an image file in response to a photographing command for an object through a user terminal comprising: a sensor data obtaining unit that obtains sensor data related to the object from at least one sensor in response to the photographing command,
  • wherein the object is at least one of a subject and a background around the subject, and the sensor data includes information required to generate a dynamic effect associated with at least one of the subject and the background; and
  • an image file generator for generating a desired image file by adding the sensor data as metadata to an image file generated by photographing the object by the camera in response to the photographing command.
  • the electronic device may further include a content generation unit configured to generate desired content including the dynamic effect, based on the metadata included in the desired image file.
  • a method of including sensor data as metadata in a generated image file may thus be provided simply by performing photographing, without any separate operation or setting for acquiring the sensor data.
  • since a single image file can be used to generate content having a dynamic effect with a visual commonality to that image file, content in which the atmosphere at the time of shooting is reproduced can be generated easily.
  • a VR environment or an AR environment representing an atmosphere similar to the atmosphere in which the image file is photographed can be constructed.
  • FIG. 1 illustrates a method of generating an image file including sensor data as metadata and generating content including dynamic effects, according to an embodiment.
  • FIG. 2 illustrates an apparatus for generating an image file including sensor data as metadata and generating content including dynamic effects, according to an embodiment.
  • FIG. 3 is a flowchart illustrating a method of generating an image file including sensor data as metadata and generating content including dynamic effects, according to an exemplary embodiment.
  • FIG. 4 is a flowchart illustrating a method of generating content including dynamic effects based on metadata included in an image file, according to an example.
  • FIG. 5 is a flowchart illustrating a method of building a virtual space included in content in generating content based on metadata included in an image file according to an example.
  • FIG. 6 is a flowchart illustrating a method of building a database based on metadata included in an image file, according to an example.
  • FIG. 7 illustrates a method of generating content having a dynamic effect, according to an example.
  • FIG. 8 illustrates a method of constructing a database based on metadata included in an image file and generating content having a dynamic effect based on the constructed database.
  • FIG. 1 illustrates a method of generating an image file including sensor data as metadata and generating content including dynamic effects, according to an embodiment.
  • FIG. 1 illustrates a method of generating an image file as the object 130 is photographed through the camera 110 of the user terminal 100.
  • the user terminal 100 may be an electronic device including a sensor unit 120 including a camera 110 and at least one sensor.
  • the user terminal 100 may be a smartphone.
  • the object 130 is an object displayed in an image generated by photographing, and may represent at least one of a subject 134 to be photographed and a background 132 around the subject 134.
  • the subject 134 may include at least one object.
  • the background 132 may mean an area excluding the subject 134 in the image of the object 130 generated by photographing. Background 132 may also include at least one object.
  • the sensor unit 120 may acquire sensor data related to the object 130 using at least one sensor.
  • the sensor data may include information required for generating a dynamic effect associated with at least one of the subject 134 and the background 132.
  • the sensor data may include information that can be used to generate dynamic effects, such as Virtual Reality (VR) effects or Augmented Reality (AR) effects, three-dimensional effects, or motion effects, in content such as an image or a video.
  • the user terminal 100 may generate an image file by capturing the object 130 with the camera 110 in response to the photographing command, and may generate a desired image file by adding the sensor data obtained by the sensor unit 120 to that image file as metadata.
  • accordingly, when photographing the object 130 through the user terminal 100, the sensor data may be included as metadata in the image file generated by the photographing, without any additional manipulation or setting for acquiring the sensor data.
  • the user terminal 100 may transmit the generated image file or metadata included in the image file to the content generation device 140.
  • the content generation device 140 may extract metadata from an image file from the user terminal 100, and generate content including dynamic effects using the metadata. Since the generated content includes dynamic effects generated using metadata (sensor data) included in the image file, the generated content may have a visual commonality with the image file.
  • the generated content may be content including a VR effect representing an environment (or virtual space) similar to the background 132 represented by the image file.
  • the generated content may be content including AR effects in which environmental elements associated with the background 132 represented by the image file are synthesized.
  • content having an atmosphere similar to the atmosphere in which the image file is photographed may be generated, and a VR environment or AR environment having an atmosphere similar to the atmosphere in which the image file is photographed may be constructed.
  • in FIG. 1, the content generating device 140 is illustrated as a separate device outside the user terminal 100; however, the content generating device 140 may instead be configured inside the user terminal 100. That is, the user terminal 100 may itself generate content to which a dynamic effect is added by using the metadata included in the image file generated by photographing.
  • FIG. 2 illustrates an apparatus for generating an image file including sensor data as metadata and generating content including dynamic effects, according to an embodiment.
  • FIG. 2 illustrates the electronic device 200, which generates an image file including the sensor data as metadata, and the content generating device 140, which generates content including dynamic effects using the metadata of the generated image file.
  • the electronic device 200 may correspond to the user terminal 100 described above with reference to FIG. 1.
  • the electronic device 200 is a device for photographing the object 130 and acquiring sensor data related to the object 130 according to a photographing command.
  • the electronic device 200 may be a terminal used by a user, such as a personal computer, a laptop computer, a tablet, an Internet of Things device, or a wearable computer.
  • the electronic device 200 may include a communication unit 210, a controller 220, a camera 230, and a sensor unit 240.
  • the camera 230 and the sensor unit 240 may correspond to the camera 110 and the sensor unit 120 described above with reference to FIG. 1.
  • the communication unit 210 may be a device for the electronic device 200 to communicate with another server or a user terminal.
  • the communication unit 210 may be a hardware module, such as a network interface card, a network interface chip, or a networking interface port of the electronic device 200, or a software module, such as a network device driver or a networking program, that transmits and receives data and/or information to and from another server or user terminal.
  • the controller 220 may manage components of the electronic device 200 and may execute a program or application used by the electronic device 200. For example, the controller 220 may control the sensor unit 240 to obtain sensor data according to the received photographing command, may control the camera 230 to photograph the object 130, and may generate an image file. The controller 220 may execute a program or an application required for generating an image file, and may process the operations and data required for executing the program or the application. In addition, the controller 220 may be configured to process data received from other servers and user terminals. The controller 220 may be at least one processor of the electronic device 200 or at least one core in such a processor.
  • the controller 220 may include a sensor data acquisition unit 222 for controlling the sensor unit 240 to acquire sensor data, and an image file generation unit 224 for generating an image file according to the photographing by the camera 230.
  • the sensor data acquisition unit 222 may acquire sensor data related to the object 130 from the sensor unit 240 including at least one sensor in response to a photographing command for the object 130.
  • the image file generation unit 224 may generate the desired image file by adding the sensor data as metadata to the image file generated by photographing the object 130 with the camera 230 in response to a photographing command for the object 130.
  • the controller 220 may further include a content generator 226 that generates desired content including dynamic effects based on metadata included in the image file generated by the image file generator 224.
  • the content generator 226 may extract the metadata, that is, the sensor data related to the object 130 obtained through the sensor unit 240, from the image file generated by the image file generator 224, and may generate content including dynamic effects based on the extracted metadata.
  • the above-described configurations 222 through 226 of the controller 220 may be implemented in at least one processor, and the functions and operations of the configurations 222 through 226 may be executed by at least one processor.
  • the camera 230 may be a device for photographing the object 130.
  • the camera 230 may include an image sensor for photographing the object 130 to generate an image (image or image).
  • the sensor unit 240 may include at least one sensor for measuring / acquiring data related to the object 130.
  • the sensor unit 240 may include at least one of a position sensor, a gyro sensor, a distance sensor, an illuminance sensor, a depth sensor, a motion tracking sensor, and a shape identification unit.
  • the sensor unit 240 may include any sensor capable of determining the positional relationship between the electronic device 200 (e.g., the user terminal 100) and the object 130, or of acquiring color information of the object 130.
  • the sensor data may include information about a viewpoint of the camera 230, information about a distance between the electronic device 200 and the ground obtained from the position sensor, an inclination of the electronic device 200 at the time of generating the desired image file measured by the gyro sensor, a distance from the electronic device 200 to at least one measurement point of the object 130 measured by the distance sensor, an illuminance value at the time of generating the image file (i.e., at the time of shooting) measured by the illuminance sensor, a depth value of the measurement point measured by the depth sensor, motion tracking related data of the object 130 obtained by the motion tracking sensor, and information about the shape of the object 130 identified by the shape identification unit.
  • the image file generator 224 may parameterize the acquired sensor data, and generate the image file by adding the parameterized sensor data as metadata of the image file to be generated.
  • the measurement point may represent any point of the object 130 or may represent any point of the object included in the object 130.
  • the motion tracking related data and / or information about the shape may be used to identify what the object 130 (or the objects included in the object 130) is.
  • the electronic device 200 may further include a display unit for outputting data input by a user or displaying content including image files and / or dynamic effects.
  • the display unit may include a touch screen, and in this case, the display unit may be configured to include a function of an input unit for receiving a setting and a request from a user.
  • the electronic device 200 may include a storage unit as a device for storing data or information.
  • the storage may include any memory or storage device.
  • the storage unit may store a program or an application executed by the controller 220 and information related thereto.
  • the storage unit may store content including the generated image file and / or the generated dynamic effect.
  • the content generation device 140 may be a device for generating content including dynamic effects, based on metadata included in the image file generated by the image file generation unit 224.
  • the content generation device 140 may include a communication unit 250 and a control unit 260.
  • the content generation device 140 may be a server or other computing device that generates content that includes dynamic effects.
  • the communication unit 250 may be a component for communicating with other devices, including the electronic device 200.
  • the communication unit 250 may be a hardware module, such as a network interface card, a network interface chip, or a networking interface port of the content generation device 140, or a software module, such as a network device driver or a networking program, that transmits and receives data and/or information to and from other devices including the electronic device 200 (e.g., a database or another server).
  • the controller 260 may manage the components of the content generation device 140, may execute a program or an application used by the content generation device 140 to generate and provide content, and may process the related data and operations.
  • the controller 260 may be at least one processor of the content generating device 140 or at least one core in the processor.
  • the controller 260 may include a content generator 266.
  • Operations and functions of the content generator 266 may be the same as those of the content generator 226 described above, and thus a detailed description thereof will be omitted.
  • the content including the dynamic effect may be generated by the electronic device 200 or may be generated by the content generating device 140 external to the electronic device 200.
  • alternatively, some operations (e.g., virtual space creation) may be performed by the content generating device 140, while the remaining operations are performed by the electronic device 200.
  • as described above, the description of the technical features given with reference to FIG. 1 applies to FIG. 2 as it is, and thus redundant descriptions thereof are omitted.
  • FIG. 3 is a flowchart illustrating a method of generating an image file including sensor data as metadata and generating content including dynamic effects, according to an exemplary embodiment.
  • Steps 310 to 330 to be described below illustrate a method of generating an image file including sensor data as metadata by the electronic device 200 (I).
  • steps 340 and 350 to be described below illustrate a method for generating content including dynamic effects by the electronic device 200 or the content generating device 140 (II).
  • steps 340 and 350 are described as being performed by the content generator 226, but steps 340 and 350 may be performed by the content generator 266.
  • the sensor data acquisition unit 222 may acquire sensor data related to the object 130 from at least one sensor (that is, the sensor unit 240) in response to a photographing command.
  • the sensor data may include information required for generating a dynamic effect associated with at least one of the subject 134 and the background 132.
  • the sensor data acquirer 222 may measure data about the photographing environment of the object 130 through the sensor unit 240, and may acquire the measured sensor data.
  • the sensor data acquired by the sensor data acquisition unit 222 may include at least one of information about the positional relationship between the electronic device 200 (e.g., the user terminal 100) and the object 130, and information about the color of the object 130. The information about the positional relationship between the electronic device 200 and the object 130 at the time of photographing may include the distance, height difference, and associated angles of the electronic device 200 with respect to at least one object included in the object 130 or at least a part of the object 130.
  • the sensor data may include at least one of: information about a viewpoint of the camera 230; information about a distance between the electronic device 200 and the ground obtained from a position sensor; an inclination of the electronic device 200 at the time of generating the desired image file measured by a gyro sensor; a distance from the electronic device 200 to at least one measurement point of the object 130 measured by the distance sensor; an illuminance value at the time of generation of the image file measured by the illuminance sensor; a depth value of the measurement point measured by the depth sensor; motion tracking related data of the object 130 obtained by the motion tracking sensor; and information about the shape of the object 130 identified by the shape identification unit.
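  • The following Python sketch illustrates one way the sensor readings listed above could be collected into a single structure before being parameterized; the field names and units are illustrative assumptions and are not specified by the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SensorData:
    """Illustrative container for the sensor readings described above (names assumed)."""
    view_type: str                       # camera viewpoint, e.g. "phone", "wide_angle", "panorama"
    ground_distance_m: float             # distance between the device and the ground (position sensor)
    tilt_deg: float                      # inclination of the device at capture time (gyro sensor)
    point_distances_m: List[float]       # distances to the measurement points of the object (distance sensor)
    illuminance_lux: float               # illuminance at capture time (illuminance sensor)
    point_depths: List[float]            # depth values of the measurement points (depth sensor)
    motion_tracking: Optional[dict] = None  # motion-tracking related data of the object
    object_shape: Optional[str] = None      # shape identified by the shape identification unit

# Example values a sensor data acquisition unit might report for one capture.
sample = SensorData(
    view_type="phone",
    ground_distance_m=1.4,
    tilt_deg=12.0,
    point_distances_m=[2.3, 3.1],
    illuminance_lux=320.0,
    point_depths=[2.2, 3.0],
    object_shape="box",
)
```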
  • the image file generator 224 may parameterize the sensor data acquired by the sensor data acquirer 222.
  • the information about the viewpoint of the camera 230 may be parameterized as viewType, which may include the type of the camera (mobile phone camera, wide-angle camera, panoramic camera, etc.), whether zooming is applied, and focusing.
  • the information about the distance between the electronic device 200 and the ground may be parameterized by expressing that distance numerically.
  • the inclination of the electronic device 200 may be parameterized as angle, the numeric angle of the electronic device 200 (that is, the camera 230) with respect to the ground.
  • each object included in the object 130 may be given an ID (e.g., an integer value) as its objectId.
  • the type of each object may be parameterized as objectType.
  • the type of an object may represent a specific object (or creature) such as, for example, a wall, sky, floor, ceiling, light source, road, river, sea, animal, person, door, chair, desk, or the like.
  • the shape of the object 130 (or of each object) may be parameterized as shape, for example an approximate shape such as a circle, a ball, or a box.
  • the two-dimensional position of the object 130 (or of each object) in the photographing environment may be parameterized as map, for example as an x coordinate and a y coordinate.
  • the three-dimensional position of the object 130 (or of each object) in the photographing environment may be parameterized as location, for example as an x coordinate, a y coordinate, and a z coordinate.
  • the size of the object 130 (or of each thing) may be parameterized as size, for example as a width, a length, and a depth.
  • the color of the object 130 may be parameterized as color, for example as the HEX code of the average color of the object 130 (or of each object).
  • any other sensor data, such as motion tracking data of the object 130 (or of each object) or the illuminance of the shooting environment, may be digitized/coded and parameterized as an extension.
  • the image file generating unit 224 may generate the desired image file by adding the (parameterized) sensor data as metadata to the image file generated by photographing the object 130 with the camera 230.
  • the sensor data may be stored as metadata in the generated image file in the Exchangeable image file format (Exif) (e.g., in an Exif extension field). Accordingly, an image file including the sensor data as metadata may be generated solely through the photographing command for the object 130 issued via the electronic device 200.
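  • As an illustration of how the parameterized sensor data could be written into Exif metadata, the sketch below encodes the parameters named above (viewType, angle, objectId, objectType, shape, map, location, size, color, extension) as JSON and stores them in the Exif UserComment field using the third-party piexif library. Both the choice of the UserComment tag and the JSON encoding are assumptions made for illustration; the disclosure only requires that the data be stored as Exif metadata (e.g., in an extension field).

```python
import json
import piexif  # third-party Exif read/write library, assumed available

def embed_sensor_metadata(jpeg_path: str, params: dict) -> None:
    """Serialize parameterized sensor data and store it in the image's Exif block."""
    payload = json.dumps(params).encode("utf-8")
    # Exif UserComment values start with an 8-byte character-code prefix.
    user_comment = b"UNICODE\x00" + payload
    exif_dict = piexif.load(jpeg_path)            # keep any existing Exif data
    exif_dict["Exif"][piexif.ExifIFD.UserComment] = user_comment
    piexif.insert(piexif.dump(exif_dict), jpeg_path)

# Hypothetical parameter set mirroring the names used in the description.
params = {
    "viewType": {"camera": "phone", "zoom": False, "focus": "auto"},
    "angle": 12.0,  # tilt of the device with respect to the ground
    "objects": [
        {"objectId": 1, "objectType": "person", "shape": "cylinder",
         "map": {"x": 120, "y": 340},
         "location": {"x": 1.2, "y": 0.0, "z": 2.3},
         "size": {"width": 0.5, "length": 0.3, "depth": 0.3},
         "color": "#A0785A"},
    ],
    "extension": {"illuminance": 320.0},
}
# embed_sensor_metadata("photo.jpg", params)
```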
  • the content generator 226 may extract metadata included in the image file generated in operation 330.
  • the content generator 226 may extract information corresponding to sensor data necessary for generating a dynamic effect related to at least one of the subject 134 and the background 132 among the information included in the metadata.
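  • Complementing the embedding sketch above, the following example shows how such metadata could be read back out of the Exif block before content generation; it assumes the same hypothetical UserComment/JSON layout.

```python
import json
import piexif  # third-party Exif read/write library, assumed available

def extract_sensor_metadata(jpeg_path: str) -> dict:
    """Read back the JSON payload written by embed_sensor_metadata (illustrative layout)."""
    exif_dict = piexif.load(jpeg_path)
    raw = exif_dict["Exif"].get(piexif.ExifIFD.UserComment, b"")
    if raw.startswith(b"UNICODE\x00"):
        raw = raw[8:]  # strip the Exif character-code prefix
    return json.loads(raw.decode("utf-8")) if raw else {}
```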
  • the content generator 226 may generate desired content including dynamic effects based on the extracted metadata.
  • the dynamic effect on the content may be a Virtual Reality (VR) effect or an Augmented Reality (AR) effect, a three-dimensional effect, or a motion effect associated with the object 130.
  • the generated content may have a visual commonality with the image file generated in step 330.
  • the generated content may be content including a VR effect representing an environment (or virtual space) similar to the background 132 represented by the image file.
  • the generated content may be content including AR effects in which environmental elements associated with the background 132 represented by the image file are synthesized.
  • the motion effect may be to cause a shake effect, a rotation effect, or a tilt effect to occur in at least a portion of the generated content.
  • as described above, the description of the technical features given with reference to FIGS. 1 and 2 applies to FIG. 3 as it is, and thus redundant descriptions thereof are omitted.
  • FIG. 4 is a flowchart illustrating a method of generating content including dynamic effects based on metadata included in an image file, according to an example.
  • Steps 410 and 420 to be described below may be included in step 350 described above with reference to FIG. 3.
  • the content generator 226 may build a virtual space based on metadata included in the image file.
  • the virtual space may be a space in which the VR effect or the AR effect of the generated content is displayed. Alternatively, the virtual space may be a portion to which the dynamic effect is applied in the generated content.
  • the content generator 226 may build a virtual space based on sensor data related to the subject 134 and the background 132 included in the metadata.
  • the content generator 226 may synthesize the generated virtual space and the user-photographed or user-selected target object.
  • the virtual space may correspond to a target object background around the target object.
  • the target object may be, for example, another object (or subject) photographed by the user.
  • the content generator 226 may generate content by synthesizing the photographed subject and the generated virtual space (eg, an AR effect applied space).
  • alternatively, the target object may be an object corresponding to a subject who experiences the VR effect (that is, an object corresponding to the center of the viewpoint, e.g., the center of the viewpoint of a 360-degree camera). The content generator 226 may generate content by synthesizing the target object and the generated virtual space, and the user may experience the VR effect through the virtual space by viewing the content using a VR device.
  • the target object background corresponding to the virtual space may have a spatial similarity to the background 132 of the image file.
  • Spatial similarity may indicate visual commonality (similarity) between the virtual space and the background 132. That is, when comparing the generated content with the image file, the user may feel visual similarity (spatial similarity).
  • the target object background may have spatial similarity to the background 132 by including the same at least one environment element associated with the background 132.
  • an environmental element may indicate a color of at least a portion of the background 132, an object included in the background 132, a shape of the object, a type of the object, or a positional relationship of the subject 134 with respect to at least a portion of the background 132.
  • for example, the target object background may include the same or a similar color as the background 132, may include objects that are the same as or similar to (or the same or similar in shape to) the objects included in the background 132, or the position of the target object with respect to the target object background may be determined to correspond to the position of the subject 134 with respect to the background 132, thereby having spatial similarity to the background 132.
  • the spatial similarity to the background 132 may be secured by building the target object background using metadata included in the image file.
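  • The idea that spatial similarity can be secured by sharing environment elements can be made concrete with a small check; the scoring below (matching object types, then comparing average colors) is a hypothetical heuristic for illustration only, not a method defined by the disclosure.

```python
def shares_environment_elements(background_meta: dict, candidate_meta: dict,
                                color_tolerance: int = 32) -> bool:
    """Hypothetical heuristic: a candidate target object background is 'spatially similar'
    if it shares at least one environment element (object type or dominant color)."""
    bg_types = {o["objectType"] for o in background_meta.get("objects", [])}
    cand_types = {o["objectType"] for o in candidate_meta.get("objects", [])}
    if bg_types & cand_types:  # the same kind of thing appears in both backgrounds
        return True

    def hex_to_rgb(h: str):
        h = h.lstrip("#")
        return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

    bg_color, cand_color = background_meta.get("color"), candidate_meta.get("color")
    if bg_color and cand_color:
        diff = max(abs(a - b) for a, b in zip(hex_to_rgb(bg_color), hex_to_rgb(cand_color)))
        return diff <= color_tolerance  # roughly the same dominant color
    return False
```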
  • a method of building a virtual space of content including dynamic effects will be described in more detail with reference to FIGS. 5 and 7 to be described later.
  • accordingly, content including a VR effect or an AR effect with an atmosphere similar to that of the image file of the object 130 may be generated; in other words, content reproducing the atmosphere of the image file in which the object 130 was photographed may be generated.
  • FIG. 5 is a flowchart illustrating a method of building a virtual space included in content in generating content based on metadata included in an image file according to an example.
  • Steps 510 and 520 to be described below may be included in step 410 described above with reference to FIG. 4.
  • the content generator 226 (or the content generator 266) may determine the shape of an object to be disposed in the virtual space based on the information about the shape of the object 130 included in the metadata of the image file. For example, the content generator 226 may determine the shape of each thing to be placed in the virtual space based on the information about the shape of each object included in the object 130 (the background 132 and/or the subject 134) contained in the metadata.
  • the thing disposed or rendered in the virtual space may be the same object as the object 130 (or as an object included in the object 130), or another object having the same shape; the content generating unit 226 may determine it based on the objectType and shape parameters in the metadata of the image file.
  • the content generator 226 may generate a virtual space by arranging or rendering the object based on the determined shape of the object.
  • the object determined according to the information about the shape of the object 130 included in the metadata may be disposed or rendered in the virtual space.
  • the content generating unit 226 (or the content generating unit 266) may determine the location in the virtual space of at least one thing to be placed in the virtual space, based on the metadata included in the image file.
  • the content generator 226 may determine where the thing corresponding to each object will be placed in the virtual space, based on the location information of each object included in the object 130 (the background 132 and/or the subject 134) contained in the metadata. For example, the positions where the thing(s) are to be disposed in the virtual space may be determined to correspond to the positions of the objects included in the background 132 and the subject 134 in the image file. In this case, the content generator 226 may cause the above-described target object to be disposed at the position of the virtual space corresponding to the subject 134.
  • the content generator 226 may generate a virtual space by arranging or rendering a corresponding object based on the location determined in operation 510.
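  • The two determinations just described, what to place (from objectType and shape) and where to place it (from the location parameters), can be sketched as follows; the asset lookup table and the scene representation are illustrative assumptions rather than structures defined by the disclosure.

```python
from typing import Dict, List, Tuple

# Hypothetical mapping from (objectType, shape) to a renderable asset name.
ASSET_LIBRARY: Dict[Tuple[str, str], str] = {
    ("door", "box"): "rect_door_mesh",
    ("animal", "ball"): "generic_animal_mesh",
    ("person", "cylinder"): "person_placeholder",
}

def build_virtual_space(metadata: dict) -> List[dict]:
    """Place one thing per object described in the metadata (illustrative sketch)."""
    scene = []
    for obj in metadata.get("objects", []):
        # Choose an asset with the same type/shape as the photographed object.
        asset = ASSET_LIBRARY.get((obj["objectType"], obj["shape"]), "generic_box")
        scene.append({
            "asset": asset,
            # The position in the virtual space mirrors the object's location in the metadata.
            "position": (obj["location"]["x"], obj["location"]["y"], obj["location"]["z"]),
            "size": obj.get("size"),
            "color": obj.get("color"),
        })
    return scene
```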
  • FIG. 6 is a flowchart illustrating a method of building a database based on metadata included in an image file, according to an example.
  • the content generator 226 (or the content generator 266) may build a database based on the metadata included in the image file. For example, the content generator 226 may build such a database by transmitting the image file, or the metadata included in the image file, to a database external to the electronic device 200.
  • the database may store metadata included in image files from the plurality of electronic devices 200.
  • the metadata stored in the database may be utilized as big data by the content generation unit 226 when generating content including dynamic effects.
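  • As one way such a metadata database could be organized, the sketch below stores the JSON-encoded metadata of each image file in a SQLite table keyed by an image identifier; the schema and key names are assumptions made for illustration.

```python
import json
import sqlite3

def store_metadata(db_path: str, image_id: str, metadata: dict) -> None:
    """Insert or replace the metadata record for one image file (illustrative schema)."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS image_metadata (image_id TEXT PRIMARY KEY, metadata TEXT)"
        )
        conn.execute(
            "INSERT OR REPLACE INTO image_metadata (image_id, metadata) VALUES (?, ?)",
            (image_id, json.dumps(metadata)),
        )

def load_all_metadata(db_path: str) -> list:
    """Return every stored record, e.g. so a user can pick one whose virtual space to reuse."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute("SELECT image_id, metadata FROM image_metadata").fetchall()
    return [(image_id, json.loads(meta)) for image_id, meta in rows]
```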
  • FIG. 7 illustrates a method of generating content having a dynamic effect, according to an example.
  • the person 712 may correspond to the subject, while the door 718 and the puppy 716 may not correspond to the subject but rather to objects included in the background 714.
  • the metadata of the image file 710 may store information about the two-dimensional and three-dimensional locations and the shapes of the objects 712 to 716 within the image file 710.
  • distance information from the electronic device 200 to at least one measurement point of the background 714 (e.g., any point on the walls), as well as height and inclination information of the electronic device 200, may also be stored in the metadata.
  • using the metadata, the content generating unit 226 can identify that the object 718 is a rectangular door, that the object 716 is a puppy (an animal), and that the object 712 is a person.
  • the content generator 226 may also identify that the object 712 corresponds to the subject. For example, the object in focus or the object close to the center of the image may be identified as the subject.
  • the content generator 226 may generate the content 720 by using the (relative) location information and the shape information of the objects 712 to 716 in the image file, as included in the metadata.
  • a rectangular door 728 corresponding to the object 718 may be disposed in the virtual space 724.
  • a cat 726 (determined as an object representing the same type (animal) as the object 716) corresponding to the object 716 may be disposed in the virtual space 724.
  • the object 722 photographed or selected by the user (corresponding to the target object described above) may be disposed in the virtual space 724.
  • the things 722 to 726 may be arranged at positions in the virtual space 724 corresponding to the positions of the objects 712 to 716 in the image file. Since the metadata includes information that can indicate a location in space, including depth information of the objects 712 to 716, the content generator 226 can generate content 720 to which a dynamic effect related to at least one of the things 722 to 726 is applied.
  • compared to the image file 710, the generated content 720 may additionally have a dynamic effect such as a VR effect or an AR effect, a three-dimensional effect, or a motion effect, while providing an atmosphere similar to that of the image file 710.
  • FIG. 8 illustrates a method of constructing a database based on metadata included in an image file and generating content having a dynamic effect based on the constructed database.
  • the illustrated database 810 may correspond to a database constructed based on the metadata described above with reference to FIG. 6.
  • the database 810 may store the image files 710-1 to 710-N generated by one or more electronic devices 200, or the metadata included in the image files 710-1 to 710-N itself.
  • the content generator 226 may select appropriate metadata from among the metadata of the image files 710-1 to 710-N stored in the database 810, and use it to generate the content 720 with the dynamic effect.
  • the electronic device 200 may output a user interface configured to allow a user to select appropriate metadata or a virtual space generated through the metadata.
  • the content generator 226 may then generate the content 720 according to the user's selection made through the user interface. In this manner, the user may generate the content 720 reproducing the atmosphere not only of an image file photographed by the electronic device 200 but also of an image file photographed by another user. In other words, metadata encapsulated in an image file may be shared among users through the database 810.
  • the user may generate the content 720 by finely adjusting the dynamic effect or atmosphere to be applied.
  • as described above, the description of the technical features given with reference to FIGS. 1 to 7 applies to FIG. 8 as it is, and thus redundant descriptions thereof are omitted.
  • the apparatus described above may be implemented as a hardware component, a software component, and / or a combination of hardware components and software components.
  • the devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may execute an operating system (OS) and one or more software applications running on the operating system.
  • the processing device may also access, store, manipulate, process, and generate data in response to the execution of the software.
  • it will be appreciated that a processing device may include a plurality of processing elements and/or a plurality of types of processing elements.
  • the processing device may include a plurality of processors or one processor and one controller.
  • other processing configurations are possible, such as parallel processors.
  • the software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may command the processing device independently or collectively.
  • software and/or data may be embodied in any type of machine, component, physical device, virtual equipment, computer storage medium, or device so as to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed over networked computer systems and stored or executed in a distributed manner. Software and data may be stored on one or more computer-readable recording media.
  • the method according to the embodiment may be embodied in the form of program instructions that can be executed by various computer means and recorded in a computer readable medium.
  • the medium may continuously store a program executable by a computer, or may temporarily store it for execution or download.
  • the medium may be any of a variety of recording means or storage means in the form of a single piece of hardware or several pieces of hardware combined; it is not limited to a medium directly connected to a particular computer system and may be distributed over a network. Examples of the medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and ROM, RAM, flash memory, and the like, configured to store program instructions.
  • examples of another medium may include a recording medium or a storage medium managed by an app store that distributes an application or a site or server that supplies or distributes various software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an image processing method for: acquiring depth information of a target image on the basis of predetermined pattern information including information about the shape of an object in the image and information about the relative positions between objects in the image; and applying a dynamic effect to the target image according to the acquired depth information.
PCT/KR2017/004401 2017-04-26 2017-04-26 Procédé et dispositif de génération d'un fichier d'image comprenant des données de capteur en tant que métadonnées WO2018199351A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2017/004401 WO2018199351A1 (fr) 2017-04-26 2017-04-26 Procédé et dispositif de génération d'un fichier d'image comprenant des données de capteur en tant que métadonnées

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2017/004401 WO2018199351A1 (fr) 2017-04-26 2017-04-26 Procédé et dispositif de génération d'un fichier d'image comprenant des données de capteur en tant que métadonnées

Publications (1)

Publication Number Publication Date
WO2018199351A1 (fr) 2018-11-01

Family

ID=63920379

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/004401 WO2018199351A1 (fr) 2017-04-26 2017-04-26 Procédé et dispositif de génération d'un fichier d'image comprenant des données de capteur en tant que métadonnées

Country Status (1)

Country Link
WO (1) WO2018199351A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070038944A1 (en) * 2005-05-03 2007-02-15 Seac02 S.R.I. Augmented reality system with real marker object identification
US20070118805A1 (en) * 2002-12-10 2007-05-24 Science Applications International Corporation Virtual environment capture
US20080310707A1 (en) * 2007-06-15 2008-12-18 Microsoft Corporation Virtual reality enhancement using real world data
US20130222369A1 (en) * 2012-02-23 2013-08-29 Charles D. Huston System and Method for Creating an Environment and for Sharing a Location Based Experience in an Environment
US20140146084A1 (en) * 2012-05-14 2014-05-29 Orbotix, Inc. Augmentation of elements in data content



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17907891

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17907891

Country of ref document: EP

Kind code of ref document: A1