KR102024863B1 - Method and appratus for processing virtual world - Google Patents

Method and appratus for processing virtual world

Info

Publication number
KR102024863B1
Authority
KR
South Korea
Prior art keywords
sensor
virtual world
information
image sensor
feature point
Prior art date
Application number
KR1020130017404A
Other languages
Korean (ko)
Other versions
KR20140009913A (en)
Inventor
한승주
안민수
한재준
김도균
이영범
Original Assignee
삼성전자주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자주식회사
Priority to US13/934,605 (US20140015931A1)
Publication of KR20140009913A
Application granted
Publication of KR102024863B1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/42202 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] environmental sensors, e.g. for detecting temperature, luminosity, pressure, earthquakes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/4722 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • H04N 21/4725 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/65 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition

Abstract

A virtual world processing apparatus and method are disclosed. According to the exemplary embodiments, interaction between the real world and the virtual world may be realized by transferring sensed information about a captured image of the real world to the virtual world, using an image sensor characteristic, that is, information on the characteristics of the image sensor.


Description

METHOD AND APPRATUS FOR PROCESSING VIRTUAL WORLD

The following embodiments relate to an apparatus and a method for processing a virtual world, and more particularly, to an apparatus and a method for applying sensing information measured by an image sensor to a virtual world.

Recently, interest in immersive games has increased. At its E3 2009 press conference, Microsoft introduced "Project Natal", which combines its game console, the Xbox 360, with a separate sensor device consisting of a depth/color camera and a microphone array to provide full-body motion capture, facial recognition, and speech recognition, allowing users to interact with a virtual world without a controller. In addition, Sony released the "Wand", which applies position/direction sensing technology combining a color camera, a marker, and an ultrasonic sensor to its game console, the PlayStation 3, so that a user can interact with the virtual world by inputting the motion trajectory of a controller.

Interaction between the real world and the virtual world has two directions. The first is to reflect information obtained by a sensor in the real world into the virtual world, and the second is to reflect information obtained from the virtual world into the real world through an actuator.

In the present specification, a new apparatus and method for applying information sensed from the real world by a sensor to a virtual world are presented.

According to one or more exemplary embodiments, an apparatus for processing a virtual world may include: a receiver configured to receive, from an image sensor, sensing information about a captured image and a sensor characteristic, which is information on characteristics of the image sensor; a processor configured to generate control information for controlling an object of a virtual world based on the sensing information and the sensor characteristic; and a transmitter configured to transmit the control information to the virtual world.

According to another aspect of the present invention, there is provided a virtual world processing method including: receiving, from an image sensor, sensing information about a captured image and a sensor characteristic, which is information on characteristics of the image sensor; generating control information for controlling an object of a virtual world based on the sensing information and the sensor characteristic; and transmitting the control information to the virtual world.

FIG. 1 is a diagram illustrating a virtual world processing system that controls information exchange between a real world and a virtual world, according to an exemplary embodiment.
FIG. 2 is a diagram for describing an augmented reality system, according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a configuration of a virtual world processing apparatus according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating a virtual world processing method according to an exemplary embodiment.

Hereinafter, embodiments according to the present invention will be described in detail with reference to the accompanying drawings. However, the present invention is not limited to or by the embodiments. Like reference numerals in the drawings denote like elements.

FIG. 1 is a diagram illustrating a virtual world processing system that controls information exchange between a real world and a virtual world, according to an exemplary embodiment.

Referring to FIG. 1, a virtual world processing system according to an embodiment of the present invention may include a real world 110, a virtual world processing apparatus, and a virtual world 140.

The real world 110 may represent a sensor that senses information about the real world 110 or a sensory device that implements information about the virtual world 140 in the real world 110.

In addition, the virtual world 140 may represent a virtual environment implemented by a program, or a sensory media playback device that plays content including sensory effect information which can be implemented in the real world 110.

According to an embodiment, a sensor may sense information on a motion, state, intention, shape, or the like of a user in the real world 110 and transmit the information to the virtual world processing device.

According to an embodiment, the sensor may transmit a sensor capability 101, a sensor adaptation preference 102, and sensed information 103 to the virtual world processing device.

The sensor characteristic 101 is information about the characteristics of the sensor. The sensor adaptation preference 102 is information indicating the degree to which the user of the sensor prefers particular characteristics of the sensor. The sensing information 103 is information obtained by the sensor sensing the real world 110.

The virtual world processing apparatus according to an embodiment may include an adaptive RV (adaptation real world to virtual world) 120, virtual world information (VWI) 104, and an adaptive RV/VR (adaptation real world to virtual world / virtual world to real world) 130.

The adaptive RV 120 may convert the sensing information 103 detected by the sensor into information that can be applied to the virtual world 140, based on the sensor characteristic 101 and the sensor adaptation preference 102. According to an embodiment, the adaptive RV 120 may be implemented as an RV engine (real world to virtual world engine).

According to an embodiment, the adaptive RV 120 may convert the virtual world information (VWI) 104 using the converted sensing information 103.

The VWI 104 is information about a virtual object of the virtual world 140.

The adaptive RV/VR 130 may encode the converted VWI 104 to generate virtual world effect metadata (VWEM) 107, which is metadata about an effect applied to the virtual world 140. According to an embodiment, the adaptive RV/VR 130 may generate the VWEM 107 based on virtual world capabilities (VWC) 105 and virtual world preferences (VWP) 106.

The VWC 105 is information about the characteristics of the virtual world 140. In addition, the VWP 106 is information indicating the degree of preference of the user for the characteristics of the virtual world 140.

In addition, the adaptive RV/VR 130 may transmit the VWEM 107 to the virtual world 140. In this case, the VWEM 107 is applied to the virtual world 140, so that an effect corresponding to the sensing information 103 may be implemented in the virtual world 140.

According to an aspect of the present invention, an effect event occurring in the virtual world 140 may be driven by a sensory device, that is, an actuator, in the real world 110.

The virtual world 140 may generate sensory effect metadata (SEM) 111 by encoding sensory effect information, which is information about an effect event occurring in the virtual world 140. According to an embodiment, the virtual world 140 may include a sensory media playback device for playing content including sensory effect information.

The adaptive RV / VR 130 may generate sensory information 112 based on the SEM 111. Sensory information 112 is information about an effect event implemented in the sensory device of the real world 110.

The adaptive VR 150 may generate a sensory device command (SDCmd) 115, which is information for controlling the operation of a sensory device in the real world 110. According to an embodiment, the adaptive VR 150 may generate the SDCmd 115 based on information on sensory device capabilities (SDCap) 113 and information on user sensory preferences (USP) 114.

The SDCap 113 is information on characteristics of the sensory device. In addition, the USP 114 is information indicating the degree of preference of the user for the effect implemented in the sensory device.

FIG. 2 is a diagram for describing an augmented reality system, according to an exemplary embodiment.

Referring to FIG. 2, the augmented reality system according to an exemplary embodiment may acquire an image representing the real world by using the media storage device 210 or the real-time media acquisition device 220. Also, the augmented reality system may acquire sensor information representing the real world using various sensors 230.

Here, the AR camera according to an embodiment may include the real-time media acquisition device 220 and the various sensors 230. The augmented reality camera may acquire an image or sensor information representing the real world in order to mix the real world information with a virtual object.

The AR container 240 is a device having not only real-world information but also information on a method of mixing the real world and a virtual object. For example, the augmented reality container 240 may include information about which virtual object is to be mixed with which piece of real world information at which point in time.

The augmented reality container 240 may request virtual object information from the AR content 250 based on information about a mixing method between the real world and the virtual object. Here, the augmented reality content 250 is a device that includes virtual object information.

The augmented reality content 250 may return virtual object information corresponding to a request of the augmented reality container 240. In this case, the virtual object information may be expressed based on at least one of three-dimensional graphics, audio, video, or text representing the virtual object. In addition, the virtual object information may also include an interaction between a plurality of virtual objects.

The visualization unit 260 may simultaneously visualize real-world information included in the augmented reality container 240 and virtual object information included in the augmented reality content 250. In this case, the interaction unit 270 may provide an interface through which the user can interact with the virtual object through the visualized information. In addition, the interaction unit 270 may update the virtual object by the interaction between the user and the virtual object, or may update the mixing method between the real world and the virtual object.

Hereinafter, a configuration of a virtual world processing apparatus according to an embodiment will be described in detail with reference to FIG. 3.

FIG. 3 is a diagram illustrating a configuration of a virtual world processing apparatus according to an exemplary embodiment.

Referring to FIG. 3, the virtual world processing apparatus 320 according to an embodiment includes a receiver 321, a processor 322, and a transmitter 323.

The receiver 321 may receive, from the image sensor 311, the sensor characteristic, which is information on characteristics of the image sensor 311, and the sensing information about the captured image 315. Here, the image sensor 311 may capture a still image or a video. For example, the image sensor 311 may include at least one of a photographing sensor and a video sensor.

The processor 322 generates control information for controlling an object of the virtual world based on the sensing information and the sensor characteristic. For example, the processor 322 may generate the control information when a value related to a specific element included in the sensing information is within an allowable range defined by the sensor characteristic.

The transmitter 323 transmits control information to the virtual world.

At this time, the operation of the virtual world may be controlled based on the received control information.

For example, assume that the image sensor 311 is an augmented reality camera. The image sensor 311 may acquire the captured image 315 by photographing the real world 310. The image sensor 311 may extract the plurality of feature points 316 included in the captured image 315 by analyzing the captured image 315.

Here, the feature points may be extracted mainly from the boundary surfaces included in the captured image 315, and may be expressed in three-dimensional coordinates.

In some cases, the image sensor 311 may first extract feature points 316 related to the boundary of the closest object or the boundary of the largest object among the boundary surfaces included in the captured image 315.

The image sensor 311 may transmit the sensing information including the extracted plurality of feature points to the virtual world processing apparatus 320.

The virtual world processing apparatus 320 may extract a feature point from the sensing information transmitted from the image sensor 311, and generate control information including the extracted feature point.

Therefore, the virtual world processing apparatus 320 according to an embodiment may generate a control signal for the virtual scene 330 corresponding to the real world 310 using only a small amount of information (for example, feature points).

In this case, the virtual world may control the virtual object based on the plurality of feature points included in the control information.

More specifically, the virtual world may represent the virtual scene 330 corresponding to the real world 310 based on the plurality of feature points. In this case, the virtual scene 330 may be represented in three-dimensional space. The virtual world may represent a plane for the virtual scene 330 based on the feature points.
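For instance, a plane can be determined from three non-collinear feature points; a minimal sketch of one standard construction (the embodiments do not prescribe a particular method) is the following. Given feature points p1, p2, and p3 in three-dimensional coordinates, a plane normal is n = (p2 - p1) × (p3 - p1), and the represented plane is the set of points x satisfying n · (x - p1) = 0.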

In addition, the virtual world may simultaneously display the virtual scene 330 and the virtual objects 331 corresponding to the real world 310.

According to an embodiment, the sensing information and sensor characteristics received from the image sensor 311 may correspond to the SI 103 and the SC 101 of FIG. 1, respectively.

For example, the sensing information received from the image sensor 311 may be defined as shown in Table 1.

Sensed Information (SI, 103)
- Camera sensor type
- AR camera type

Here, the AR camera type may basically include the camera sensor type. The camera sensor type may include a resource element, a camera location element, and a camera orientation element, as well as a focal length attribute, an aperture attribute, a shutter speed attribute, and a filter attribute.

In this case, the resource element may include a link to the image captured by the image sensor, the camera location element may include information related to the position of the image sensor measured using a Global Positioning System (GPS) sensor, and the camera orientation element may include information related to the attitude of the image sensor.

The focal length attribute may include information related to the focal length of the image sensor, the aperture attribute may include information related to the aperture of the image sensor, the shutter speed attribute may include information related to the shutter speed of the image sensor, and the filter attribute may include information related to the filter signal processing of the image sensor. Here, the filter type may include a UV filter, a polarizing filter, a neutral density filter, a diffusion filter, a star filter, and the like.

In addition, the augmented reality camera type may further include a feature element and a camera position element.

In this case, the feature element may include a feature point related to the boundary surface in the captured image, and the camera position element may include information related to the position of the image sensor measured using a position sensor that is distinguished from the GPS sensor.

As described above, a feature point is a point generated mainly at boundary surfaces in the image photographed by the image sensor, and may be used to represent a virtual object in an augmented reality environment. More specifically, a feature element including at least one feature point may be used by a scene descriptor as an element representing a face. Details related to the operation of the scene descriptor will be described later.

The camera position element may be utilized to measure the position of the image sensor in a room or a tunnel where it is difficult to measure the position using the GPS sensor.

On the other hand, the sensor characteristics received from the image sensor 311 may be defined as shown in Table 2.

Sensor Capability (SC, 101)
- Camera sensor capability type
- AR camera capability type

Here, the AR camera capability type may basically include the camera sensor capability type. The camera sensor capability type may consist of a supported resolution list element, a focal length range element, an aperture range element, and a shutter speed range element.

In this case, the supported resolution list element may include a list of resolutions supported by the image sensor, the focal length range element may include a range of focal lengths supported by the image sensor, the aperture range element may include a range of apertures supported by the image sensor, and the shutter speed range element may include a range of shutter speeds supported by the image sensor.

In addition, the augmented reality camera characteristic type may further include a maximum feature point element and a camera position range element.

At this time, the maximum feature point element may include the maximum number of feature points that may be detected by the image sensor, and the camera position range element may include a range of positions that may be measured by the position sensor.

Table 3 shows an XML syntax (eXtensible Markup Language Syntax) for a camera sensor type according to an embodiment.

<!-- ################################################## -->
<!-- Camera Sensor Type                                 -->
<!-- ################################################## -->
<complexType name="CameraSensorType">
  <complexContent>
    <extension base="iidl:SensedInfoBaseType">
      <sequence>
        <element name="Resource" type="anyURI"/>
        <element name="CameraOrientation" type="siv:OrientationSensorType" minOccurs="0"/>
        <element name="CameraLocation" type="siv:GlobalPositionSensorType" minOccurs="0"/>
      </sequence>
      <attribute name="focalLength" type="float" use="optional"/>
      <attribute name="aperture" type="float" use="optional"/>
      <attribute name="shutterSpeed" type="float" use="optional"/>
      <attribute name="filter" type="mpeg7:termReferenceType" use="optional"/>
    </extension>
  </complexContent>
</complexType>

Table 4 shows semantics for a camera sensor type according to one embodiment.

CameraSensorType: Tool for describing sensed information with respect to a camera sensor.
Resource: Describes the element that contains a link to image or video files.
CameraLocation: Describes the location of a camera using the structure defined by GlobalPositionSensorType.
CameraOrientation: Describes the orientation of a camera using the structure defined by OrientationSensorType.
focalLength: Describes the distance between the lens and the image sensor when the subject is in focus, in terms of millimeters (mm).
aperture: Describes the diameter of the lens opening. It is expressed as F-stop, e.g. F2.8. It may also be expressed as f-number notation such as f/2.8.
shutterSpeed: Describes the time that the shutter remains open when taking a photograph, in terms of seconds (sec).
filter: Describes kinds of camera filters as a reference to a classification scheme term that shall be using the mpeg7:termReferenceType defined in 7.6 of ISO/IEC 15938-5:2003. The CS that may be used for this purpose is the CameraFilterTypeCS defined in A.x.x.
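As an illustration only, a sensed-information instance conforming to the camera sensor type of Table 3 might look roughly as follows. The root element name, namespace prefixes, and all values are assumed for this sketch and are not taken from the patent or the underlying standard.

<!-- Illustrative sketch only; element naming, prefixes, and values are assumed. -->
<SensedInfo xsi:type="siv:CameraSensorType"
            focalLength="50.0" aperture="2.8" shutterSpeed="0.004"
            filter="urn:example:CameraFilterTypeCS:UVFilter">
  <!-- Resource is the only mandatory child; CameraOrientation and CameraLocation are optional (minOccurs="0"). -->
  <Resource>http://example.com/captured/frame_0001.jpg</Resource>
</SensedInfo>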

Table 5 shows XML syntax for a camera sensor characteristic type according to one embodiment.

<!-- ################################################## -->
<!-- Camera Sensor capability type                      -->
<!-- ################################################## -->
<complexType name="CameraSensorCapabilityType">
  <complexContent>
    <extension base="cidl:SensorCapabilityBaseType">
      <sequence>
        <element name="SupportedResolutions" type="scdv:ResolutionListType" minOccurs="0"/>
        <element name="FocalLengthRange" type="scdv:ValueRangeType" minOccurs="0"/>
        <element name="ApertureRange" type="scdv:ValueRangeType" minOccurs="0"/>
        <element name="ShutterSpeedRange" type="scdv:ValueRangeType" minOccurs="0"/>
      </sequence>
    </extension>
  </complexContent>
</complexType>

<complexType name="ResolutionListType">
  <sequence>
    <element name="Resolution" type="scdv:ResolutionType" maxOccurs="unbounded"/>
  </sequence>
</complexType>

<complexType name="ResolutionType">
  <sequence>
    <element name="Width" type="nonNegativeInteger"/>
    <element name="Height" type="nonNegativeInteger"/>
  </sequence>
</complexType>

<complexType name="ValueRangeType">
  <sequence>
    <element name="MaxValue" type="float"/>
    <element name="MinValue" type="float"/>
  </sequence>
</complexType>

Table 6 shows the semantics for the camera sensor characteristic type according to one embodiment.

CameraSensorCapabilityType: Tool for describing a camera sensor capability.
SupportedResolutions: Describes a list of resolutions that the camera can support.
ResolutionListType: Describes a type of the resolution list which is composed of ResolutionType elements.
ResolutionType: Describes a type of resolution which is composed of a Width element and a Height element.
Width: Describes a width of resolution that the camera can perceive.
Height: Describes a height of resolution that the camera can perceive.
FocalLengthRange: Describes the range of the focal length that the camera sensor can perceive in terms of ValueRangeType. Its default unit is millimeters (mm). NOTE: The minValue and the maxValue in the SensorCapabilityBaseType are not used for this sensor.
ValueRangeType: Defines the range of the value that the sensor can perceive.
MaxValue: Describes the maximum value that the sensor can perceive.
MinValue: Describes the minimum value that the sensor can perceive.
ApertureRange: Describes the range of the aperture that the camera sensor can perceive in terms of ValueRangeType. NOTE: The minValue and the maxValue in the SensorCapabilityBaseType are not used for this sensor.
ShutterSpeedRange: Describes the range of the shutter speed that the camera sensor can perceive in terms of ValueRangeType. Its default unit is seconds (sec). NOTE: The minValue and the maxValue in the SensorCapabilityBaseType are not used for this sensor.
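For example, a capability description stating that the image sensor supports 1920x1080 and 1280x720 resolutions, focal lengths from 10 mm to 100 mm, apertures from F1.8 to F22, and shutter speeds from 1/4000 s to 1 s could be sketched as follows; the root element name, prefixes, and values are assumed, and each range lists MaxValue before MinValue, following the ValueRangeType sequence of Table 5.

<!-- Illustrative sketch only; element naming, prefixes, and values are assumed. -->
<SensorCapability xsi:type="scdv:CameraSensorCapabilityType">
  <SupportedResolutions>
    <Resolution><Width>1920</Width><Height>1080</Height></Resolution>
    <Resolution><Width>1280</Width><Height>720</Height></Resolution>
  </SupportedResolutions>
  <FocalLengthRange><MaxValue>100.0</MaxValue><MinValue>10.0</MinValue></FocalLengthRange>
  <ApertureRange><MaxValue>22.0</MaxValue><MinValue>1.8</MinValue></ApertureRange>
  <ShutterSpeedRange><MaxValue>1.0</MaxValue><MinValue>0.00025</MinValue></ShutterSpeedRange>
</SensorCapability>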

Table 7 shows XML syntax for an augmented reality camera type according to one embodiment.

<!-- ################################################## -->
<!-- AR Camera Type                                     -->
<!-- ################################################## -->
<complexType name="ARCameraType">
  <complexContent>
    <extension base="siv:CameraSensorType">
      <sequence>
        <element name="Feature" type="siv:FeaturePointType" minOccurs="0" maxOccurs="unbounded"/>
        <element name="CameraPosition" type="siv:PositionSensorType" minOccurs="0"/>
      </sequence>
    </extension>
  </complexContent>
</complexType>

<complexType name="FeaturePointType">
  <sequence>
    <element name="Position" type="mpegvct:Float3DVectorType"/>
  </sequence>
  <attribute name="featureID" type="ID" use="optional"/>
</complexType>

Table 8 shows semantics for the augmented reality camera type according to one embodiment.

ARCameraType: Tool for describing sensed information with respect to an AR camera.
Feature: Describes the feature detected by a camera using the structure defined by FeaturePointType.
FeaturePointType: Tool for describing Feature commands for each feature point.
Position: Describes the 3D position of each of the feature points.
featureID: To be used to identify each feature.
CameraPosition: Describes the location of a camera using the structure defined by PositionSensorType.
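By way of a sketch, an AR camera sensed-information instance carrying three feature points on top of the basic camera information might be written as follows. The prefixes, identifiers, and coordinate values are assumed, as is the X/Y/Z layout of mpegvct:Float3DVectorType.

<!-- Illustrative sketch only; prefixes, IDs, and values are assumed. -->
<SensedInfo xsi:type="siv:ARCameraType" focalLength="35.0">
  <Resource>http://example.com/captured/frame_0002.jpg</Resource>
  <!-- Feature points extracted at object boundaries, given as 3D positions. -->
  <Feature featureID="f1"><Position><X>0.12</X><Y>0.40</Y><Z>1.85</Z></Position></Feature>
  <Feature featureID="f2"><Position><X>0.31</X><Y>0.42</Y><Z>1.90</Z></Position></Feature>
  <Feature featureID="f3"><Position><X>0.25</X><Y>0.11</Y><Z>1.88</Z></Position></Feature>
  <!-- CameraPosition (siv:PositionSensorType) is optional and omitted here. -->
</SensedInfo>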

Table 9 shows XML syntax for an augmented reality camera characteristic type according to one embodiment.

<!-- ################################################## -->
<!-- AR Camera capability type                          -->
<!-- ################################################## -->
<complexType name="ARCameraCapabilityType">
  <complexContent>
    <extension base="siv:CameraSensorCapabilityType">
      <sequence>
        <element name="MaxFeaturePoint" type="nonNegativeInteger" minOccurs="0"/>
        <element name="CameraPositionRange" type="scdv:RangeType" minOccurs="0"/>
      </sequence>
    </extension>
  </complexContent>
</complexType>

Table 10 shows semantics for the augmented reality camera characteristic type according to one embodiment.

ARCameraCapabilityType: Tool for describing an AR camera capability.
MaxFeaturePoint: Describes the maximum number of feature points that the camera can detect.
CameraPositionRange: Describes the range that the position sensor can perceive in terms of RangeType in its global coordinate system. NOTE: The minValue and the maxValue in the SensorCapabilityBaseType are not used for this sensor.
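For example (values assumed), an AR camera able to track at most 100 feature points could advertise its capability roughly as follows; the optional elements inherited from Table 5 and the internal layout of scdv:RangeType are omitted here.

<!-- Illustrative sketch only; element naming, prefixes, and values are assumed. -->
<SensorCapability xsi:type="ARCameraCapabilityType">
  <!-- Optional elements inherited from CameraSensorCapabilityType (Table 5) may precede these. -->
  <MaxFeaturePoint>100</MaxFeaturePoint>
  <!-- CameraPositionRange (scdv:RangeType) is omitted; its internal layout is not reproduced in this document. -->
</SensorCapability>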

Table 11 shows XML syntax for a scene descriptor type according to an embodiment.

<!-- ############################################################ -->
<!-- Scene Descriptor Type                                        -->
<!-- ############################################################ -->
<complexType name="SceneDescriptorType">
  <sequence>
    <element name="image" type="anyURI"/>
  </sequence>
  <complexType name="plan">
    <sequence>
      <element name="ID" type="int32"/>
      <element name="X" type="float"/>
      <element name="Y" type="float"/>
      <element name="Z" type="float"/>
      <element name="Scalar" type="float"/>
    </sequence>
  </complexType>
  <complexType name="feature">
    <sequence>
      <element name="ID" type="int32"/>
      <element name="X" type="float"/>
      <element name="Y" type="float"/>
      <element name="Z" type="float"/>
    </sequence>
  </complexType>
</complexType>

Here, the image element included in the scene descriptor type may include a plurality of pixels. Each of the plurality of pixels may indicate an ID of a plan or an ID of a feature.

Here, the plan consists of X_plan, Y_plan, Z_plan, and Scalar. Referring to Equation 1, the scene descriptor may represent a face (plane) using a plane equation including X_plan, Y_plan, and Z_plan.

[Equation 1]
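Equation 1 appears only as an image in the source document. Assuming the conventional reading of (X_plan, Y_plan, Z_plan) as a plane normal and Scalar as an offset, the face equation would take roughly the form X_plan · x + Y_plan · y + Z_plan · z + Scalar = 0; this is a reconstruction, not the verbatim formula of the patent.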

In addition, the feature is a type corresponding to the feature element included in the sensing information, and may include X_feature, Y_feature, and Z_feature. In this case, the feature may represent a three-dimensional point (X_feature, Y_feature, Z_feature), and the scene descriptor may represent a face using three-dimensional points located at (X_feature, Y_feature, Z_feature).

FIG. 4 is a flowchart illustrating a virtual world processing method according to an exemplary embodiment.

Referring to FIG. 4, the virtual world processing method according to an exemplary embodiment receives, from an image sensor, sensing information about a captured image and a sensor characteristic, which is information on characteristics of the image sensor (410).

The virtual world processing method generates control information for controlling an object of the virtual world based on the sensing information and the sensor characteristic (420).

The virtual world processing method transmits control information to the virtual world (430).

At this time, the operation of the virtual world may be controlled based on the received control information. Since the details described with reference to FIGS. 1 through 3 may be applied to each step illustrated in FIG. 4, a detailed description thereof will be omitted.

The method according to the embodiments may be embodied in the form of program instructions that can be executed by various computer means and recorded in a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the purposes of the embodiments, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, flash memory, and the like. Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

Although the embodiments have been described with reference to the limited embodiments and drawings as described above, various modifications and variations are possible to those skilled in the art from the above description. For example, an appropriate result can be achieved even if the described techniques are performed in an order different from the described method, and/or components of the described systems, structures, devices, circuits, and the like are combined or coupled in a form different from the described method, or are replaced or substituted by other components or equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are within the scope of the claims that follow.

Claims (10)

A virtual world processing apparatus comprising:
a receiver configured to receive, from an image sensor, sensing information about a captured image and a sensor characteristic regarding characteristics of the image sensor;
a processor configured to generate control information for controlling an object of a virtual world based on the sensing information and the sensor characteristic; and
a transmitter configured to transmit the control information to the virtual world,
wherein the image sensor extracts at least one feature point associated with a boundary of a closest object or a boundary of a largest object among boundary surfaces included in the captured image, and transmits the sensing information including the at least one feature point, and
wherein the processor extracts the at least one feature point from the sensing information and generates the control information based on the at least one feature point.
The virtual world processing apparatus of claim 1, wherein the image sensor includes at least one of a photographing sensor and a video photographing sensor.
The virtual world processing apparatus of claim 1, wherein the sensing information comprises:
a resource element comprising a link to an image captured by the image sensor;
a camera location element comprising information related to a position of the image sensor measured using a Global Positioning System (GPS) sensor; and
a camera orientation element comprising information related to a pose of the image sensor.
The virtual world processing apparatus of claim 1, wherein the sensing information comprises:
a focal length attribute comprising information related to a focal length of the image sensor;
an aperture attribute comprising information related to an aperture of the image sensor;
a shutter speed attribute comprising information related to a shutter speed of the image sensor; and
a filter attribute comprising information related to filter signal processing of the image sensor.
The virtual world processing apparatus of claim 3, wherein the sensing information further comprises:
a feature element comprising a feature point associated with a boundary surface in the captured image; and
a camera position element comprising information related to the position of the image sensor measured using a position sensor distinct from the GPS sensor.
The virtual world processing apparatus of claim 1, wherein the sensor characteristic comprises:
a supported resolution list element comprising a list of resolutions supported by the image sensor;
a focal length range element comprising a range of focal lengths supported by the image sensor;
an aperture range element comprising a range of apertures supported by the image sensor; and
a shutter speed range element comprising a range of shutter speeds supported by the image sensor.
The virtual world processing apparatus of claim 6, wherein the sensor characteristic further comprises:
a maximum feature point element comprising a maximum number of feature points that can be detected by the image sensor; and
a camera position range element comprising a range of positions that can be measured by the position sensor.
The virtual world processing apparatus of claim 1, wherein the transmitter transmits the at least one feature point to the virtual world, and
the virtual world represents at least one plane included in the virtual world based on the at least one feature point.
A virtual world processing method comprising:
receiving, from an image sensor, sensing information about a captured image and a sensor characteristic regarding characteristics of the image sensor;
generating control information for controlling an object of a virtual world based on the sensing information and the sensor characteristic; and
transmitting the control information to the virtual world,
wherein the image sensor extracts at least one feature point associated with a boundary of a closest object or a boundary of a largest object among boundary surfaces included in the captured image, and transmits the sensing information including the at least one feature point, and
wherein the generating of the control information comprises extracting the at least one feature point from the sensing information and generating the control information based on the at least one feature point.
A computer-readable recording medium having recorded thereon a program for executing the method of claim 9.
KR1020130017404A 2012-07-12 2013-02-19 Method and appratus for processing virtual world KR102024863B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/934,605 US20140015931A1 (en) 2012-07-12 2013-07-03 Method and apparatus for processing virtual world

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261670825P 2012-07-12 2012-07-12
US61/670,825 2012-07-12

Publications (2)

Publication Number Publication Date
KR20140009913A KR20140009913A (en) 2014-01-23
KR102024863B1 (en) 2019-09-24

Family

ID=50142901

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130017404A KR102024863B1 (en) 2012-07-12 2013-02-19 Method and appratus for processing virtual world

Country Status (1)

Country Link
KR (1) KR102024863B1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001075726A (en) 1999-08-02 2001-03-23 Lucent Technol Inc Computer input device with six degrees of freedom for controlling movement of three-dimensional object
JP2007195091A (en) 2006-01-23 2007-08-02 Sharp Corp Synthetic image generating system
US20090221374A1 (en) * 2007-11-28 2009-09-03 Ailive Inc. Method and system for controlling movements of objects in a videogame
WO2012032996A1 (en) 2010-09-09 2012-03-15 ソニー株式会社 Information processing device, method of processing information, and program
JP2012094100A (en) * 2010-06-02 2012-05-17 Nintendo Co Ltd Image display system, image display device and image display method
US20120242866A1 (en) 2011-03-22 2012-09-27 Kyocera Corporation Device, control method, and storage medium storing program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6778171B1 (en) * 2000-04-05 2004-08-17 Eagle New Media Investments, Llc Real world/virtual world correlation system using 3D graphics pipeline
KR100514308B1 (en) * 2003-12-05 2005-09-13 한국전자통신연구원 Virtual HDR Camera for creating HDRI for virtual environment
KR100918392B1 (en) * 2006-12-05 2009-09-24 한국전자통신연구원 Personal-oriented multimedia studio platform for 3D contents authoring
US20090300144A1 (en) * 2008-06-03 2009-12-03 Sony Computer Entertainment Inc. Hint-based streaming of auxiliary content assets for an interactive environment


Also Published As

Publication number Publication date
KR20140009913A (en) 2014-01-23

Similar Documents

Publication Publication Date Title
US8388146B2 (en) Anamorphic projection device
CN105279795B (en) Augmented reality system based on 3D marker
US9392248B2 (en) Dynamic POV composite 3D video system
KR101227237B1 (en) Augmented reality system and method for realizing interaction between virtual object using the plural marker
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
US11189057B2 (en) Provision of virtual reality content
US20190045125A1 (en) Virtual reality video processing
KR102197615B1 (en) Method of providing augmented reality service and server for the providing augmented reality service
CN111131735B (en) Video recording method, video playing method, video recording device, video playing device and computer storage medium
WO2020110323A1 (en) Video synthesis device, video synthesis method and recording medium
US20210038975A1 (en) Calibration to be used in an augmented reality method and system
WO2020110322A1 (en) Video synthesis device, video synthesis method and recording medium
US20130040737A1 (en) Input device, system and method
US20140015931A1 (en) Method and apparatus for processing virtual world
US11430178B2 (en) Three-dimensional video processing
US20190295324A1 (en) Optimized content sharing interaction using a mixed reality environment
US9942540B2 (en) Method and a device for creating images
WO2019034804A2 (en) Three-dimensional video processing
KR102024863B1 (en) Method and appratus for processing virtual world
KR101915578B1 (en) System for picking an object base on view-direction and method thereof
KR102471792B1 (en) Cloud server for rendering ar contents and operaing method of thereof
EP4036858A1 (en) Volumetric imaging
US20210037230A1 (en) Multiview interactive digital media representation inventory verification
KR101860215B1 (en) Content Display System and Method based on Projector Position
KR102635477B1 (en) Device for providing performance content based on augmented reality and method therefor

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant