WO2021015582A1 - Electronic device for providing an avatar, and operation method thereof

Electronic device for providing an avatar, and operation method thereof

Info

Publication number
WO2021015582A1
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
modeling data
category
electronic device
information
Prior art date
Application number
PCT/KR2020/009762
Other languages
English (en)
Korean (ko)
Inventor
강혜진
송재윤
안준호
최민석
한규희
박찬민
배창섭
백승협
서상균
유명한
정인호
Original Assignee
삼성전자 주식회사
Priority date
Filing date
Publication date
Application filed by 삼성전자 주식회사
Publication of WO2021015582A1



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion

Definitions

  • Various embodiments of the present disclosure relate to an electronic device that provides an avatar and a method of operating the same.
  • Electronic devices may provide various functions.
  • For example, the electronic device may provide a short-range wireless communication function, a mobile communication (3G (generation), 4G, or 5G) function, a music playback function, a video playback function, a shooting function, a navigation function, or an object recognition function.
  • Electronic devices provide various functions through object recognition. For example, the electronic device may recognize an object in an image photographed through a camera and provide an avatar corresponding to the recognized object, enabling interaction with a user. In this case, the electronic device may output the avatar so that it overlaps at least part of the object included in the image, or so that it does not overlap with the object.
  • For example, the electronic device may output not only an avatar of a first category having a human shape, but also an avatar of a second category having the shape of a character created by characterizing animals, plants, or inanimate objects.
  • Because avatars of the first category have the shape of a person, their motion can be expressed freely, for example by tracking and following the motion of the person or by showing movements of a person prepared in advance. Avatars of the second category, however, have the unique characteristics of a specific character, so motion expression may not be free if only motions fitting a limited number of human shapes that can be prepared in advance are defined.
  • Accordingly, various embodiments of the present invention provide an electronic device, and a method of operating the same, that separately store modeling data set according to the characteristics of each avatar category and allow the motion of an avatar to be naturally expressed by using the modeling data corresponding to the category of the avatar being output.
  • An electronic device according to various embodiments includes a memory storing first modeling data related to an avatar of a first category and second modeling data related to an avatar of a second category, a display, and a processor, and the processor is configured to determine a category for at least one avatar to be output through the display, obtain modeling data corresponding to the determined category from among the first modeling data and the second modeling data, and control the output of the at least one avatar by using the obtained modeling data, wherein the first modeling data and the second modeling data may be stored separately from each other.
  • A method of operating an electronic device according to various embodiments includes storing first modeling data related to an avatar of a first category and second modeling data related to an avatar of a second category so as to be distinguished from each other, determining a category for at least one avatar to be output through the electronic device, acquiring modeling data corresponding to the determined category from among the first modeling data and the second modeling data, and controlling the output of the at least one avatar by using the acquired modeling data.
  • According to various embodiments, the electronic device manages modeling data including motion and lighting information of the avatar for each avatar category and, when an avatar is output, uses the modeling data corresponding to the category of that avatar, so that the motion of the output avatar can be expressed naturally.
  • the electronic device stores modeling data including a partial model for an avatar related to a specific character, thereby enabling natural synthesis between the avatar and other images.
  • FIG. 1 is a block diagram of an electronic device in a network environment, according to various embodiments.
  • FIG. 2 illustrates at least some components included in an electronic device according to various embodiments of the present disclosure.
  • FIG. 3A is a diagram illustrating modeling data of a first structure related to an avatar of a first category according to various embodiments of the present disclosure.
  • FIG. 3B is a diagram illustrating modeling data of a second structure related to an avatar of a second category according to various embodiments of the present disclosure.
  • FIG. 4 is a flowchart for controlling an output of an avatar in an electronic device according to various embodiments of the present disclosure.
  • FIG. 5A is a flowchart for determining a category of an avatar in an electronic device according to various embodiments of the present disclosure.
  • FIG. 5B is a diagram illustrating an operation of selecting at least one avatar in an electronic device according to various embodiments of the present disclosure.
  • FIG. 6 is a flowchart for controlling output of an avatar in an electronic device according to various embodiments of the present disclosure.
  • FIG. 7 is a flowchart for controlling output of an avatar in an electronic device according to various embodiments of the present disclosure.
  • FIGS. 8A and 8B are diagrams for describing an avatar output from an electronic device according to various embodiments of the present disclosure.
  • FIG. 9 is a flowchart for controlling output of an avatar in an electronic device according to various embodiments of the present disclosure.
  • FIG. 10 is a diagram for describing an operation of changing an output mode of an avatar in an electronic device according to various embodiments of the present disclosure.
  • FIG. 11 is a flowchart for extracting attribute data corresponding to an output mode in an electronic device according to various embodiments of the present disclosure.
  • FIG. 1 is a block diagram of an electronic device 101 in a network environment 100 according to various embodiments.
  • Referring to FIG. 1, the electronic device 101 may communicate with the electronic device 102 through a first network 198 (for example, a short-range wireless communication network), or may communicate with the electronic device 104 or the server 108 through a second network 199 (for example, a long-distance wireless communication network).
  • the electronic device 101 may communicate with the electronic device 104 through the server 108.
  • According to an embodiment, the electronic device 101 may include a processor 120, a memory 130, an input device 150, a sound output device 155, a display device 160, an audio module 170, a sensor module 176, an interface 177, a haptic module 179, a camera module 180, a power management module 188, a battery 189, a communication module 190, a subscriber identification module 196, or an antenna module 197.
  • at least one of these components may be omitted or one or more other components may be added to the electronic device 101.
  • some of these components may be implemented as one integrated circuit.
  • For example, the sensor module 176 (e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be embedded in the display device 160 (e.g., a display).
  • The processor 120 may, for example, execute software (e.g., a program 140) to control at least one other component (e.g., a hardware or software component) of the electronic device 101 connected to the processor 120, and may perform various data processing or operations. According to an embodiment, as at least a part of the data processing or operations, the processor 120 may load commands or data received from another component (e.g., the sensor module 176 or the communication module 190) into the volatile memory 132, process the commands or data stored in the volatile memory 132, and store the result data in the nonvolatile memory 134.
  • According to an embodiment, the processor 120 may include a main processor 121 (e.g., a central processing unit or an application processor) and an auxiliary processor 123 (e.g., a graphics processing unit, an image signal processor, a sensor hub processor, or a communication processor) that can operate independently of or together with the main processor. Additionally or alternatively, the auxiliary processor 123 may be set to use lower power than the main processor 121 or to be specialized for a designated function. The auxiliary processor 123 may be implemented separately from the main processor 121 or as a part thereof.
  • The auxiliary processor 123 may, for example, control at least some of the functions or states related to at least one of the components of the electronic device 101 (for example, the display device 160, the sensor module 176, or the communication module 190) on behalf of the main processor 121 while the main processor 121 is in an inactive (e.g., sleep) state, or together with the main processor 121 while the main processor 121 is in an active (e.g., application execution) state. According to an embodiment, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as part of another functionally related component (e.g., the camera module 180 or the communication module 190).
  • the memory 130 may store various data used by at least one component of the electronic device 101 (eg, the processor 120 or the sensor module 176).
  • the data may include, for example, software (eg, the program 140) and input data or output data for commands related thereto.
  • the memory 130 may include a volatile memory 132 or a nonvolatile memory 134.
  • the program 140 may be stored as software in the memory 130, and may include, for example, an operating system 142, middleware 144, or an application 146.
  • the input device 150 may receive a command or data to be used for a component of the electronic device 101 (eg, the processor 120) from an outside (eg, a user) of the electronic device 101.
  • the input device 150 may include, for example, a microphone, a mouse, a keyboard, or a digital pen (eg, a stylus pen).
  • The sound output device 155 may output a sound signal to the outside of the electronic device 101.
  • the sound output device 155 may include, for example, a speaker or a receiver.
  • the speaker can be used for general purposes such as multimedia playback or recording playback, and the receiver can be used to receive incoming calls.
  • the receiver may be implemented separately from or as a part of the speaker.
  • the display device 160 may visually provide information to the outside of the electronic device 101 (eg, a user).
  • the display device 160 may include, for example, a display, a hologram device, or a projector and a control circuit for controlling the device.
  • The display device 160 may include a touch circuitry set to sense a touch, or a sensor circuit (e.g., a pressure sensor) set to measure the strength of a force generated by the touch.
  • The audio module 170 may convert sound into an electric signal or, conversely, convert an electric signal into sound. According to an embodiment, the audio module 170 may obtain sound through the input device 150, or may output sound through the sound output device 155 or an external electronic device (for example, the electronic device 102, such as a speaker or headphones) directly or wirelessly connected to the electronic device 101.
  • The sensor module 176 may detect an operating state (e.g., power or temperature) of the electronic device 101 or an external environmental state (e.g., a user state), and may generate an electrical signal or data value corresponding to the detected state.
  • According to an embodiment, the sensor module 176 may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
  • the interface 177 may support one or more designated protocols that may be used for the electronic device 101 to directly or wirelessly connect with an external electronic device (eg, the electronic device 102 ).
  • the interface 177 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, an SD card interface, or an audio interface.
  • the connection terminal 178 may include a connector through which the electronic device 101 can be physically connected to an external electronic device (eg, the electronic device 102 ).
  • the connection terminal 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (eg, a headphone connector).
  • the haptic module 179 may convert an electrical signal into a mechanical stimulus (eg, vibration or movement) or an electrical stimulus that a user can perceive through a tactile or motor sense.
  • the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electrical stimulation device.
  • the camera module 180 may capture a still image and a video.
  • the camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes.
  • the power management module 188 may manage power supplied to the electronic device 101.
  • the power management module 188 may be implemented as, for example, at least a part of a power management integrated circuit (PMIC).
  • the battery 189 may supply power to at least one component of the electronic device 101.
  • the battery 189 may include, for example, a non-rechargeable primary cell, a rechargeable secondary cell, or a fuel cell.
  • The communication module 190 may support establishment of a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 101 and an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108), and communication through the established communication channel.
  • the communication module 190 operates independently of the processor 120 (eg, an application processor), and may include one or more communication processors that support direct (eg, wired) communication or wireless communication.
  • According to an embodiment, the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (e.g., a local area network (LAN) communication module or a power line communication module).
  • A corresponding communication module may communicate with external electronic devices through the first network 198 (for example, a short-range communication network such as Bluetooth, WiFi direct, or infrared data association (IrDA)) or the second network 199 (for example, a long-distance communication network such as a cellular network, the Internet, or a computer network (e.g., a LAN or WAN)).
  • The wireless communication module 192 may identify and authenticate the electronic device 101 within a communication network such as the first network 198 or the second network 199, using subscriber information (e.g., an International Mobile Subscriber Identifier (IMSI)) stored in the subscriber identification module 196.
  • the antenna module 197 may transmit a signal or power to the outside (eg, an external electronic device) or receive from the outside.
  • the antenna module may include one antenna including a conductor formed on a substrate (eg, a PCB) or a radiator formed of a conductive pattern.
  • According to an embodiment, the antenna module 197 may include a plurality of antennas. In this case, at least one antenna suitable for a communication method used in a communication network such as the first network 198 or the second network 199 may be selected from the plurality of antennas, for example, by the communication module 190.
  • the signal or power may be transmitted or received between the communication module 190 and an external electronic device through the at least one selected antenna.
  • According to some embodiments, components other than the radiator (e.g., an RFIC) may be additionally formed as a part of the antenna module 197.
  • At least some of the components may be connected to each other through a communication method between peripheral devices (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)) and may exchange signals (e.g., commands or data) with each other.
  • commands or data may be transmitted or received between the electronic device 101 and the external electronic device 104 through the server 108 connected to the second network 199.
  • Each of the electronic devices 102 and 104 may be a device of the same or different type as the electronic device 101.
  • all or part of the operations executed by the electronic device 101 may be executed by one or more of the external electronic devices 102, 104, or 108.
  • For example, when the electronic device 101 needs to perform a function or service automatically or in response to a request from a user or another device, the electronic device 101 may request one or more external electronic devices to perform at least part of the function or service instead of, or in addition to, executing the function or service by itself.
  • One or more external electronic devices receiving the request may execute at least a part of the requested function or service, or an additional function or service related to the request, and transmit the execution result to the electronic device 101.
  • the electronic device 101 may process the result as it is or additionally and provide it as at least a part of a response to the request.
  • cloud computing, distributed computing, or client-server computing technology may be used.
  • FIG. 2 illustrates at least some components included in the electronic device 101 according to various embodiments.
  • the electronic device 101 may include a sensor module 176, a camera module 180, a display device 160, a processor 120, and a memory 130.
  • According to various embodiments, at least one of the above-described components of the electronic device 101 may be omitted, or another component may be added.
  • According to various embodiments, the modeling data acquisition unit 122 and the rendering unit 124 may be software executed by the processor 120 or hardware included in the processor 120.
  • According to various embodiments, the avatar databases (DBs) 131 and 133 may be stored in the memory 130.
  • the processor 120 may obtain at least one avatar that can be output through the display device 160.
  • the avatar may refer to an object expressed in a virtual space corresponding to an external object (eg, a person) included in one or more images acquired using the camera module 180.
  • the avatar may include an avatar of a first category (or type) having a person shape.
  • the processor 120 may acquire an avatar of the first category in which physical characteristics (eg, face, body, etc.) extracted from the user image acquired through the camera module 180 are reflected.
  • the avatar may include an avatar of a second category (or type) having a specific character shape.
  • the avatar of the second category may or may not have a human shape.
  • For example, the processor 120 may obtain an avatar of the second category generated by a third party through an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108).
  • The processor 120 may store the obtained avatar inside the electronic device 101 (for example, in the memory 130) or outside the electronic device 101 (for example, in the electronic device 102, the electronic device 104, or the server 108).
  • the processor 120 may store modeling data related to the stored avatar.
  • the modeling data may include avatar information, background information, category information, and configuration information, and may be at least a part of the illustrated avatar DB.
  • the modeling data may be related to a rendering attribute for each avatar, and may include a set of one or more files related to each avatar.
  • the modeling data may have different structures according to the category of the stored avatar.
  • For example, as described later with reference to FIGS. 3A and 3B, the modeling data may include modeling data of a first structure related to an avatar of the first category (e.g., the avatar DB 131) and modeling data of a second structure related to an avatar of the second category (e.g., the avatar DB 133).
  • the modeling data of the first structure may be a rendering attribute set according to the characteristics of the avatar of the first category
  • the modeling data of the second structure may be a rendering attribute set according to the characteristics of the avatar of the second category.
  • the processor 120 may output at least one avatar among stored avatars through the display device 160.
  • the processor 120 may process at least one avatar to be displayed together with an external object (eg, a user's body) recognized through the camera module 180.
  • the processor 120 may process at least one avatar to be output on at least a part of the recognized external object (eg, body, face, hand, etc.).
  • the present invention is not limited thereto.
  • at least one avatar may be distinguished (or separated) from an external object recognized through the camera module 180 and then output.
  • For example, the processor 120 may output a predetermined list (e.g., a thumbnail list) representing the stored avatars, receive an input (e.g., a touch input, a voice input, etc.) for the output list, and thereby determine at least one avatar to be output through the display device 160.
  • According to various embodiments, in response to determining at least one avatar to be output through the display device 160, or in response to at least one avatar being output through the display device 160, the processor 120 and/or the modeling data acquisition unit 122 may obtain modeling data corresponding to the determined at least one avatar from among the stored modeling data.
  • the obtained modeling data may be related to a rendering attribute set (or optimized) according to the determined characteristics of the avatar.
  • For example, the modeling data may include at least one of the avatar's actions (e.g., moving left, moving right, moving forward, moving backward, or rotating the avatar), ambient light for the avatar, a display method of a background image (e.g., a 2D background image or a 3D background image), or the sensitivity of the avatar's movement to sensing information.
  • the processor 120 and/or the modeling data acquisition unit 122 may acquire modeling data corresponding to the determined avatar or category of the avatar from among stored modeling data.
  • the processor 120 and/or the rendering unit 124 may render the avatar based on the obtained modeling data.
  • the processor 120 and/or the rendering unit 124 may render the avatar based on attribute information (eg, setting information) of the acquired modeling data and output it through the display device 160.
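  • As an illustration only, the behavior described above can be sketched in Kotlin as a lookup of category-specific modeling data before rendering. All type and member names below (AvatarCategory, ModelingData, ModelingDataAcquisitionUnit, etc.) are hypothetical and are not defined by this disclosure; the sketch only shows the idea of keeping the first-structure and second-structure data separate and selecting one of them by category.

```kotlin
// Hypothetical sketch: modeling data stored separately per avatar category and
// selected by the determined category before rendering.
enum class AvatarCategory { FIRST, SECOND }   // human-shaped vs. character-shaped

data class ModelingData(
    val avatarInfo: ByteArray,          // e.g., 3D_avatar_0 (model, color, material, texture)
    val background: ByteArray,          // e.g., overlay_frame_0
    val category: AvatarCategory,       // e.g., sticker category information
    val settings: Map<String, Any>      // motion, lighting, and motion-rule attributes
)

class ModelingDataAcquisitionUnit(
    private val firstStructureDb: Map<String, ModelingData>,   // cf. avatar DB 131
    private val secondStructureDb: Map<String, ModelingData>   // cf. avatar DB 133
) {
    // Returns the modeling data kept separately for the determined category.
    fun obtain(avatarId: String, category: AvatarCategory): ModelingData? =
        when (category) {
            AvatarCategory.FIRST -> firstStructureDb[avatarId]
            AvatarCategory.SECOND -> secondStructureDb[avatarId]
        }
}
```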
  • the processor 120 and/or the rendering unit 124 may control the output of the avatar output through the display device 160 based on sensor information.
  • For example, based on information input through the camera module 180 and/or the sensor module 176, the processor 120 and/or the rendering unit 124 may determine sensing information on an external object (for example, a user of the electronic device 101).
  • The processor 120 and/or the rendering unit 124 may control the avatar's motion (e.g., facial expression, posture, and/or movement) to correspond to the external object, based on the attribute information of the modeling data corresponding to the sensing information.
  • the sensing information may include at least one of a face, an expression, an action, a gesture, or a location of an external object.
  • the processor 120 and/or the rendering unit 124 may analyze feature points of the external object and determine facial expressions or gestures of the external object based thereon.
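  • As a purely illustrative sketch of the tracking described above, the following Kotlin function moves hypothetical avatar node positions toward changed feature points, damped by a motion-sensitivity value of the kind stored in the modeling data. The names and the damping formula are assumptions, not part of the disclosure.

```kotlin
// Hypothetical sketch: avatar nodes follow feature-point changes of an external
// object, scaled by the motion sensitivity taken from the modeling data.
data class FeaturePoint(val x: Float, val y: Float)

fun moveAvatarNodes(
    previous: List<FeaturePoint>,   // node positions before the change
    current: List<FeaturePoint>,    // feature points extracted from the latest frame
    sensitivity: Float              // e.g., 0.0 (ignore) .. 1.0 (follow exactly)
): List<FeaturePoint> =
    previous.zip(current).map { (p, c) ->
        FeaturePoint(p.x + (c.x - p.x) * sensitivity, p.y + (c.y - p.y) * sensitivity)
    }
```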
  • FIG. 3A is a diagram 300 illustrating modeling data 131 of a first structure related to an avatar of a first category according to various embodiments
  • FIG. 3B is a diagram illustrating modeling data 133 of a second structure related to an avatar of a second category according to various embodiments.
  • modeling data 131 of a first structure may include avatar information 310 and setting information 320.
  • the avatar information 310 defines each generated avatar, and may be expressed as avatar generation information for each avatar.
  • such avatar information 310 may be generated in response to avatar generation.
  • the avatar information 310 may include 3D_avatar_0 (311-1), overlay_frame_0 (311-3), and sticker (311-5).
  • 3D_avatar_0 311-1 may correspond to avatar modeling information
  • overlay_frame_0 311-3 may correspond to background information
  • sticker 311-5 may correspond to avatar category information.
  • According to an embodiment, 3D_avatar_0 311-1 may include a plurality of information elements. These information elements may include at least one of information for determining the avatar model (e.g., 3D modeling information of the avatar), information for determining the color to be applied to the avatar model, information for determining the material of the avatar model, information for determining the texture of the avatar, or information for determining an animation effect for the avatar.
  • the overlay_frame_0 311-3 may include a background image (eg, a 2D background image and a 3D background image).
  • the background information may further include information on a method for mapping a 2D background image to a 3D figure.
  • the sticker 311-5 may include information indicating a category of an avatar.
  • the avatar category may include a first category representing an avatar having a human shape and a second category representing an avatar having a specific character shape.
  • the sticker 305 may include meta information for each avatar.
  • the meta information may describe at least one output mode that each avatar can support.
  • The at least one output mode that the avatar can support may include at least one of a motion-based output mode, a tracking-based output mode, or a preloaded output mode.
  • the motion-based output mode may be a mode in which a gesture corresponding to a gesture of an external object is expressed through an avatar corresponding to an external object or an avatar corresponding to a part of an external object
  • The tracking-based output mode may be a mode in which an avatar corresponding to an external object, or at least a part of an avatar corresponding to a part of an external object, adaptively moves according to a change in feature points extracted from the external object area.
  • the preloaded output mode may be a mode in which an avatar output to a display device is expressed in a predefined manner regardless of a gesture of an external object.
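  • The sticker-style meta information can be thought of as a small record listing the supported output modes. The Kotlin sketch below assumes a simple "category;MODE1,MODE2" text encoding purely for illustration; the actual encoding is not specified here.

```kotlin
// Hypothetical sketch: reading supported output modes from sticker meta information.
enum class OutputMode { MOTION_BASED, TRACKING_BASED, PRELOADED }

data class StickerMeta(val category: String, val supportedModes: Set<OutputMode>)

fun parseSticker(raw: String): StickerMeta {
    // Assumed encoding: "category;MODE1,MODE2" (illustrative only).
    val parts = raw.split(";", limit = 2)
    val category = parts[0]
    val supported = parts.getOrElse(1) { "" }
        .split(",")
        .filter { it.isNotBlank() }
        .map { OutputMode.valueOf(it.trim()) }
        .toSet()
    return StickerMeta(category, supported)
}
```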
  • the avatar information 310 defines each of the generated avatars.
  • Avatar information 310 corresponding to each avatar (e.g., each first-category avatar) stored (or generated) in the electronic device 101 may be stored.
  • For example, as illustrated, the modeling data 131 of the first structure may store two pieces of avatar information (e.g., first avatar information 311 related to a first avatar and second avatar information 313 related to a second avatar), and each piece of avatar information may include 3D_avatar_0, overlay_frame_0, and sticker.
  • The setting information 320 may include at least one motion file 321 and motion rule 325 for expressing the movement of the avatar, and at least one lighting file 323 including setting values for rendering the avatar or the background.
  • For example, the motion file 321 may include movement for each node of the avatar (e.g., nose, ears, eyes, head, arms, legs, etc.), actions of the avatar (e.g., moving left, moving right, moving forward, moving backward, rotating the avatar, running, etc.), the facial expression of the avatar in a specific action, or the motion sensitivity of the avatar.
  • the motion file 321 may reflect a person's physical characteristics (eg, movements, skeletons, etc.) so that an avatar movement similar to that of a person can be expressed.
  • the lighting file 323 may include information (or set value) for determining lighting for an avatar.
  • For example, the lighting file 323 may include information such as lighting information corresponding to an action of the avatar (or an external object).
  • the motion rule 325 may be data describing in detail a rule for applying motion information to an avatar.
  • For example, the rule may include a condition, such as a movement of an external object, under which the avatar's motion (e.g., facial expression, posture, and/or movement) is adaptively changed.
  • According to various embodiments, because the avatars of the first category have the shape of a person, the various generated first-category avatars may use the same rendering attributes. Accordingly, the above-described setting information may be stored as one set so that the plurality of first-category avatars can use it in common. In other words, each of the first-category avatars stored (or generated) in the electronic device 101 may be output through the display device 160 based on the above-described setting information.
  • the modeling data 133 of the second structure may include at least one avatar information 350.
  • the number of avatar information included in the modeling data 133 of the second structure may correspond to the number of avatars (eg, avatars of the second category) stored (or generated) in the electronic device 101.
  • For example, as illustrated, the modeling data 133 of the second structure may store two pieces of avatar information (e.g., first avatar information 351 related to a first avatar and second avatar information 353 related to a second avatar).
  • Each piece of avatar information 351 and 353 may include 3D_avatar_0 (351-1), overlay_frame_0 (351-3), sticker (351-5), a motion file (351-7), and a motion rule (351-9).
  • the configurations of the modeling data 133 of the second structure may be separated from each other, but this is only exemplary, and the exemplary embodiment of the present invention is not limited thereto.
  • at least one of the illustrated configurations may be included in another configuration.
  • Here, 3D_avatar_0 (351-1) corresponds to avatar modeling information, overlay_frame_0 (351-3) corresponds to background information, and sticker (351-5) corresponds to category information of the avatar.
  • the motion rule 351-9 may correspond to a rule for applying motion information to the avatar.
  • According to an embodiment, 3D_avatar_0 351-1 may include a plurality of information elements. Similar to the modeling data 131 of the first structure of FIG. 3A, these information elements may include at least one of information for determining the avatar model (e.g., 3D modeling information of the avatar), information for determining the color to be applied to the avatar model, information for determining the material of the avatar model, information for determining the texture of the avatar, or information for determining an animation effect for the avatar. Additionally, unlike the modeling data 131 of the first structure, 3D_avatar_0 351-1 may further include a partial model (e.g., head model data 351-1b) configured as a part of the avatar.
  • the partial modeling data may include all data on the partial model, or may include only an index value for a region corresponding to the partial model in the entire model.
  • Because the avatar of the second category has a specific character shape rather than a human shape, it may be difficult to naturally separate only a part (e.g., the head) from the entire avatar model. This problem can be solved by using the partial modeling data included in the modeling data 133 of the second structure.
  • According to an embodiment, unlike the modeling data of the first structure, 3D_avatar_0 (351-1) may additionally include lighting information (351-1a) in which the unique characteristics of the individual avatar are reflected.
  • In other words, the modeling data 131 of the first structure has a structure in which a plurality of avatars are rendered using common lighting information, while the modeling data 133 of the second structure has a structure in which each avatar is rendered using its own unique lighting information. This can solve the problem that, as described above, the avatar of the second category has a specific character shape rather than a human shape, and thus it is difficult to express the characteristics of the avatar with the same lighting setting.
  • According to an embodiment, the motion file 351-7 may include movement for each node of each generated avatar (e.g., nose, ears, eyes, head, arms, legs, etc.), actions of the avatar (e.g., moving left, moving right, moving forward, moving backward, rotating the avatar, running, etc.), the facial expression of the avatar in a specific action, or the motion sensitivity of the avatar.
  • These motion files 351-7 reflect the unique characteristics of each avatar, and each avatar may have its own motion file; this can be distinguished from the modeling data 131 of the first structure, in which the avatars use a common motion file. For this reason, when an avatar of the second category is rendered, the unique movement of the corresponding avatar can be expressed based on the motion file related to the rendered avatar.
  • According to an embodiment, overlay_frame_0 (351-3) and the motion rule (351-9) may be the same as or similar to those of the modeling data 131 of the first structure described above.
  • However, unlike the modeling data 131 of the first structure, the modeling data 133 of the second structure may have a structure in which each avatar has its own motion rule 351-9.
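  • To summarize the two structures in code form, the following Kotlin data classes are an illustrative sketch: first-structure avatars share one set of setting files, while each second-structure avatar carries its own motion file, motion rule, per-avatar lighting, and partial model. Field names and types are hypothetical.

```kotlin
// Hypothetical sketch of the first-structure DB (shared settings) and the
// second-structure DB (per-avatar settings), mirroring FIGS. 3A and 3B.
data class FirstCategoryAvatar(
    val avatar3d: ByteArray,       // 3D_avatar_0: model, color, material, texture, animation
    val overlayFrame: ByteArray,   // overlay_frame_0: background image
    val sticker: String            // category / output-mode meta information
)

data class SharedSettings(         // used in common by all first-category avatars
    val motionFile: ByteArray,     // cf. motion file 321
    val lightingFile: ByteArray,   // cf. lighting file 323
    val motionRule: String         // cf. motion rule 325
)

data class SecondCategoryAvatar(
    val avatar3d: ByteArray,       // includes per-avatar lighting (cf. 351-1a) and partial model (cf. 351-1b)
    val overlayFrame: ByteArray,   // cf. overlay_frame_0 351-3
    val sticker: String,           // cf. sticker 351-5
    val motionFile: ByteArray,     // cf. motion file 351-7, unique to this avatar
    val motionRule: String         // cf. motion rule 351-9, unique to this avatar
)

data class FirstStructureDb(val avatars: List<FirstCategoryAvatar>, val settings: SharedSettings)
data class SecondStructureDb(val avatars: List<SecondCategoryAvatar>)
```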
  • An electronic device (e.g., the electronic device 101) according to various embodiments may include a memory (e.g., the memory 130) storing first modeling data related to an avatar of a first category and second modeling data related to an avatar of a second category, a display (e.g., the display device 160), and a processor (e.g., the processor 120), and the processor may be configured to determine a category for at least one avatar to be output through the display, obtain modeling data corresponding to the determined category from among the first modeling data and the second modeling data, and control the output of the at least one avatar by using the obtained modeling data.
  • the first modeling data and the second modeling data may be separated from each other and stored.
  • one of the avatars of the first category and the avatars of the second category may include an avatar having a human shape, and the other one may include an avatar having a character shape.
  • According to various embodiments, the first modeling data and the second modeling data may include at least one of avatar information, background information, category information, or setting information, and the setting information may include information related to at least one of the facial expression, posture, or motion of the avatar.
  • According to various embodiments, the electronic device may further include a camera module (for example, the camera module 180), and the processor may be configured to obtain, from the obtained modeling data, setting information corresponding to a movement of an object included in an image acquired through the camera module.
  • According to various embodiments, the first modeling data and the second modeling data may include category information describing a category of the avatar, and the processor may be configured to determine a category for the at least one avatar based on the category information.
  • According to various embodiments, the first modeling data and the second modeling data may include output mode information describing at least one output mode supported by the avatar, and the processor may be configured to determine at least one output mode supported by the avatar based on the output mode information and to output an identifier indicating the determined at least one output mode.
  • the processor may be configured to output the identifier while controlling the output of the avatar.
  • the processor may be configured to change an output mode for the avatar based on an input to the identifier.
  • According to various embodiments, when the output mode for the avatar is changed, the processor may be configured to obtain attribute information corresponding to the changed output mode from the acquired modeling data.
  • the processor may be configured to obtain a partial model of the avatar from the acquired modeling data when the mode is changed to a mode for outputting a portion of an avatar having a character shape.
  • each of the operations may be sequentially performed, but not necessarily sequentially. For example, the order of each operation may be changed, and at least two operations may be performed in parallel.
  • Referring to FIG. 4, in operation 410, the electronic device 101 (e.g., the processor 120 of FIG. 1) may determine a category of an avatar to be output through a display (e.g., the display device 160).
  • the processor 120 may determine whether an avatar to be output through the display is an avatar of a first category having a human shape or an avatar of a second category having a character shape.
  • the processor 120 may determine the category of the avatar based on at least a portion of modeling data corresponding to the avatar selected by the user, as described later through FIG. 5A.
  • In operation 420, the electronic device 101 (e.g., the processor 120 of FIG. 1) may obtain modeling data corresponding to the determined avatar category.
  • the modeling data may be a rendering attribute set according to the characteristics of the avatar
  • For example, in response to the determined avatar category, the processor 120 may select one of the modeling data of the first structure set according to the characteristics of the avatar of the first category or the modeling data of the second structure set according to the characteristics of the avatar of the second category.
  • In operation 430, the electronic device 101 (e.g., the processor 120 of FIG. 1) may control the output of the avatar by using at least part of the acquired modeling data.
  • For example, the processor 120 may determine at least one of a color, a material, a texture, lighting, or an animation effect to be applied to the avatar by using at least part of the obtained modeling data, and may apply it to the selected avatar.
  • the processor 120 may output an avatar so as to overlap (or synthesize) at least a part of an external object recognized through the camera module 180 or output an avatar so as not to overlap with an external object.
  • As another example, the processor 120 may obtain sensing information on an external object recognized through the camera module 180, and may control the avatar's motion (e.g., facial expression, posture, and/or movement) to correspond to the external object based on attribute information of the modeling data corresponding to the acquired sensing information.
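  • The three operations of FIG. 4 can be sketched as one small Kotlin function, reusing the hypothetical AvatarCategory, ModelingData, and ModelingDataAcquisitionUnit types from the earlier sketch; the operation numbers in the comments are only a mapping aid.

```kotlin
// Hypothetical sketch of the FIG. 4 flow: determine the category, obtain the
// category-specific modeling data, and control the avatar output with it.
fun outputAvatar(
    avatarId: String,
    determineCategory: (String) -> AvatarCategory,    // operation 410
    acquisitionUnit: ModelingDataAcquisitionUnit,     // operation 420
    render: (ModelingData) -> Unit                    // operation 430
) {
    val category = determineCategory(avatarId)
    val modelingData = acquisitionUnit.obtain(avatarId, category) ?: return
    render(modelingData)
}
```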
  • FIG. 5A is a flowchart 500 for determining a category of an avatar in the electronic device 101 according to various embodiments of the present disclosure.
  • FIG. 5B is a diagram 550 illustrating an operation of selecting at least one avatar by the electronic device 101 according to various embodiments of the present disclosure.
  • the operations of FIG. 5A described below may represent various embodiments of the operation 410 of FIG. 4.
  • each of the operations may be sequentially performed, but not necessarily sequentially. For example, the order of each operation may be changed, and at least two operations may be performed in parallel.
  • Referring to FIG. 5A, in operation 510, the electronic device 101 (e.g., the processor 120 of FIG. 1) may output an avatar list through a display (e.g., the display device 160).
  • the avatar list may include information in a predetermined form (eg, thumbnail) representing each avatar that can be output through the display device.
  • For example, as shown in FIG. 5B, the processor 120 may output (552) an avatar list including at least one of at least one avatar 554 of the first category having a human shape or at least one avatar 556 of the second category having a character shape.
  • the electronic device 101 may receive an input for selecting at least one avatar from an avatar list.
  • the processor 120 may receive a direct touch or an indirect (or proximity) touch input for selecting at least one of avatars included in the avatar list.
  • this is only exemplary, and the present invention is not limited thereto.
  • at least one avatar may be selected by voice input, gesture input, button input, or the like.
  • the electronic device 101 may extract category information corresponding to the selected avatar in operation 530.
  • the category information may be a part of modeling data (eg, the avatar DB 131 or the avatar DB 133) stored in the memory 130.
  • the processor 120 may extract sticker information (sticker 315 or sticker 351-5) corresponding to category information of the avatar included in the modeling data of the selected avatar.
  • the electronic device 101 may determine a category of the avatar based on the extracted category information. For example, the processor 120 may determine whether the avatar selected by the user is an avatar of a first category or an avatar of a second category, based on the category information.
  • the electronic device 101 may perform an operation of obtaining modeling data corresponding to the determined category.
  • the processor 120 may perform an operation related to operation 420 of FIG. 4.
  • FIG. 6 is a flowchart 600 for controlling the output of an avatar in the electronic device 101 according to various embodiments of the present disclosure.
  • The operations of FIG. 6 described below may represent various embodiments of operation 430 of FIG. 4.
  • each of the operations may be sequentially performed, but not necessarily sequentially. For example, the order of each operation may be changed, and at least two operations may be performed in parallel.
  • Referring to FIG. 6, the electronic device 101 (for example, the processor 120 of FIG. 1) may display an avatar corresponding to an external object included in an image acquired using the camera module 180.
  • the processor 120 may output an avatar of a first category or an avatar of a second category so as to overlap at least a part of an external object recognized through the camera module 180.
  • As another example, the processor 120 may output the avatar of the first category or the avatar of the second category so as not to overlap at least a part of the external object recognized through the camera module 180.
  • According to various embodiments, the electronic device 101 may check a change in movement between an external object and the electronic device 101 by using at least one of the camera module 180 or the sensor module 176.
  • For example, the processor 120 may check the change in movement between the external object and the electronic device 101 by using the camera module 180, may check the change in movement between the external object and the electronic device 101 by using the sensor module 176, or may check the change in movement between the external object and the electronic device 101 by using both the camera module 180 and the sensor module 176.
  • the processor 120 may determine a movement between at least a portion of the external object and the electronic device 101 using at least one of the camera module 180 and the sensor module 176.
  • a movement between at least a portion of an external object and the electronic device 101 may include a relative movement.
  • For example, a movement between at least a portion of an external object and the electronic device 101 may include a relative movement of the electronic device 101 with respect to at least a portion of the external object, or a relative position of the electronic device 101 with respect to at least a portion of the external object.
  • the electronic device 101 may obtain attribute data (eg, setting information) corresponding to a change in motion from the acquired modeling data.
  • the attribute data may be related to a movement of at least a portion of an avatar corresponding to a movement of at least a portion of an external object or a lighting of at least a portion of the avatar.
  • For example, when an avatar of the first category is output, the processor 120 may obtain attribute data corresponding to the change in motion (e.g., at least one of the motion file 321, the lighting file 323, or the motion rule 325) from the modeling data 131 of the first structure.
  • As another example, when an avatar of the second category is output, the processor may obtain attribute data corresponding to the change in motion (e.g., at least one of the motion file 351-7, the lighting file 351-1a, the partial model 351-1b, or the motion rule 351-9) from the modeling data 133 of the second structure.
  • the processor 120 may determine a relative position between at least a part of the external object and the electronic device 101 and obtain attribute data corresponding to the determined relative position.
  • the electronic device 101 may control the output of the avatar based on the acquired attribute data.
  • the processor 120 may control a motion of an output avatar to correspond to a motion of an external object.
  • For example, by applying various output modes, the processor 120 may control at least one of the movement or lighting of an avatar corresponding to an external object, or of an avatar corresponding to a part of an external object (e.g., face, body, hand, etc.).
  • For example, the various output modes may include at least one of a motion-based output mode, a tracking-based output mode, or a preloaded output mode, and the processor 120 may control at least one of the movement or lighting of the avatar in a state in which the output mode is applied.
  • the motion-based output mode may be a mode in which a gesture corresponding to a gesture of an external object is expressed through an avatar corresponding to an external object or an avatar corresponding to a part of an external object
  • The tracking-based output mode may be a mode in which an avatar corresponding to an external object, or at least a part of an avatar corresponding to a part of an external object, adaptively moves according to a change in feature points extracted from the external object area.
  • the preloaded output mode may be a mode in which an avatar output to a display device is expressed in a predefined manner regardless of a gesture of an external object.
  • FIG. 7 is a flowchart 700 for controlling the output of an avatar in the electronic device 101 according to various embodiments of the present disclosure.
  • FIGS. 8A and 8B are diagrams for describing an avatar output from the electronic device 101 according to various embodiments of the present disclosure.
  • the operations of FIG. 7 described below may represent various embodiments of operation 640 of FIG. 6.
  • each of the operations may be sequentially performed, but not necessarily sequentially. For example, the order of each operation may be changed, and at least two operations may be performed in parallel.
  • the electronic device 101 may detect an output mode change event in operation 710.
  • According to an embodiment, the output mode change event may include an event of changing an output mode selected from among the motion-based output mode, the tracking-based output mode, or the preloaded output mode to another output mode.
  • According to another embodiment, the output mode change event may include an event of changing an output avatar (e.g., an avatar of the first category (or an avatar of the second category)) to another avatar (e.g., an avatar of the second category (or an avatar of the first category)).
  • the electronic device 101 may acquire attribute data corresponding to the changed output mode.
  • For example, the processor 120 may obtain attribute information corresponding to the changed output mode from the modeling data corresponding to the currently output avatar.
  • the processor 120 may obtain attribute information corresponding to the changed output mode from the modeling data 131 of the first structure.
  • the processor 120 may obtain attribute information corresponding to the changed output mode from the modeling data 133 of the second structure.
  • For example, in response to detecting an event of changing to the motion-based output mode, the processor 120 may obtain attribute information of the motion-based output mode, including at least one of the physical characteristics of each node of the avatar that can be expressed in the motion-based output mode, the action of the avatar, the motion and facial expressions that can be expressed through the action of the avatar, information for determining lighting for the avatar, information for determining animation effects for the avatar, or motion sensitivity information of the avatar.
  • As another example, in response to detecting an event of changing to the tracking-based output mode or the preload output mode, the processor 120 may obtain attribute information of the tracking-based output mode or attribute information of the preload output mode, including at least one of the physical characteristics of each node of the avatar that can be expressed in the corresponding mode, the action of the avatar, the motion and expression that can be expressed through the action of the avatar, information for determining the lighting for the avatar, information for determining the animation effect for the avatar, or information on the motion sensitivity of the avatar.
  • According to various embodiments, in operation 730, the electronic device 101 (e.g., the processor 120 of FIG. 1) may output an avatar corresponding to the event through the display device 160 based on the acquired attribute information.
  • For example, based on the acquired attribute information, the processor 120 may output the avatar in the tracking-based output mode, as shown at 810 and 820 of FIG. 8A.
  • For example, the processor 120 may extract feature points from the external object acquired through the camera module 180, and may process the avatar 810 corresponding to the external object, or at least a portion of the avatar 820 corresponding to a part of the external object, to move adaptively according to the change in the extracted feature points.
  • In addition, by applying the lighting corresponding to the tracking-based output mode to the avatar, the processor 120 may reproduce a natural motion reflecting the characteristics of the avatar, compared to a conventional electronic device that uses the same setting information regardless of the output mode.
  • As another example, the processor 120 may control the output of the avatar in the preload output mode and in the motion-based output mode, as shown at 830 and 840 of FIG. 8A.
  • According to various embodiments, the processor 120 may apply an animation effect based on the acquired attribute information. For example, in a state in which the avatar of the first category is output, an animation effect may be applied to the avatar in the form of a person, as shown at 860 of FIG. 8B. As another example, in a state in which the avatar of the second category is output, an animation effect may be applied to the avatar in the form of a character, as shown at 850 of FIG. 8B.
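  • As an illustrative sketch of the FIG. 7 flow, the following Kotlin function reacts to an output-mode change by pulling mode-specific attribute information out of the current avatar's modeling data and re-rendering. The key-prefix convention used to select attributes is an assumption made only for this sketch; OutputMode and ModelingData reuse the hypothetical types defined earlier.

```kotlin
// Hypothetical sketch: handle an output-mode change event (FIG. 7).
fun onOutputModeChanged(
    newMode: OutputMode,                               // detected change event (operation 710)
    modelingData: ModelingData,                        // data of the currently output avatar
    render: (OutputMode, Map<String, Any>) -> Unit     // re-render with the new attributes
) {
    // Operation 720: obtain attribute information corresponding to the changed mode
    // (here assumed to be stored under keys prefixed with the mode name).
    val attributes = modelingData.settings.filterKeys { it.startsWith(newMode.name) }
    // Operation 730: output the avatar corresponding to the event.
    render(newMode, attributes)
}
```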
  • FIG. 9 is a flowchart 900 for controlling the output of an avatar in the electronic device 101 according to various embodiments of the present disclosure.
  • FIG. 10 is a diagram for describing an operation of changing an output mode of an avatar in the electronic device 101 according to various embodiments of the present disclosure.
  • the operations of FIG. 9 described below may represent various embodiments of operation 710 of FIG. 7.
  • each of the operations may be sequentially performed, but not necessarily sequentially. For example, the order of each operation may be changed, and at least two operations may be performed in parallel.
  • the electronic device 101 may check at least one output mode supported by the selected avatar in operation 910.
  • the output mode may include at least one of a motion-based output mode, a tracking-based output mode, or a preloaded output mode.
  • According to an embodiment, the processor 120 may check at least one output mode supported by the avatar based on at least a portion of the modeling data obtained based on the category of the avatar.
  • For example, the processor 120 may check the at least one supported output mode through meta information of the selected avatar (e.g., the sticker information (sticker 305 or sticker 315) of the modeling data).
  • According to various embodiments, the electronic device 101 (e.g., the processor 120 of FIG. 1) may output, through the display, at least one identifier (e.g., an icon) representing the identified at least one output mode.
  • as shown at 1000 of FIG. 10, the processor 120 may display identifiers 1002 indicating that a tracking-based output mode (eg, a basic mode) for an avatar corresponding to an external object, a tracking-based output mode (eg, a mask mode) for an avatar corresponding to a part of the external object, and a preload output mode (eg, a live figure mode) for an avatar corresponding to the external object are supported.
  • the processor 120 may display identifiers 1012 indicating that a tracking-based output mode (eg, a basic mode) for an avatar corresponding to an external object and a tracking-based output mode (eg, a mask mode) for an avatar corresponding to a part of the external object are supported. In addition, as shown at 1020 of FIG. 10, the processor 120 may display identifiers 1022 indicating that a tracking-based output mode (eg, a basic mode) for an avatar corresponding to an external object, a tracking-based output mode (eg, a mask mode) for an avatar corresponding to a part of the external object, a preload output mode (eg, a live figure mode) for an avatar corresponding to the external object, and a motion-based output mode (eg, a mini motion mode) are supported.
  • the electronic device 101 may receive an input for selecting at least one outputted identifier.
  • the processor 120 may detect an output mode change event based on the received input.
  • in response to detecting the output mode change event, the processor 120 may perform an operation of acquiring attribute data corresponding to the changed output mode.
  • the processor 120 may perform an operation related to operation 720 of FIG. 7.
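The identifier selection and mode change handling just described can be summarized in a short sketch: selecting an identifier for a mode other than the current one is treated as an output mode change event, after which the attribute data for the newly selected mode is obtained and applied. The controller class and its method names are hypothetical, not part of the disclosure.

```kotlin
// Sketch: an identifier selection triggers an output mode change and a re-fetch of attributes.
enum class OutputMode { TRACKING_BASED, PRELOAD, MOTION_BASED }

class AvatarController(private val attributesByMode: Map<OutputMode, Any>) {
    var currentMode: OutputMode = OutputMode.TRACKING_BASED
        private set

    fun onIdentifierSelected(selected: OutputMode) {
        if (selected != currentMode) {       // output mode change event detected
            currentMode = selected
            val attributes = attributesByMode[selected]
            applyAttributes(attributes)      // acquire and apply attribute data for the new mode
        }
    }

    private fun applyAttributes(attributes: Any?) {
        println("Re-rendering avatar with attributes: $attributes")
    }
}
```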
  • FIG. 11 is a flowchart 1100 for extracting attribute data corresponding to an output mode in the electronic device 101 according to various embodiments of the present disclosure.
  • the operations of FIG. 11 described below may represent various embodiments of operation 720 of FIG. 7.
  • each of the operations may be performed sequentially, but is not necessarily performed sequentially. For example, the order of the operations may be changed, and at least two operations may be performed in parallel.
  • the electronic device 101 (eg, the processor 120 of FIG. 1) may determine whether the avatar output mode is changed to a mask mode.
  • when it is determined that the change to the mask mode is not performed, the electronic device 101 (eg, the processor 120 of FIG. 1) may output an avatar corresponding to the event. According to an embodiment, the processor 120 may perform an operation related to operation 730 of FIG. 7.
  • when it is determined that the change to the mask mode is performed, the electronic device 101 (eg, the processor 120 of FIG. 1) may determine, in operation 1120, whether the avatar selected by the user and output through the display device is the avatar of the first category or the avatar of the second category.
  • when it is determined that the avatar of the first category is being output, the electronic device 101 (eg, the processor 120 of FIG. 1) may, in operation 1130, extract a partial model (eg, a face) for the mask mode.
  • the avatar of the first category has a shape of a person, as described above, so that only a part (eg, head) can be naturally separated from the entire avatar model.
  • modeling data corresponding to the avatar of the first category (eg, the first modeling data 131) may also include a partial model for the avatar, and the processor 120 may obtain the partial model from the modeling data.
  • when it is determined that the avatar of the second category is being output, the electronic device 101 (eg, the processor 120 of FIG. 1) may, in operation 1140, obtain a partial model (eg, a face) for the mask mode from the stored modeling data (eg, the second modeling data).
  • the avatar of the second category has a specific character shape, not a human shape, so it is difficult to naturally separate only a part (eg, face, body, hand, etc.) from the entire avatar model.
  • the electronic device 101 may address the above-described difficulty by using, through operation 1140, the previously stored partial model for the avatar of the second category.
  • the electronic device 101 may output the extracted partial model on at least a part of an external object included in an image.
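The mask-mode branch of flowchart 1100 can be illustrated as follows: for a first-category (human-shaped) avatar the partial model (eg, the face) is separated from the full model, whereas for a second-category (character-shaped) avatar the partial model stored in advance in the modeling data is used. All types and field names below are assumptions made for the example.

```kotlin
// Sketch: select the mask-mode partial model depending on the avatar category.
enum class AvatarCategory { FIRST, SECOND }

data class AvatarModel(val parts: Map<String, String>)  // eg, "face" -> mesh identifier
data class ModelingData(val fullModel: AvatarModel, val storedPartialModel: AvatarModel?)

fun partialModelForMaskMode(category: AvatarCategory, data: ModelingData): AvatarModel =
    when (category) {
        // Human-shaped avatar: a part (eg, the face) can be naturally separated from the full model.
        AvatarCategory.FIRST -> AvatarModel(data.fullModel.parts.filterKeys { it == "face" })
        // Character-shaped avatar: use the partial model stored in advance in the modeling data.
        AvatarCategory.SECOND -> data.storedPartialModel
            ?: error("Second-category modeling data is expected to carry a partial model")
    }
```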
  • the method of operating an electronic device that stores first modeling data related to an avatar of a first category and second modeling data related to an avatar of a second category so as to be distinguished from each other may include an operation of determining a category for at least one avatar to be output through the electronic device, an operation of obtaining modeling data corresponding to the determined category from among the first modeling data and the second modeling data, and an operation of controlling the output of the at least one avatar using the obtained modeling data.
  • one of the avatars of the first category and the avatars of the second category may include an avatar having a human shape, and the other one may include an avatar having a character shape.
  • the first modeling data and the second modeling data may include at least one of avatar information, background information, category information, or setting information, and the setting information may include setting information related to an expression of the avatar.
  • the method of operating the electronic device may include an operation of obtaining an image through the electronic device and an operation of obtaining, from the obtained modeling data, setting information corresponding to a motion of an object included in the obtained image.
  • the first modeling data and the second modeling data may include category information describing a category of the avatar, and the method may include an operation of determining a category for the at least one avatar based on the category information.
  • the first modeling data and the second modeling data may include output mode information describing at least one output mode supported by the avatar, and the method may include an operation of determining at least one output mode supported by the avatar based on the output mode information and an operation of outputting an identifier indicating the determined at least one output mode.
  • the method of operating the electronic device may include outputting the identifier while controlling the output of the avatar.
  • the method of operating the electronic device may include changing an output mode for the avatar based on an input to the identifier.
  • the operation method of the electronic device may include obtaining attribute information corresponding to the changed output mode from the obtained modeling data.
  • the operation method of the electronic device may include an operation of obtaining a partial model for the avatar from the obtained modeling data.
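Putting the summarized method together, the sketch below determines the category of the avatar to be output, selects the matching one of the two separately stored modeling data sets, and controls the output with it. The class, the category-determination rule, and the method names are illustrative assumptions only, not the claimed implementation.

```kotlin
// Sketch of the overall operating method: category -> modeling data -> output control.
enum class AvatarCategory { FIRST, SECOND }

class AvatarService(
    private val firstModelingData: Any,   // stored separately for first-category avatars
    private val secondModelingData: Any   // stored separately for second-category avatars
) {
    fun outputAvatar(avatarId: String) {
        val category = determineCategory(avatarId)
        val modelingData = when (category) {
            AvatarCategory.FIRST -> firstModelingData
            AvatarCategory.SECOND -> secondModelingData
        }
        controlOutput(avatarId, modelingData)
    }

    // Assumed rule for the example; the real device would read category info from meta data.
    private fun determineCategory(avatarId: String): AvatarCategory =
        if (avatarId.startsWith("human")) AvatarCategory.FIRST else AvatarCategory.SECOND

    private fun controlOutput(avatarId: String, modelingData: Any) {
        println("Outputting $avatarId with $modelingData")
    }
}
```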
  • Electronic devices may be devices of various types.
  • the electronic device may include, for example, a portable communication device (eg, a smart phone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance.
  • phrases such as "at least one of A, B, or C" may include any one of the items listed together in the corresponding phrase, or any possible combination thereof.
  • Terms such as "first" and "second" may be used simply to distinguish a component from other corresponding components, and do not limit the components in other aspects (eg, importance or order).
  • When some (eg, a first) component is referred to as being "coupled" or "connected" to another (eg, a second) component, with or without the term "functionally" or "communicatively", it means that the component can be connected to the other component directly (eg, by wire), wirelessly, or via a third component.
  • The term "module" used in this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, part, or circuit.
  • the module may be an integrally configured component or a minimum unit of the component or a part thereof that performs one or more functions.
  • the module may be implemented in the form of an application-specific integrated circuit (ASIC).
  • Various embodiments of this document may be implemented as software (eg, the program 140) including one or more commands stored in a storage medium (eg, the internal memory 136 or the external memory 138) that can be read by a machine (eg, the electronic device 101).
  • For example, the processor (eg, the processor 120) of the device (eg, the electronic device 101) may call at least one of the one or more commands stored in the storage medium and execute it. This makes it possible for the device to be operated to perform at least one function according to the at least one command called.
  • the one or more instructions may include code generated by a compiler or code executable by an interpreter.
  • a storage medium that can be read by a device may be provided in the form of a non-transitory storage medium.
  • "Non-transitory" only means that the storage medium is a tangible device and does not contain a signal (eg, an electromagnetic wave); this term does not distinguish between a case in which data is semi-permanently stored in the storage medium and a case in which data is temporarily stored in the storage medium.
  • a method according to various embodiments disclosed in this document may be provided in a computer program product.
  • Computer program products can be traded between sellers and buyers as commodities.
  • The computer program product may be distributed in the form of a device-readable storage medium (eg, a compact disc read only memory (CD-ROM)), or may be distributed (eg, downloaded or uploaded) directly between two user devices (eg, smartphones) or online through an application store (eg, Play Store™).
  • at least a portion of the computer program product may be temporarily stored or temporarily generated in a storage medium that can be read by a device such as a server of a manufacturer, a server of an application store, or a memory of a relay server.
  • each component (eg, a module or a program) of the above-described components may include a single entity or a plurality of entities.
  • one or more components or operations among the above-described corresponding components may be omitted, or one or more other components or operations may be added.
  • a plurality of components (eg, modules or programs) may be integrated into one component. In this case, the integrated component may perform one or more functions of each of the plurality of components in the same or similar manner as performed by the corresponding component among the plurality of components prior to the integration.
  • operations performed by a module, a program, or another component may be executed sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Various embodiments of the present invention relate to an electronic device for providing an avatar and an operating method thereof, the electronic device comprising: a memory for storing first modeling data related to an avatar of a first category and second modeling data related to an avatar of a second category; a display device; and a processor. The processor is configured to determine a category for at least one avatar to be output through the display device, acquire, from among the first modeling data and the second modeling data, modeling data corresponding to the determined category, and control the output of the at least one avatar using the acquired modeling data; and the first modeling data and the second modeling data may be stored so as to be separated from each other.
PCT/KR2020/009762 2019-07-25 2020-07-24 Dispositif électronique pour fournir un avatar, et son procédé de fonctionnement WO2021015582A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0090514 2019-07-25
KR1020190090514A KR20210012562A (ko) 2019-07-25 2019-07-25 아바타를 제공하는 전자 장치 및 그의 동작 방법

Publications (1)

Publication Number Publication Date
WO2021015582A1 true WO2021015582A1 (fr) 2021-01-28

Family

ID=74193602

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/009762 WO2021015582A1 (fr) 2019-07-25 2020-07-24 Dispositif électronique pour fournir un avatar, et son procédé de fonctionnement

Country Status (2)

Country Link
KR (1) KR20210012562A (fr)
WO (1) WO2021015582A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120020137A (ko) * 2009-05-29 2012-03-07 마이크로소프트 코포레이션 애니메이션 또는 모션들을 캐릭터에 적용하는 시스템 및 방법
KR101374313B1 (ko) * 2012-08-14 2014-03-13 주식회사 바른기술 배경 영상을 제외한 단순화된 움직임 정보를 전송하고 아바타를 이용하여 디스플레이하는 장치와 그 방법
KR101540544B1 (ko) * 2014-09-05 2015-07-30 서용창 캐릭터를 이용한 메시지 서비스 방법, 상기 방법을 수행하는 사용자 단말, 상기 방법을 포함하는 메시지 애플리케이션
KR20170134366A (ko) * 2015-04-07 2017-12-06 인텔 코포레이션 아바타 키보드
US20180089880A1 (en) * 2016-09-23 2018-03-29 Apple Inc. Transmission of avatar data


Also Published As

Publication number Publication date
KR20210012562A (ko) 2021-02-03

Similar Documents

Publication Publication Date Title
WO2020171540A1 (fr) Dispositif électronique permettant de fournir un mode de prise de vue sur la base d'un personnage virtuel et son procédé de fonctionnement
WO2019164266A1 (fr) Dispositif électronique permettant de produire une image contenant un avatar 3d reflétant le mouvement du visage grâce à un avatar 3d correspondant au visage et procédé de fonctionnement de celui-ci
WO2020149581A1 (fr) Dispositif électronique pour générer un avatar et procédé associé
WO2020171621A1 (fr) Procédé de commande d'affichage d'avatar et dispositif électronique associé
WO2020171385A1 (fr) Dispositif électronique prenant en charge une recommandation et un téléchargement d'avatar
WO2021020814A1 (fr) Dispositif électronique de mise en place d'avatar et son procédé d'exploitation
EP3895130A1 (fr) Dispositif électronique pour générer une animation d'avatar et procédé associé
WO2020130281A1 (fr) Dispositif électronique et procédé de fourniture d'un avatar sur la base de l'état émotionnel d'un utilisateur
WO2020096413A1 (fr) Caméra escamotable et rotative et dispositif électronique comprenant celle-ci
WO2019103396A1 (fr) Procédé de configuration d'interface d'entrée et dispositif électronique associé
WO2019125029A1 (fr) Dispositif électronique permettant d'afficher un objet dans le cadre de la réalité augmentée et son procédé de fonctionnement
WO2020130667A1 (fr) Procédé et dispositif électronique pour commander un dispositif de réalité augmentée
WO2020171541A1 (fr) Dispositif électronique et procédé de mise en place d'une interface d'utilisateur pour l'édition de frimousses pendant l'interfonctionnement avec une fonction de caméra en utilisant ledit dispositif électronique
WO2021045552A1 (fr) Dispositif électronique de synthèse d'image et son procédé de fonctionnement
WO2021242005A1 (fr) Dispositif électronique et procédé de génération d'autocollant d'émoji basés sur un avatar d'utilisateur
WO2020116868A1 (fr) Dispositif électronique pour générer un émoji en réalité augmentée, et procédé associé
WO2021172832A1 (fr) Procédé de modification d'image basée sur la reconnaissance des gestes, et dispositif électronique prenant en charge celui-ci
WO2021230568A1 (fr) Dispositif électronique permettant de fournir un service de réalité augmentée et son procédé de fonctionnement
WO2021162353A1 (fr) Dispositif électronique incluant un appareil photographique et son procédé de fonctionnement
WO2021149938A1 (fr) Dispositif électronique et procédé de commande de robot
WO2020085718A1 (fr) Procédé et dispositif de génération d'avatar sur la base d'une image corrigée
WO2019164287A1 (fr) Dispositif électronique et procédé associé de fourniture d'un objet de réalité augmentée
WO2021015582A1 (fr) Dispositif électronique pour fournir un avatar, et son procédé de fonctionnement
WO2022030943A1 (fr) Appareil et procédé de segmentation d'image basés sur un apprentissage profond
WO2022014836A1 (fr) Procédé et appareil d'affichage d'objets virtuels dans différentes luminosités

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20844954

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20844954

Country of ref document: EP

Kind code of ref document: A1