KR20160008357A - Video call method and apparatus - Google Patents

Video call method and apparatus

Info

Publication number
KR20160008357A
Authority
KR
South Korea
Prior art keywords
image
information
counterpart
electronic device
control information
Prior art date
Application number
KR1020140088439A
Other languages
Korean (ko)
Inventor
이준택
Original Assignee
Samsung Electronics Co., Ltd. (삼성전자주식회사)
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. (삼성전자주식회사)
Priority to KR1020140088439A
Publication of KR20160008357A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/139: Format conversion, e.g. of frame-rate or size
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194: Transmission of image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/204: Image signal generators using stereoscopic image cameras

Abstract

An embodiment of the present invention provides a method including: acquiring a first image from a camera; extracting three-dimensional angle information by matching the first image with modeling information; generating image control information including the three-dimensional angle information; transmitting the image control information to a counterpart electronic device for video communication; and generating a second image based on counterpart image control information received from the counterpart electronic device. Other embodiments are also possible.

Description

VIDEO CALL METHOD AND APPARATUS

The present invention relates to a method and apparatus for effectively making video calls.

Recently, due to the remarkable development of information communication technology and semiconductor technology, the spread and use of electronic devices have been increasing rapidly. Whereas electronic devices initially provided primary services such as voice calls and text messaging, in recent years they have come to provide a variety of services, including a wireless Internet environment comparable to that of a notebook computer.

Among these various services, a video call service, in which a user talks while watching the other party's face on the screen of the electronic device, is actively used instead of communicating by voice alone.

However, unlike a voice call, a video call must transmit not only audio but also video data, so the amount of data to be transmitted is large, making a video call more sensitive to the network condition than a voice call. Conventionally, data with a complicated, large packet structure is transmitted during a video call: because the conventional method focuses on compressing the image to improve image quality, each field of the packet carries parameter values required for compression. Also, when the network state is poor, conventional methods display an avatar in place of the counterpart's video, or find and display an image similar to the counterpart's video instead. However, the ultimate goal of a video call is to talk while seeing the other person's actual face, so displaying an alternative video is not a good solution.

It is an object of the present invention to provide a video call method and apparatus capable of reducing network traffic by transmitting only the changed information about an image, rather than the image itself, during a video call, thereby reducing the amount of transmitted data.

Various embodiments of the present invention provide a video call method and apparatus capable of solving the problem of degraded image quality by transmitting only the changed information about an image and then generating and displaying an image similar to the actual image using that changed information.

A method of operating an electronic device according to an embodiment of the present invention includes: acquiring a first image from a camera; extracting three-dimensional angle information by matching the first image with modeling information; generating image control information including the three-dimensional angle information; transmitting the image control information to a counterpart electronic device for video communication; and generating a second image based on counterpart image control information received from the counterpart electronic device.

An electronic device according to an embodiment of the present invention includes: a camera for acquiring an image; a controller for extracting three-dimensional angle information by matching the image with modeling information and generating image control information including the three-dimensional angle information; a communication unit for transmitting the image control information to a counterpart electronic device for video communication and receiving counterpart image control information from the counterpart electronic device; and a display unit for displaying an image.

According to an embodiment of the present invention, instead of transmitting an image during a video call, only the changed information about the image is transmitted, thereby reducing the amount of data transmitted and hence the network traffic.

According to an embodiment of the present invention, when only the changed information about an image is transmitted, an image similar to the actual image is generated and displayed using the changed information, so that the problem of lowered image quality can be solved.

According to an embodiment of the present invention, network traffic can be reduced by transmitting either the image or only the changed information about the image, depending on the network state.

FIG. 1 is a block diagram illustrating an electronic device according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a video call method on the transmitting side according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating an example of three-dimensional mesh information according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of generating modeling information according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating an example of extracting three-dimensional angle information according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating an example of three-dimensional angle information according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating an example of extracting facial expression information according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating an example of image control information according to an embodiment of the present invention.
FIG. 9 is a flowchart illustrating a video call method on the receiving side according to an embodiment of the present invention.
FIG. 10 is a flowchart illustrating a video call method according to various embodiments.
FIG. 11 is a block diagram of an electronic device according to various embodiments.

Hereinafter, various embodiments will be described in detail with reference to the accompanying drawings. Note that, in the drawings, the same components are denoted by the same reference symbols wherever possible. Detailed descriptions of well-known functions and constructions that could obscure the gist of the present invention are omitted. In the following description, only the parts necessary for understanding the operations according to various embodiments of the present invention are described, and descriptions of other parts are omitted so as not to obscure the gist of the present invention.

An electronic device according to various embodiments of the present invention may be an apparatus including a communication function. For example, the electronic device may include at least one of a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a mobile medical device, a camera, or a wearable device (e.g., a head-mounted device (HMD) such as electronic glasses, electronic clothing, an electronic bracelet, an electronic necklace, an electronic appcessory, an electronic tattoo, or a smart watch).

According to some embodiments, the electronic device may be a smart home appliance with a communication function. The smart home appliance may include, for example, at least one of a television, a digital video disk (DVD) player, an audio system, a refrigerator, an air conditioner, a vacuum cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set-top box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console, an electronic dictionary, an electronic key, a camcorder, or an electronic frame.

According to some embodiments, the electronic device may include at least one of various medical devices (e.g., magnetic resonance angiography (MRA), magnetic resonance imaging (MRI), or computed tomography (CT) machines), a global positioning system (GPS) receiver, an event data recorder (EDR), a flight data recorder (FDR), an automotive infotainment device, marine electronic equipment (e.g., a marine navigation device and a gyro compass), avionics, a security device, or an industrial or home robot.

According to some embodiments, the electronic device may be a piece of furniture or a part of a building/structure including a communication function, an electronic board, an electronic signature receiving device, a projector, or various measuring instruments (e.g., instruments for measuring water, electricity, gas, or radio waves). An electronic device according to the present invention may be one or a combination of the various devices described above. It should also be apparent to those skilled in the art that the electronic device according to the present invention is not limited to the above-described devices.

Hereinafter, operations performed during a video call are described. Considering that the subject of a video call is usually a person, an acquired or received image is assumed to show the upper body of a person; however, the image is not limited to the upper body of a person. Also, the image obtained by the camera of the electronic device, i.e., the user's own image, is referred to as a first image, and the image received through the communication unit of the electronic device, i.e., the counterpart's image, is referred to as a second image. This is by way of illustration and example only and is not to be taken as a limitation.

FIG. 1 is a block diagram illustrating an electronic device according to an embodiment of the present invention.

Referring to FIG. 1, the electronic device 100 may include a controller 110, a communication unit 120, a camera 130, a storage unit 140, an input unit 150, a display unit 160, and an output unit 170.

The control unit 110 may include an angle extraction module 111, a facial expression extraction module 112, an information generation module 113, a determination module 114, an angle application module 115, a facial expression application module 116, and an image change module 117. The control unit 110 controls all the components included in the electronic device 100: it controls the overall operation of the electronic device 100 and the signal flow between its internal components, processes data, and controls the supply of power from the battery to the components. The control unit 110 may be a central processing unit (CPU) or a graphics processing unit (GPU). As is well known, a CPU is the core control unit of a computer system that performs operations such as calculation and comparison of data, and interpretation and execution of instructions. The GPU is a graphics control unit that performs calculation and comparison of graphics-related data, and interpretation and execution of related instructions. The CPU and the GPU may each be integrated as two or more independent cores (e.g., quad-core) in a single integrated-circuit package. The CPU and the GPU may also be integrated on a single chip (system on chip, SoC), or packaged in a multi-layer configuration. A configuration including a CPU and a GPU may be referred to as an application processor (AP).

The angle extraction module 111, the facial expression extraction module 112, the information generation module 113, and the determination module 114 are modules used for transmitting image control information to a counterpart electronic device during video communication. First, the modules used when transmitting image control information (the transmitting side) will be described.

The angle extraction module 111 can extract the three-dimensional angle information by matching the image obtained by the camera 130 with the modeling information. The modeling information is the information necessary for restoring an image identical to the actual image from the image control information alone. The modeling information is formed by shaping the upper body of a person (e.g., as 3D mesh information) and applying a texture (e.g., human skin color) to the shaped form so that it resembles the actual image. For example, the modeling information may be configured in three dimensions, considering that the subject of a video call is usually a person, but the present invention is not limited thereto. Since the modeling information describes the user of the electronic device 100 making the video call, it can be generated in advance using an image obtained before the video call. Alternatively, the modeling information may be generated using an image acquired during the video call.

The three-dimensional angle information indicates how the pose of the user in the image obtained in real time from the camera 130 has changed compared with the previous image. For example, the three-dimensional angle information indicates how the left-right tilt, the up-down tilt, and the rotation direction of the user in the image change. The angle extraction module 111 may extract three-dimensional angle information including at least one of yaw, pitch, and roll from the image. Taking the three-dimensional coordinate system as an example, the yaw corresponds to the Y-axis, the pitch corresponds to the X-axis, and the roll corresponds to the Z-axis. The yaw indicates the rotation direction in which the user turns left and right, the pitch represents the vertical tilt with which the user tilts up and down, and the roll represents the left-right tilt with which the user tilts toward either shoulder.
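
As an illustration, here is a minimal sketch of how such angle information and its per-frame change might be represented, assuming simple per-axis subtraction; the names AngleInfo and angle_delta are hypothetical, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class AngleInfo:
    """Three-dimensional head-pose angles, in degrees."""
    yaw: float    # rotation about the Y axis (turning left/right)
    pitch: float  # rotation about the X axis (tilting up/down)
    roll: float   # rotation about the Z axis (tilting toward a shoulder)

def angle_delta(previous: AngleInfo, current: AngleInfo) -> AngleInfo:
    """Change in pose relative to the previous frame."""
    return AngleInfo(current.yaw - previous.yaw,
                     current.pitch - previous.pitch,
                     current.roll - previous.roll)

# Example: between two frames the user turns the head 20 degrees.
prev = AngleInfo(yaw=0.0, pitch=0.0, roll=0.0)
curr = AngleInfo(yaw=20.0, pitch=0.0, roll=0.0)
print(angle_delta(prev, curr))  # AngleInfo(yaw=20.0, pitch=0.0, roll=0.0)
```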

The facial expression extraction module 112 can extract facial expression information from the image by applying facial expression recognition technology to it. Facial expression recognition generally refers to techniques for automatically identifying a user's facial expression (emotion) from a digital image. Such a technique can extract the facial expression information by comparing the user's face with a table of predefined facial expressions. Facial expression recognition is a well-known technique, and its detailed description is omitted below. For example, the facial expression information may indicate a smiling expression, a frowning expression, a crying expression, an angry expression, a surprised expression, and the like.
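
A minimal sketch of such table-based matching follows, assuming each predefined expression is summarized by two hypothetical normalized measurements (mouth-corner lift and eyebrow raise); the table values and function names are illustrative, not from the patent:

```python
import math

# Toy expression table: (mouth-corner lift, eyebrow raise), each in [0, 1].
EXPRESSION_TABLE = {
    "smiling":   (0.9, 0.4),
    "frowning":  (0.1, 0.2),
    "crying":    (0.1, 0.1),
    "angry":     (0.2, 0.0),
    "surprised": (0.5, 0.9),
}

def extract_expression(mouth_lift: float, brow_raise: float) -> str:
    """Return the predefined expression closest to the measured features."""
    return min(EXPRESSION_TABLE,
               key=lambda name: math.dist((mouth_lift, brow_raise),
                                          EXPRESSION_TABLE[name]))

print(extract_expression(0.85, 0.45))  # "smiling"
```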

The information generation module 113 may extract feature points from the image and generate modeling information by matching the extracted feature points with a reference model. The information generation module 113 may store the generated modeling information in the storage unit 140. In the case of a person, for example, the feature points may be the face contour, head, eyes, nose, mouth, ears, neck, and shoulder line. The reference model may be a basic form for a person, created in advance to generate modeling information for a person. The information generation module 113 may generate modeling information for the subject of the video call by matching the feature points with the reference model. The subject may be the user making the video call or the other party. The information generation module 113 may generate counterpart modeling information by matching feature points extracted from the counterpart image, received from the counterpart electronic device, with the reference model.
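
A rough sketch of deforming a reference model toward extracted feature points, assuming 2D landmark positions and simple linear blending; the patent does not specify the fitting method, so all names and values here are illustrative:

```python
# Reference model: landmark name -> (x, y) position in a canonical face.
REFERENCE_MODEL = {
    "left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0),
    "nose": (50.0, 55.0), "mouth": (50.0, 75.0),
}

def build_modeling_info(feature_points: dict, alpha: float = 1.0) -> dict:
    """Move each reference landmark toward the corresponding feature point."""
    model = {}
    for name, (rx, ry) in REFERENCE_MODEL.items():
        fx, fy = feature_points.get(name, (rx, ry))  # fall back to reference
        model[name] = (rx + alpha * (fx - rx), ry + alpha * (fy - ry))
    return model

user_features = {"left_eye": (28.0, 42.0), "right_eye": (72.0, 41.0),
                 "nose": (50.0, 57.0), "mouth": (49.0, 78.0)}
modeling_info = build_modeling_info(user_features)
```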

The information generation module 113 may generate the image control information including the three-dimensional angle information and the facial expression information. The image control information may also include other information that can convey changes in the image. For example, the image control information may further include, as additional information, information that is captured by neither the three-dimensional angle information nor the facial expression information.

The determination module 114 can check the network state in real time during the video call. The network state may be at least one of the transmitting-side network state for transmitting the image control information and the receiving-side network state for receiving the image control information. In various embodiments, the determination module 114 may give priority to the transmitting-side network state over the receiving-side network state.

For example, the determination module 114 may determine that the network state is 'bad' if the transmitting-side network state is 'bad' even though the receiving-side network state is 'good'. Conversely, if the receiving-side network state is 'bad' but the transmitting-side network state is 'good', the determination module 114 may determine that the network state is 'good'. The opposite policy is also possible. The determination module 114 may determine whether to transmit the image acquired from the camera or the image control information associated with that image, according to the network state. For example, the determination module 114 may determine whether the network state is suitable for sending an image, decide to transmit the image if the state is 'good', and transmit the image control information if the state is 'bad'.
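
A minimal sketch of this decision, assuming a simple 'good'/'bad' label per side with the sender's state taking priority; the function names are hypothetical:

```python
def overall_state(tx_state: str, rx_state: str) -> str:
    """Sender-priority policy: the transmitting side's state decides."""
    return tx_state  # 'good' or 'bad'; the opposite policy is also possible

def choose_payload(tx_state: str, rx_state: str, image, control_info):
    if overall_state(tx_state, rx_state) == "good":
        return ("image", image)        # network can carry full frames
    return ("control", control_info)   # fall back to image control information
```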

According to various embodiments, the determination module 114 may adjust the packet size of the image control information according to the network state. The packet size corresponds to the amount of data transmitted: a large packet size means a large data transmission amount, and a small packet size means a small one. The more information is transmitted as image control information, the more easily the counterpart electronic device can restore an image close to the actual image. For example, when the packet size of the image control information is adjusted to be larger than a reference value, the determination module 114 can transmit more additional information along with the three-dimensional angle information and the facial expression information, making restoration easy. However, if the network state deteriorates and the packet size of the image control information is adjusted to be smaller than the reference value, the determination module 114 may transmit only the three-dimensional angle information and the facial expression information, making restoration harder. In any case, since the purpose of sending image control information instead of the image is to reduce network traffic by decreasing the data transmission amount, the packet size of the image control information may be kept below a reference value (e.g., 1.2M/frame).
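
A sketch of budget-driven packet assembly under these constraints, assuming hypothetical field sizes and a byte budget derived from the network state (the patent fixes only that the total stays under a reference value):

```python
def assemble_control_packet(angle, expression, extras: dict,
                            budget_bytes: int) -> dict:
    """Always include angle and expression; add extras while budget allows."""
    packet = {"angle": angle, "expression": expression}
    used = 64  # assumed fixed cost of the mandatory fields
    for name, (data, size) in extras.items():
        if used + size > budget_bytes:
            break  # degraded network: drop remaining additional information
        packet[name] = data
        used += size
    return packet
```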

According to various embodiments, the determination module 114 may determine whether the three-dimensional angle information deviates from a predetermined angle range. When it does, the determination module 114 may decide to transmit the image acquired from the camera 130 instead of the image control information. For example, when the user making the video call is replaced by another user, the three-dimensional angle information may deviate significantly from the predetermined range. In this case, since it is difficult for the counterpart electronic device to restore an image similar to the one actually obtained by the camera 130 using only the image control information, the determination module 114 may transmit the image itself instead of the image control information.

The angle application module 115, the facial expression application module 116, and the image change module 117 are modules used when counterpart image control information is received from the counterpart electronic device and the counterpart image is generated. Hereinafter, the modules used when receiving counterpart image control information (the receiving side) will be described.

The angle application module 115 may reflect the counterpart's three-dimensional angle information in the counterpart modeling information. The facial expression application module 116 may reflect the counterpart's facial expression information in the counterpart modeling information. The image change module 117 may change the counterpart image using the counterpart modeling information in which the counterpart three-dimensional angle information and the counterpart facial expression information are reflected. According to an embodiment, when the image control information further includes additional information, the image change module 117 may change the counterpart image by also reflecting the additional information in the counterpart modeling information. The image change module 117 may keep changing the counterpart image by reflecting the counterpart image control information continuously received during the video call in the counterpart modeling information.
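
A receive-side sketch of how these modules might be combined, assuming the control information arrives as a dict and the rendering step is abstracted behind a hypothetical display callback:

```python
def apply_control_info(modeling_info: dict, control: dict) -> dict:
    """Regenerate a counterpart frame from the stored model plus control info."""
    frame = dict(modeling_info)                  # start from the base model
    frame["pose"] = control["angle"]             # angle application module
    frame["expression"] = control["expression"]  # expression application module
    frame.update(control.get("extras", {}))      # reflect additional information
    return frame

def on_control_info_received(modeling_info: dict, control: dict, show):
    show(apply_control_info(modeling_info, control))  # image change + display
```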

According to various embodiments, when the frame rate of the display unit 160 differs from the frame rate of the counterpart image changed by the image control information, the controller 110 may interpolate the counterpart image using an average value of the counterpart three-dimensional angle information. The frame rate indicates the number of frames displayed per second when the image is shown on the display unit 160. When the counterpart image is changed using the received image control information and the frame rate of the changed counterpart image is lower than the frame rate of the display unit 160, the image quality of the counterpart image may be degraded. To solve this degradation problem, the controller 110 can interpolate counterpart images for the missing frames using the average value of the counterpart three-dimensional angle information, so that the frame rate of the changed counterpart image matches the frame rate of the display unit 160.
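
A sketch of this upsampling, under the assumption that 'average value' means simple per-axis averaging of neighboring angle tuples and that the display rate is an integer multiple of the control-info rate:

```python
def average_angles(a: tuple, b: tuple) -> tuple:
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def interpolate_stream(angles: list, source_fps: int, display_fps: int) -> list:
    """Upsample (yaw, pitch, roll) tuples from source_fps to display_fps."""
    out = []
    for prev, curr in zip(angles, angles[1:]):
        out.append(prev)
        for _ in range(display_fps // source_fps - 1):
            out.append(average_angles(prev, curr))  # synthesized frame
    out.append(angles[-1])
    return out

# 15 fps control info shown on a 30 fps display: one averaged frame is
# inserted between every pair of received frames.
frames = interpolate_stream([(0, 0, 0), (20, 0, 0), (20, 30, 5)], 15, 30)
```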

The camera 130 may acquire an image by photographing the user engaged in the video call. The camera 130 is a device capable of capturing still images and moving images. According to various embodiments, the camera 130 may include one or more image sensors (e.g., front or rear), a lens, an image signal processor (ISP), or a flash (e.g., an LED or xenon lamp). The camera 130 can continuously acquire images by photographing the subject of the video call.

The communication unit 120 may transmit the image to the counterpart electronic device and may receive, from the counterpart electronic device, the counterpart image acquired there. The communication unit 120 may also transmit the image control information or the modeling information to the counterpart electronic device, and may receive the counterpart image control information from the counterpart electronic device. The communication unit 120 performs voice communication, video communication, or data communication with an external device via a network under the control of the control unit 110. The communication unit 120 includes a radio-frequency transmitter for up-converting and amplifying the frequency of a transmitted signal, and a radio-frequency receiver for low-noise amplification and down-conversion of the frequency of a received signal. The communication unit 120 may include a mobile communication module (e.g., a 3rd-generation, 3.5th-generation, or 4th-generation mobile communication module), a digital broadcast module (e.g., a DMB module), and a short-range communication module (e.g., a Wi-Fi module, a Bluetooth module, or an NFC module).

The storage unit 140 may store the modeling information or the counterpart modeling information. The storage unit 140 may store data such as photographs, documents, applications, and music, as well as preset values and configured conditions of the electronic device 100. The storage unit 140 may be a secondary memory unit of the electronic device 100 and may include a disk, a RAM, a ROM, a flash memory, and the like.

The display unit 160 may display the image obtained by the camera 130 and the counterpart image. The display unit 160 may display the changed counterpart image under the control of the controller 110. The display unit 160 displays one or more images on the screen under the control of the control unit 110. That is, when the controller 110 processes (e.g., decodes) data into an image to be displayed on the screen and stores it in the buffer, the display unit 160 converts the image stored in the buffer into an analog signal and displays it on the screen. The display unit 160 may include a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), or a flexible display. The display unit 160 of the present invention may be configured as a touch screen capable of receiving input while displaying.

The input unit 150 may include a plurality of keys for receiving numeric or character information and setting various functions. These keys may include a menu retrieval key, a screen on / off key, a power on / off key, and a volume control key. The input unit 150 generates a key event related to the user setting and the function control of the electronic device 100, and transmits the generated key event to the control unit 110. The key event may include a power on / off event, a volume adjustment event, a screen on / off event, a shutter event, and the like. The control unit 110 controls the above-described configurations in response to these key events. Meanwhile, the key of the input unit 150 may be referred to as a hard key, and the virtual key displayed on the display unit 160 may be referred to as a soft key.

The output unit 170 can output the voices of the video call parties (the user and the other party). The output unit 170 may be an audio processing unit and may output audio under the control of the control unit 110. Generally, the audio processing unit is coupled with a speaker (SPK) and a microphone (MIC) to input and output audio signals (e.g., voice data) for voice recognition, voice recording, digital recording, and calls. The audio processing unit receives an audio signal from the microphone or the communication unit 120, converts the received audio signal from digital to analog (D/A conversion), amplifies it, and outputs it to the speaker (SPK). The speaker (SPK) converts the received audio signal into sound waves and outputs them. The microphone (MIC) converts sound waves from people or other sound sources into audio signals.

FIG. 2 is a flowchart illustrating a video call method on the transmitting side according to an embodiment of the present invention. The video call method of the present invention can be performed by the electronic device of FIG. 1.

Referring to FIGS. 1 and 2, at operation 210, the electronic device 100 may acquire an image from the camera 130. Considering that the subject of a video call is usually a person, the image may show the upper body of a person. The camera 130 can acquire the image by photographing the subject of the video call. The electronic device 100 can display the image on the display unit 160 and may continuously photograph the subject during the video call to acquire images.

In operation 210a, the electronic device 100 may transmit modeling information associated with the image. The modeling information is made to resemble the image obtained from the camera 130 and can be used on the receiving side to restore an image similar to the actual image. For example, the electronic device 100 may extract feature points from the image, match the extracted feature points with a reference model, and shape the form of the upper body. The electronic device 100 can generate modeling information similar to the actual image by applying a texture (e.g., human skin color, dots, scratches) to the shaped form. Since the modeling information describes the user of the electronic device making the video call, it can be generated in advance using an image acquired before the video call. Alternatively, the modeling information may be generated using an image acquired during the video call.

In the case of a person, for example, the feature points may be the face contour, head, eyes, nose, mouth, ears, neck, and shoulder line. The reference model may be a basic form for a person, created in advance to generate modeling information for a person. The electronic device 100 may store the modeling information in the storage unit 140 and may transmit it to the counterpart electronic device participating in the video call. Alternatively, the electronic device 100 may determine whether to transmit the modeling information according to the capabilities of the counterpart electronic device, as sketched below. To this end, the electronic device 100 may receive information on the capabilities of the counterpart electronic device during the video call. For example, the electronic device 100 may skip transmitting the modeling information if it determines, based on the counterpart's capabilities, that the counterpart electronic device can generate the modeling information itself. However, when the counterpart electronic device cannot generate the modeling information itself, the electronic device 100 can transmit it.
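
A minimal sketch of that capability check, assuming a hypothetical capability flag exchanged at call setup:

```python
def maybe_send_modeling_info(counterpart_caps: dict, modeling_info, send) -> None:
    """Send modeling information only if the counterpart cannot build its own."""
    if not counterpart_caps.get("can_generate_modeling_info", False):
        send(modeling_info)
    # Otherwise the counterpart generates the model from received images.
```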

In operation 220, the electronic device 100 may extract the three-dimensional angle information by matching the image with the modeling information. The three-dimensional angle information indicates how the pose of the user in the image obtained in real time has changed compared with the previous image; for example, it indicates how the left-right tilt, the up-down tilt, and the rotation direction of the user in the image change. The electronic device 100 may extract three-dimensional angle information including at least one of yaw, pitch, and roll from the image. Taking the three-dimensional coordinate system as an example, the yaw corresponds to the Y-axis, the pitch corresponds to the X-axis, and the roll corresponds to the Z-axis. The yaw indicates the rotation direction in which the user turns left and right, the pitch represents the vertical tilt with which the user tilts up and down, and the roll represents the left-right tilt with which the user tilts toward either shoulder. A detailed description of the three-dimensional angle information is given later with reference to FIG. 6.

In operation 230, the electronic device 100 may extract facial expression information from the image. The electronic device 100 can extract the expression information by comparing the user's face with a table of predefined facial expressions.

In operation 240, the electronic device 100 may generate image control information including the three-dimensional angle information and the facial expression information. The image control information may also include other information that can convey changes in the image. For example, the image control information may further include, as additional information, information that is captured by neither the three-dimensional angle information nor the facial expression information.

In operation 250, the electronic device 100 may transmit the image control information to the counterpart electronic device.

According to various embodiments, the electronic device 100 may adjust the packet size of the image control information according to the network state. The more information is transmitted as image control information, the more easily the counterpart electronic device can restore an image close to the actual image. For example, when the network state is good and the packet size of the image control information is adjusted to be larger than the reference value, the electronic device 100 can transmit more additional information along with the three-dimensional angle information and the facial expression information, making it easy for the counterpart to restore the image. However, when the network state deteriorates and the packet size is adjusted to be smaller than the reference value, the electronic device 100 may transmit only the three-dimensional angle information and the facial expression information, in which case restoration may be more difficult.

According to various embodiments, the electronic device 100 may determine whether the three-dimensional angle information deviates from a predetermined angle range. When it does, the electronic device 100 may decide to transmit the image obtained from the camera 130 instead of the image control information. For example, when the user making the video call is replaced by another user, the three-dimensional angle information may deviate significantly from the predetermined range. In this case, it is difficult for the counterpart electronic device to restore the actual image from the image control information alone, so the electronic device 100 can transmit the image itself. When the image is received, the counterpart electronic device can display it directly without restoration.

FIG. 3 is a diagram illustrating an example of three-dimensional mesh information according to an embodiment of the present invention.

Referring to FIGS. 1 and 3, the electronic device 100 can acquire three-dimensional mesh information by shaping a human face from an image obtained from the camera. Reference numeral 310 denotes three-dimensional mesh information for a face with a left-right tilt of 0 degrees. Reference numeral 320 denotes three-dimensional mesh information for a face whose left tilt is 30 degrees. Reference numeral 330 denotes three-dimensional mesh information for a face whose left tilt is 90 degrees. Although FIG. 3 shows 3D mesh information according to the left tilt only, the electronic device 100 can shape faces at all angles to acquire 3D mesh information.

FIG. 4 is a diagram illustrating an example of generating modeling information according to an embodiment of the present invention.

Referring to FIGS. 1 and 4, the electronic device 100 may acquire an image, such as reference numeral 410 (Input Image), from the camera 130. Reference numeral 420 (Facial feature extraction) shows an example in which the electronic device 100 extracts feature points from the image 410. For example, the feature points may be the eyes, nose, and mouth, as at reference numeral 421, or the head contour and ears, as at reference numeral 422. At reference numeral 440 (Shape deformation), the electronic device 100 can acquire three-dimensional mesh information by matching the feature points with a generic model such as reference numeral 430; examples of such three-dimensional mesh information are shown in FIG. 3. Reference numeral 450 indicates that the electronic device 100 may apply textures such as skin color, dots, scratches, and hair to the 3D mesh information. Upon completion of operation 450, the electronic device 100 may obtain modeling information, such as reference numeral 460.

FIG. 5 is a diagram illustrating an example of extracting three-dimensional angle information according to an embodiment of the present invention.

Referring to FIGS. 1 and 5, the electronic device 100 may acquire an image, such as reference numeral 510, from the camera 130. The electronic device 100 may extract the three-dimensional angle information by matching the image 510 with modeling information such as reference numeral 520. The three-dimensional angle information indicates how the left-right tilt, the up-down tilt, and the rotation direction of the user in the image 510 change, as shown at reference numeral 530. Taking the three-dimensional coordinate system as an example, the rotation direction corresponds to the Y-axis, the up-down tilt corresponds to the X-axis, and the left-right tilt corresponds to the Z-axis. Hereinafter, the three-dimensional angle information is described with reference to FIG. 6.

FIG. 6 is a diagram illustrating an example of three-dimensional angle information according to an embodiment of the present invention.

Referring to FIG. 6, the three-dimensional angle information may include the rotation direction (yaw), the vertical tilt (pitch), and the horizontal tilt (roll) of the user's face. Taking the three-dimensional coordinate system as an example, the yaw corresponds to the Y-axis, the pitch corresponds to the X-axis, and the roll corresponds to the Z-axis. The yaw indicates the rotation direction in which the user turns left and right, the pitch represents the vertical tilt with which the user tilts up and down (or forward and backward), and the roll represents the left-right tilt with which the user tilts toward either shoulder. Reference numeral 610 denotes three-dimensional angle information in which the rotation direction, the up-down tilt, and the left-right tilt are all 0°, with the user looking at the camera 130 head-on. Reference numeral 620 denotes three-dimensional angle information in which the user is tilted 25° to the left: the rotation direction and the vertical tilt are both 0°, and the left tilt is 25°. For example, reference numeral 620 denotes a state in which the user tilts the face toward the left shoulder while looking straight at the camera 130.

Reference numeral 630 denotes three-dimensional angle information in which only the rotation direction is 30°, while the up-down tilt and the left-right tilt are both 0°. For example, reference numeral 630 denotes a state in which the user turns the face to the right relative to looking straight at the camera 130. Reference numeral 640 denotes three-dimensional angle information in which the user is turned to the right, lifted upward, and tilted to the right: the rotation direction is 20°, the upward tilt is 30°, and the rightward tilt is 5°. For example, reference numeral 640 denotes a state in which the user rotates the face 20° to the right, lifts the face 30° upward, and tilts the face 5° to the right relative to the front view of the camera 130.
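
Written as (yaw, pitch, roll) tuples in degrees, the poses of FIG. 6 come out as follows (the sign conventions are assumed here, since the patent describes magnitudes and directions in prose only):

```python
# Poses of FIG. 6 as (yaw, pitch, roll) in degrees, keyed by reference numeral.
POSES = {
    610: (0, 0, 0),    # facing the camera squarely
    620: (0, 0, 25),   # face tilted 25° toward the left shoulder
    630: (30, 0, 0),   # face turned; only the rotation direction is nonzero
    640: (20, 30, 5),  # turned 20° right, lifted 30° up, tilted 5° right
}
```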

FIG. 7 is a diagram illustrating an example of extracting facial expression information according to an embodiment of the present invention.

Referring to FIGS. 1 and 7, the electronic device 100 may acquire an image, such as reference numeral 710, from the camera 130. The electronic device 100 can extract the facial expression information by matching the image 710 to a facial expression table such as reference numeral 720. The facial expression table shown at reference numeral 720 may include predefined expressions such as a smiling expression, a frowning expression, a crying expression, an angry expression, and a surprised expression. For example, the electronic device 100 can extract, as the facial expression information matched to the image 710 in the facial expression table 720, a smiling expression such as reference numeral 730. According to various embodiments, for a region that could not be extracted as facial expression information, the electronic device 100 can transmit the image of that region itself to the counterpart electronic device.

FIG. 8 is a diagram illustrating an example of image control information according to an embodiment of the present invention.

Referring to FIG. 8, the image control information is transmitted to the counterpart electronic device during the video call instead of the image. The image control information may include three-dimensional angle information (3D Angle data), facial expression information such as reference numeral 820 (Face Expression data), and additional information such as reference numeral 830 (Miscellaneous Conf Data). The additional information is information captured by neither the three-dimensional angle information nor the facial expression information. The image control information may further include extra space for other information, such as reference numeral 840 (Reserved). The image control information has a smaller data amount than the image, so by transmitting the image control information instead of the image, the electronic device 100 can both reduce network traffic and transmit more reliably.
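
A minimal sketch of packing such a packet with Python's struct module, assuming example field widths (three 32-bit floats for yaw/pitch/roll, one expression-code byte, a length-prefixed blob of additional information, and a trailing reserved byte); the patent shows the field order of FIG. 8 but not the exact widths:

```python
import struct

HEADER_FMT = "<fffBH"  # yaw, pitch, roll, expression code, misc length

def pack_control_info(yaw: float, pitch: float, roll: float,
                      expression_code: int, misc: bytes) -> bytes:
    header = struct.pack(HEADER_FMT, yaw, pitch, roll, expression_code, len(misc))
    return header + misc + b"\x00"  # trailing reserved byte

def unpack_control_info(data: bytes):
    yaw, pitch, roll, expr, n = struct.unpack_from(HEADER_FMT, data)
    offset = struct.calcsize(HEADER_FMT)
    return yaw, pitch, roll, expr, data[offset:offset + n]

packet = pack_control_info(20.0, 30.0, 5.0, expression_code=1, misc=b"")
```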

FIG. 9 is a flowchart illustrating a video call method on the receiving side according to an embodiment of the present invention. The video call method of FIG. 9 can be performed by the electronic device of FIG. 1.

Referring to FIGS. 1 and 9, the electronic device 100 may receive, from the counterpart electronic device participating in the video call, the counterpart image acquired by the camera of that device, and can display the counterpart image.

At operation 910, the electronic device 100 may receive counterpart modeling information or may generate it. The counterpart modeling information is made to resemble the counterpart image, and the electronic device 100 can either generate it directly or receive it from the counterpart electronic device. For example, the electronic device 100 may extract feature points from the counterpart image and generate the counterpart modeling information by matching the extracted feature points with the reference model.

In operation 920, the electronic device 100 may receive image control information including three-dimensional angle information and facial expression information from the counterpart electronic device. Since the three-dimensional angle information and the facial expression information have been described with reference to FIGS. 6 and 7, respectively, their descriptions are omitted here.

In operation 930, the electronic device 100 may reflect the three-dimensional angle information in the counterpart modeling information.

In operation 940, the electronic device 100 may reflect the facial expression information in the counterpart modeling information.

In operation 950, the electronic device 100 can change the counterpart image using the counterpart modeling information in which the above information is reflected, and display the changed counterpart image. If the image control information further includes additional information, the electronic device 100 may change the counterpart image by also reflecting the additional information in the counterpart modeling information.

According to various embodiments, when the frame rate of the display unit 160 differs from the frame rate of the counterpart image changed by the image control information, the electronic device 100 may interpolate the counterpart image using an average value of the counterpart three-dimensional angle information. The frame rate indicates the number of frames displayed per second when the image is shown on the display unit 160. When the counterpart image is changed using the received image control information and the frame rate of the changed counterpart image is lower than that of the display unit 160, the image quality of the counterpart image may be degraded. To solve this degradation problem, the electronic device 100 can interpolate counterpart images for the missing frames using the average value of the counterpart three-dimensional angle information, so that the frame rate of the changed counterpart image matches that of the display unit 160.

FIG. 10 is a flowchart illustrating a video call method according to various embodiments.

Referring to FIG. 10, a first electronic device and a second electronic device can make a video call. When the video call is connected, the first electronic device can transmit a first image acquired by its first camera to the second electronic device, and the second electronic device can transmit a second image acquired by its second camera to the first electronic device. The terms "first" and "second" merely distinguish the two electronic devices: one of them is called the first electronic device and the other the second electronic device. Likewise, "first" is prefixed to components of the first electronic device, and "second" to components of the second electronic device. This naming is not intended to limit the invention.

In operation 1010, the first electronic device (e.g., transmitting side) may acquire an image from the first camera and display the acquired image. During the video call, the first camera acquires an image in real time.

At operation 1020, the first electronic device can check the network state. The network state can be checked in real time during the video call, and may be at least one of the network state of the first electronic device and the network state of the second electronic device. In various embodiments, the first electronic device may give priority to its own network state over that of the second electronic device. For example, the first electronic device may determine that the network state is 'bad' if its own network state is 'bad' even though the network state of the second electronic device is 'good'. Conversely, the first electronic device may determine that the network state is 'good' if its own network state is 'good' even though that of the second electronic device is 'bad'. The opposite policy is also possible.

If the network state is not 'bad' (i.e., 'good'), then at operation 1070 the first electronic device transmits the image; if the network state is 'bad', the first electronic device may perform operation 1030.

In operation 1030, the first electronic device may extract the three-dimensional angle information by matching the image with the modeling information. The modeling information shapes the upper body of a person and applies a texture, such as human skin color, so as to resemble the actual image. The first electronic device may transmit the modeling information according to the capabilities of the second electronic device. The three-dimensional angle information indicates how the pose of the user in the image obtained in real time has changed compared with the previous image; for example, it indicates how the left-right tilt, the up-down tilt, and the rotation direction of the user in the image change.

In operation 1040, the first electronic device may extract facial expression information from the image by applying a facial expression recognition technique to it. The first electronic device can extract the facial expression information by comparing the user's face with a predefined facial expression table.

In operation 1050, the first electronic device may generate image control information including the three-dimensional angle information and the facial expression information. The image control information may also include other information that can convey changes in the image. For example, the image control information may further include, as additional information, information that is captured by neither the three-dimensional angle information nor the facial expression information.

In operation 1060, the first electronic device may transmit the image control information to the second electronic device.

In operation 1200, the second electronic device may determine whether the data received from the first electronic device is an image. The second electronic device can check the amount or the type of the received data to make this determination. For example, the second electronic device may determine that the data received from the first electronic device is an image if the amount of data exceeds a reference value or if the data type is indicated as "image" or "1". Conversely, the second electronic device can determine that the data received from the first electronic device is image control information when the amount of data is less than the reference value or the data type is indicated as "info" or "0".
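
A receive-side sketch of operation 1200 under these two criteria; the threshold value and flag strings are illustrative assumptions:

```python
REFERENCE_BYTES = 4096  # assumed reference value; the patent leaves it open

def is_image(data: bytes, type_flag: str = "") -> bool:
    """Classify received data as an image or as image control information."""
    if type_flag:
        return type_flag in ("image", "1")  # explicit type indication
    return len(data) > REFERENCE_BYTES      # large payloads are full images
```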

If the received data is an image, at operation 1210 the second electronic device displays the received image; if it is not an image, the second electronic device may perform operation 1220. If the received data is not an image, the data received at operation 1200 is image control information.

In operation 1220, the second electronic device may receive modeling information associated with the image, or may generate the modeling information. In the present invention, it is assumed that the second electronic device generates the modeling information.

In operation 1230, the second electronic device may reflect the three-dimensional angle information contained in the image control information in the modeling information.

At operation 1240, the second electronic device may reflect facial expression information in the modeling information.

At operation 1250, the second electronic device may use the modeling information in which the above information is reflected to change the previously received image, and display the changed image. If the image control information further includes additional information, the second electronic device may also reflect the additional information in the modeling information to change the image.

FIG. 11 is a block diagram of an electronic device according to various embodiments.

Referring to FIG. 11, the electronic device 1100 may constitute, for example, all or part of the electronic device 100 shown in FIG. 1. The electronic device 1100 includes at least one application processor (AP) 1110, a communication module 1120, a subscriber identification module (SIM) card 1124, a memory 1130, a sensor module 1140, an input device 1150, a display 1160, an interface 1170, an audio module 1180, a camera module 1191, a power management module 1195, a battery 1196, an indicator 1197, and a motor 1198.

The AP 1110 may control a plurality of hardware or software components connected to it by running an operating system or application programs, and may perform various kinds of data processing and computation, including on multimedia data. The AP 1110 may be implemented, for example, as a system on chip (SoC). According to one embodiment, the AP 1110 may further include a graphics processing unit (GPU) (not shown).

The communication module 1120 (e.g., the communication unit 120) may transmit and receive data in communication between the electronic device 1100 (e.g., the electronic device 100) and other electronic devices connected via a network. According to one embodiment, the communication module 1120 may include a cellular module 1121, a Wi-Fi module 1123, a BT module 1125, a GPS module 1127, an NFC module 1128, and a radio frequency (RF) module 1129.

The cellular module 1121 may provide voice, video, text, or Internet services over a communication network (e.g., LTE, LTE-A, CDMA, WCDMA, UMTS, WiBro or GSM). In addition, the cellular module 1121 can perform identification and authentication of electronic devices within the communication network, for example, using a subscriber identity module (e.g., SIM card 1124). According to one embodiment, the cellular module 1121 may perform at least some of the functions that the AP 1110 may provide. For example, the cellular module 1121 may perform at least some of the multimedia control functions.

According to one embodiment, the cellular module 1121 may include a communication processor (CP). The cellular module 1121 may also be implemented, for example, as an SoC. Although in FIG. 11 components such as the cellular module 1121 (e.g., a communication processor), the memory 1130, and the power management module 1195 are illustrated as separate from the AP 1110, according to one embodiment the AP 1110 may include at least a portion of the aforementioned components (e.g., the cellular module 1121).

According to one embodiment, the AP 1110 or the cellular module 1121 (e.g., a communication processor) may load, into volatile memory, commands or data received from a non-volatile memory or from at least one of the other components connected to it, and process them. In addition, the AP 1110 or the cellular module 1121 may store, in non-volatile memory, data received from or generated by at least one of the other components.

Each of the Wi-Fi module 1123, the BT module 1125, the GPS module 1127, and the NFC module 1128 may include, for example, a processor for processing data transmitted and received through the corresponding module. Although the cellular module 1121, the Wi-Fi module 1123, the BT module 1125, the GPS module 1127, and the NFC module 1128 are shown as separate blocks in FIG. 11, according to one embodiment at least some (e.g., two or more) of them may be included in one integrated chip (IC) or IC package. For example, at least some of the processors corresponding to the cellular module 1121, the Wi-Fi module 1123, the BT module 1125, the GPS module 1127, and the NFC module 1128 (e.g., the communication processor corresponding to the cellular module 1121 and the Wi-Fi processor corresponding to the Wi-Fi module 1123) may be implemented in a single SoC.

The RF module 1129 can transmit and receive data, for example RF signals. The RF module 1129 may include, for example, a transceiver, a power amplifier module (PAM), a frequency filter, or a low-noise amplifier (LNA). The RF module 1129 may further include a component for transmitting and receiving electromagnetic waves in free space in wireless communication, for example, a conductor or a conducting wire. Although in FIG. 11 the cellular module 1121, the Wi-Fi module 1123, the BT module 1125, the GPS module 1127, and the NFC module 1128 are shown sharing one RF module 1129, according to one embodiment at least one of them may transmit and receive RF signals through a separate RF module.

The SIM card 1124 may be a card including a subscriber identity module and may be inserted into a slot formed at a specific location of the electronic device. The SIM card 1124 may include unique identification information (e.g., an integrated circuit card identifier (ICCID)) or subscriber information (e.g., international mobile subscriber identity (IMSI)).

The memory 1130 (e.g., the memory 130) may include an internal memory 1132 or an external memory 1134. The internal memory 1132 may include at least one of a volatile memory (e.g., a dynamic RAM (DRAM), a static RAM (SRAM), or a synchronous dynamic RAM (SDRAM)) or a non-volatile memory (e.g., a one-time programmable ROM (OTPROM), a programmable ROM (PROM), an erasable and programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), a mask ROM, a flash ROM, a NAND flash memory, or the like).

According to one embodiment, the internal memory 1132 may be a solid state drive (SSD). The external memory 1134 may be a flash drive, for example, a compact flash (CF), a secure digital (SD), a micro secure digital (micro-SD), a mini secure digital (mini-SD), an extreme digital (xD) card, or the like. The external memory 1134 may be functionally connected to the electronic device 1100 through various interfaces. According to one embodiment, the electronic device 1100 may further include a storage device (or storage medium) such as a hard drive.

The sensor module 1140 may measure a physical quantity or sense an operating state of the electronic device 1100 and convert the measured or sensed information into an electrical signal. The sensor module 1140 may include, for example, at least one of a gesture sensor 1140A, a gyro sensor 1140B, an atmospheric pressure sensor 1140C, a magnetic sensor 1140D, an acceleration sensor 1140E, a grip sensor 1140F, a proximity sensor 1140G, a color sensor 1140H (e.g., an RGB (red, green, blue) sensor), a biometric sensor 1140I, a temperature/humidity sensor 1140J, an illuminance sensor 1140K, or an ultraviolet (UV) sensor 1140M. Additionally or alternatively, the sensor module 1140 may include, for example, an E-nose sensor (not shown), an electromyography (EMG) sensor (not shown), an electroencephalogram (EEG) sensor (not shown), an electrocardiogram (ECG) sensor (not shown), an infrared (IR) sensor (not shown), an iris sensor (not shown), or a fingerprint sensor (not shown). The sensor module 1140 may further include a control circuit for controlling at least one sensor included therein.
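By way of illustration only (the register address, sensitivity constant, and all names below are hypothetical assumptions, not part of the claimed subject matter), the following minimal Python sketch shows how a host processor might read one such sensor, e.g. the gyro sensor 1140B, and convert the sensed electrical signal into physical units:

from dataclasses import dataclass

# Hypothetical register address and sensitivity; a real sensor module
# defines these values in its datasheet.
GYRO_REG = 0x43
GYRO_SENSITIVITY = 131.0  # raw LSB per degree/second at a typical full-scale setting

@dataclass
class GyroReading:
    x_dps: float  # angular rate around X, in degrees per second
    y_dps: float
    z_dps: float

def read_gyro(bus_read) -> GyroReading:
    """Read six raw bytes from the (hypothetical) gyro registers and
    convert the sensed electrical signal into physical units."""
    raw = bus_read(GYRO_REG, 6)  # bus_read: an injected I2C/SPI read function

    def to_int16(hi, lo):
        v = (hi << 8) | lo
        return v - 65536 if v >= 32768 else v

    return GyroReading(
        x_dps=to_int16(raw[0], raw[1]) / GYRO_SENSITIVITY,
        y_dps=to_int16(raw[2], raw[3]) / GYRO_SENSITIVITY,
        z_dps=to_int16(raw[4], raw[5]) / GYRO_SENSITIVITY,
    )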

The input device 1150 may include a touch panel 1152, a (digital) pen sensor 1154, a key 1156, or an ultrasonic input device 1158. The touch panel 1152 may recognize a touch input in at least one of a capacitive, resistive, infrared, or ultrasonic manner. The touch panel 1152 may further include a control circuit. In the capacitive manner, physical contact or proximity recognition is possible. The touch panel 1152 may further include a tactile layer, in which case the touch panel 1152 may provide a tactile response to the user.

The (digital) pen sensor 1154 may be implemented, for example, using a method identical or similar to receiving a user's touch input, or using a separate recognition sheet. The key 1156 may include, for example, a physical button, an optical key, or a keypad. The ultrasonic input device 1158 identifies data by sensing, through a microphone (e.g., the microphone 1188) of the electronic device 1100, sound waves generated by an input tool that produces an ultrasonic signal, and is capable of wireless recognition. According to one embodiment, the electronic device 1100 may use the communication module 1120 to receive user input from an external device (e.g., a computer or a server) connected thereto.

The display 1160 (e.g., the display 150) may include a panel 1162, a hologram device 1164, or a projector 1166. The panel 1162 may be, for example, a liquid-crystal display (LCD) or an active-matrix organic light-emitting diode (AM-OLED) display. The panel 1162 may be implemented, for example, to be flexible, transparent, or wearable. The panel 1162 and the touch panel 1152 may be configured as one module. The hologram device 1164 can display a stereoscopic image in the air using interference of light. The projector 1166 can display an image by projecting light onto a screen. The screen may be located, for example, inside or outside the electronic device 1100. According to one embodiment, the display 1160 may further include a control circuit for controlling the panel 1162, the hologram device 1164, or the projector 1166.

The interface 1170 may include, for example, a high-definition multimedia interface (HDMI) 1172, a universal serial bus (USB) 1174, an optical interface 1176, or a D-subminiature (D-sub) interface. The interface 1170 may be included, for example, in the communication unit 120 shown in FIG. 1. Additionally or alternatively, the interface 1170 may include, for example, a mobile high-definition link (MHL) interface, a secure digital (SD) card / multi-media card (MMC) interface, or an infrared data association (IrDA) interface.

The audio module 1180 may convert sound into an electrical signal and vice versa. At least some components of the audio module 1180 may be included, for example, in the output unit 170 shown in FIG. 1. The audio module 1180 may process sound information input or output through, for example, a speaker 1182, a receiver 1184, an earphone 1186, or a microphone 1188.

The camera module 1191 is a device capable of capturing still images and moving images. According to one embodiment, the camera module 1191 may include one or more image sensors (e.g., a front sensor or a rear sensor), a lens (not shown), an image signal processor (ISP, not shown), or a flash (not shown) such as an LED or a xenon lamp.
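As a rough illustration of acquiring a still image from such a camera module, the following Python sketch uses OpenCV as a stand-in for the device camera stack; the patent does not prescribe any particular API, so the device index and function name are assumptions of this example:

import cv2  # OpenCV, used here only as a generic camera interface

def acquire_still_image(device_index: int = 0):
    """Grab a single frame from the camera, analogous to acquiring
    the first image that the video call method operates on."""
    cap = cv2.VideoCapture(device_index)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("no frame could be acquired from the camera")
        return frame  # H x W x 3 array (BGR channel order)
    finally:
        cap.release()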

The power management module 1195 may manage the power of the electronic device 1100. Although not shown, the power management module 1195 may include, for example, a power management integrated circuit (PMIC), a charger integrated circuit (charger IC), or a battery or fuel gauge.

The PMIC may be mounted, for example, in an integrated circuit or an SoC semiconductor. Charging methods may be classified into wired and wireless. The charger IC may charge the battery and may prevent overvoltage or overcurrent from the charger. According to one embodiment, the charger IC may include a charger IC for at least one of a wired charging scheme or a wireless charging scheme. The wireless charging scheme may be, for example, a magnetic resonance scheme, a magnetic induction scheme, or an electromagnetic wave scheme, and additional circuits for wireless charging, such as a coil loop, a resonance circuit, or a rectifier, may be added.

The battery gauge may measure, for example, the remaining amount of the battery 1196, or its voltage, current, or temperature during charging. The battery 1196 may store or generate electricity and supply power to the electronic device 1100 using the stored or generated electricity. The battery 1196 may include, for example, a rechargeable battery or a solar battery.

The indicator 1197 may indicate a specific state of the electronic device 1100 or a part thereof (e.g., the AP 1110), for example, a booting state, a message state, or a charging state. The motor 1198 may convert an electrical signal into mechanical vibration. Although not shown, the electronic device 1100 may include a processing device (e.g., a GPU) for mobile TV support. The processing device for mobile TV support may process media data conforming to standards such as digital multimedia broadcasting (DMB), digital video broadcasting (DVB), or MediaFLO.

Each of the above-described components of the electronic device according to various embodiments of the present invention may be composed of one or more parts, and the name of a component may vary according to the type of the electronic device. The electronic device according to various embodiments of the present invention may include at least one of the above-described components; some components may be omitted, or additional components may be further included. In addition, some of the components of the electronic device according to various embodiments of the present invention may be combined into a single entity that performs the same functions as the components before combination.

It is to be understood that both the foregoing general description and the following detailed description of the present invention are exemplary and explanatory and are intended to provide further explanation of the invention as claimed. Accordingly, all changes or modifications derived from the technical idea of the present invention should be construed as falling within the scope of the present invention.

100: Electronic device
110: control unit 111: angle extracting module
112: facial expression extraction module 113: information generation module
114: determining module 115: angle applying module
116: facial expression applying module 117: image changing module
120: communication unit 130: camera
140: storage unit 150: input unit
160: display unit 170: output unit

Claims (20)

1. A method of operating an electronic device, the method comprising:
acquiring a first image from a camera;
extracting three-dimensional angle information by matching the first image with modeling information;
generating image control information including the three-dimensional angle information;
transmitting the image control information to a counterpart electronic device for a video call; and
generating a second image based on counterpart image control information received from the counterpart electronic device.
2. The method of claim 1, wherein generating the second image comprises:
changing the second image by reflecting counterpart three-dimensional angle information, included in the counterpart image control information, in counterpart modeling information; and
displaying the second image on a display unit.
3. The method of claim 2, further comprising:
receiving the counterpart modeling information from the counterpart electronic device; or
generating the counterpart modeling information by matching feature points extracted from the second image received from the counterpart electronic device with a reference model.
4. The method of claim 2, further comprising interpolating the second image using an average value of the counterpart three-dimensional angle information when a frame rate of the display unit differs from a frame rate of the second image changed according to the image control information.
5. The method of claim 1, further comprising:
checking a network status during the video call; and
determining, according to the network status, whether to transmit the first image acquired from the camera or to transmit the image control information.
6. The method of claim 1, wherein extracting the three-dimensional angle information comprises extracting, from the first image, three-dimensional angle information including at least one of a yaw, a pitch, and a roll.
7. The method of claim 1, further comprising extracting facial expression information from the first image by applying a facial expression recognition technique to the first image.
8. The method of claim 7, wherein the image control information includes the three-dimensional angle information and the facial expression information.
9. The method of claim 1, further comprising:
checking a network status during the video call; and
adjusting a packet size of the image control information according to the network status.
10. The method of claim 1, further comprising:
determining whether the three-dimensional angle information deviates from a predetermined angle; and
transmitting the first image acquired from the camera when the three-dimensional angle information deviates from the predetermined angle.
11. An electronic device comprising:
a camera for acquiring an image;
a control unit for extracting three-dimensional angle information by matching the image with modeling information, and generating image control information including the three-dimensional angle information;
a communication unit for transmitting the image control information to a counterpart electronic device for a video call and receiving counterpart image control information from the counterpart electronic device; and
a display unit for displaying the acquired image and a counterpart image generated by the control unit based on the counterpart image control information.
12. The apparatus of claim 11, wherein the control unit changes the counterpart image by reflecting counterpart three-dimensional angle information, included in the counterpart image control information, in counterpart modeling information.
13. The apparatus of claim 12, wherein the communication unit receives the counterpart modeling information from the counterpart electronic device, or the control unit generates the counterpart modeling information by matching feature points extracted from the counterpart image received from the counterpart electronic device with a reference model.
14. The apparatus of claim 12, wherein the control unit interpolates the counterpart image using an average value of the counterpart three-dimensional angle information when a frame rate of the display unit differs from a frame rate of the counterpart image changed according to the image control information.
15. The apparatus of claim 11, wherein the control unit determines, according to a network status, whether to transmit the image acquired from the camera or to transmit the image control information.
16. The apparatus of claim 11, wherein the control unit extracts, from the image, three-dimensional angle information including a yaw, a pitch, and a roll.
17. The apparatus of claim 11, wherein the control unit extracts facial expression information from the image by applying a facial expression recognition technique to the image.
18. The apparatus of claim 17, wherein the control unit generates image control information including the three-dimensional angle information and the facial expression information.
19. The apparatus of claim 11, wherein the control unit adjusts a packet size of the image control information according to a network status.
20. The apparatus of claim 11, wherein the control unit determines whether the three-dimensional angle information deviates from a predetermined angle, and transmits the image acquired from the camera when the three-dimensional angle information deviates from the predetermined angle.
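The claimed operations can be summarized in a short, non-normative Python sketch. The packet layout, the 45-degree threshold, and every identifier below are illustrative assumptions, not the patent's specification; the sketch only shows how compact image control information (yaw, pitch, and roll angles plus optional facial expression information) could stand in for full video frames when the network degrades:

import struct
from dataclasses import dataclass

@dataclass
class ImageControlInfo:
    yaw: float    # degrees
    pitch: float  # degrees
    roll: float   # degrees
    expression: str = ""  # optional facial expression label (claims 7-8, 17-18)

    def pack(self, include_expression: bool = True) -> bytes:
        """Serialize into a compact packet; per claims 9 and 19 the packet
        size can be reduced (angles only) when the network status is poor."""
        head = struct.pack("<3f", self.yaw, self.pitch, self.roll)  # 12 bytes
        if include_expression and self.expression:
            return head + self.expression.encode("utf-8")
        return head

MAX_ANGLE = 45.0  # assumed value for the "predetermined angle" of claims 10 and 20

def choose_payload(info: ImageControlInfo, raw_frame: bytes, network_good: bool) -> bytes:
    """Claims 5 and 10: send the actual camera image when the network allows
    it, or when the face turns too far for model-based rendering; otherwise
    send only the compact control information."""
    out_of_range = any(abs(a) > MAX_ANGLE for a in (info.yaw, info.pitch, info.roll))
    if network_good or out_of_range:
        return raw_frame
    return info.pack(include_expression=False)

def interpolate_angles(a: ImageControlInfo, b: ImageControlInfo) -> ImageControlInfo:
    """Claims 4 and 14: when the display frame rate exceeds the rate of the
    received control information, synthesize an intermediate pose from the
    average of two consecutive angle values."""
    return ImageControlInfo(
        yaw=(a.yaw + b.yaw) / 2.0,
        pitch=(a.pitch + b.pitch) / 2.0,
        roll=(a.roll + b.roll) / 2.0,
        expression=b.expression,
    )

On the receiving side, successive ImageControlInfo values would be applied to the counterpart modeling information to re-render the counterpart image, with interpolated poses filling the gaps between packets.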
KR1020140088439A 2014-07-14 2014-07-14 Video call method and apparatus KR20160008357A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140088439A KR20160008357A (en) 2014-07-14 2014-07-14 Video call method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140088439A KR20160008357A (en) 2014-07-14 2014-07-14 Video call method and apparatus

Publications (1)

Publication Number Publication Date
KR20160008357A true KR20160008357A (en) 2016-01-22

Family

ID=55308890

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140088439A KR20160008357A (en) 2014-07-14 2014-07-14 Video call method and apparatus

Country Status (1)

Country Link
KR (1) KR20160008357A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180122636A1 (en) * 2016-10-31 2018-05-03 Semiconductor Manufacturing International (Shanghai) Corporation Manufacturing method of semiconductor device
CN109714588A (en) * 2019-02-16 2019-05-03 深圳市未来感知科技有限公司 Multi-viewpoint stereo image positions output method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US20160086386A1 (en) Method and apparatus for screen capture
US10365882B2 (en) Data processing method and electronic device thereof
CN105554369B (en) Electronic device and method for processing image
KR102294945B1 (en) Function controlling method and electronic device thereof
KR102031874B1 (en) Electronic Device Using Composition Information of Picture and Shooting Method of Using the Same
KR102220443B1 (en) Apparatas and method for using a depth information in an electronic device
US20160063767A1 (en) Method for providing visual reality service and apparatus for the same
KR20150136440A (en) Method for controlling display and electronic device supporting the same
KR20160020189A (en) Method and apparatus for processing image
KR102262086B1 (en) Apparatus and method for processing image
US10848669B2 (en) Electronic device and method for displaying 360-degree image in the electronic device
KR20160024168A (en) Method for controlling display in electronic device and the electronic device
KR102219456B1 (en) Method and apparatus for rendering contents
KR20160055337A (en) Method for displaying text and electronic device thereof
KR20150120147A (en) Apparatus and method for providing communication service information
US10408616B2 (en) Method for acquiring sensor data and electronic device thereof
KR20150137504A (en) Method for image processing and electronic device implementing the same
KR20150141426A (en) Electronic device and method for processing an image in the electronic device
KR20150135895A (en) Method for processing image and electronic device thereof
KR20150098161A (en) Method for creating a content and an electronic device thereof
KR20160135476A (en) Electronic device and Method for controlling a camera of the same
KR20150099050A (en) Media data synchronization method and device
KR20150081751A (en) Image processing method and electronic device implementing the same
KR20160002132A (en) Electronic device and method for providing sound effects
KR20160008357A (en) Video call method and apparatus

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination