US20210112104A1 - Method and system for group call using whisper - Google Patents
- Publication number
- US20210112104A1 (application US17/069,526; US202017069526A)
- Authority
- US
- United States
- Prior art keywords
- whisper
- group
- participant
- participants
- packet
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04L65/611—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for multicast or broadcast
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
- G06F3/0486—Drag-and-drop
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- H04L12/1822—Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
- H04L51/10—User-to-user messaging in packet-switching networks characterised by the inclusion of multimedia information
- H04L65/1093—In-session procedures by adding participants; by removing participants
- H04L65/403—Arrangements for multi-party communication, e.g. for conferences
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
- H04M1/27475—Methods of retrieving data using interactive graphical means or pictorial representations
Definitions
- Apparatuses, systems, and methods according to example embodiments relate to group calling.
- a conference call refers to a group call that allows a plurality of users to talk at the same time.
- a group call may be implemented using a real-time call service, and may transfer audio data and media, such as video, to a call counterpart.
- data transferred through the group call is transmitted to each of the users participating in a corresponding instance of the group call. For example, in an instance of a group call in which five users participate, including user 1, user 2, user 3, user 4, and user 5, audio/video data of user 1 is transferred to each of the remaining users, that is, user 2, user 3, user 4, and user 5. That is, in the group call according to the related art, data is broadcast at all times.
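The related-art broadcast behavior described above can be sketched as follows. This is an illustrative model only, with assumed names, not the patent's actual server implementation: the server relays each participant's media packet to every other participant in the session.

```python
# Hypothetical sketch: in a conventional group call, the server relays
# each participant's media packet to every other participant.

def broadcast(sender: str, participants: list[str], packet: bytes) -> dict[str, bytes]:
    """Return the per-recipient delivery map for one media packet."""
    # Every participant except the sender receives the packet.
    return {p: packet for p in participants if p != sender}

deliveries = broadcast("user1", ["user1", "user2", "user3", "user4", "user5"], b"audio-frame")
# user1's frame reaches the four remaining participants.
assert sorted(deliveries) == ["user2", "user3", "user4", "user5"]
```

The whisper mechanism introduced below replaces this unconditional fan-out with selective delivery.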
- Example embodiments provide a group call method, apparatus and system in which a desired participant or participant group may be spontaneously selected during the progress of a group call to allow whispering with the selected participant or participant group.
- a non-transitory computer-readable record medium storing instructions that, when executed by a processor of a computer apparatus including a touchscreen and an input device, cause the computer apparatus to execute a group call method including: participating in a group call session with a plurality of participants; designating at least one participant among the plurality of participants as a whisper target based on a first touch gesture on the touchscreen during the group call session; generating a whisper packet configured to control a server to transfer, only to the whisper target, at least one of video and audio that are input through the input device while the whisper target is designated; and transmitting the whisper packet to the server through the group call session.
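One way to picture the whisper packet and the server-side routing it controls is sketched below. The field names and wire format are assumptions for illustration, not the patent's actual format: the packet carries the designated whisper target(s) together with the captured media, and the server forwards it only to those targets.

```python
# Illustrative sketch (names and layout assumed): a whisper packet lists
# its targets, and the server delivers the payload only to those targets.

from dataclasses import dataclass

@dataclass
class WhisperPacket:
    sender: str
    targets: frozenset[str]  # the designated whisper target(s)
    payload: bytes           # audio/video input while the target is designated

def route(packet: WhisperPacket, session_participants: set[str]) -> dict[str, bytes]:
    # Deliver only to whisper targets still present in the group call
    # session; every other participant receives nothing.
    recipients = packet.targets & session_participants
    return {p: packet.payload for p in recipients}

pkt = WhisperPacket("user1", frozenset({"user3"}), b"whispered-audio")
out = route(pkt, {"user1", "user2", "user3", "user4", "user5"})
assert out == {"user3": b"whispered-audio"}
```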
- the designating may include: displaying a plurality of objects respectively corresponding to the plurality of participants on the touchscreen; identifying an object from among the plurality of objects indicated by a touch on the touchscreen that is maintained for a preset period of time; and designating a participant corresponding to the identified object as the whisper target while the touch is maintained.
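The hold-to-whisper designation above can be sketched as a small state machine. The hold threshold value and class names are assumptions: a participant becomes the whisper target once the touch on their object has been maintained for the preset period, and the designation ends when the touch is released.

```python
# Minimal sketch (timing value assumed) of long-press whisper designation.

HOLD_THRESHOLD = 0.5  # preset period of time, in seconds (assumed value)

class WhisperDesignation:
    def __init__(self):
        self.touch_started_at = None
        self.touched_participant = None

    def touch_down(self, participant: str, t: float):
        self.touch_started_at = t
        self.touched_participant = participant

    def touch_up(self):
        # Releasing the touch ends the designation.
        self.touch_started_at = None
        self.touched_participant = None

    def whisper_target(self, now: float):
        """Return the designated target, or None if not held long enough."""
        if self.touch_started_at is None:
            return None
        if now - self.touch_started_at < HOLD_THRESHOLD:
            return None
        return self.touched_participant

d = WhisperDesignation()
d.touch_down("user2", t=10.0)
assert d.whisper_target(now=10.1) is None     # not yet held for the preset period
assert d.whisper_target(now=10.6) == "user2"  # designated while the touch is maintained
d.touch_up()
assert d.whisper_target(now=10.7) is None     # designation ends on release
```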
- the designating may include: setting a whisper group including the at least one participant during the group call session; activating the whisper group based on a second touch gesture on the touchscreen; and designating the at least one participant included in the whisper group as the whisper target while the whisper group is active.
- the setting of the whisper group may include setting at least two whisper groups, each of which includes at least one participant, and the activating of the whisper group may include selectively activating a single whisper group among the at least two whisper groups based on the second touch gesture.
- the whisper group may be automatically set for each of the at least one participant included in the whisper group through the server based on the whisper group being set.
- the setting of the whisper group may include: displaying a plurality of objects respectively corresponding to the plurality of participants and a whisper group setting area on the touchscreen; identifying an object from among the plurality of objects that is moved to the whisper group setting area through a drag-and-drop gesture; and including a participant from among the plurality of participants that corresponds to the identified object in the whisper group.
- the setting of the whisper group may include: displaying a plurality of objects respectively corresponding to the plurality of participants and a whisper group generation button in a whisper group generation area on the touchscreen; activating a whisper group member selection mode based on selection of the whisper group generation button; identifying an object indicated by a touch on the touchscreen in a state in which the whisper group member selection mode is active; and including a participant from among the plurality of participants that corresponds to the identified object in the whisper group.
- the activating may include activating the whisper group based on a swipe gesture in a first direction on the touchscreen.
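The whisper-group behavior in the bullets above can be sketched as follows. The class and method names are assumptions: multiple groups may be set during the session, at most one is active at a time (selected, e.g., by a swipe gesture), and members of the active group are the whisper targets.

```python
# Hypothetical sketch of whisper groups: set several, activate one.

class WhisperGroups:
    def __init__(self):
        self.groups: dict[str, set[str]] = {}
        self.active: str | None = None

    def set_group(self, name: str, members: set[str]):
        # A whisper group including at least one participant.
        self.groups[name] = members

    def activate(self, name: str):
        # E.g., triggered by a swipe gesture in a first direction.
        self.active = name

    def whisper_targets(self) -> set[str]:
        # Members of the active group are designated as whisper targets.
        if self.active is None:
            return set()
        return self.groups[self.active]

g = WhisperGroups()
g.set_group("A", {"user2", "user3"})
g.set_group("B", {"user4"})
g.activate("A")
assert g.whisper_targets() == {"user2", "user3"}
g.activate("B")  # selectively activating a single group among the two
assert g.whisper_targets() == {"user4"}
```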
- the group call method may further include: receiving a whisper packet from the server; modulating an audio signal indicated by the received whisper packet; and outputting the modulated audio through an output device of the computer apparatus.
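The receive-side modulation step above can be illustrated with a very simple transform. The patent does not specify the modulation; the amplitude scaling and gain value here are assumptions, chosen only to show that whispered audio is altered before playback so it sounds distinct from regular call audio.

```python
# Minimal sketch (modulation choice and gain assumed): attenuate the
# audio carried by an incoming whisper packet before playing it out.

def modulate_whisper(samples: list[int], gain: float = 0.5) -> list[int]:
    """Scale whispered audio samples so the whisper is heard more quietly."""
    return [int(s * gain) for s in samples]

assert modulate_whisper([100, -200, 300]) == [50, -100, 150]
```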
- the group call method may further include: displaying a plurality of objects respectively corresponding to the plurality of participants on the touchscreen; receiving a whisper packet from the server; and highlighting and displaying an object from among the plurality of objects corresponding to a participant having transmitted the received whisper packet.
- a group call method including: participating in a group call session with a plurality of participants; designating at least one participant among the plurality of participants as a whisper target based on a first touch gesture on a touchscreen during the group call session; generating a whisper packet configured to control a server to transfer, only to the whisper target, at least one of video and audio that are input through an input device while the whisper target is designated; and transmitting the whisper packet to the server through the group call session.
- the designating may include: displaying a plurality of objects respectively corresponding to the plurality of participants on the touchscreen; identifying an object from among the plurality of objects indicated by a touch on the touchscreen that is maintained for a preset period of time; and designating a participant corresponding to the identified object as the whisper target while the touch is maintained.
- the designating may include: setting a whisper group including the at least one participant during the group call session; activating the whisper group based on a second touch gesture; and designating the at least one participant included in the whisper group as the whisper target while the whisper group is active.
- the setting of the whisper group may include setting at least two whisper groups, each of which includes at least one participant, and the activating of the whisper group may include selectively activating a single whisper group among the at least two whisper groups based on the second touch gesture.
- the whisper group may be automatically set for each of the at least one participant included in the whisper group through the server based on the whisper group being set.
- the setting of the whisper group may include: displaying a plurality of objects respectively corresponding to the plurality of participants and a whisper group setting area on the touchscreen; identifying an object from among the plurality of objects that is moved to the whisper group setting area through a drag-and-drop gesture; and including a participant from among the plurality of participants that corresponds to the identified object in the whisper group.
- the setting of the whisper group may include: displaying a plurality of objects respectively corresponding to the plurality of participants and a whisper group generation button in a whisper group generation area on the touchscreen; activating a whisper group member selection mode based on selection of the whisper group generation button; identifying an object indicated by a touch on the touchscreen in a state in which the whisper group member selection mode is active; and including a participant from among the plurality of participants that corresponds to the identified object in the whisper group.
- the activating may include activating the whisper group based on a swipe gesture in a first direction on the touchscreen.
- the group call method may further include: receiving a whisper packet from the server; modulating an audio signal indicated by the received whisper packet; and outputting the modulated audio through an output device.
- the group call method may further include: displaying a plurality of objects respectively corresponding to the plurality of participants on the touchscreen; receiving a whisper packet from the server; and highlighting and displaying an object from among the plurality of objects corresponding to a participant having transmitted the received whisper packet.
- a computer apparatus including: a touchscreen; an input device; at least one memory configured to store computer-readable instructions; and at least one processor configured to execute the computer-readable instructions to: participate in a group call session with a plurality of participants, designate at least one participant among the plurality of participants as a whisper target based on a first touch gesture on the touchscreen during the group call session, generate a whisper packet configured to control a server to transfer, only to the whisper target, at least one of video and audio that are input through the input device while the whisper target is designated, and transmit the whisper packet to the server through the group call session.
- the at least one processor may be further configured to execute the computer-readable instructions to: display a plurality of objects respectively corresponding to the plurality of participants on the touchscreen, identify an object from among the plurality of objects indicated by a touch on the touchscreen that is maintained for a preset period of time, and designate a participant corresponding to the identified object as the whisper target while the touch is maintained.
- the at least one processor may be further configured to execute the computer-readable instructions to: set a whisper group including the at least one participant during the group call session, activate the whisper group based on a second touch gesture on the touchscreen, and designate the at least one participant included in the whisper group as the whisper target while the whisper group is active.
- the computer apparatus may further include an output device, and the at least one processor is further configured to execute the computer-readable instructions to: display a plurality of objects respectively corresponding to the plurality of participants on the touchscreen, receive a whisper packet from the server, modulate an audio signal indicated by the received whisper packet, output the modulated audio through the output device included in the computer apparatus, and highlight an object from among the plurality of objects corresponding to a participant having transmitted the received whisper packet.
- FIG. 1 is a diagram illustrating an example of a network environment according to at least one example embodiment.
- FIG. 2 is a diagram illustrating an example of a computer apparatus according to at least one example embodiment.
- FIG. 3 illustrates an example of a group call system according to at least one example embodiment.
- FIGS. 4 to 7 illustrate examples of a group call process according to at least one example embodiment.
- FIG. 8 illustrates an example of an extended container format based on an extended transfer protocol according to at least one example embodiment.
- FIGS. 9 and 10 illustrate examples of a screen for selecting a whisper target according to at least one example embodiment.
- FIG. 11 illustrates an example of a screen in the case of receiving a whisper packet according to at least one example embodiment.
- FIGS. 12 and 13 illustrate examples of setting a whisper group according to at least one example embodiment.
- FIGS. 14 and 15 illustrate other examples of setting a whisper group according to at least one example embodiment.
- FIG. 16 illustrates an example of activating a whisper group according to at least one example embodiment.
- FIG. 17 illustrates an example of a screen in the case of being included in a whisper group and receiving a whisper according to at least one example embodiment.
- FIG. 18 is a flowchart illustrating an example of a group call method according to at least one example embodiment.
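The "extended container format" that FIG. 8 alludes to is not reproduced here. Purely as a speculative illustration, with a field layout that is entirely assumed, a whisper flag and a list of target identifiers could be carried ahead of the media payload:

```python
# Speculative sketch of an extended container format (layout assumed):
# [is_whisper: u8][target_count: u8][targets: u32 each][payload]

import struct

def pack_whisper(targets: list[int], payload: bytes) -> bytes:
    """Serialize a whisper flag and target IDs ahead of the media payload."""
    header = struct.pack("!BB", 1, len(targets))
    header += b"".join(struct.pack("!I", t) for t in targets)
    return header + payload

def unpack_whisper(data: bytes) -> tuple[bool, list[int], bytes]:
    """Parse the header back into (is_whisper, targets, payload)."""
    is_whisper, count = struct.unpack_from("!BB", data, 0)
    offset = 2
    targets = [struct.unpack_from("!I", data, offset + 4 * i)[0] for i in range(count)]
    offset += 4 * count
    return bool(is_whisper), targets, data[offset:]

raw = pack_whisper([3, 5], b"frame")
assert unpack_whisper(raw) == (True, [3, 5], b"frame")
```

A server receiving such a packet could inspect only the fixed-size header to decide whether to broadcast or to deliver selectively.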
- Example embodiments will be described in detail with reference to the accompanying drawings.
- Example embodiments may be embodied in various different forms, and should not be construed as being limited to only the illustrated embodiments. Rather, the illustrated embodiments are provided as examples so that this disclosure will be thorough and complete, and will fully convey the concepts of this disclosure to those skilled in the art. Accordingly, known processes, elements, and techniques, may not be described with respect to some example embodiments. Unless otherwise noted, like reference characters denote like elements throughout the attached drawings and written description, and thus descriptions will not be repeated.
- although the terms “first,” “second,” “third,” etc. may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer, or section from another region, layer, or section. Thus, a first element, component, region, layer, or section discussed below may be termed a second element, component, region, layer, or section without departing from the scope of this disclosure.
- spatially relative terms such as “beneath,” “below,” “lower,” “under,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below,” “beneath,” or “under,” other elements or features would then be oriented “above” the other elements or features. Thus, the example terms “below” and “under” may encompass both an orientation of above and below.
- the device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
- when an element is referred to as being “between” two elements, it may be the only element between the two elements, or one or more other intervening elements may be present.
- the expression, “at least one of a, b, and c,” should be understood as including only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or any variations of the aforementioned examples.
- the term “exemplary” is intended to refer to an example or illustration.
- Example embodiments may be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented in conjunction with units and/or devices discussed in more detail below.
- a function or operation specified in a specific block may be performed differently from the flow specified in a flowchart, flow diagram, etc.
- functions or operations illustrated as being performed serially in two consecutive blocks may actually be performed simultaneously, or in some cases be performed in reverse order.
- Units and/or devices may be implemented using hardware and/or a combination of hardware and software.
- hardware devices may be implemented using processing circuitry such as, but not limited to, a processor, Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, or any other device capable of responding to and executing instructions in a defined manner.
- Software may include a computer program, program code, instructions, or some combination thereof, for independently or collectively instructing or configuring a hardware device to operate as desired.
- the computer program and/or program code may include program or computer-readable instructions, software components, software modules, data files, data structures, and/or the like, capable of being implemented by one or more hardware devices, such as one or more of the hardware devices mentioned above.
- Examples of program code include both machine code produced by a compiler and higher level program code that is executed using an interpreter.
- a hardware device is a computer processing device (e.g., a processor, a Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a microprocessor, etc.).
- the computer processing device may be configured to carry out program code by performing arithmetical, logical, and input/output operations, according to the program code.
- the computer processing device may be programmed to perform the program code, thereby transforming the computer processing device into a special purpose computer processing device.
- the processor becomes programmed to perform the program code and operations corresponding thereto, thereby transforming the processor into a special purpose processor.
- Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, or computer record medium or device, capable of providing instructions or data to, or being interpreted by, a hardware device.
- the software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion.
- software and data may be stored by one or more computer readable record mediums, including the tangible or non-transitory computer-readable storage media discussed herein.
- computer processing devices may be described as including various functional units that perform various operations and/or functions to increase the clarity of the description.
- computer processing devices are not intended to be limited to these functional units.
- the various operations and/or functions of the functional units may be performed by other ones of the functional units.
- the computer processing devices may perform the operations and/or functions of the various functional units without sub-dividing the operations and/or functions of the computer processing devices into these various functional units.
- Units and/or devices may also include one or more storage devices.
- the one or more storage devices may be tangible or non-transitory computer-readable storage media, such as random access memory (RAM), read only memory (ROM), a permanent mass storage device (such as a disk drive or a solid state (e.g., NAND flash) device), and/or any other like data storage mechanism capable of storing and recording data.
- the one or more storage devices may be configured to store computer programs, program code, instructions, or some combination thereof, for one or more operating systems and/or for implementing the example embodiments described herein.
- the computer programs, program code, instructions, or some combination thereof may also be loaded from a separate computer readable record medium into the one or more storage devices and/or one or more computer processing devices using a drive mechanism.
- Such separate computer readable record medium may include a Universal Serial Bus (USB) flash drive, a memory stick, a Blu-ray/DVD/CD-ROM drive, a memory card, and/or other like computer readable storage media.
- the computer programs, program code, instructions, or some combination thereof may be loaded into the one or more storage devices and/or the one or more computer processing devices from a remote data storage device via a network interface, rather than via a local computer readable record medium.
- the computer programs, program code, instructions, or some combination thereof may be loaded into the one or more storage devices and/or the one or more processors from a remote computing system that is configured to transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, over a network.
- the remote computing system may transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, via a wired interface, an air interface, and/or any other like medium.
- the one or more hardware devices, the one or more storage devices, and/or the computer programs, program code, instructions, or some combination thereof, may be specially designed and constructed for the purposes of the example embodiments, or they may be known devices that are altered and/or modified for the purposes of example embodiments.
- a hardware device such as a computer processing device, may run an operating system (OS) and one or more software applications that run on the OS.
- the computer processing device also may access, store, manipulate, process, and create data in response to execution of the software.
- a hardware device may include multiple processing elements and multiple types of processing elements.
- a hardware device may include multiple processors or a processor and a controller.
- other processing configurations are possible, such as parallel processors.
- a group call system may be implemented by at least one computer system.
- a group call apparatus may be implemented by at least one computer apparatus included in the group call system.
- a group call method may be performed through at least one computer apparatus included in the group call system.
- a computer program may be installed and executed on the computer apparatus, and the computer apparatus may perform the group call method under control of the executed computer program.
- the computer program may be stored in a computer-readable storage medium to execute the group call method on a computer in conjunction with the computer apparatus.
- FIG. 1 illustrates an example of a network environment according to at least one example embodiment.
- the network environment may include a plurality of electronic devices 110 , 120 , 130 , and 140 , a plurality of servers 150 and 160 , and a network 170 .
- FIG. 1 is provided as an example only; the number of electronic devices and the number of servers are not limited thereto.
- the network environment of FIG. 1 is provided to describe one example among environments applicable to the example embodiments. An environment applicable to the example embodiments is not limited to the network environment of FIG. 1 .
- Each of the plurality of electronic devices 110 , 120 , 130 , and 140 may be a stationary terminal or a mobile terminal that is configured as a computer apparatus.
- each of the plurality of electronic devices 110 , 120 , 130 , and 140 may be a smartphone, a mobile phone, a navigation device, a computer, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a tablet PC, or the like.
- the electronic device 110 used herein may refer to one of various types of physical computer apparatuses capable of communicating with other electronic devices 120 , 130 , and 140 , and/or the servers 150 and 160 over the network 170 in a wireless or wired communication manner.
- the communication scheme is not limited and may include a near field wireless communication scheme between devices as well as a communication scheme using a communication network (e.g., a mobile communication network, wired Internet, wireless Internet, and a broadcasting network) includable in the network 170 .
- the network 170 may include at least one of network topologies that include a personal area network (PAN), a local area network (LAN), a campus area network (CAN), a metropolitan area network (MAN), a wide area network (WAN), a broadband network (BBN), and the Internet.
- the network 170 may include at least one of network topologies that include a bus network, a star network, a ring network, a mesh network, a star-bus network, a tree or hierarchical network, and the like. However, they are provided as examples only.
- Each of the servers 150 and 160 may be configured as a computer apparatus or a plurality of computer apparatuses that provides an instruction, a code, a file, content, a service, etc., through communication with the plurality of electronic devices 110 , 120 , 130 , and 140 over the network 170 .
- the server 150 may be a system that provides a service (e.g., a group call service or an audio conferencing service, a messaging service, a mail service, a social network service, a map service, a translation service, a financial service, a payment service, a search service, and a content providing service) to the plurality of electronic devices 110 , 120 , 130 , and 140 connected over the network 170 .
- FIG. 2 is a block diagram illustrating an example of a computer apparatus according to at least one example embodiment.
- a computer apparatus 200 shown in FIG. 2 may correspond to any one of the plurality of electronic devices 110 , 120 , 130 , and 140 or any one of the plurality of servers 150 and 160 .
- the computer apparatus 200 may include a memory 210 , a processor 220 , a communication interface 230 , and an input/output (I/O) interface 240 .
- the memory 210 may include a permanent mass storage device, such as random access memory (RAM), read only memory (ROM), and a disc drive, as a non-transitory computer-readable storage medium.
- the permanent mass storage device such as ROM and disc drive, may be included in the computer apparatus 200 as a separate permanent storage device different from the memory 210 .
- an operating system (OS) and at least one program code may be stored in the memory 210 .
- Such software components may be loaded from another non-transitory computer-readable storage medium to the memory 210 .
- the other non-transitory computer-readable storage medium may include a non-transitory computer-readable storage medium, for example, a floppy drive, a disk, a tape, a DVD/CD-ROM drive, a memory card, etc.
- software components may be loaded to the memory 210 through the communication interface 230 , instead of, or in addition to, the non-transitory computer-readable storage medium.
- the software components may be loaded to the memory 210 of the computer apparatus 200 based on a computer program installed by files received over the network 170 .
- the processor 220 may be configured to process computer-readable instructions of a computer program by performing basic arithmetic operations, logic operations, and I/O operations.
- the computer-readable instructions may be provided from the memory 210 or the communication interface 230 to the processor 220 .
- the processor 220 may be configured to execute received instructions in response to a program code stored in a storage device, such as the memory 210 .
- the communication interface 230 may include a transceiver (transmitter and receiver), and may provide a function for communication between the computer apparatus 200 and another apparatus, for example, the aforementioned electronic devices 120 , 130 , and 140 , and/or the servers 150 and 160 , over the network 170 .
- the processor 220 of the computer apparatus 200 may transfer data, a file, a request or an instruction created based on the program code stored in the storage device, such as the memory 210 , etc., to other apparatuses over the network 170 under control of the communication interface 230 .
- the communication interface 230 may receive a signal, an instruction, data, a file, etc., from another apparatus.
- a signal, an instruction, data, etc., received through the communication interface 230 may be transferred to the processor 220 or the memory 210 , and a file, etc., may be stored in a storage medium, for example, the permanent storage device, further includable in the computer apparatus 200 .
- the I/O interface 240 may be a device used to interface with an I/O apparatus 250 .
- an input device may include a device, such as a microphone, a keyboard, and a mouse
- an output device may include a device, such as a display device and a speaker.
- the I/O interface 240 may be a device for interface with an apparatus in which an input function and an output function are integrated into a single function, such as a touchscreen.
- the I/O apparatus 250 may be configured as a single apparatus with the computer apparatus 200 .
- a touchscreen, a microphone, a speaker, etc. may be included in the computer apparatus 200 , such as a smartphone.
- the computer apparatus 200 may include more or fewer components than the components shown in FIG. 2 .
- the computer apparatus 200 may include at least a portion of the I/O apparatus 250 , or may further include other components, for example, a transceiver, a database (DB), and the like.
- FIG. 3 illustrates an example of a group call system according to at least one example embodiment.
- the group call system may include a server 310 configured to provide a group call service and client devices 320 for a plurality of participants.
- communication between the server 310 and the client devices 320 may be performed through an extended transfer protocol in which destination information is included in a packet according to example embodiments. For example, a case in which a client device 1 321 , a client device 2 322 , and a client device 3 323 among the client devices 320 participate in a single group call instance may be considered.
- each of the client device 1 321 , the client device 2 322 , and the client device 3 323 that participate in the corresponding group call instance may designate a specific participant and may transmit a packet to a client device corresponding to the specific participant.
- the client device 1 321 may designate the participant corresponding to the client device 3 323 and may transfer media, such as audio or video, to only the client device 3 323 .
- the client device 3 323 may be a whisper target and the client device 1 321 may whisper to the client device 3 323 .
- a participant of a group call service may transmit data to all of participants of a corresponding group call instance and may also transmit data to a specific participant or a small group of specific participants (i.e., a whisper target).
- the client device 1 321 may transmit data to the client device 2 322 but not transmit data to the client device 3 323 .
- the client device 1 321 may transmit data to the client device 2 322 and the client device 3 323 , but not transmit data to a client device n.
- FIGS. 4 to 7 illustrate examples of a group call process according to at least one example embodiment.
- FIG. 4 illustrates an example in which User 1 410 , User 2 420 , and User 3 430 participate in a group call instance.
- each of the User 1 410 , the User 2 420 , and the User 3 430 may correspond to one of the client devices 320 of FIG. 3 .
- the server 310 may manage a table that includes a source identifier (Srcid) and connecting information for each of the User 1 410 , the User 2 420 , and the User 3 430 as shown in a table 440 .
- the source identifier refers to a unique value for each group call and, generally, may be used to identify a source of data received at the server 310 from the users (e.g., an audio source of a user A or a video source of a user B).
- the source identifier, in association with connecting information, may also be used to identify the destination user to which data transmitted from one of the client devices 320 to the server 310 is to be transferred, that is, to serve as destination information for the data.
- the source identifier (Srcid) and the connecting information may be managed as a single table with other information about the User 1 410 , the User 2 420 , and the User 3 430 .
- the server 310 may manage a table, such as the table 440 , by allocating at least one source identifier based on call related information received from the client device.
- the server 310 may transmit, to the client device initiating or participating in the group call, the source identifier allocated to the corresponding client device. Also, if other users are participating in the group call, the server 310 may transmit source identifiers of at least a portion of the other users participating in the group call to the client device of the user participating in the group call.
- the server 310 may transmit source identifiers of at least a portion of other users participating in the group call to the client device of the user participating in the group call in response to a request from the client device or based on determination of the server 310 , during the group call.
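The table management described above can be sketched as follows. This is an illustrative sketch only; the class and method names (e.g., `GroupCallTable`, `allocate`, `peers`) are hypothetical and not part of the embodiment, which only specifies that the server maintains source identifiers in association with connecting information, as in the table 440 of FIG. 4.

```python
# Hypothetical sketch: the server allocates a unique source identifier per
# media source of a user and stores it with the client's connecting information.

class GroupCallTable:
    def __init__(self):
        self._next_id = 1
        self._entries = {}  # srcid -> (user, media, connecting_info)

    def allocate(self, user, media, connecting_info):
        """Allocate a source identifier for one media source of a user."""
        srcid = f"Src{self._next_id}"
        self._next_id += 1
        self._entries[srcid] = (user, media, connecting_info)
        return srcid

    def lookup(self, srcid):
        return self._entries[srcid]

    def peers(self, srcid, media):
        """Source identifiers of other users that have the given media source."""
        owner, _, _ = self._entries[srcid]
        return [s for s, (u, m, _) in self._entries.items()
                if u != owner and m == media]

# Reproducing the table 440 of FIG. 4:
table = GroupCallTable()
src1 = table.allocate("User1", "audio", "ip1/port1")
src2 = table.allocate("User1", "video", "ip1/port2")
src3 = table.allocate("User2", "audio", "ip2/port1")
src4 = table.allocate("User2", "video", "ip2/port2")
src5 = table.allocate("User3", "audio", "ip3/port1")  # User3 has no video device
```

When the User 3 joins, the server would transmit `src5` to the client device 3 and, for example, `table.peers("Src5", "audio")` to inform it of the audio sources of the other participants.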
- the User 1 410 may use client device 1 321
- the User 2 420 may use client device 2 322
- the User 3 430 may use client device 3 323 .
- the client device 1 321 and the client device 2 322 may each include an audio device and a video device.
- the client device 3 323 may include an audio device but not a video device.
- the User 3 430 may join a group call which uses audio and video between the User 1 410 , using client device 1 321 , and the User 2 420 , using client device 2 322 .
- the server 310 may allocate a source identifier Src5 to the User 3 430 and may store the source identifier Src5 in the table 440 in association with connecting information ip3/port1 corresponding to the source identifier Src5.
- the server 310 may transmit, to the client device 3 323 of the User 3 430 , source identifier information of the User 1 410 and the User 2 420 already participating in the group call with the source identifier Src5 allocated to the User 3 430 .
- the server 310 may transmit, to the client device 3 323 of the User 3 430 , source identifier information Src1 for audio of the User 1 410 and source identifier information Src3 for audio of the User 2 420 .
- the server 310 may not transmit, to the client device 3 323 of the User 3 430 , source identifier information Src2 for video of the User 1 410 and source identifier information Src4 for video of the User 2 420 .
- the server 310 may notify the User 1 410 and the User 2 420 already participating in the group call that the source identifier of the User 3 430 is Src5.
- when the User 1 410 transmits packets P1 and P2 to the server 310 , each of the packets P1 and P2 may include a source identifier.
- the packet P1 may include audio data and the packet P2 may include video data
- the packet P1 may include a source identifier Src1 of the User 1 410 for audio
- the packet P2 may include a source identifier Src2 of the User 1 410 for video.
- the server 310 may identify that the packet P1 includes audio data based on the source identifier Src1 included in the packet P1, and may transfer the packet P1 to user devices of the User 2 420 and the User 3 430 having audio sources through ip2/port1 and ip3/port1 by referring to the table 440 . Also, the server 310 may identify that the packet P2 includes video data based on the source identifier Src2 included in the packet P2, and may transfer the packet P2 to the user device of the User 2 420 having the video source through ip2/port2 by referring to the table 440 .
- the packet P2 is not transferred to the User 3 430 .
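The broadcast routing step described above can be sketched as follows. The dictionary layout and the function name `route_broadcast` are illustrative assumptions; the embodiment only specifies that the server identifies the media type from the source identifier and forwards the packet by referring to the table 440.

```python
# Hypothetical sketch of the routing step: the server looks up the media type
# of the incoming packet's source identifier and forwards the packet to every
# other user that has a source of the same media type.

TABLE = {
    # srcid: (user, media, connecting_info) -- table 440 of FIG. 4
    "Src1": ("User1", "audio", "ip1/port1"),
    "Src2": ("User1", "video", "ip1/port2"),
    "Src3": ("User2", "audio", "ip2/port1"),
    "Src4": ("User2", "video", "ip2/port2"),
    "Src5": ("User3", "audio", "ip3/port1"),
}

def route_broadcast(packet):
    """Return the connecting information the packet should be forwarded to."""
    sender, media, _ = TABLE[packet["srcid"]]
    return [conn for user, m, conn in TABLE.values()
            if user != sender and m == media]

# Packet P1 (audio of User1) reaches User2 and User3; packet P2 (video of
# User1) reaches only User2, since User3 has no video device.
p1_dests = route_broadcast({"srcid": "Src1", "payload": b"audio"})
p2_dests = route_broadcast({"srcid": "Src2", "payload": b"video"})
```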
- FIG. 5 illustrates an example of a process in which the User 1 410 transfers audio and video of the User 1 410 only to the User 2 420 during a group call.
- the server 310 may identify destination information in a packet transmitted from the User 1 410 and transfer the packet only to the User 2 420 based on the destination information in the packet.
- the User 1 410 may designate User 2 420 as a whisper target and whisper to the User 2 420 by transferring audio and video only to the User 2 420 .
- the example illustrated in FIG. 5 differs from an example in which a destination is not specified.
- a request for transferring audio and video only to the User 2 420 may be received from the User 1 410 .
- a source identifier Src3 may be designated as destination information in a packet P3 that includes the audio of the User 1 410
- a source identifier Src4 may be designated as destination information in a packet P4 that includes the video of the User 1 410 .
- the server 310 may designate the destination information in response to the request.
- the client device 321 may designate the destination information in response to the request based on source identifiers of other users received from the server 310 , for example, by referring to a table 510 of FIG. 5 .
- the server 310 may transfer the packet P3 to ip2/port1 of the User 2 420 corresponding to the source identifier Src3 designated in the packet P3 and may transfer the packet P4 to ip2/port2 of the User 2 420 corresponding to the source identifier Src4 designated in the packet P4, by referring to the table 440 of FIG. 4 .
- a source identifier may be designated in each packet received from the User 1 410 until the request for transferring audio and video of the User 1 410 only to the User 2 420 during the group call is released from the User 1 410 .
- the source identifier may be designated in each packet while the User 2 420 is designated as the whisper target.
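The whisper path of FIG. 5 can be sketched by extending the broadcast routing with a destination check. The packet-dictionary shape and the `dssrcs` key are illustrative assumptions standing in for the destination field of the extended transfer protocol; a list of destinations also covers the multi-target case of FIG. 7.

```python
# Hypothetical sketch: when a packet carries destination identifiers, the
# server forwards it only to the connecting information of those identifiers
# instead of broadcasting it to all peers with the same media type.

TABLE = {
    # srcid: (user, media, connecting_info) -- table 440 of FIG. 4
    "Src1": ("User1", "audio", "ip1/port1"),
    "Src2": ("User1", "video", "ip1/port2"),
    "Src3": ("User2", "audio", "ip2/port1"),
    "Src4": ("User2", "video", "ip2/port2"),
    "Src5": ("User3", "audio", "ip3/port1"),
}

def route(packet):
    sender, media, _ = TABLE[packet["srcid"]]
    dests = packet.get("dssrcs")
    if dests:  # whisper: deliver only to the designated destinations
        return [TABLE[d][2] for d in dests]
    # otherwise broadcast to peers with the same media type
    return [conn for user, m, conn in TABLE.values()
            if user != sender and m == media]

# FIG. 5: packet P3 (audio of User1, destination Src3) goes only to
# ip2/port1 of User2, not to User3.
p3_dests = route({"srcid": "Src1", "dssrcs": ["Src3"]})
```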
- FIG. 6 illustrates an example of a process in which the User 1 410 transfers only video of the User 1 410 to the User 2 420 and transfers only audio of the User 1 410 to the User 3 430 during a group call.
- the server 310 may identify destination information in a packet transmitted from the User 1 410 and transfer the packet, based on the destination information in the packet, only to the User 2 420 if the packet is a video packet and transfer the packet only to the User 3 430 if the packet is an audio packet.
- the request for transferring video of the User 1 410 only to the User 2 420 and audio of the User 1 410 only to the User 3 430 may be received from the User 1 410 .
- a source identifier Src4 may be designated as destination information in a packet P5 for the video of the User 1 410 and a source identifier Src5 may be designated as destination information in a packet P6 for the audio of the User 1 410 .
- the server 310 may designate the destination information in response to the request.
- the client device 321 may designate the destination information in response to the request based on source identifiers of other users received from the server 310 , for example, by referring to a table 510 of FIG. 6 .
- the server 310 may transfer the packet P5 to ip2/port2 of the User 2 420 corresponding to the source identifier Src4 designated in the packet P5 and may transfer the packet P6 to ip3/port1 of the User 3 430 corresponding to the source identifier Src5 designated in the packet P6, by referring to the table 440 of FIG. 4 .
- a related transfer protocol may be extended to designate destination information.
- such related transfer protocols transfer a packet to all of the participants in a form of broadcast and thus, do not need to designate a destination. Accordingly, there is no field for including destination information.
- the example embodiment may extend the related transfer protocol by adding a field for designating destination information using a reserved area in a header of a packet according to the related transfer protocol.
- FIG. 7 illustrates a case in which a User 4 710 and a User 5 720 , as well as the User 1 410 , the User 2 420 , and the User 3 430 , participate in a group call instance.
- each of the User 4 710 and the User 5 720 may use one of the client devices 320 of FIG. 3 .
- FIG. 7 illustrates an example in which the User 1 410 designates the User 2 420 and the User 5 720 and transfers audio data.
- the server 310 may identify destination information in a packet transmitted from the User 1 410 and may transfer the packet only to the User 2 420 and the User 5 720 based on the destination information in the packet.
- the example illustrated in FIG. 7 differs from an example in which a destination is not specified.
- a request for transferring audio only to the User 2 420 and the User 5 720 may be received from the User 1 410 .
- source identifiers Src3 and Src8 may be designated as destination information in a packet P7 that includes audio of the User 1 410 .
- the server 310 may designate the destination information in response to the request.
- the client device 321 may designate the destination information in response to the request based on source identifiers of other users received from the server 310 , for example, by referring to a table 730 of FIG. 7 .
- the server 310 may transfer the packet P7 to each of the User 2 420 and the User 5 720 through connecting information of the User 2 420 corresponding to the source identifier Src3 and connecting information of the User 5 720 corresponding to the source identifier Src8, which are designated in the packet P7.
- FIG. 8 illustrates an example of an extended container format based on an extended transfer protocol according to at least one example embodiment.
- FIG. 8 illustrates a container format 800 of an extended transfer protocol of a real-time transport protocol (RTP).
- a first box 810 indicated with dotted lines may represent a field for recording destination synchronization sources (dSSRCs, or D-SSRCs) in an RTP packet header, corresponding to the synchronization sources (SSRCs) of the standard RTP header.
- recording a dSSRC may represent recording an identifier of the dSSRC.
- a field D-CNT for counting the number of dSSRCs, corresponding to the field CC for counting the number of SSRCs, may be newly defined.
- a second box 820 indicated with dotted lines may represent a field for counting a number of dSSRCs.
- a client may designate another client to which a packet is to be transferred by adding a dSSRC to the packet, and the server 310 may verify a destination to which the packet is to be transferred through the dSSRC included in the packet received from the client and may transmit the packet to the verified destination.
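The extended container format of FIG. 8 can be sketched as follows. The exact byte position of D-CNT and the dSSRC list is an illustrative assumption: the embodiment only states that a reserved area of the RTP header is used, and this sketch simply appends a one-byte count followed by 32-bit dSSRC values after the fixed 12-byte RTP header (no CSRC list).

```python
# Hypothetical sketch of packing/parsing an RTP header extended with a
# D-CNT field and a dSSRC list. Bit positions are illustrative only.

import struct

def pack_extended_header(ssrc, dssrcs, seq=0, timestamp=0, pt=96):
    d_cnt = len(dssrcs)
    assert d_cnt <= 15, "D-CNT is assumed to mirror the 4-bit CC field"
    byte0 = (2 << 6) | 0        # V=2, P=0, X=0, CC=0 (no CSRCs in this sketch)
    byte1 = (0 << 7) | pt       # M=0, payload type
    header = struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)
    # D-CNT placed in an assumed reserved byte, followed by the dSSRC list
    header += struct.pack("!B", d_cnt)
    for d in dssrcs:
        header += struct.pack("!I", d)
    return header

def parse_dssrcs(header):
    """Recover the destinations the server should forward the packet to."""
    d_cnt = header[12]
    return [struct.unpack_from("!I", header, 13 + 4 * i)[0]
            for i in range(d_cnt)]

# A packet from source 1 whispering to destinations 3 and 8 (cf. FIG. 7):
hdr = pack_extended_header(ssrc=1, dssrcs=[3, 8])
```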
- FIGS. 9 and 10 illustrate examples of a screen for selecting a whisper target according to at least one example embodiment.
- a first screen 900 may be a terminal screen of a user that participates in a group call.
- the terminal screen may be displayed on a terminal, which may be one of the client devices 320 of FIG. 3 .
- the terminal screen of the user may be configured through a touchscreen.
- videos included in packets received from a plurality of participants of a group call may be displayed on the first screen 900 through objects for the respective corresponding participants.
- the packets may be routed through a server for the group call and transferred to each of the participants.
- audio, for example, voices of the respective participants, included in the packets may be output through an output device, for example, a speaker included in a terminal of a user.
- video and/or audio input from the terminal of the user through an input device, for example, a camera and/or a microphone may be routed and may be transferred to each of the participants.
- a second screen 1000 represents an example in which a user selects a participant that the user desires to whisper with during a group call.
- the user may indicate a participant that the user desires to whisper with by touching an area of an object corresponding to the participant among objects (objects that represent videos of the participants) displayed on a terminal screen of the user.
- a terminal of the user may identify an object on which the touch is being maintained and may configure a whisper packet such that video and/or audio that are input while the touch is maintained may be transferred only to a participant corresponding to the identified object.
- the second screen 1000 represents an example in which the user inputs a long tap gesture on an area 1010 of an object on which a video of User 6 is displayed.
- the terminal of the user may configure a whisper packet such that video and/or audio input during the long tap gesture may be transferred only to User 6 and may transfer the whisper packet to a server for the group call.
- the server may transfer the whisper packet only to the User 6 while the long tap gesture is maintained.
- if the touch is released, the input video and/or audio may be transferred to all of the participants of the group call as displayed on the first screen 900 . That is, the user may quickly and conveniently transmit a whisper to a corresponding participant by simply touching, on a screen, an object corresponding to the participant that the user desires to whisper with during the progress of the group call and may quickly return to the group call by simply releasing the touch.
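The touch-and-hold behavior described above can be sketched as a small client-side controller. The class and the `dssrcs` packet key are hypothetical names for illustration; the embodiment only specifies that packets produced while the touch is maintained carry the touched participant as a whisper destination.

```python
# Hypothetical sketch: while a long tap is held on a participant's object,
# outgoing packets carry that participant as a whisper destination;
# releasing the touch returns to normal broadcast packets.

class WhisperController:
    def __init__(self):
        self.target = None  # source identifier of the whisper target, or None

    def on_touch_down(self, participant_srcid):
        self.target = participant_srcid

    def on_touch_up(self):
        self.target = None

    def make_packet(self, srcid, payload):
        packet = {"srcid": srcid, "payload": payload}
        if self.target is not None:
            packet["dssrcs"] = [self.target]  # whisper packet
        return packet

ctrl = WhisperController()
normal = ctrl.make_packet("Src1", b"hello everyone")
ctrl.on_touch_down("Src6")          # long tap on User 6's object
whisper = ctrl.make_packet("Src1", b"just for User 6")
ctrl.on_touch_up()                  # release: back to the group call
after = ctrl.make_packet("Src1", b"back to all")
```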
- FIG. 11 illustrates an example of a screen of a terminal that is receiving a whisper packet according to at least one example embodiment.
- a third screen 1100 represents an example in which another participant transmits a whisper.
- a video of the User 1 is graphically distinguished (i.e., highlighted and/or enlarged) and thereby displayed.
- the terminal of the user may highlight the voice of the User 1 by modulating and outputting the whisper of the User 1, that is, the audio included in a whisper packet.
- an audio corresponding to the whisper may be modulated through howling processing and thereby output.
- the user may easily identify a participant that transmits the whisper (i.e., a source of the whisper) to the user.
- FIGS. 12 and 13 illustrate examples of setting a whisper group according to at least one example embodiment.
- an area 1210 for setting a whisper group is displayed on a fourth screen 1200 .
- a user may conveniently add a corresponding participant to a whisper group by moving, to the area 1210 through a drag-and-drop (D&D) gesture, an object corresponding to the participant that the user desires to include in the whisper group among objects on which videos of the participants are displayed on the fourth screen 1200 .
- a first indication 1310 on a fifth screen 1300 represents that a single participant is included in a whisper group capable of including four participants.
- the single participant may be added to the whisper group in response to the user moving an object corresponding to the single participant to the area 1210 on the fourth screen 1200 .
- the user may set the whisper group by moving, to the area 1210 , objects corresponding to participants that the user desires to include in the whisper group.
- a number of participants includable in the whisper group may be readily set depending on example embodiments, without being particularly limited.
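The whisper-group setting described above can be sketched as a simple data structure. The class name and methods are illustrative assumptions; the four-participant capacity mirrors FIG. 13 but, as stated, is configurable and not particularly limited, and the owner is included automatically without a separate input.

```python
# Hypothetical sketch of a whisper group with a configurable capacity.

class WhisperGroup:
    def __init__(self, owner, capacity=4):
        self.owner = owner          # the user setting the group is included
        self.capacity = capacity
        self.members = []

    def add(self, participant):
        """Add a participant, e.g. on a drag-and-drop into the area 1210."""
        if participant in self.members or participant == self.owner:
            return False            # no duplicates; owner is implicit
        if len(self.members) >= self.capacity:
            return False            # group is full
        self.members.append(participant)
        return True

    def destinations(self):
        """Participants a whisper packet should be delivered to."""
        return list(self.members)

group = WhisperGroup(owner="me")
group.add("User1")  # cf. the first indication 1310 of FIG. 13
```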
- FIGS. 14 and 15 illustrate other examples of setting a whisper group according to at least one example embodiment.
- a button 1410 for entering a mode for selecting participants to be included in a whisper group is displayed on a sixth screen 1400 .
- in response to the user selecting the button 1410, the corresponding mode may be activated.
- if the user reselects the button 1410 in a state in which the corresponding mode is active, the corresponding mode may be inactivated.
- a seventh screen 1500 represents an example in which, in response to the user selecting an object on which a video of a participant is displayed, for example, touching an area on which an object is displayed, in a state in which a mode for selecting participants to be included in the whisper group is active, the participant corresponding to the selected object participates in the whisper group.
- indications 1510 and 1520 respectively represent that, in response to the user selecting the User 6 and the User 11, the User 6 and the User 11 are selected for the whisper group.
- a second indication 1530 represents a number of participants selected by the user in the aforementioned mode.
- the second indication 1530 represents, by providing an image in two of its four circles, that two participants are included in the whisper group in which four participants are includable.
- the image representing each of the two participants in the whisper group includes an ear.
- the second indication 1530 may be determined based on a profile of a participant included in the whisper group. As described above, a number of participants included in the whisper group may be readily set depending on example embodiments, without being particularly limited.
- a user that sets a whisper group according to one of FIGS. 12 to 15 may be automatically included in the whisper group without a separate input.
- FIG. 16 illustrates an example of activating a whisper group according to at least one example embodiment.
- FIG. 16 illustrates an example in which the user activates a whisper group as displayed on an eighth screen 1600 through a preset touch gesture (e.g., a swipe in a first direction in the example embodiment) during the progress of the group call as displayed on the first screen 900 of FIG. 9 .
- the terminal of the user may activate a preset whisper group.
- the eighth screen 1600 represents an example of a screen for activating a whisper group that includes four participants, for example, User 1, User 6, User 10, and User 12.
- a whisper packet may be configured such that video and/or audio input through an input device, for example, a camera and/or a microphone, included in the terminal of the user may be transferred only to the participants of the whisper group while the corresponding whisper group is active.
- the terminal of the user may inactivate the whisper group and may again process a call with all of the participants of the group call as displayed on the first screen 900 .
- a plurality of whisper groups may be set by the user.
- a first whisper group including four participants, for example, User 1, User 6, User 10, and User 12, and a second whisper group including three participants, for example, User 3, User 6, and User 9, may be set.
- in response to recognizing a swipe from the right to the left on the first screen 900 , the terminal of the user may activate the first whisper group.
- in response to recognizing a swipe from the right to the left one more time, the terminal of the user may activate the second whisper group.
- in response to recognizing a swipe in the opposite direction, the terminal of the user may inactivate the second whisper group and activate the first whisper group.
- in response to one more swipe in the opposite direction, the terminal of the user may inactivate the first whisper group and may again process a call with all of the participants of the group call as displayed on the first screen 900 .
- the swipe used herein is merely one example of quickly activating a whisper group. Also, it may be easily understood from the foregoing description that the whisper group may become active and/or inactive through the aforementioned touch gestures.
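The swipe-based cycling through whisper groups described above can be sketched as a small state machine. The class name and the left/right direction mapping are illustrative assumptions based on the right-to-left swipe of this example embodiment.

```python
# Hypothetical sketch: swipes in a first direction step forward through the
# preset whisper groups, swipes in the opposite direction step back, and
# stepping past the first group returns to the full group call.

class GroupCycler:
    def __init__(self, whisper_groups):
        self.groups = whisper_groups  # e.g. [first_group, second_group]
        self.index = -1               # -1: no whisper group active

    def swipe_left(self):             # the "first direction" of this sketch
        if self.index < len(self.groups) - 1:
            self.index += 1
        return self.active()

    def swipe_right(self):            # the opposite direction
        if self.index >= 0:
            self.index -= 1
        return self.active()

    def active(self):
        """The currently active whisper group, or None for the full call."""
        return None if self.index < 0 else self.groups[self.index]

cycler = GroupCycler([["User1", "User6", "User10", "User12"],
                      ["User3", "User6", "User9"]])
first = cycler.swipe_left()    # activates the first whisper group
second = cycler.swipe_left()   # activates the second whisper group
back = cycler.swipe_right()    # back to the first whisper group
none = cycler.swipe_right()    # back to the full group call
```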
- FIG. 17 illustrates an example of a screen of a terminal that is included in a whisper group and receiving a whisper according to at least one example embodiment. If another participant sets a whisper group by including a user, the whisper group including the user may be automatically generated. Referring to FIG. 17 , if the whisper group is active and a whisper packet is received, a terminal of the user may highlight and display objects corresponding to the participants, for example, User 1, User 6, User 10, and User 12, of the whisper group as displayed on a ninth screen 1700 .
- audio transferred through a whisper packet may be modulated and thereby output.
- the terminal of the user may modulate the audio included in the whisper packet through howling processing and then output the modulated audio through a speaker.
- the corresponding whisper group may be automatically set with respect to each of the other participants, for example, User 6, User 10, and User 12, of the whisper group, such that all of the participants of the whisper group may transfer a whisper to the corresponding whisper group.
- FIG. 18 is a flowchart illustrating an example of a group call method according to at least one example embodiment.
- the group call method according to the example embodiment may be performed by the computer apparatus 200 that implements a client participating in a group call session.
- the processor 220 of the computer apparatus 200 may be configured to execute a control instruction according to a code of at least one program or a code of an OS included in the memory 210 .
- the processor 220 may control the computer apparatus 200 to perform operations 1810 to 1840 included in the group call method of FIG. 18 in response to the control instruction provided from the code stored in the computer apparatus 200 .
- the computer apparatus 200 may participate in a group call session.
- a group call with other participants of the corresponding group call session may proceed.
- packets including video and/or audio may be broadcast to the participants of the group call session.
- the computer apparatus 200 may designate at least one participant among a plurality of participants that participates in the group call session as a whisper target in response to a touch gesture on a touchscreen included in the computer apparatus 200 in a state in which the group call session is maintained.
- the touchscreen may be included in the I/O apparatus 250 of FIG. 2 .
- the computer apparatus 200 may display an object corresponding to each of the plurality of participants on the touchscreen and may identify an object indicated by a touch that occurs on the touchscreen and is maintained during a preset period of time. In this case, the computer apparatus 200 may designate a participant corresponding to the identified object as the whisper target while the touch is maintained. If the corresponding touch is released, the corresponding participant may be released from the whisper target. That is, the user may quickly designate the whisper target and may transmit a whisper to the designated whisper target by simply touching an object of a desired participant among objects respectively corresponding to the participants displayed on the touchscreen. The user may also quickly release the whisper target by releasing the touch of the object.
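- The long-press designation described above can be sketched as a small state machine; this is a minimal illustration only, and the class name, method names, and the 0.5-second threshold are assumptions, not details taken from the patent.

```python
import time

# "preset period of time" after which a held touch designates a whisper target
# (the 0.5 s value is an assumption for illustration)
LONG_PRESS_SECONDS = 0.5

class WhisperTargetSelector:
    """Tracks per-participant touches; a participant is a whisper target
    only while the touch on that participant's object is maintained."""

    def __init__(self):
        self.whisper_targets = set()
        self._touch_started = {}  # participant id -> touch-down timestamp

    def on_touch_down(self, participant_id, now=None):
        self._touch_started[participant_id] = (
            now if now is not None else time.monotonic())

    def on_touch_held(self, participant_id, now=None):
        """Called while the touch is maintained; designates the participant
        once the touch has been held for the preset period."""
        start = self._touch_started.get(participant_id)
        if start is None:
            return False
        now = now if now is not None else time.monotonic()
        if now - start >= LONG_PRESS_SECONDS:
            self.whisper_targets.add(participant_id)
            return True
        return False

    def on_touch_up(self, participant_id):
        """Releasing the touch releases the participant from the whisper target."""
        self._touch_started.pop(participant_id, None)
        self.whisper_targets.discard(participant_id)
```

Releasing the touch immediately removes the participant from the target set, matching the behavior described above.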
- the computer apparatus 200 may set a whisper group including at least one participant among the plurality of participants in a state in which the group call session is maintained, and may activate the whisper group in response to a touch gesture preset for the touchscreen.
- the computer apparatus 200 may activate the whisper group in response to a swipe gesture in a first direction on the touchscreen.
- the computer apparatus 200 may inactivate the whisper group.
- the computer apparatus 200 may designate at least one participant included in the whisper group as the whisper target while the whisper group is active. That is, the user may activate and/or inactivate the whisper group through a simple touch gesture and may simply transmit a whisper only to the participants of the whisper group among all of the participants.
- the computer apparatus 200 may display objects respectively corresponding to the plurality of participants and a whisper group setting area on the touchscreen.
- the user may move an object of a participant the user desires to include in the whisper group to the whisper group setting area through a drag-and-drop gesture.
- the computer apparatus 200 may easily and quickly set the whisper group by identifying the object that is moved to the whisper group setting area through the drag-and-drop gesture and by including a participant corresponding to the identified object in the whisper group.
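- The drag-and-drop grouping described above can be sketched as follows; the class name, the rectangle representation of the whisper group setting area, and the coordinate convention are all illustrative assumptions.

```python
class WhisperGroupBuilder:
    """Builds a whisper group from drag-and-drop events: a participant is
    included when that participant's object is dropped inside the whisper
    group setting area."""

    def __init__(self, setting_area):
        self.setting_area = setting_area  # (x, y, width, height) on the touchscreen
        self.members = set()

    def _in_setting_area(self, x, y):
        ax, ay, w, h = self.setting_area
        return ax <= x <= ax + w and ay <= y <= ay + h

    def on_drop(self, participant_id, x, y):
        """Include the dropped participant if the drop lands in the setting area."""
        if self._in_setting_area(x, y):
            self.members.add(participant_id)
            return True
        return False
```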
- the computer apparatus 200 may display objects respectively corresponding to the plurality of participants and a whisper group generation area on the touchscreen.
- the user may select a participant by selecting an object of a participant the user desires to include in the whisper group, for example, by touching an area on which the corresponding object is displayed, in a state in which the whisper group generation button is active.
- the computer apparatus 200 may easily and quickly set the whisper group by identifying an object indicated by a touch that occurs on the touchscreen in a state in which the whisper group generation button is active and by including a participant corresponding to the identified object in the whisper group.
- the computer apparatus 200 may configure a whisper packet to transfer, to the whisper target, at least one of video and audio that are input through an input device included in the computer apparatus 200 while the whisper target is designated.
- the input device may include, for example, a camera and/or a microphone and may also be included in the I/O apparatus 250 of FIG. 2 .
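- Composing such a whisper packet might look like the sketch below. The field names ("type", "sender", "targets") and the header/payload framing are assumptions for illustration; the patent's FIG. 8 describes an extended container format whose exact layout is not reproduced here.

```python
import json

def build_whisper_packet(sender_id, target_ids, media_payload):
    """Prefix the media payload with a header naming the whisper targets,
    so the server can forward the packet only to those participants."""
    header = {
        "type": "whisper",              # distinguishes whisper packets from broadcast packets
        "sender": sender_id,
        "targets": sorted(target_ids),  # participants designated as whisper targets
    }
    # NUL byte separates the JSON header from the binary media payload
    return json.dumps(header).encode() + b"\x00" + media_payload

def parse_whisper_packet(packet):
    """Split a whisper packet back into its header and media payload."""
    header_bytes, payload = packet.split(b"\x00", 1)
    return json.loads(header_bytes), payload
```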
- the computer apparatus 200 may transmit the whisper packet to a server through the group call session.
- the server may quickly process the whisper by transferring the whisper packet only to the corresponding participant.
- a method of transferring, by the server, the whisper packet to the corresponding participant is described above in detail with reference to FIGS. 3 to 8 .
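- The server-side routing can be sketched as below: an ordinary group call packet is broadcasted to every other participant, while a whisper packet is forwarded only to its designated targets. This is a minimal illustration under assumed packet-header fields, not the implementation described with reference to FIGS. 3 to 8.

```python
def route_packet(session_participants, sender_id, packet_header):
    """Return the list of participants the server should forward a packet to."""
    if packet_header.get("type") == "whisper":
        # Whisper packets go only to designated targets present in the session.
        return [p for p in session_participants
                if p in packet_header.get("targets", []) and p != sender_id]
    # Ordinary group call packets are broadcasted to all other participants.
    return [p for p in session_participants if p != sender_id]
```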
- the computer apparatus 200 may receive a whisper packet from the server. For example, if another participant of the group call session designates the user as a whisper target, a whisper packet from a terminal of the other participant may be transmitted to the computer apparatus 200 through the server.
- the computer apparatus 200 may modulate the audio included in the received whisper packet and may output the modulated audio through an output device included in the computer apparatus 200, for example, a speaker.
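- As a toy sketch of the modulation step, a naive resampler is used below purely as a stand-in; a real client would apply a proper DSP effect, and nothing here reflects the patent's actual modulation ("howling processing").

```python
def modulate_audio(samples, rate=1.25):
    """Return samples resampled by `rate`; a rate above 1 shortens the
    signal, which raises its pitch when played back at the original
    sample rate (a crude pitch-shift used here only for illustration)."""
    n_out = int(len(samples) / rate)
    return [samples[int(i * rate)] for i in range(n_out)]
```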
- the computer apparatus 200, in displaying objects respectively corresponding to the plurality of participants on the touchscreen, may highlight and display an object corresponding to a participant having transmitted the received whisper packet, or may highlight and display objects corresponding to participants of a whisper group set by the participant having transmitted the received whisper packet. For example, if the other participant designates only the user as the whisper target and transmits a whisper packet, an object corresponding to the other participant that transmits the whisper packet may be highlighted and displayed, and the audio of the corresponding participant included in the whisper packet may be modulated and output.
- the corresponding whisper packet may be transferred to participants included in the corresponding whisper group.
- the computer apparatus 200 may modulate and output audio of all whisper packets transmitted from the participants included in the whisper group.
- the computer apparatus 200 may highlight objects corresponding to the participants included in the whisper group and may display the same on the touchscreen.
- a processing device may be implemented using one or more general-purpose or special purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor or any other device capable of responding to and executing instructions in a defined manner.
- the processing device may run an operating system (OS) and one or more software applications that run on the OS.
- the processing device also may access, store, manipulate, process, and create data in response to execution of the software.
- a processing device may include multiple processing elements and multiple types of processing elements.
- a processing device may include multiple processors or a processor and a controller.
- different processing configurations are possible, such as parallel processors.
- the software may include a computer program, a piece of code, an instruction, or some combination thereof, for independently or collectively instructing or configuring the processing device to operate as desired.
- Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical equipment, computer record medium or device, or in a propagated signal wave capable of providing instructions or data to or being interpreted by the processing device.
- the software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion.
- the software and data may be stored by one or more computer readable record mediums.
- the methods according to the example embodiments may be recorded in non-transitory computer-readable storage media including program instructions to implement various operations embodied by a computer.
- the media may also include, alone or in combination with the program instructions, data files, data structures, and the like.
- the media and program instructions may be those specially designed and constructed for the purposes of the example embodiments, or they may be of the kind well-known and available to those having skill in the computer software arts.
- examples of non-transitory computer-readable storage media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVDs; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like.
- Examples of other media may include recording media and storage media managed by an app store that distributes applications, or by a site, a server, and the like that supplies and distributes various other types of software.
- Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.
- the described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described embodiments, or vice versa.
Description
- This application claims priority to Korean Patent Application No. 10-2019-0126918, filed Oct. 14, 2019 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
- Apparatuses, systems, and methods according to example embodiments relate to group calling.
- A conference call refers to a group call that allows a plurality of users to talk at the same time. A group call may be implemented using a real-time call service, and may transfer audio data and media, such as video, to a call counterpart. However, in the related art, data transferred through the group call is transmitted to each of the users participating in a corresponding instance of the group call. For example, in an instance of a group call participated in by five users including user 1, user 2, user 3, user 4, and user 5, data of audio/video of the user 1 is transferred to each of the remaining users, that is, the user 2, the user 3, the user 4, and the user 5. That is, in the group call according to the related art, data is broadcasted at all times.
- Example embodiments provide a group call method, apparatus and system in which a desired participant or participant group may be spontaneously selected during the progress of a group call to allow whispering with the selected participant or participant group.
- According to an aspect of an example embodiment, there is provided a non-transitory computer-readable record medium storing instructions that, when executed by a processor of a computer apparatus including a touchscreen and an input device, cause the computer apparatus to execute a group call method including: participating in a group call session with a plurality of participants; designating at least one participant among the plurality of participants as a whisper target based on a first touch gesture on the touchscreen during the group call session; generating a whisper packet configured to control a server to transfer, only to the whisper target, at least one of video and audio that are input through the input device while the whisper target is designated; and transmitting the whisper packet to the server through the group call session.
- The designating may include: displaying a plurality of objects respectively corresponding to the plurality of participants on the touchscreen; identifying an object from among the plurality of objects indicated by a touch on the touchscreen that is maintained for a preset period of time; and designating a participant corresponding to the identified object as the whisper target while the touch is maintained.
- The designating may include: setting a whisper group including the at least one participant during the group call session; activating the whisper group based on a second touch gesture on the touchscreen; and designating the at least one participant included in the whisper group as the whisper target while the whisper group is active.
- The setting of the whisper group may include setting at least two whisper groups, each of which includes at least one participant, and the activating of the whisper group may include selectively activating a single whisper group among the at least two whisper groups based on the second touch gesture.
- The whisper group may be automatically set for each of the at least one participant included in the whisper group through the server based on the whisper group being set.
- The setting of the whisper group may include: displaying a plurality of objects respectively corresponding to the plurality of participants and a whisper group setting area on the touchscreen; identifying an object from among the plurality of objects that is moved to the whisper group setting area through a drag-and-drop gesture; and including a participant from among the plurality of participants that corresponds to the identified object in the whisper group.
- The setting of the whisper group may include: displaying a plurality of objects respectively corresponding to the plurality of participants and a whisper group generation button in a whisper group generation area on the touchscreen; activating a whisper group member selection mode based on selection of the whisper group generation button; identifying an object indicated by a touch on the touchscreen in a state in which the whisper group member selection mode is active; and including a participant from among the plurality of participants that corresponds to the identified object in the whisper group.
- The activating may include activating the whisper group based on a swipe gesture in a first direction on the touchscreen.
- The group call method may include: receiving a whisper packet from the server; modulating an audio signal indicated by the received whisper packet; and outputting the modulated audio through an output device of the computer apparatus.
- The group call method may further include: displaying a plurality of objects respectively corresponding to the plurality of participants on the touchscreen; receiving a whisper packet from the server; and highlighting and displaying an object from among the plurality of objects corresponding to a participant having transmitted the received whisper packet.
- According to an aspect of an example embodiment, there is provided a group call method including: participating in a group call session with a plurality of participants; designating at least one participant among the plurality of participants as a whisper target based on a first touch gesture on a touchscreen during the group call session; generating a whisper packet configured to control a server to transfer, only to the whisper target, at least one of video and audio that are input through an input device while the whisper target is designated; and transmitting the whisper packet to the server through the group call session.
- The designating may include: displaying a plurality of objects respectively corresponding to the plurality of participants on the touchscreen; identifying an object from among the plurality of objects indicated by a touch on the touchscreen that is maintained for a preset period of time; and designating a participant corresponding to the identified object as the whisper target while the touch is maintained.
- The designating may include: setting a whisper group including the at least one participant during the group call session; activating the whisper group based on a second touch gesture; and designating the at least one participant included in the whisper group as the whisper target while the whisper group is active.
- The setting of the whisper group may include setting at least two whisper groups, each of which includes at least one participant, and the activating of the whisper group may include selectively activating a single whisper group among the at least two whisper groups based on the second touch gesture.
- The whisper group may be automatically set for each of the at least one participant included in the whisper group through the server based on the whisper group being set.
- The setting of the whisper group may include: displaying a plurality of objects respectively corresponding to the plurality of participants and a whisper group setting area on the touchscreen; identifying an object from among the plurality of objects that is moved to the whisper group setting area through a drag-and-drop gesture; and including a participant from among the plurality of participants that corresponds to the identified object in the whisper group.
- The setting of the whisper group may include: displaying a plurality of objects respectively corresponding to the plurality of participants and a whisper group generation button in a whisper group generation area on the touchscreen; activating a whisper group member selection mode based on selection of the whisper group generation button; identifying an object indicated by a touch on the touchscreen in a state in which the whisper group member selection mode is active; and including a participant from among the plurality of participants that corresponds to the identified object in the whisper group.
- The activating may include activating the whisper group based on a swipe gesture in a first direction on the touchscreen.
- The group call method may further include: receiving a whisper packet from the server; modulating an audio signal indicated by the received whisper packet; and outputting the modulated audio through an output device.
- The group call method may further include: displaying a plurality of objects respectively corresponding to the plurality of participants on the touchscreen; receiving a whisper packet from the server; and highlighting and displaying an object from among the plurality of objects corresponding to a participant having transmitted the received whisper packet.
- According to an aspect of an example embodiment, there is provided a computer apparatus including: a touchscreen; an input device; at least one memory configured to store computer-readable instructions; and at least one processor configured to execute the computer-readable instructions to: participate in a group call session with a plurality of participants, designate at least one participant among the plurality of participants as a whisper target based on a first touch gesture on the touchscreen during the group call session, generate a whisper packet configured to control a server to transfer, only to the whisper target, at least one of video and audio that are input through the input device while the whisper target is designated, and transmit the whisper packet to the server through the group call session.
- The at least one processor may be further configured to execute the computer-readable instructions to: display a plurality of objects respectively corresponding to the plurality of participants on the touchscreen, identify an object from among the plurality of objects indicated by a touch on the touchscreen that is maintained for a preset period of time, and designate a participant corresponding to the identified object as the whisper target while the touch is maintained.
- The at least one processor may be further configured to execute the computer-readable instructions to: set a whisper group including the at least one participant during the group call session, activate the whisper group based on a second touch gesture on the touchscreen, and designate the at least one participant included in the whisper group as the whisper target while the whisper group is active.
- The computer apparatus may further include an output device, and the at least one processor is further configured to execute the computer-readable instructions to: display a plurality of objects respectively corresponding to the plurality of participants on the touchscreen, receive a whisper packet from the server, modulate an audio signal indicated by the received whisper packet, output the modulated audio through the output device included in the computer apparatus, and highlight an object from among the plurality of objects corresponding to a participant having transmitted the received whisper packet.
- The above and/or other aspects will be more apparent by describing certain example embodiments, with reference to the accompanying drawings, in which:
- FIG. 1 is a diagram illustrating an example of a network environment according to at least one example embodiment;
- FIG. 2 is a diagram illustrating an example of a computer apparatus according to at least one example embodiment;
- FIG. 3 illustrates an example of a group call system according to at least one example embodiment;
- FIGS. 4 to 7 illustrate examples of a group call process according to at least one example embodiment;
- FIG. 8 illustrates an example of an extended container format based on an extended transfer protocol according to at least one example embodiment;
- FIGS. 9 and 10 illustrate examples of a screen for selecting a whisper target according to at least one example embodiment;
- FIG. 11 illustrates an example of a screen in the case of receiving a whisper packet according to at least one example embodiment;
- FIGS. 12 and 13 illustrate examples of setting a whisper group according to at least one example embodiment;
- FIGS. 14 and 15 illustrate other examples of setting a whisper group according to at least one example embodiment;
- FIG. 16 illustrates an example of activating a whisper group according to at least one example embodiment;
- FIG. 17 illustrates an example of a screen in the case of being included in a whisper group and receiving a whisper according to at least one example embodiment; and
- FIG. 18 is a flowchart illustrating an example of a group call method according to at least one example embodiment.
- Example embodiments are described in greater detail below with reference to the accompanying drawings.
- In the following description, like drawing reference numerals are used for like elements, even in different drawings. The matters defined in the description, such as detailed construction and elements, are provided to assist in a comprehensive understanding of the example embodiments. However, it is apparent that the example embodiments can be practiced without those specifically defined matters. Also, well-known functions or constructions are not described in detail since they would obscure the description with unnecessary detail.
- One or more example embodiments will be described in detail with reference to the accompanying drawings. Example embodiments, however, may be embodied in various different forms, and should not be construed as being limited to only the illustrated embodiments. Rather, the illustrated embodiments are provided as examples so that this disclosure will be thorough and complete, and will fully convey the concepts of this disclosure to those skilled in the art. Accordingly, known processes, elements, and techniques, may not be described with respect to some example embodiments. Unless otherwise noted, like reference characters denote like elements throughout the attached drawings and written description, and thus descriptions will not be repeated.
- Although the terms “first,” “second,” “third,” etc., may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections, should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer, or section, from another region, layer, or section. Thus, a first element, component, region, layer, or section, discussed below may be termed a second element, component, region, layer, or section, without departing from the scope of this disclosure.
- Spatially relative terms, such as “beneath,” “below,” “lower,” “under,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below,” “beneath,” or “under,” other elements or features would then be oriented “above” the other elements or features. Thus, the example terms “below” and “under” may encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. In addition, when an element is referred to as being “between” two elements, the element may be the only element between the two elements, or one or more other intervening elements may be present.
- As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups, thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. For example, the expression, “at least one of a, b, and c,” should be understood as including only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or any variations of the aforementioned examples. Also, the term “exemplary” is intended to refer to an example or illustration.
- When an element is referred to as being “on,” “connected to,” “coupled to,” or “adjacent to,” another element, the element may be directly on, connected to, coupled to, or adjacent to, the other element, or one or more other intervening elements may be present. In contrast, when an element is referred to as being “directly on,” “directly connected to,” “directly coupled to,” or “immediately adjacent to,” another element there are no intervening elements present.
- Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. Terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and/or this disclosure, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
- Example embodiments may be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented in conjunction with units and/or devices discussed in more detail below. Although discussed in a particular manner, a function or operation specified in a specific block may be performed differently from the flow specified in a flowchart, flow diagram, etc. For example, functions or operations illustrated as being performed serially in two consecutive blocks may actually be performed simultaneously, or in some cases be performed in reverse order.
- Units and/or devices according to one or more example embodiments may be implemented using hardware and/or a combination of hardware and software. For example, hardware devices may be implemented using processing circuitry such as, but not limited to, a processor, Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, or any other device capable of responding to and executing instructions in a defined manner.
- Software may include a computer program, program code, instructions, or some combination thereof, for independently or collectively instructing or configuring a hardware device to operate as desired. The computer program and/or program code may include program or computer-readable instructions, software components, software modules, data files, data structures, and/or the like, capable of being implemented by one or more hardware devices, such as one or more of the hardware devices mentioned above. Examples of program code include both machine code produced by a compiler and higher level program code that is executed using an interpreter.
- For example, when a hardware device is a computer processing device (e.g., a processor), Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a microprocessor, etc., the computer processing device may be configured to carry out program code by performing arithmetical, logical, and input/output operations, according to the program code. Once the program code is loaded into a computer processing device, the computer processing device may be programmed to perform the program code, thereby transforming the computer processing device into a special purpose computer processing device. In a more specific example, when the program code is loaded into a processor, the processor becomes programmed to perform the program code and operations corresponding thereto, thereby transforming the processor into a special purpose processor.
- Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, or computer record medium or device, capable of providing instructions or data to, or being interpreted by, a hardware device. The software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion. In particular, for example, software and data may be stored by one or more computer readable record mediums, including the tangible or non-transitory computer-readable storage media discussed herein.
- According to one or more example embodiments, computer processing devices may be described as including various functional units that perform various operations and/or functions to increase the clarity of the description. However, computer processing devices are not intended to be limited to these functional units. For example, in one or more example embodiments, the various operations and/or functions of the functional units may be performed by other ones of the functional units. Further, the computer processing devices may perform the operations and/or functions of the various functional units without sub-dividing the operations and/or functions of the computer processing units into these various functional units.
- Units and/or devices according to one or more example embodiments may also include one or more storage devices. The one or more storage devices may be tangible or non-transitory computer-readable storage media, such as random access memory (RAM), read only memory (ROM), a permanent mass storage device (such as a disk drive or solid state (e.g., NAND flash) device), and/or any other like data storage mechanism capable of storing and recording data. The one or more storage devices may be configured to store computer programs, program code, instructions, or some combination thereof, for one or more operating systems and/or for implementing the example embodiments described herein. The computer programs, program code, instructions, or some combination thereof, may also be loaded from a separate computer readable record medium into the one or more storage devices and/or one or more computer processing devices using a drive mechanism. Such separate computer readable record medium may include a Universal Serial Bus (USB) flash drive, a memory stick, a Blu-ray/DVD/CD-ROM drive, a memory card, and/or other like computer readable storage media. The computer programs, program code, instructions, or some combination thereof, may be loaded into the one or more storage devices and/or the one or more computer processing devices from a remote data storage device via a network interface, rather than via a local computer readable record medium. Additionally, the computer programs, program code, instructions, or some combination thereof, may be loaded into the one or more storage devices and/or the one or more processors from a remote computing system that is configured to transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, over a network.
The remote computing system may transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, via a wired interface, an air interface, and/or any other like medium.
- The one or more hardware devices, the one or more storage devices, and/or the computer programs, program code, instructions, or some combination thereof, may be specially designed and constructed for the purposes of the example embodiments, or they may be known devices that are altered and/or modified for the purposes of example embodiments.
- A hardware device, such as a computer processing device, may run an operating system (OS) and one or more software applications that run on the OS. The computer processing device also may access, store, manipulate, process, and create data in response to execution of the software. For simplicity, one or more example embodiments may be exemplified as one computer processing device; however, one skilled in the art will appreciate that a hardware device may include multiple processing elements and multiple types of processing elements. For example, a hardware device may include multiple processors or a processor and a controller. In addition, other processing configurations are possible, such as parallel processors.
- Although described with reference to specific examples and drawings, modifications, additions and substitutions of example embodiments may be variously made according to the description by those of ordinary skill in the art. For example, the described techniques may be performed in an order different from that of the methods described, and/or components such as the described system, architecture, devices, circuit, and the like, may be connected or combined in a manner different from the above-described methods, or appropriate results may be achieved by other components or equivalents.
- Hereinafter, example embodiments are described with reference to the accompanying drawings.
- A group call system according to example embodiments may be implemented by at least one computer system, a group call apparatus according to example embodiments may be implemented by at least one computer apparatus included in the group call system, and a group call method according to example embodiments may be performed through at least one computer apparatus included in the group call system. Here, a computer program according to example embodiments may be installed and executed on the computer apparatus, and the computer apparatus may perform the group call method under control of the executed computer program. The computer program may be stored in a computer-readable storage medium to execute the group call method on a computer in conjunction with the computer apparatus.
-
FIG. 1 illustrates an example of a network environment according to at least one example embodiment. Referring to FIG. 1, the network environment may include a plurality of electronic devices, servers 150 and 160, and a network 170. FIG. 1 is provided as an example only; the number of electronic devices and the number of servers are not limited thereto. Also, the network environment of FIG. 1 is provided to describe one example among environments applicable to the example embodiments. An environment applicable to the example embodiments is not limited to the network environment of FIG. 1. - Each of the plurality of
electronic devices may be configured as a computer apparatus. Although FIG. 1 illustrates a shape of a smartphone as an example of the electronic device 110, the electronic device 110 used herein may refer to one of various types of physical computer apparatuses capable of communicating with other electronic devices and/or the servers 150 and 160 over the network 170 in a wireless or wired communication manner. - The communication scheme is not limited and may include a near field wireless communication scheme between devices as well as a communication scheme using a communication network (e.g., a mobile communication network, wired Internet, wireless Internet, and a broadcasting network) includable in the
network 170. For example, the network 170 may include at least one of network topologies that include a personal area network (PAN), a local area network (LAN), a campus area network (CAN), a metropolitan area network (MAN), a wide area network (WAN), a broadband network (BBN), and the Internet. Also, the network 170 may include at least one of network topologies that include a bus network, a star network, a ring network, a mesh network, a star-bus network, a tree or hierarchical network, and the like. However, they are provided as examples only. - Each of the
servers 150 and 160 may be configured as a computer apparatus or a plurality of computer apparatuses that provides an instruction, a code, a file, content, a service, etc., through communication with the plurality of electronic devices over the network 170. For example, the server 150 may be a system that provides a service (e.g., a group call service or an audio conferencing service, a messaging service, a mail service, a social network service, a map service, a translation service, a financial service, a payment service, a search service, and a content providing service) to the plurality of electronic devices over the network 170. -
FIG. 2 is a block diagram illustrating an example of a computer apparatus according to at least one example embodiment. A computer apparatus 200 shown in FIG. 2 may correspond to any one of the plurality of electronic devices or the servers 150 and 160. - Referring to
FIG. 2, the computer apparatus 200 may include a memory 210, a processor 220, a communication interface 230, and an input/output (I/O) interface 240. The memory 210 may include a permanent mass storage device, such as random access memory (RAM), read only memory (ROM), and a disc drive, as a non-transitory computer-readable storage medium. Here, the permanent mass storage device, such as ROM and disc drive, may be included in the computer apparatus 200 as a separate permanent storage device different from the memory 210. Also, an operating system (OS) and at least one program code may be stored in the memory 210. Such software components may be loaded from another non-transitory computer-readable storage medium to the memory 210. The other non-transitory computer-readable storage medium may include, for example, a floppy drive, a disk, a tape, a DVD/CD-ROM drive, a memory card, etc. According to other example embodiments, software components may be loaded to the memory 210 through the communication interface 230, instead of, or in addition to, the non-transitory computer-readable storage medium. For example, the software components may be loaded to the memory 210 of the computer apparatus 200 based on a computer program installed by files received over the network 170. - The
processor 220 may be configured to process computer-readable instructions of a computer program by performing basic arithmetic operations, logic operations, and I/O operations. The computer-readable instructions may be provided from the memory 210 or the communication interface 230 to the processor 220. For example, the processor 220 may be configured to execute received instructions in response to a program code stored in a storage device, such as the memory 210. - The
communication interface 230 may include a transceiver (transmitter and receiver), and may provide a function for communication between the computer apparatus 200 and another apparatus, for example, the aforementioned electronic devices or the servers 150 and 160, over the network 170. For example, the processor 220 of the computer apparatus 200 may transfer data, a file, a request, or an instruction created based on the program code stored in the storage device, such as the memory 210, to other apparatuses over the network 170 under control of the communication interface 230. The communication interface 230 may receive a signal, an instruction, data, a file, etc., from another apparatus. For example, a signal, an instruction, data, etc., received through the communication interface 230 may be transferred to the processor 220 or the memory 210, and a file, etc., may be stored in a storage medium, for example, the permanent storage device, further includable in the computer apparatus 200. - The I/O interface 240 may be a device used to interface with an I/O apparatus 250. For example, an input device may include a device, such as a microphone, a keyboard, and a mouse, and an output device may include a device, such as a display device and a speaker. As another example, the I/O interface 240 may be a device for interfacing with an apparatus in which an input function and an output function are integrated into a single function, such as a touchscreen. The I/O apparatus 250 may be configured as a single apparatus with the computer apparatus 200. For example, a touchscreen, a microphone, a speaker, etc., may be included in the computer apparatus 200, such as a smartphone. - According to other example embodiments, the
computer apparatus 200 may include a greater or smaller number of components than the number of components shown in FIG. 2. For example, the computer apparatus 200 may include at least a portion of the I/O apparatus 250, or may further include other components, for example, a transceiver, a database (DB), and the like. -
FIG. 3 illustrates an example of a group call system according to at least one example embodiment. The group call system may include a server 310 configured to provide a group call service and client devices 320 for a plurality of participants. In contrast to the related art, communication between the server 310 and the client devices 320 may be performed through an extended transfer protocol in which destination information is included in a packet according to example embodiments. For example, a case in which a client device 1 321, a client device 2 322, and a client device 3 323 among the client devices 320 participate in a single group call instance may be considered. In this case, each of the client device 1 321, the client device 2 322, and the client device 3 323 that participate in the corresponding group call instance may designate a specific participant and may transmit a packet to a client device corresponding to the specific participant. For example, the client device 1 321 may designate the participant corresponding to the client device 3 323 and may transfer media, such as audio or video, only to the client device 3 323. In this regard, the client device 3 323 may be a whisper target and the client device 1 321 may whisper to the client device 3 323. - That is, according to the example embodiment, a participant of a group call service may transmit data to all of the participants of a corresponding group call instance and may also transmit data to a specific participant or a small group of specific participants (i.e., a whisper target). For example, the
client device 1 321 may transmit data to the client device 2 322 but not transmit data to the client device 3 323. As another example, the client device 1 321 may transmit data to the client device 2 322 and the client device 3 323, but not transmit data to the client device n. -
FIGS. 4 to 7 illustrate examples of a group call process according to at least one example embodiment. -
FIG. 4 illustrates an example in which User 1 410, User 2 420, and User 3 430 participate in a group call instance. Here, each of the User 1 410, the User 2 420, and the User 3 430 may correspond to one of the client devices 320 of FIG. 3. Here, the server 310 may manage a table that includes a source identifier (Srcid) and connecting information for each of the User 1 410, the User 2 420, and the User 3 430, as shown in a table 440. The source identifier (Srcid) refers to a unique value for each group call and, generally, may be used to identify a source of data received at the server 310 from the users (e.g., an audio source of a user A or a video source of a user B). In addition, according to some example embodiments, the source identifier may be used, in association with connecting information, to identify the destination to which data transmitted from one of the client devices 320 to the server 310 is to be transferred, that is, to identify destination information of the data. For example, the source identifier (Srcid) and the connecting information may be managed in a single table with other information about the User 1 410, the User 2 420, and the User 3 430. In response to receiving, from a client device of each user, a request for initiating or participating in a group call with call related information, such as a type of media to be transmitted or received, the server 310 may manage a table, such as the table 440, by allocating at least one source identifier based on the call related information received from the client device. The server 310 may transmit, to the client device initiating or participating in the group call, the source identifier allocated to the corresponding client device. Also, if other users are participating in the group call, the server 310 may transmit source identifiers of at least a portion of the other users participating in the group call to the client device of the user participating in the group call.
Also, the server 310 may transmit source identifiers of at least a portion of other users participating in the group call to the client device of the user participating in the group call in response to a request from the client device or based on a determination of the server 310, during the group call. For example, the User 1 410 may use the client device 1 321, the User 2 420 may use the client device 2 322, and the User 3 430 may use the client device 3 323. The client device 1 321 and the client device 2 322 may each include an audio device and a video device. The client device 3 323 may include an audio device but not a video device. The User 3 430, using the client device 3 323 that does not include a video device, may join a group call which uses audio and video between the User 1 410, using the client device 1 321, and the User 2 420, using the client device 2 322. In response to the server 310 receiving a request from the User 3 430 for participating in the group call with call related information of the User 3 430, the server 310 may allocate a source identifier Src5 to the User 3 430 and may store the source identifier Src5 in the table 440 in association with connecting information ip3/port1 corresponding to the source identifier Src5. Here, the server 310 may transmit, to the client device 3 323 of the User 3 430, source identifier information of the User 1 410 and the User 2 420 already participating in the group call, with the source identifier Src5 allocated to the User 3 430. For example, the server 310 may transmit, to the client device 3 323 of the User 3 430, source identifier information Src1 for audio of the User 1 410 and source identifier information Src3 for audio of the User 2 420. However, the server 310 may not transmit, to the client device 3 323 of the User 3 430, source identifier information Src2 for video of the User 1 410 and source identifier information Src4 for video of the User 2 420.
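The allocation flow described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the table shape (source identifier mapped to owner, media type, and connecting information), the Src-numbering scheme, and the function name `join_group_call` are all assumptions made for the example.

```python
# Hypothetical sketch of the FIG. 4 join flow: the server allocates one
# source identifier per media type the joining client reports, records its
# connecting information, and shares identifiers of matching sources that
# are already in the call. Table shape and numbering are illustrative.

def join_group_call(table, user, media_conn):
    """table maps srcid -> (owner, media, (ip, port));
    media_conn maps a media type -> the joining client's (ip, port)."""
    allocated = {}
    for media, conn in media_conn.items():
        srcid = "Src%d" % (len(table) + 1)  # naive allocation for the sketch
        table[srcid] = (user, media, conn)
        allocated[media] = srcid
    # Share only identifiers of sources whose media type the client uses,
    # e.g. an audio-only client learns Src1 and Src3 but not Src2 or Src4.
    existing = {s: (owner, media) for s, (owner, media, _) in table.items()
                if owner != user and media in media_conn}
    return allocated, existing
```

With a table already holding Src1 through Src4 for the first two users, a call such as `join_group_call(table, "User3", {"audio": ("ip3", "port1")})` would allocate Src5 and return only the audio identifiers Src1 and Src3, mirroring the audio-only participation example above.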
Also, the server 310 may notify the User 1 410 and the User 2 420 already participating in the group call that the source identifier of the User 3 430 is Src5. - Here, referring to
FIG. 4, it is assumed that, when the User 1 410, the User 2 420, and the User 3 430 are participating in the group call, the User 1 410 transmits a packet P1 and a packet P2 to the server 310 without designating at least a portion of the users. Here, each of the packets P1 and P2 may include a source identifier. For example, when the packet P1 includes audio data and the packet P2 includes video data, the packet P1 may include a source identifier Src1 of the User 1 410 for audio and the packet P2 may include a source identifier Src2 of the User 1 410 for video. In this case, the server 310 may identify that the packet P1 includes audio data based on the source identifier Src1 included in the packet P1, and may transfer the packet P1 to user devices of the User 2 420 and the User 3 430 having audio sources through ip2/port1 and ip3/port1 by referring to the table 440. Also, the server 310 may identify that the packet P2 includes video data based on the source identifier Src2 included in the packet P2, and may transfer the packet P2 to the user device of the User 2 420 having the video source through ip2/port2 by referring to the table 440. Here, because a video source is not allocated to the User 3 430, the packet P2 is not transferred to the User 3 430. -
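The undesignated (broadcast) forwarding just described reduces to a table lookup. The sketch below uses the same Src1 through Src5 identifiers and ip/port pairs as the table 440; the table shape itself (owner, media type, connecting information per identifier) is an illustrative assumption.

```python
# Sketch of server-side broadcast routing by source identifier (table 440).
# Each srcid maps to (owner, media type, connecting information).
TABLE = {
    "Src1": ("User1", "audio", ("ip1", "port1")),
    "Src2": ("User1", "video", ("ip1", "port2")),
    "Src3": ("User2", "audio", ("ip2", "port1")),
    "Src4": ("User2", "video", ("ip2", "port2")),
    "Src5": ("User3", "audio", ("ip3", "port1")),
}

def route_broadcast(srcid):
    """A packet with no designated destination goes to every other
    participant that has a source of the same media type."""
    sender, media, _ = TABLE[srcid]
    return [conn for owner, m, conn in TABLE.values()
            if owner != sender and m == media]
```

Here `route_broadcast("Src1")` yields ip2/port1 and ip3/port1 (the packet P1 case), while `route_broadcast("Src2")` yields only ip2/port2, since no video source is allocated to the third participant (the packet P2 case).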
FIG. 5 illustrates an example of a process in which the User 1 410 transfers audio and video of the User 1 410 only to the User 2 420 during a group call. In response to receiving a request from the User 1 410 for transferring audio and video only to the User 2 420 during the group call, the server 310 may identify destination information in a packet transmitted from the User 1 410 and transfer the packet only to the User 2 420 based on the destination information in the packet. For example, the User 1 410 may designate the User 2 420 as a whisper target and whisper to the User 2 420 by transferring audio and video only to the User 2 420. The example illustrated in FIG. 5 differs from an example in which a destination is not specified. During the group call, a request for transferring audio and video only to the User 2 420 may be received from the User 1 410. In response thereto, a source identifier Src3 may be designated as destination information in a packet P3 that includes the audio of the User 1 410 and a source identifier Src4 may be designated as destination information in a packet P4 that includes the video of the User 1 410. For example, the server 310 may designate the destination information in response to the request. As another example, the client device 1 321 may designate the destination information in response to the request based on source identifiers of other users received from the server 310, for example, by referring to a table 510 of FIG. 5. In this case, the server 310 may transfer the packet P3 to ip2/port1 of the User 2 420 corresponding to the source identifier Src3 designated in the packet P3 and may transfer the packet P4 to ip2/port2 of the User 2 420 corresponding to the source identifier Src4 designated in the packet P4, by referring to the table 440 of FIG. 4.
Such a source identifier may be designated in each packet received from the User 1 410 until the request for transferring audio and video of the User 1 410 only to the User 2 420 during the group call is released by the User 1 410. For example, the source identifier may be designated in each packet while the User 2 420 is designated as the whisper target. -
FIG. 6 illustrates an example of a process in which the User 1 410 transfers only video of the User 1 410 to the User 2 420 and transfers only audio of the User 1 410 to the User 3 430 during a group call. In response to receiving a request from the User 1 410 for transferring video of the User 1 410 only to the User 2 420 and audio of the User 1 410 only to the User 3 430, the server 310 may identify destination information in a packet transmitted from the User 1 410 and, based on the destination information in the packet, transfer the packet only to the User 2 420 if the packet is a video packet and only to the User 3 430 if the packet is an audio packet. The example illustrated in FIG. 6 differs from an example in which a destination is not specified. During the group call, the request for transferring video of the User 1 410 only to the User 2 420 and audio of the User 1 410 only to the User 3 430 may be received from the User 1 410. In response thereto, a source identifier Src4 may be designated as destination information in a packet P5 for the video of the User 1 410 and a source identifier Src5 may be designated as destination information in a packet P6 for the audio of the User 1 410. For example, the server 310 may designate the destination information in response to the request. As another example, the client device 1 321 may designate the destination information in response to the request based on source identifiers of other users received from the server 310, for example, by referring to a table 510 of FIG. 6. In this case, the server 310 may transfer the packet P5 to ip2/port2 of the User 2 420 corresponding to the source identifier Src4 designated in the packet P5 and may transfer the packet P6 to ip3/port1 of the User 3 430 corresponding to the source identifier Src5 designated in the packet P6, by referring to the table 440 of FIG. 4. - As described above, a related transfer protocol may be extended to designate destination information.
Such related transfer protocols transfer a packet to all of the participants in the form of a broadcast and thus do not need to designate a destination; accordingly, they have no field for including destination information. The example embodiment may extend the related transfer protocol by adding a field for designating destination information using a reserved area in a header of a packet according to the related transfer protocol.
- Also, at least two users may be designated for a single packet.
FIG. 7 considers a case in which a User 4 710 and a User 5 720, as well as the User 1 410, the User 2 420, and the User 3 430, participate in a group call instance. For example, each of the User 4 710 and the User 5 720 may use one of the client devices 320 of FIG. 3. FIG. 7 illustrates an example in which the User 1 410 designates the User 2 420 and the User 5 720 and transfers audio data. In response to receiving a request from the User 1 410 for transferring audio only to the User 2 420 and the User 5 720, the server 310 may identify destination information in a packet transmitted from the User 1 410 and may transfer the packet only to the User 2 420 and the User 5 720 based on the destination information in the packet. The example illustrated in FIG. 7 differs from an example in which a destination is not specified. During the group call, a request for transferring audio only to the User 2 420 and the User 5 720 may be received from the User 1 410. In response thereto, source identifiers Src3 and Src8 may be designated as destination information in a packet P7 that includes audio of the User 1 410. For example, the server 310 may designate the destination information in response to the request. As another example, the client device 1 321 may designate the destination information in response to the request based on source identifiers of other users received from the server 310, for example, by referring to a table 730 of FIG. 7. In this case, the server 310 may transfer the packet P7 to each of the User 2 420 and the User 5 720 through connecting information of the User 2 420 corresponding to the source identifier Src3 and connecting information of the User 5 720 corresponding to the source identifier Src8, which are designated in the packet P7. -
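Destination-designated ("whisper") forwarding across FIGS. 5 to 7 reduces to one table lookup per designated identifier. The sketch below is illustrative; in particular, the mapping of Src8 to ip5/port1 for the fifth user is an assumed value, since the FIG. 7 table is not reproduced here.

```python
# Sketch of whisper forwarding: deliver a packet only to the connecting
# information of each source identifier designated as its destination.
# Addresses are illustrative; Src8's entry is an assumption.
CONNECTING = {
    "Src3": ("ip2", "port1"),  # User 2 audio
    "Src4": ("ip2", "port2"),  # User 2 video
    "Src5": ("ip3", "port1"),  # User 3 audio
    "Src8": ("ip5", "port1"),  # User 5 audio (assumed address)
}

def route_designated(dssrcs):
    """Return the delivery addresses for the designated identifiers only."""
    return [CONNECTING[d] for d in dssrcs]
```

The packet P3 of FIG. 5 designates Src3 and so reaches only the second user's audio port, while the packet P7 of FIG. 7 designates both Src3 and Src8 and reaches two recipients.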
FIG. 8 illustrates an example of an extended container format based on an extended transfer protocol according to at least one example embodiment. FIG. 8 illustrates a container format 800 of an extended transfer protocol of a real-time transport protocol (RTP). Here, destination synchronization sources (dSSRCs) may be newly defined to manage destinations, corresponding to the synchronization sources (SSRCs) that manage sources in the extended transfer protocol of the RTP. Here, a first box 810 indicated with dotted lines may represent a field for recording dSSRCs (D-SSRCs) in an RTP packet header. Here, recording a dSSRC may represent recording an identifier of the dSSRC. Also, D-CNT for counting the number of dSSRCs may be newly defined, corresponding to CC for counting the number of SSRCs. In FIG. 8, a second box 820 indicated with dotted lines may represent a field for counting the number of dSSRCs. In this case, a client may designate another client to which a packet is to be transferred by adding a dSSRC to the packet, and the server 310 may verify a destination to which the packet is to be transferred through the dSSRC included in the packet received from the client and may transmit the packet to the verified destination. -
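One way to carry the D-CNT and dSSRC fields is sketched below. The exact bit positions of FIG. 8 are not reproduced here, so this layout (the standard 12-byte RTP fixed header, followed by a 4-byte word holding D-CNT, followed by D-CNT 32-bit dSSRCs) is an illustrative assumption rather than the patented on-wire format.

```python
import struct

def build_extended_rtp_packet(ssrc, dssrcs, seq=0, ts=0, payload_type=96):
    """Pack a hypothetical extended RTP header listing destination SSRCs."""
    byte0 = (2 << 6) | 0          # version 2, no padding/extension, CC = 0
    byte1 = payload_type & 0x7F   # marker bit clear
    hdr = struct.pack("!BBHII", byte0, byte1, seq, ts, ssrc)
    hdr += struct.pack("!I", len(dssrcs))  # D-CNT word (assumed placement)
    for d in dssrcs:                       # dSSRC list
        hdr += struct.pack("!I", d)
    return hdr

def parse_dssrcs(pkt):
    """Recover the destination SSRC list from a packet built above."""
    (dcnt,) = struct.unpack_from("!I", pkt, 12)
    return [struct.unpack_from("!I", pkt, 16 + 4 * i)[0] for i in range(dcnt)]
```

A server receiving such a packet would call `parse_dssrcs`, look up the connecting information for each identifier, and forward the payload accordingly; an empty list would mean the packet is broadcast to all participants.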
FIGS. 9 and 10 illustrate examples of a screen for selecting a whisper target according to at least one example embodiment. - Referring to
FIG. 9, a first screen 900 may be a terminal screen of a user that participates in a group call. For example, the terminal screen may be displayed on a terminal, which may be one of the client devices 320 of FIG. 3. The terminal screen of the user may be configured through a touchscreen. Here, videos included in packets received from a plurality of participants of the group call may be displayed on the first screen 900 through objects for the respective corresponding participants. The packets may be routed through a server for the group call and transferred to each of the participants. Also, audio, for example, voices of the respective participants, included in the packets may be output through an output device, for example, a speaker included in the terminal of the user. Also, video and/or audio input from the terminal of the user through an input device, for example, a camera and/or a microphone, may be routed and transferred to each of the participants. - Referring to
FIG. 10, a second screen 1000 represents an example in which the user selects a participant that the user desires to whisper with during the group call. For example, the user may indicate a participant that the user desires to whisper with by touching an area of an object corresponding to the participant among the objects (objects that represent videos of the participants) displayed on the terminal screen of the user. Here, the terminal of the user may identify an object on which the touch is being maintained and may configure a whisper packet such that video and/or audio input while the touch is maintained may be transferred only to the participant corresponding to the identified object. For example, the second screen 1000 represents an example in which the user inputs a long tap gesture on an area 1010 of an object on which a video of User 6 is displayed. In this case, the terminal of the user may configure a whisper packet such that video and/or audio input during the long tap gesture may be transferred only to the User 6 and may transfer the whisper packet to a server for the group call. In this case, the server may transfer the whisper packet only to the User 6 while the long tap gesture is maintained. Here, if the long tap gesture is released, the input video and/or audio may be transferred to all of the participants of the group call as displayed on the first screen 900. That is, the user may quickly and conveniently transmit a whisper to a corresponding participant by simply touching, on a screen, an object corresponding to the participant that the user desires to whisper with during the progress of the group call, and may quickly return to the group call by simply releasing the touch. -
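The hold-to-whisper behavior can be modeled as a tiny state machine: while a touch is held on a participant's object, outgoing packets carry that participant's identifier as their destination; on release, packets carry no destination and revert to broadcast. The class and field names below are illustrative, not taken from the patent.

```python
class HoldToWhisper:
    """Sketch of the FIG. 10 behavior: a long tap designates a whisper target."""
    def __init__(self):
        self.target = None  # source identifier of the touched participant

    def touch_down(self, srcid):
        self.target = srcid

    def touch_up(self):
        self.target = None

    def make_packet(self, media):
        # The destination field is present only while the touch is held;
        # an empty list means normal broadcast to all participants.
        return {"media": media, "dssrcs": [self.target] if self.target else []}
```

Each media packet captured while the touch is held is thus marked for one recipient, and the very next packet after release returns to the full group call without any further action by the user.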
FIG. 11 illustrates an example of a screen of a terminal that is receiving a whisper packet according to at least one example embodiment. A third screen 1100 represents an example in which another participant transmits a whisper. Referring to the third screen 1100, in response to a whisper received from User 1, a video of the User 1 is graphically distinguished (i.e., highlighted and/or enlarged) and thereby displayed. Here, the terminal of the user may highlight and output a voice of the User 1 by modulating and outputting the whisper of the User 1, that is, audio included in a whisper packet. For example, audio corresponding to the whisper may be modulated through howling processing and thereby output. In this case, the user may easily identify a participant that transmits the whisper (i.e., a source of the whisper) to the user. -
FIGS. 12 and 13 illustrate examples of setting a whisper group according to at least one example embodiment. - Referring to
FIG. 12, an area 1210, indicated with “Drop members here,” for setting a whisper group is displayed on a fourth screen 1200. A user may conveniently add a corresponding participant to a whisper group by moving, to the area 1210 through a drag-and-drop (D&D) gesture, an object corresponding to the participant that the user desires to include in the whisper group among objects on which videos of the participants are displayed on the fourth screen 1200. - For example, multiple participants may be added to the whisper group based on multiple drag-and-drop gestures corresponding to the multiple participants. Referring to
FIG. 13, a first indication 1310 on a fifth screen 1300 represents that a single participant is included in a whisper group capable of including four participants. For example, the single participant may be added to the whisper group in response to the user moving an object corresponding to the single participant to the area 1210 on the fourth screen 1200. In the same manner, the user may set the whisper group by moving, to the area 1210, objects corresponding to the participants that the user desires to include in the whisper group. The number of participants includable in the whisper group is not particularly limited and may be readily set depending on example embodiments. -
FIGS. 14 and 15 illustrate other examples of setting a whisper group according to at least one example embodiment. - Referring to
FIG. 14, a button 1410 for entering a mode for selecting participants to be included in a whisper group is displayed on a sixth screen 1400. In response to the user selecting the button 1410, for example, touching an area on which the button 1410 is displayed with a finger, the corresponding mode may be activated. Also, if the user reselects the button 1410 in a state in which the corresponding mode is active, the corresponding mode may be inactivated. - Referring to
FIG. 15, a seventh screen 1500 represents an example in which, in response to the user selecting an object on which a video of a participant is displayed, for example, touching an area on which the object is displayed, in a state in which the mode for selecting participants to be included in the whisper group is active, the participant corresponding to the selected object participates in the whisper group. For example, referring to the seventh screen 1500, indications displayed on the objects of the User 6 and the User 11 represent that the User 6 and the User 11 are selected for the whisper group. Here, a second indication 1530 represents the number of participants selected by the user in the aforementioned mode. For example, the second indication 1530 represents that two participants are included in the whisper group, in which four participants are includable, by providing an image in two of the four circles in the second indication 1530. As shown, the image representing each of the two participants in the whisper group includes an ear. As another example, the second indication 1530 may be determined based on a profile of a participant included in the whisper group. As described above, the number of participants includable in the whisper group is not particularly limited and may be readily set depending on example embodiments. - According to another example embodiment, a user that sets a whisper group according to one of
FIGS. 12 to 15 may be automatically included in the whisper group without a separate input. -
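The group-setup interactions of FIGS. 12 to 15 (drag-and-drop, or tapping in the selection mode) both reduce to adding members up to a capacity, with the group's creator included automatically per the embodiment above. The sketch below assumes a capacity of four only because the figures show four slots, and treats the creator as not counting against that capacity; all names are illustrative.

```python
class WhisperGroup:
    """Sketch of whisper-group setup with a configurable member limit."""
    def __init__(self, creator, capacity=4):
        self.capacity = capacity      # selectable slots, per the figures
        self.members = [creator]      # the user setting the group joins it

    def add(self, participant):
        """Add a participant (e.g., on drag-and-drop or tap);
        returns False when the participant is a duplicate or the group is full."""
        if participant in self.members or len(self.members) - 1 >= self.capacity:
            return False
        self.members.append(participant)
        return True
```

Whether the creator counts toward the displayed slot count is a UI choice the figures leave open; the sketch simply makes one choice explicit.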
FIG. 16 illustrates an example of activating a whisper group according to at least one example embodiment. FIG. 16 illustrates an example in which the user activates a whisper group as displayed on an eighth screen 1600 through a preset touch gesture (e.g., a swipe in a first direction in the example embodiment) during the progress of the group call as displayed on the first screen 900 of FIG. 9. For example, in response to recognizing a swipe from the right to the left on the first screen 900, the terminal of the user may activate a preset whisper group. Here, the eighth screen 1600 represents an example of a screen for activating a whisper group that includes four participants, for example, User 1, User 6, User 10, and User 12. Upon activation of the whisper group, a whisper packet may be configured such that video and/or audio input through an input device, for example, a camera and/or a microphone, included in the terminal of the user may be transferred only to the participants of the whisper group while the corresponding whisper group is active. In response to recognizing a swipe in a second direction, for example, a swipe from the left to the right, on the eighth screen 1600, the terminal of the user may inactivate the whisper group and may again process a call with all of the participants of the group call as displayed on the first screen 900. - As another example, a plurality of whisper groups may be set by the user. In detail, a first whisper group including four participants, for example,
User 1, User 6, User 10, and User 12, and a second whisper group including three participants, for example, User 3, User 6, and User 9, may be set. Here, in response to recognizing a swipe from the right to the left on the first screen 900, the terminal of the user may activate the first whisper group. In response to recognizing a swipe from the right to the left one more time, the terminal of the user may activate the second whisper group. Further, in response to recognizing a swipe from the left to the right in a state in which the second whisper group is active, the terminal of the user may inactivate the second whisper group and activate the first whisper group. In response to recognizing a swipe from the left to the right in a state in which the first whisper group is active, the terminal of the user may inactivate the first whisper group and may again process a call with all participants of the group call as displayed on the first screen 900. The swipe used herein is merely an example of a gesture for quickly activating a whisper group. Also, it may be easily understood from the foregoing description that the whisper group may become active and/or inactive through the aforementioned touch gesture. -
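The multi-group swipe behavior described above amounts to walking forward and backward through an ordered list of preset whisper groups. The following Python sketch is illustrative only and is not part of the patent disclosure; the GroupCycler class and its method names are hypothetical.

```python
class GroupCycler:
    """Illustrative sketch: directional swipes cycle through preset
    whisper groups, with index -1 representing the full group call."""

    def __init__(self, groups):
        self.groups = groups  # ordered list of preset whisper groups
        self.index = -1       # -1 means no whisper group is active

    def swipe_right_to_left(self):
        """Activate the next whisper group, if one remains."""
        if self.index < len(self.groups) - 1:
            self.index += 1
        return self.active()

    def swipe_left_to_right(self):
        """Inactivate the current group; at -1 the full call resumes."""
        if self.index >= 0:
            self.index -= 1
        return self.active()

    def active(self):
        """Return the active group's members, or None for the full call."""
        return self.groups[self.index] if self.index >= 0 else None
```

Starting from the full call, one right-to-left swipe activates the first group, a second activates the second, and left-to-right swipes walk back until the call with all participants resumes, matching the sequence described above.
-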
FIG. 17 illustrates an example of a screen of a terminal that is included in a whisper group and that receives a whisper according to at least one example embodiment. If another participant sets a whisper group that includes a user, the whisper group including the user may be automatically generated. Referring to FIG. 17, if the whisper group is active and a whisper packet is received, a terminal of the user may highlight and display objects corresponding to the participants, for example, User 1, User 6, User 10, and User 12, of the whisper group as displayed on a ninth screen 1700. Here, as described above, audio transferred through a whisper packet may be modulated and thereby output. For example, the terminal of the user may modulate the audio included in the whisper packet through howling processing and then output the modulated audio through a speaker. If a single user, for example, User 1, sets the whisper group, the corresponding whisper group may be automatically set with respect to each of the other participants, for example, User 6, User 10, and User 12, of the whisper group, such that all of the participants of the whisper group may transfer a whisper to the corresponding whisper group. -
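The reciprocal routing described above, where any member of a whisper group may whisper to the other members while ordinary packets reach everyone, can be sketched server-side as follows. This Python sketch is purely illustrative: the function name, the dictionary-based packet fields, and the named-group lookup are assumptions for illustration, not the patent's packet format (which is described with reference to FIGS. 3 to 8).

```python
def route_packet(packet, all_participants, whisper_groups):
    """Return the set of recipients for a packet (illustrative sketch).

    packet: dict with 'sender', and optionally 'whisper_group' naming an
            active whisper group (a hypothetical field for illustration).
    """
    sender = packet["sender"]
    group_name = packet.get("whisper_group")
    if group_name is None:
        # Ordinary group-call packet: everyone except the sender.
        return set(all_participants) - {sender}
    # Whisper packet: only the other members of the whisper group, and
    # reciprocally so -- any member may whisper, not just the user who
    # originally set the group up.
    return set(whisper_groups[group_name]) - {sender}
```

For instance, a whisper packet from User 6 addressed to a group containing Users 1, 6, 10, and 12 would be forwarded only to Users 1, 10, and 12, while a normal packet from the same sender would reach every other participant of the session.
-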
FIG. 18 is a flowchart illustrating an example of a group call method according to at least one example embodiment. The group call method according to the example embodiment may be performed by the computer apparatus 200 that implements a client participating in a group call session. In this case, the processor 220 of the computer apparatus 200 may be configured to execute a control instruction according to a code of at least one program or a code of an OS included in the memory 210. Here, the processor 220 may control the computer apparatus 200 to perform operations 1810 to 1860 included in the group call method of FIG. 18 in response to the control instruction provided from the code stored in the computer apparatus 200. - Referring to
FIG. 18, in operation 1810, the computer apparatus 200 may participate in a group call session. When the computer apparatus 200 participates in the group call session, a group call with the other participants of the corresponding group call session may proceed. As described above, during the group call, packets including video and/or audio may be broadcast to the participants of the group call session. - In
operation 1820, the computer apparatus 200 may designate at least one participant among a plurality of participants participating in the group call session as a whisper target, in response to a touch gesture on a touchscreen included in the computer apparatus 200, in a state in which the group call session is maintained. Here, the touchscreen may be included in the I/O apparatus 250 of FIG. 2. - According to an example embodiment, the
computer apparatus 200 may display an object corresponding to each of the plurality of participants on the touchscreen and may identify an object indicated by a touch that occurs on the touchscreen and is maintained for a preset period of time. In this case, the computer apparatus 200 may designate a participant corresponding to the identified object as the whisper target while the touch is maintained. If the touch is released, the corresponding participant may be released from the whisper target. That is, the user may quickly designate the whisper target and transmit a whisper to the designated whisper target by simply touching the object of a desired participant among the objects respectively corresponding to the participants displayed on the touchscreen. The user may also quickly release the whisper target by releasing the touch on the object. - According to another example embodiment, the
computer apparatus 200 may set a whisper group including at least one participant among the plurality of participants in a state in which the group call session is maintained, and may activate the whisper group in response to a preset touch gesture on the touchscreen. For example, the computer apparatus 200 may activate the whisper group in response to a swipe gesture in a first direction on the touchscreen. In response to a swipe gesture in a second direction opposite to the first direction while the whisper group is active, the computer apparatus 200 may inactivate the whisper group. Here, the computer apparatus 200 may designate at least one participant included in the whisper group as the whisper target while the whisper group is active. That is, the user may activate and/or inactivate the whisper group through a simple touch gesture and may transmit a whisper only to the participants of the whisper group among all participants. - As an example embodiment for setting the whisper group, the
computer apparatus 200 may display objects respectively corresponding to the plurality of participants and a whisper group setting area on the touchscreen. Here, the user may move the object of a participant the user desires to include in the whisper group to the whisper group setting area through a drag-and-drop gesture. In this case, the computer apparatus 200 may easily and quickly set the whisper group by identifying the object that is moved to the whisper group setting area through the drag-and-drop gesture and by including the participant corresponding to the identified object in the whisper group. - As another example embodiment for setting the whisper group, the
computer apparatus 200 may display objects respectively corresponding to the plurality of participants and a whisper group generation button on the touchscreen. In this case, the user may select a participant by selecting the object of a participant the user desires to include in the whisper group, for example, by touching an area on which the corresponding object is displayed, in a state in which the whisper group generation button is active. In this case, the computer apparatus 200 may easily and quickly set the whisper group by identifying an object indicated by a touch that occurs on the touchscreen in a state in which the whisper group generation button is active and by including the participant corresponding to the identified object in the whisper group. - In
operation 1830, the computer apparatus 200 may configure a whisper packet to transfer, to the whisper target, at least one of video and audio that are input through an input device included in the computer apparatus 200 while the whisper target is designated. Here, a method of configuring the whisper packet is described above in detail with reference to FIGS. 3 to 8. Here, the input device may include, for example, a camera and/or a microphone and may also be included in the I/O apparatus 250 of FIG. 2. - In
operation 1840, the computer apparatus 200 may transmit the whisper packet to a server through the group call session. In this case, the server may quickly process the whisper by transferring the whisper packet only to the corresponding participant. Here, a method of transferring, by the server, the whisper packet to the corresponding participant is described above in detail with reference to FIGS. 3 to 8. - In
operation 1850, the computer apparatus 200 may receive a whisper packet from the server. For example, if another participant of the group call session designates the user as a whisper target, a whisper packet from a terminal of the other participant may be transmitted to the computer apparatus 200 through the server. - In
operation 1860, the computer apparatus 200 may modulate audio included in the received whisper packet and may output the modulated audio through an output device included in the computer apparatus 200, for example, a speaker included in the computer apparatus 200. Alternatively, the computer apparatus 200, in displaying the objects respectively corresponding to the plurality of participants on the touchscreen, may highlight and display the object corresponding to the participant having transmitted the received whisper packet or may highlight and display the objects corresponding to the participants of the whisper group set by the participant having transmitted the received whisper packet. For example, if another participant designates only the user as the whisper target and transmits a whisper packet, the object corresponding to the other participant that transmits the whisper packet may be highlighted and displayed, and the audio of the corresponding participant included in the whisper packet may be modulated and output. As another example, if a participant included in the whisper group transmits a whisper packet, the corresponding whisper packet may be transferred to the participants included in the corresponding whisper group. In this case, the computer apparatus 200 may modulate and output the audio of all whisper packets transmitted from the participants included in the whisper group. Here, the computer apparatus 200 may highlight the objects corresponding to the participants included in the whisper group and may display them on the touchscreen. - As described above, according to some example embodiments, it is possible to spontaneously select a desired participant or participant group during the progress of a group call with all participants of the group call and to allow whispering with the selected participant or participant group.
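- The receive-side handling of operations 1850 and 1860 can be sketched as follows. This Python sketch is illustrative only: the modulate function is a toy delayed-feedback effect standing in for the howling processing mentioned above, and handle_whisper_packet and its dictionary-based packet fields are hypothetical names rather than the patent's actual packet format (described with reference to FIGS. 3 to 8).

```python
def modulate(audio, gain=0.5, delay=2):
    """Toy feedback effect: mix a delayed copy of the signal back in.
    A stand-in for the unspecified howling processing, for illustration."""
    out = list(audio)
    for i in range(delay, len(out)):
        out[i] = out[i] + gain * out[i - delay]
    return out

def handle_whisper_packet(packet, whisper_groups):
    """Return (modulated_audio, ids_to_highlight) for a received packet.

    If the packet names a whisper group, highlight every member of that
    group; otherwise highlight only the sender, the case in which the
    user alone was designated as the whisper target."""
    group = whisper_groups.get(packet.get("whisper_group"))
    highlight = set(group) if group else {packet["sender"]}
    return modulate(packet["audio"]), highlight
```

For a whisper packet carrying a group identifier, the terminal would highlight the objects of all group members and output the modulated audio; for a packet from a participant who targeted only this user, just the sender's object would be highlighted.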
- The systems or the apparatuses described herein may be implemented using hardware components, software components, and/or a combination thereof. For example, a processing device may be implemented using one or more general-purpose or special-purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of responding to and executing instructions in a defined manner. The processing device may run an operating system (OS) and one or more software applications that run on the OS. The processing device also may access, store, manipulate, process, and create data in response to execution of the software. For purposes of simplicity, the description of a processing device is used as singular; however, one skilled in the art will appreciate that a processing device may include multiple processing elements and multiple types of processing elements. For example, a processing device may include multiple processors or a processor and a controller. In addition, different processing configurations are possible, such as parallel processors.
- The software may include a computer program, a piece of code, an instruction, or some combination thereof, for independently or collectively instructing or configuring the processing device to operate as desired. Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical equipment, computer record medium or device, or in a propagated signal wave capable of providing instructions or data to or being interpreted by the processing device. The software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion. In particular, the software and data may be stored by one or more computer readable record mediums.
- The methods according to the example embodiments may be recorded in non-transitory computer-readable storage media including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. The media and program instructions may be those specially designed and constructed for the purposes described herein, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of non-transitory computer-readable storage media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVDs; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. Examples of other media may include recording media and storage media managed by an app store that distributes applications, or by a site, a server, and the like that supplies and distributes other various types of software. Examples of program instructions include both machine code, such as that produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described embodiments, or vice versa.
- The foregoing embodiments are merely examples and are not to be construed as limiting. The present teaching can be readily applied to other types of apparatuses. Also, the description of the exemplary embodiments is intended to be illustrative, and not to limit the scope of the claims, and many alternatives, modifications, and variations will be apparent to those skilled in the art.
Claims (25)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020190126918A KR102187005B1 (en) | 2019-10-14 | 2019-10-14 | Method and system for group call using whisper |
KR10-2019-0126918 | 2019-10-14 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210112104A1 true US20210112104A1 (en) | 2021-04-15 |
US11570221B2 US11570221B2 (en) | 2023-01-31 |
Family
ID=73776683
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/069,526 Active US11570221B2 (en) | 2019-10-14 | 2020-10-13 | Method and system for group call using whisper |
Country Status (4)
Country | Link |
---|---|
US (1) | US11570221B2 (en) |
JP (1) | JP2021064944A (en) |
KR (1) | KR102187005B1 (en) |
CN (1) | CN112738013B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102021211274A1 (en) | 2021-10-06 | 2023-04-06 | Heinlein Support GmbH | Method for establishing a virtual subspace communication between at least two main room participants of a virtual main room communication |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2023141285A (en) * | 2022-03-23 | 2023-10-05 | 株式会社Jvcケンウッド | Conference support device and conference support method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050245317A1 (en) * | 2004-04-30 | 2005-11-03 | Microsoft Corporation | Voice chat in game console application |
US20070156811A1 (en) * | 2006-01-03 | 2007-07-05 | Cisco Technology, Inc. | System with user interface for sending / receiving messages during a conference session |
US20070276908A1 (en) * | 2006-05-23 | 2007-11-29 | Cisco Technology, Inc. | Method and apparatus for inviting non-rich media endpoints to join a conference sidebar session |
US20080137558A1 (en) * | 2006-12-12 | 2008-06-12 | Cisco Technology, Inc. | Catch-up playback in a conferencing system |
US20160205049A1 (en) * | 2015-01-08 | 2016-07-14 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6608636B1 (en) * | 1992-05-13 | 2003-08-19 | Ncr Corporation | Server based virtual conferencing |
US20040006595A1 (en) * | 2002-07-03 | 2004-01-08 | Chiang Yeh | Extended features to conferencing system using a web-based management interface |
KR100640324B1 (en) * | 2004-08-19 | 2006-10-30 | 삼성전자주식회사 | Method of group call service using push to talk scheme in mobile communication terminal |
KR100625665B1 (en) | 2005-04-26 | 2006-09-20 | 에스케이 텔레콤주식회사 | Group telephone call server having mode conversion function and method for mode conversion |
US20070233802A1 (en) * | 2006-02-02 | 2007-10-04 | Sonim Technology, Inc. | Methods and arrangements for implementing whisper mode conversations during a multiparty telecommunication session |
US20120017149A1 (en) * | 2010-07-15 | 2012-01-19 | Jeffrey Lai | Video whisper sessions during online collaborative computing sessions |
US9338197B2 (en) * | 2010-11-01 | 2016-05-10 | Google Inc. | Social circles in social networks |
KR102050814B1 (en) * | 2013-04-02 | 2019-12-02 | 삼성전자주식회사 | Apparatus and method for private chatting in group chats |
US20140310680A1 (en) * | 2013-04-15 | 2014-10-16 | Massively Parallel Technologies, Inc. | System And Method For Collaboration |
US10257360B2 (en) * | 2013-10-09 | 2019-04-09 | Swn Communications Inc. | System and method to improve management during a conference call |
KR20150066421A (en) * | 2013-12-06 | 2015-06-16 | (주)브라이니클 | Message sending method of application |
US9307089B2 (en) * | 2014-08-27 | 2016-04-05 | Verizon Patent And Licensing Inc. | Conference call systems and methods |
US20170310717A1 (en) * | 2016-04-22 | 2017-10-26 | Hey There, LLC | System and method for instantiating a hidden secondary chat session for a primary chat session |
US20170351476A1 (en) * | 2016-06-03 | 2017-12-07 | Avaya Inc. | Create private interaction workspace |
- 2019
  - 2019-10-14 KR KR1020190126918A patent/KR102187005B1/en active IP Right Grant
- 2020
  - 2020-10-13 CN CN202011090124.5A patent/CN112738013B/en active Active
  - 2020-10-13 JP JP2020172416A patent/JP2021064944A/en active Pending
  - 2020-10-13 US US17/069,526 patent/US11570221B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN112738013A (en) | 2021-04-30 |
US11570221B2 (en) | 2023-01-31 |
CN112738013B (en) | 2024-09-24 |
JP2021064944A (en) | 2021-04-22 |
KR102187005B1 (en) | 2020-12-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11470127B2 (en) | Method, system, and non-transitory computer-readable record medium for displaying reaction during VoIP-based call | |
US11233837B2 (en) | Method and system for group call using unicast and multicast | |
US11582052B2 (en) | Method, system, and non-transitory computer readable record medium for managing messages based on context | |
US9558162B2 (en) | Dynamic multimedia pairing | |
US10924519B2 (en) | Method, apparatus, system, and non-transitory computer readable medium for interworking between applications of devices | |
US10834468B2 (en) | Method, system, and non-transitory computer readable medium for audio feedback during live broadcast | |
US11575529B2 (en) | Method, system, and non-transitory computer readable record medium for controlling joining chatroom based on location | |
US12052210B2 (en) | Method and system for providing answer message to query message | |
US10873617B2 (en) | Method and system for streaming data over a network | |
US20150163067A1 (en) | Control of computing device use during conferences | |
US11343468B2 (en) | Method, system, and non-transitory computer readable record medium for providing communication using video call bot | |
US20160261653A1 (en) | Method and computer program for providing conference services among terminals | |
US11570221B2 (en) | Method and system for group call using whisper | |
US20240007510A1 (en) | METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORD MEDIUM FOR SHARING CONTENT DURING VoIP-BASED CALL | |
US20230017421A1 (en) | Method and system for processing conference using avatar | |
US20220300144A1 (en) | Method, system, and non-transitory computer readable record medium for providing chatroom in 3d form | |
JP2015529902A (en) | Collaboration environment and views | |
WO2023025139A1 (en) | Page switching method and apparatus, electronic device, and storage medium | |
US11258838B2 (en) | Method, system, and non-transitory computer readable record medium for processing chatroom based on usage | |
US20190132551A1 (en) | Conferencing apparatus and method for right management thereof | |
US10511644B2 (en) | Joining executable component to online conference | |
US20220130393A1 (en) | Method, system, and non-transitory computer readable record medium to record conversations in connection with video communication service | |
US11032408B2 (en) | Method and system for automatically connecting calls of voice over internet protocol (VOIP) service | |
US20240187360A1 (en) | Communication system using user interfaces for dynamic modification of chat communication elements of instant messenger |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: LINE PLUS CORPORATION, KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KIM, NA YOUNG;REEL/FRAME:054059/0896 Effective date: 20201005 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |