CN114816317A - Processing method, device and equipment for online conference and storage medium - Google Patents

Info

Publication number
CN114816317A
CN114816317A (application CN202210409716.1A)
Authority
CN
China
Prior art keywords
participant
participants
audio information
input
online conference
Prior art date
Legal status
Pending
Application number
CN202210409716.1A
Other languages
Chinese (zh)
Inventor
高志稳
Current Assignee
Shenzhen Ioco Communication Software Co ltd
Original Assignee
Shenzhen Ioco Communication Software Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Ioco Communication Software Co ltd
Priority to CN202210409716.1A
Publication of CN114816317A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application discloses a processing method, apparatus, device, and storage medium for an online conference, belonging to the field of computer technology. The processing method for an online conference includes: displaying, on a preset interface of a first electronic device, position arrangement information of the participants in the online conference; when a first participant makes a sound, determining a first partition in which the first participant is located, where the first partition is at least one of a plurality of partitions, the partitions are obtained by dividing the preset interface according to the position arrangement information and the number of participants, the partitions correspond one-to-one with the participants, and the first participant is any one or more of the participants; and outputting audio information sent by the first participant, where the phase of the audio information corresponds to the position of the first partition.

Description

Processing method, device and equipment for online conference and storage medium
Technical Field
The present application belongs to the field of computer technology, and in particular relates to a processing method, apparatus, device, and storage medium for an online conference.
Background
With the progress of science and technology, network technologies have become increasingly widespread. For example, users can socialize over the network through video and voice, which reduces the cost imposed by distance and makes communication more convenient.
Taking an online conference held over the network as an example, at present, when a participant speaks, the other participants cannot effectively identify the source of the sound.
Disclosure of Invention
An object of the embodiments of the present application is to provide a processing method, apparatus, device, and storage medium for an online conference, which can solve the problem that, when a participant speaks, other participants cannot effectively identify the source of the sound.
In a first aspect, an embodiment of the present application provides a processing method for an online conference, applied to a first electronic device, the method including:
displaying, on a preset interface of the first electronic device, position arrangement information of the participants in the online conference;
when a first participant makes a sound, determining a first partition in which the first participant is located, where the first partition is at least one of a plurality of partitions, the partitions are obtained by dividing the preset interface according to the position arrangement information and the number of participants, the partitions correspond one-to-one with the participants, and the first participant is any one or more of the participants;
and outputting audio information sent by the first participant, where the phase of the audio information corresponds to the position of the first partition.
In a second aspect, an embodiment of the present application provides a processing apparatus for an online conference, applied to a first electronic device, including:
a display module, configured to display, on a preset interface of the first electronic device, position arrangement information of the participants in the online conference;
a determining module, configured to determine, when a first participant makes a sound, a first partition in which the first participant is located, where the first partition is at least one of a plurality of partitions, the partitions are obtained by dividing the preset interface according to the position arrangement information and the number of participants, the partitions correspond one-to-one with the participants, and the first participant is any one or more of the participants;
and an output module, configured to output audio information sent by the first participant, where the phase of the audio information corresponds to the position of the first partition.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a display screen, a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the processing method for an online conference according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a program or instructions are stored, where the program or instructions, when executed by a processor, implement the steps of the processing method for an online conference according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product whose instructions, when executed by a processor of an electronic device, cause the electronic device to perform the steps of the processing method for an online conference according to the first aspect.
In a sixth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, and the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method for processing an online conference according to the first aspect.
In the embodiment of the application, the position arrangement information of the participants in the online conference is displayed on the preset interface of the first electronic device; when a first participant makes a sound, the first partition in which the first participant is located is determined, and the audio information sent by the first participant is output. Because the phase of the audio information corresponds to the position of the partition in which the speaking participant is located, the other participants can determine that partition from the phase of the audio information output by the first electronic device, and then determine the identity of the speaking participant from the one-to-one correspondence between partitions and participants, so the source of the sound can be effectively identified.
Drawings
Fig. 1 is a scene schematic diagram of a processing method for an online conference according to an embodiment of the present application;
fig. 2 is a flowchart of a processing method for an online conference according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a first arrangement of positions provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a second arrangement of positions provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a third arrangement of positions provided by an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a process of moving the position of a second participant to a second target position according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a process for interchanging the positions of participants according to an embodiment of the present application;
fig. 8 is a schematic diagram of partitioning a preset interface according to an embodiment of the present application;
fig. 9 is a schematic diagram of another way of partitioning a preset interface according to an embodiment of the present application;
fig. 10 is a schematic diagram of yet another way of partitioning a preset interface according to an embodiment of the present application;
fig. 11 is a schematic display diagram of a first file and a first directory provided in an embodiment of the present application;
fig. 12 is a schematic diagram illustrating a display process of a first interface according to an embodiment of the present application;
fig. 13 is a schematic diagram illustrating a display process of a second control according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a processing apparatus for an online conference according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 16 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular order or sequence. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like also do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
As described above, at present, when a participant speaks, other participants cannot effectively identify the source of the sound.
Therefore, the embodiments of the present application provide a processing method, an apparatus, a device, and a storage medium for an online conference, which can solve the problem that when a participant speaks, other participants cannot effectively identify the sound source.
Fig. 1 is a scene schematic diagram of a processing method for an online conference. The scene may include a plurality of electronic devices, each of which supports network data transmission, video recording, audio recording, and the like, and may also support wired or wireless earphones. Illustratively, an electronic device may be a mobile phone, a notebook computer, a tablet computer, a palmtop computer, or the like. Fig. 1 includes 5 electronic devices: an electronic device 100, an electronic device 101, an electronic device 102, an electronic device 103, and an electronic device 104.
For example, one of the electronic devices may serve as the main electronic device and initiate an online conference invitation in a wired or wireless manner so that the other electronic devices join the online conference. Fig. 1 takes the electronic device 100 as the main electronic device, which invites the electronic device 101, the electronic device 102, the electronic device 103, and the electronic device 104 to join the online conference. The online conference may be a video conference or a voice-only conference.
The method, the apparatus, the device and the storage medium for processing an online conference provided by the embodiments of the present application are described in detail with reference to fig. 1.
Fig. 2 is a flowchart of a processing method for an online conference according to an embodiment of the present application, where the method may be applied to any electronic device shown in fig. 1.
As shown in fig. 2, the processing method of the online conference may include the following steps:
and S210, displaying the position arrangement information of the participants participating in the online conference on a preset interface of the first electronic equipment.
And S220, when the first participant produces sound, determining a first partition where the first participant is located.
Wherein the first partition is at least one of a plurality of partitions; the subareas are obtained by dividing a preset interface according to the position arrangement information and the number of the participants, and the subareas correspond to the participants one by one; the first participant is any one or more of the participants.
And S230, outputting the audio information sent by the first participant.
Wherein the phase of the audio information corresponds to the position of the first partition.
In the embodiment of the application, the position arrangement information of the participants in the online conference is displayed on the preset interface of the first electronic device; when a first participant makes a sound, the first partition in which the first participant is located is determined, and the audio information sent by the first participant is output. Because the phase of the audio information corresponds to the position of the partition in which the speaking participant is located, the other participants can determine that partition from the phase of the audio information output by the first electronic device, and then determine the identity of the speaking participant from the one-to-one correspondence between partitions and participants, so the source of the sound can be effectively identified.
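The application itself defines no source code, but the three steps above can be outlined with a short Kotlin sketch. Everything in it (the Participant and Partition types, the audioRenderer callback, the function names) is a hypothetical assumption made only for illustration and is not part of this application; the same hypothetical types are reused in the later sketches in this description.

```kotlin
// Minimal, purely illustrative sketch of the S210-S230 flow on the receiving device.
data class Participant(val id: String, val rank: Int)

data class Partition(val participant: Participant, val startAngle: Double, val endAngle: Double) {
    val centerAngle: Double get() = (startAngle + endAngle) / 2
}

class OnlineConferenceScreen(private val audioRenderer: (ByteArray, Double) -> Unit) {
    private var partitions: List<Partition> = emptyList()

    // S210 and the precondition of S220: lay out the participants and divide the
    // preset interface into one partition per participant.
    fun showSeating(participants: List<Participant>, divide: (List<Participant>) -> List<Partition>) {
        partitions = divide(participants)
    }

    // S220 + S230: when a participant makes a sound, find that participant's partition
    // and output the audio with a phase value derived from the partition's position.
    fun onParticipantSound(speakerId: String, audioFrame: ByteArray) {
        val partition = partitions.firstOrNull { it.participant.id == speakerId } ?: return
        audioRenderer(audioFrame, partition.centerAngle)
    }
}
```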
The above steps are described in detail below, specifically as follows:
in S210, the first electronic device may be any one of the electronic devices shown in fig. 1, and the preset interface of the first electronic device may be a display interface of the first electronic device, that is, an interface capable of displaying a video conference or a voice conference. The participants join the online conference through the first electronic equipment, and each first electronic equipment can display all the participants joining the online conference.
The position arrangement information may be a position arrangement mode of each participant on a preset interface, and the position may be a virtual position of the participant, that is, the position arrangement mode of the participant on the preset interface is different from the position arrangement mode of the offline participant.
The position arrangement information may include at least one of a first position arrangement mode, a second position arrangement mode, and a third position arrangement mode:
in the first position arrangement mode, the participants are arranged in sequence along the horizontal direction of the preset interface;
in the second position arrangement mode, the participants are arranged around a first target position, which is a position on the preset interface, with the first target position as the center;
in the third position arrangement mode, the participants are arranged along a square table.
Illustratively, the positions of the participants may be arranged according to the identity information of the participants to obtain the position arrangement mode.
The identity information may include, but is not limited to, the names, genders, ages, and job positions of the participants. For example, the participants may be arranged according to seniority of position or by age. The three position arrangement modes are described below, taking arrangement by seniority of position as an example.
Exemplarily, referring to fig. 3, fig. 3 is a schematic diagram of the first position arrangement mode provided in an embodiment of the present application. The participants are arranged side by side along the horizontal direction of the preset interface: the participant with the highest position is placed in the middle, and the others are arranged symmetrically outward from the middle in order of position. From left to right they are A1 (second lowest), B1 (second highest), C1 (highest), D1 (second highest), and E1 (conference host); the conference host is placed in the last position by default. A1, B1, C1, D1, and E1 are numbers of different participants, which the first electronic device may determine and assign based on the participants' identity information so as to uniquely identify each participant.
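A minimal sketch of one way to produce such a line arrangement is given below, reusing the hypothetical Participant type from the earlier sketch. The numeric rank field and the host identifier are illustrative assumptions, since the application does not prescribe how seniority is encoded.

```kotlin
// Sketch: place the most senior participant in the middle, alternate the remaining
// participants outward to the right and left in descending order of rank, and
// append the conference host at the last (rightmost) position by default.
fun arrangeInLine(participants: List<Participant>, hostId: String): List<Participant> {
    val host = participants.first { it.id == hostId }
    val ranked = participants.filter { it.id != hostId }.sortedByDescending { it.rank }
    val seats = ArrayDeque<Participant>()
    ranked.forEachIndexed { index, p ->
        if (index % 2 == 0) seats.addLast(p) else seats.addFirst(p)
    }
    seats.addLast(host)
    return seats.toList()
}
```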
Exemplarily, referring to fig. 4, fig. 4 is a schematic diagram of the second position arrangement mode provided in an embodiment of the present application. The second position arrangement mode, also referred to as a round-table arrangement, places the participants around the first target position with the first target position as the center. The first target position may be a position pre-designated on the preset interface; for example, the center of the preset interface may be used as the first target position, that is, the participants are arranged around the center of the preset interface.
As shown in fig. 4, A2 is the conference host, and the remaining participants are arranged in descending order of position as E2 (highest), D2 (next highest), C2 (next lowest), and B2 (lowest).
Exemplarily, referring to fig. 5, fig. 5 is a schematic diagram of the third position arrangement mode provided in an embodiment of the present application. The third position arrangement mode, also referred to as a square-table arrangement, places the participants along a square table. As shown in fig. 5, A3 is the conference host, and the others are arranged from left to right and from top to bottom as B3 (highest), C3 (next highest), D3 (next lowest), and E3 (lowest).
Arranging the positions of the participants according to their identity information provides a basis for the subsequent output of directional audio information, so that participants can determine the identity of whoever is currently speaking; this improves the participants' sense of presence and increases the variety of the online conference.
In practice, the position arrangement information of the participants may use arrangement modes other than those described above; the embodiment of the present application is not specifically limited in this respect.
In some embodiments, the positions of the participants in the preset interface may also be adjusted according to the importance of the conference content to meet the conference needs, and based on this, after S210, the processing method of the online conference may further include the following steps:
receiving a first input to a second participant, the second participant being any one or more of the participants;
and in response to the first input, moving the position of the second participant to a second target position, where the second target position is a position on the preset interface and the phase of the audio information sent by the second participant after the move corresponds to the second target position.
The first input is used to adjust the position of the second participant on the preset interface and may, for example, be a long-press-and-drag operation: pressing the second participant's area for a long time enters a position-editing state, after which the second participant can be dragged to the second target position. The embodiment of the present application does not limit the drag trajectory. The second target position may be any position in the preset interface other than the positions of the other participants.
Illustratively, referring to fig. 6, the position of participant A1 can be adjusted from the leftmost position to between participant C1 and participant D1 as the meeting requires; the dotted line is the drag trajectory. When the position of a participant changes, the phase of that participant's audio information changes accordingly. For example, in fig. 6, when participant A1 is moved from the leftmost position to between participant C1 and participant D1, the phase of the audio information sent by participant A1 is likewise adjusted from the leftmost position to between participant C1 and participant D1.
On top of the automatic arrangement of positions, the embodiment of the present application can thus adjust the positions of participants according to meeting needs and adjust the phase of the audio information along with the positions, so that the audio information corresponds to the adjusted positions; this improves the participants' sense of presence and increases the variety of the online conference.
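One possible, purely illustrative way to handle the end of such a drag is sketched below, reusing the hypothetical types from the earlier sketches. The drop-index model and the repartition and applyPhases callbacks are assumptions, not details defined by this application.

```kotlin
// Sketch: when the drag of the "first input" ends, move the dragged participant to the
// drop position in the seating order and recompute the partitions so that the phase of
// that participant's audio follows the new position.
class SeatingController(
    private val seating: MutableList<Participant>,
    private val repartition: (List<Participant>) -> List<Partition>,
    private val applyPhases: (List<Partition>) -> Unit
) {
    fun onDragEnd(draggedId: String, dropIndex: Int) {
        val from = seating.indexOfFirst { it.id == draggedId }
        if (from < 0) return
        val moved = seating.removeAt(from)
        seating.add(dropIndex.coerceIn(0, seating.size), moved)
        applyPhases(repartition(seating))
    }
}
```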
In some embodiments, the positions of the participants in the preset interface may also be interchanged according to the conference requirement, and based on this, after S210, the processing method of the online conference may further include the following steps:
receiving a second input to a third participant, the third participant being any one or more of the participants;
and in response to the second input, interchanging the position of the third participant with the position of a fourth participant, where the position of the fourth participant on the preset interface satisfies a preset positional relationship with the end position of the second input, and the phases of the audio information sent by the third participant and the fourth participant after the interchange correspond to their interchanged positions.
The second input is used to interchange the positions of any two participants in the preset interface. Illustratively, the second input may include long-press and drag operations; the embodiment of the present application does not limit the specific drag trajectory.
The fourth participant is the participant corresponding to the end position of the second input. Illustratively, referring to fig. 7, the end position of the second input corresponds to participant B2, so participant B2 may be referred to as the fourth participant. Fig. 7 takes participant E2 as the third participant, so the position of the third participant E2 can be interchanged with the position of the fourth participant B2 via the second input; the dotted line is the drag trajectory. The phase of the audio information of the third participant E2 output by the first electronic device is likewise adjusted from the original position to the interchanged position, corresponding to the adjusted position of the third participant E2, and the same applies to the fourth participant B2.
On top of the automatic arrangement, the embodiment of the present application can thus exchange the positions of participants according to meeting needs and adjust the phase of the audio information along with the positions, so that the audio information corresponds to the adjusted positions; this improves the participants' sense of presence and increases the variety of the online conference.
In S220, in order to improve the recognizability of the audio information and help participants determine the identity of the participant who is currently speaking, the embodiment of the present application may divide the preset interface according to the position arrangement information and the number of participants to obtain the partition in which each participant is located. The association between each partition and its participant can be set according to actual needs.
For example, for the first position arrangement mode, the preset interface may be divided according to the positions of the participants at the left and right edges of the preset interface and a first preset position, so as to obtain the partition in which each participant is located.
The first preset position may be a position a preset distance in front of the screen of the first electronic device. As shown in fig. 8, the position of participant A1, the position of participant E1, and the first preset position form an area with an angle X. For example, the angle X may be divided according to the number of participants to obtain the partition in which each participant is located. The embodiment of the present application does not limit the way of dividing; for example, the angle X may be divided equally or unequally.
As shown in fig. 8, for example, the area to the lower left of participant A1 may be the partition in which participant A1 is located, the area formed by A1OB1 may be the partition in which participant B1 is located, the area formed by B1OC1 may be the partition in which participant C1 is located, the area formed by C1OD1 may be the partition in which participant D1 is located, and the area formed by D1OE1 may be the partition in which participant E1 is located, where O is the first preset position.
For example, for the second position arrangement mode, the preset interface may be divided with the first target position as the center of a circle, according to the angle formed by the participants around the first target position and the number of participants, so as to obtain the partition in which each participant is located.
As shown in fig. 9, the angle formed by the participants around the first target position is 360 degrees and there are 5 participants, so in one embodiment the 360 degrees may be divided into 5 equal parts with the first target position as the center, yielding 5 partitions, each corresponding to one participant. For example, the area formed by A2OD2 may be the partition in which participant A2 is located, the area formed by D2OB2 may be the partition in which participant D2 is located, and so on, and the area formed by E2OA2 may be the partition in which participant E2 is located, where O is the first target position.
For example, for the third position arrangement mode, the position of each participant may be connected to a second preset position, and the preset interface divided accordingly to obtain the partition in which each participant is located. The second preset position may be a position in the preset interface other than the positions of the participants.
As shown in fig. 10, the area to the lower left of D3O may be the partition in which participant D3 is located, the area formed by D3OB3 may be the partition in which participant B3 is located, and so on, and the area formed by C3OE3 may be the partition in which participant E3 is located, where O is the second preset position.
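All three division approaches amount to splitting an angle around a reference point O into one sector per participant. The sketch below shows an equal split (covering, for example, the angle X of fig. 8 or the 360 degrees of fig. 9) using the hypothetical types from the earlier sketches; as noted above, the application also permits unequal divisions, so the equal split is only an illustrative choice.

```kotlin
// Sketch: split a total angle into one equal angular sector per participant,
// measured around the reference point O, and associate each sector with its participant.
fun divideIntoPartitions(
    participants: List<Participant>,
    totalAngleDegrees: Double,
    startAngleDegrees: Double = 0.0
): List<Partition> {
    require(participants.isNotEmpty()) { "at least one participant is needed" }
    val sector = totalAngleDegrees / participants.size
    return participants.mapIndexed { i, p ->
        Partition(p, startAngleDegrees + i * sector, startAngleDegrees + (i + 1) * sector)
    }
}
```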
Because the partitions correspond one-to-one with the participants, once it is determined from which partition a piece of audio information comes, it can be determined which participant sent it; the identity of the speaking participant is thus known, which increases the variety of the online conference.
In S230, the first electronic device may output the audio information sent by the speaking participant. Because the positions and angles of the partitions differ, the phase of the audio information output by the first electronic device also differs, so the audio information has a certain directionality, which improves the sense of ceremony and the experience of the online conference.
Compared with a traditional surround-sound effect, the embodiment of the present application can simulate the first electronic device as loudspeakers with fixed positions in space, so that the audio information heard by the participants appears to come from a specific direction; the online conference thus has a certain spatial audio effect, which improves the participants' experience.
On this basis, the participant using the first electronic device can also determine the identity of the participant who is currently speaking: the partition can be determined from the phase of the audio information output by the first electronic device, and the participant associated with that partition can then be determined, so the participant using the first electronic device can clearly identify, from the audio information alone, which participant is speaking.
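The application does not specify how the phase is produced. One common way to approximate such directional output on a stereo device is constant-power panning driven by the partition's center angle, as in the sketch below; the 0-to-180-degree angle convention and the panning formula are illustrative assumptions rather than the specific phase processing of this application.

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.sin

// Sketch: map a partition's center angle to left/right stereo gains with constant-power
// panning. The angle is assumed to run from 0 (far left) to 180 degrees (far right);
// 90 degrees is heard in the middle.
fun stereoGainsFor(angleDegrees: Double): Pair<Double, Double> {
    val pan = (angleDegrees.coerceIn(0.0, 180.0) / 180.0) * (PI / 2)
    return cos(pan) to sin(pan)   // (left gain, right gain)
}

// Apply the gains to one mono PCM frame, producing an interleaved stereo frame.
fun panFrame(mono: ShortArray, angleDegrees: Double): ShortArray {
    val (left, right) = stereoGainsFor(angleDegrees)
    val stereo = ShortArray(mono.size * 2)
    for (i in mono.indices) {
        stereo[2 * i] = (mono[i] * left).toInt().toShort()
        stereo[2 * i + 1] = (mono[i] * right).toInt().toShort()
    }
    return stereo
}
```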
In some embodiments, the first electronic device may also record the video and audio information of every participant throughout the conference, so that participants can play it back later. For example, the recorded video and audio information of all participants may be stored in one general file.
In some embodiments, some participants may only want to play back their own video and audio information, or only the video and audio information of a specific participant. To meet the needs of different participants, the video clips and corresponding audio information of each participant may, for example, be extracted and stored in corresponding folders.
Based on this, in some embodiments, after S230, the processing method of the online conference may further include the steps of:
displaying a file identifier of a first file and first directories, where the first file includes the videos of all participants recorded during the conference and the audio information they sent, the number of first directories corresponds to the number of participants, and the video and audio information of each participant is stored in a second file corresponding to that participant's first directory.
The first file is the general file that stores the video and audio information of all participants. A first directory is a directory for storing the video and audio information of one participant, which makes it easy to find the video and audio information of different participants. A second file corresponds to a first directory and stores the video and audio information of a single participant; the number of first directories equals the number of participants, that is, each participant has a corresponding first directory.
Exemplarily, referring to fig. 11, fig. 11 is a schematic display diagram of the first file and the first directories provided in an embodiment of the present application. The total video and audio recording file 1100 is the file identifier of the first file, and the videos of participant A1, participant B1, and participant C1 each correspond to a first directory 1101.
Considering that the same participant may have multiple video clips, the different clips of the same participant may be named separately for convenient later viewing. For example, if participant A1 has three video clips, each clip may be named separately, e.g., A1 video-1, A1 video-2, and A1 video-3. The multiple clips of one participant may be stored in the same second file. Each video clip may take picture silence as its end mark; the duration of the picture silence may be n seconds, where n can be set according to actual needs.
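A sketch of one possible on-disk layout for the first file, the per-participant first directories, and the named clips follows. The directory and file names, the .mp4 extension, and the helper functions are illustrative assumptions rather than details defined by this application.

```kotlin
import java.io.File

// Sketch: the "first file" holds the whole-conference recording; each participant gets a
// "first directory" whose "second file(s)" hold that participant's clips, named
// "<participant> video-<n>" (a clip is assumed to end after n seconds of picture silence).
data class RecordingLayout(val firstFile: File, val participantDirs: Map<String, File>)

fun prepareRecordingLayout(root: File, participantIds: List<String>): RecordingLayout {
    root.mkdirs()
    val firstFile = File(root, "full_conference_recording.mp4")
    val dirs = participantIds.associateWith { id -> File(root, id).apply { mkdirs() } }
    return RecordingLayout(firstFile, dirs)
}

fun clipFileFor(layout: RecordingLayout, participantId: String, clipIndex: Int): File =
    File(layout.participantDirs.getValue(participantId), "$participantId video-$clipIndex.mp4")
```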
The embodiment of the present application records the whole conference, stores the recorded video and audio information of all participants in a general file, and at the same time extracts the video and audio information of each participant and stores it in the file under the corresponding directory. On the one hand this makes later viewing convenient for the participants, and on the other hand it meets the viewing needs of different participants.
In some embodiments, the participant can play back pre-recorded video and audio information, and based on this, the processing method of the online conference can further include the following steps:
receiving a third input to the file identification;
in response to a third input, displaying a first interface, the first interface including a first control;
receiving a fourth input to the first control;
in response to the fourth input, the video and audio information of the participant is played.
The third input may be a click or touch operation on the file identifier. When a participant needs to play back the entire video, the third input may be performed on the file identifier; the first electronic device then displays a first interface, which is the playing interface for the video and audio information. The first interface may include a first control used to control the playing of the video and audio information; illustratively, the first control may be a button.
Exemplarily, referring to fig. 12, fig. 12 is a schematic diagram illustrating the display process of the first interface according to an embodiment of the present application. In addition to the first control 1200, the first interface may include the position arrangement information of the participants, so that participants can view it conveniently.
The fourth input is used to play the video and audio information of the participants and may be a click or touch operation. For example, by clicking the first control 1200, the video and audio information of each participant can be played in recording order.
In the embodiment of the present application, when a participant clicks the file identifier of the first file, the position arrangement information of the participants and the first control can be displayed, and the whole-conference video and audio information can be played through the first control.
In some embodiments, after playing the video and the audio information of the participants in response to the fourth input, the processing method of the online conference may further include the steps of:
receiving a fifth input to the first control;
displaying a second control in response to a fifth input;
receiving a sixth input to the second control;
in response to the sixth input, target audio information for a target participant is played, the target participant being a participant associated with the second control.
The fifth input is used to pause the playing of the whole-conference video and audio information and may be a click or touch operation. The second control is used to control the playing of the video and audio information of a specific participant and may, for example, be a button. In some embodiments, each participant corresponds to one second control, which makes it convenient to control the playback of each participant's video and audio information.
Illustratively, referring to fig. 13, when a participant clicks the first control 1200 to pause the playing of the whole-conference video and audio information, second controls 1300 may be displayed, one second control 1300 per participant.
The sixth input is an input for playing the target audio information of the target participant. Illustratively, the sixth input may be a single-click operation: when a participant clicks a second control 1300, the video and audio information of the participant associated with that second control 1300 may be played directly. As another example, when that participant has multiple video clips and pieces of audio information, clicking the second control 1300 may display a link to each video clip in the directory, and clicking a link plays the corresponding clip and audio information, so that different clips of the same participant can be played selectively.
Illustratively, the sixth input may also be a double-click operation: when a participant double-clicks a second control 1300, the video and audio information of the participant associated with that second control 1300 may be played full screen.
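The sketch below shows one possible, purely illustrative dispatch of the sixth input, reusing the hypothetical RecordingLayout type from the earlier sketch; the ClipPlayer interface and the single-click versus double-click behavior follow the description above but are otherwise assumptions.

```kotlin
import java.io.File

// Sketch: dispatch clicks on a participant's "second control". A single click plays the
// clip directly when there is only one, or lists the clips when there are several;
// a double click plays full screen.
interface ClipPlayer {
    fun play(clip: File, fullScreen: Boolean = false)
    fun showClipList(clips: List<File>)
}

class PlaybackController(private val layout: RecordingLayout, private val player: ClipPlayer) {
    fun onSecondControlClicked(participantId: String, doubleClick: Boolean) {
        val clips = layout.participantDirs.getValue(participantId)
            .listFiles()?.sortedBy { it.name }.orEmpty()
        when {
            clips.isEmpty() -> return
            doubleClick -> player.play(clips.first(), fullScreen = true)
            clips.size == 1 -> player.play(clips.first())
            else -> player.showClipList(clips)
        }
    }
}
```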
These playback modes can meet the playback needs of different participants and increase the variety of the online conference.
Of course, playback modes other than those described above may also be used.
The embodiment of the present application can combine spatial audio technology to give the audio information a certain directionality, so that participants can determine the identity of whoever is currently speaking; it can also record the video and audio information and, through the playback modes described above, meet the playback needs of different participants, increasing the variety of the online conference.
It should be noted that, for the processing method for an online conference provided in the embodiment of the present application, the execution subject may be a processing apparatus for an online conference, or a processing module in that apparatus used to execute the processing method. In the embodiment of the present application, a processing apparatus for an online conference executing the processing method is taken as an example to describe the processing apparatus provided in the embodiment. The processing apparatus may be applied to the first electronic device.
Fig. 14 is a schematic structural diagram of a processing apparatus for an online conference according to an embodiment of the present application.
As shown in fig. 14, the processing device 1400 for online meeting may include:
a display module 1401, configured to display, on a preset interface of a first electronic device, position arrangement information of the participants in an online conference;
a determining module 1402, configured to determine, when a first participant makes a sound, the first partition in which the first participant is located; the first partition is at least one of a plurality of partitions; the partitions are obtained by dividing the preset interface according to the position arrangement information and the number of participants, and the partitions correspond one-to-one with the participants; the first participant is any one or more of the participants;
and an output module 1403, configured to output audio information sent by the first participant, where the phase of the audio information corresponds to the position of the first partition.
In the embodiment of the application, the position arrangement information of the participants in the online conference is displayed on the preset interface of the first electronic device; when a first participant makes a sound, the first partition in which the first participant is located is determined, and the audio information sent by the first participant is output. Because the phase of the audio information corresponds to the position of the partition in which the speaking participant is located, the other participants can determine that partition from the phase of the audio information output by the first electronic device, and then determine the identity of the speaking participant from the one-to-one correspondence between partitions and participants, so the source of the sound can be effectively identified.
In some possible implementations of the embodiments of the present application, the position arrangement information includes at least one of a first position arrangement mode, a second position arrangement mode, and a third position arrangement mode:
in the first position arrangement mode, the participants are arranged in sequence along the horizontal direction of the preset interface;
in the second position arrangement mode, the participants are arranged around a first target position, which is a position on the preset interface, with the first target position as the center;
in the third position arrangement mode, the participants are arranged along a square table.
In some possible implementations of embodiments of the present application, the apparatus may further include:
a receiving module, configured to receive a first input to a second participant after the display module 1401 displays the position arrangement information of the participants in the online conference on the preset interface of the first electronic device, the second participant being any one or more of the participants;
and a moving module, configured to move, in response to the first input, the position of the second participant to a second target position, where the second target position is a position on the preset interface and the phase of the audio information sent by the second participant after the move corresponds to the second target position.
In some possible implementations of the embodiment of the application, the receiving module is further configured to receive a second input to a third participant after the display module 1401 displays the position arrangement information of the participants participating in the online conference on a preset interface of the first electronic device, where the third participant is any one or more of the participants.
In some possible implementations of embodiments of the present application, the apparatus may further include:
and an interchanging module, configured to interchange, in response to the second input, the position of the third participant with the position of a fourth participant, where the position of the fourth participant on the preset interface satisfies a preset positional relationship with the end position of the second input, and the phases of the audio information sent by the third participant and the fourth participant after the interchange correspond to their interchanged positions.
In some possible implementations of the embodiment of the present application, the receiving module is further configured to receive a third input to the file identifier after the output module 1403 outputs the audio information sent by the first participant;
a display module 1401, further configured to display, in response to a third input, a first interface, where the first interface includes a first control;
the receiving module is further used for receiving a fourth input of the first control;
in some possible implementations of embodiments of the present application, the apparatus may further include:
and the playing module is used for responding to the fourth input and playing the video and the audio information sent by the participants.
In some possible implementations of the embodiment of the application, the receiving module is further configured to receive a fifth input to the first control after the playing module plays the video and the audio information of the participant in response to the fourth input;
a display module 1401, further configured to display a second control in response to a fifth input;
the receiving module is further used for receiving a sixth input of the second control;
and the playing module is also used for responding to the sixth input and playing the target audio information of the target participant, wherein the target participant is the participant associated with the second control.
The processing device applied to the online conference in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The electronic device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The electronic device in the embodiment of the present application may be an electronic device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The processing device for an online conference provided in the embodiment of the present application can implement each process of the processing method embodiments for an online conference in fig. 1 to 13; to avoid repetition, details are not described here again.
As shown in fig. 15, an electronic device 1500 according to an embodiment of the present application is further provided, which includes a display screen 1501, a processor 1502, a memory 1503, and a program or an instruction stored in the memory 1503 and executable on the processor 1502, where the program or the instruction when executed by the processor 1502 implements each process of the above-mentioned embodiment of the processing method for an online conference, and can achieve the same technical effect, and is not described herein again to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
In some possible implementations of embodiments of the present Application, the processor 1502 may include a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured to implement one or more Integrated circuits of embodiments of the present Application.
In some possible implementations of embodiments of the present application, the Memory 1503 may include Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash Memory devices, electrical, optical, or other physical/tangible Memory storage devices. Thus, in general, the memory 1503 includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to the processing methods of online conferencing in accordance with embodiments of the present application.
Fig. 16 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 1600 includes, but is not limited to: radio frequency unit 1601, network module 1602, audio output unit 1603, input unit 1604, sensor 1605, display unit 1606, user input unit 1607, interface unit 1608, memory 1609, and processor 1610.
Those skilled in the art will appreciate that the electronic device 1600 may further include a power supply (e.g., a battery) for supplying power to various components, which may be logically coupled to the processor 1610 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The electronic device structure shown in fig. 16 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description thereof is omitted.
The display unit 1606 is configured to: display, on a preset interface of the first electronic device, the position arrangement information of the participants in the online conference;
the processor 1610 is configured to: when a first participant makes a sound, determine the first partition in which the first participant is located; the first partition is at least one of a plurality of partitions; the partitions are obtained by dividing the preset interface according to the position arrangement information and the number of participants, and the partitions correspond one-to-one with the participants; the first participant is any one or more of the participants;
and the audio output unit 1603 is configured to: output audio information sent by the first participant, where the phase of the audio information corresponds to the position of the first partition.
In the embodiment of the application, the position arrangement information of the participants in the online conference is displayed on the preset interface of the first electronic device; when a first participant makes a sound, the first partition in which the first participant is located is determined, and the audio information sent by the first participant is output. Because the phase of the audio information corresponds to the position of the partition in which the speaking participant is located, the other participants can determine that partition from the phase of the audio information output by the first electronic device, and then determine the identity of the speaking participant from the one-to-one correspondence between partitions and participants, so the source of the sound can be effectively identified.
In some possible implementations of the embodiments of the present application, the position arrangement information includes at least one of a first position arrangement mode, a second position arrangement mode, and a third position arrangement mode:
in the first position arrangement mode, the participants are arranged in sequence along the horizontal direction of the preset interface;
in the second position arrangement mode, the participants are arranged around a first target position, which is a position on the preset interface, with the first target position as the center;
in the third position arrangement mode, the participants are arranged along a square table.
In some possible implementations of embodiments of the present application, the processor 1610 may be further configured to: receiving a first input to a second participant, the second participant being any one or more of the participants;
and in response to the first input, move the position of the second participant to a second target position, where the second target position is a position on the preset interface and the phase of the audio information sent by the second participant after the move corresponds to the second target position.
In some possible implementations of embodiments of the present application, the processor 1610 may be further configured to: receiving a second input to a third participant, the third participant being any one or more of the participants;
and in response to the second input, interchange the position of the third participant with the position of a fourth participant, where the position of the fourth participant on the preset interface satisfies a preset positional relationship with the end position of the second input, and the phases of the audio information sent by the third participant and the fourth participant after the interchange correspond to their interchanged positions.
In some possible implementations of embodiments of the application, the display unit 1606 may be further configured to: after the audio output unit 1603 outputs the audio information sent by the first participant, display a file identifier of a first file and first directories, where the first file includes the videos of all participants recorded during the conference and the audio information they sent, the number of first directories corresponds to the number of participants, and the video and audio information of each participant is stored in a second file corresponding to that participant's first directory.
In some possible implementations of embodiments of the present application, the processor 1610 may be further configured to: receiving a third input to the file identification;
the display unit 1606 may also be used to: in response to a third input, displaying a first interface, the first interface including a first control;
processor 1610 is also operable to: receiving a fourth input to the first control; in response to the fourth input, the video and audio information of the participant is played.
In some possible implementations of embodiments of the present application, the processor 1610 may be further configured to: receiving a fifth input to the first control;
the display unit 1606 may also be used to: displaying a second control in response to a fifth input;
processor 1610 is also operable to: receiving a sixth input to the second control; in response to the sixth input, target audio information for a target participant is played, the target participant being a participant associated with the second control.
It should be understood that in the embodiment of the present application, the input Unit 1604 may include a Graphics Processing Unit (GPU) 16041 and a microphone 16042, and the Graphics processor 16041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1606 may include a display panel 16061, and the display panel 16061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1607 includes a touch panel 16071 and other input devices 16072. Touch panel 16071, also referred to as a touch screen. The touch panel 16071 may include two parts of a touch detection device and a touch controller. Other input devices 16072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1609 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. Processor 1610 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1610.
An embodiment of the present application further provides a readable storage medium storing a program or instructions. When the program or instructions are executed by a processor, the processes of the foregoing embodiments of the processing method for an online conference are implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, and examples of the computer readable storage medium include non-transitory computer readable storage media such as ROM, RAM, magnetic or optical disks, and the like.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the processes of the foregoing embodiments of the processing method for an online conference and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ……" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, and may include performing the functions in a substantially simultaneous manner or in a reverse order, depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present application may be embodied in the form of a computer software product that is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A processing method for an online conference, applied to a first electronic device, the method comprising:
displaying position arrangement information of participants participating in the online conference on a preset interface of the first electronic device;
when a first participant makes a sound, determining a first partition in which the first participant is located, wherein the first partition is at least one of a plurality of partitions; the partitions are obtained by dividing the preset interface according to the position arrangement information and the number of the participants, and the partitions correspond to the participants one to one; and the first participant is any one or more of the participants;
and outputting audio information sent by the first participant, wherein the phase of the audio information corresponds to the position of the first partition.
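Purely by way of illustration of the steps recited in claim 1, and not as part of the claimed subject matter, the following Python sketch shows one way the partition lookup and the phase-corresponding output could work, assuming a horizontal row of equally wide partitions and a simple interchannel-delay model; all function names, the 0..1 coordinate convention, and the 0.6 ms maximum delay are assumptions.

```python
# Illustrative sketch only; the partition model, coordinate convention, and delay
# values are assumptions, not taken from the application.

def partition_index(num_participants: int, speaker_slot: int) -> int:
    """The preset interface is divided into as many partitions as there are participants,
    in one-to-one correspondence; the speaking participant's partition is its slot."""
    if not 0 <= speaker_slot < num_participants:
        raise ValueError("speaker slot out of range")
    return speaker_slot

def partition_centre_x(num_participants: int, index: int) -> float:
    """Horizontal centre of a partition, normalised to 0.0 (left edge) .. 1.0 (right edge)."""
    return (index + 0.5) / num_participants

def interchannel_delays(centre_x: float, max_delay_s: float = 0.0006) -> tuple[float, float]:
    """Map a partition centre to (left_delay, right_delay) in seconds: audio of a
    participant shown on the right is delayed on the left channel, so its phase
    shifts and the sound is perceived as coming from the right."""
    offset = (centre_x - 0.5) * 2.0        # -1.0 (far left) .. 1.0 (far right)
    left_delay = max(0.0, offset) * max_delay_s
    right_delay = max(0.0, -offset) * max_delay_s
    return left_delay, right_delay

# Example: four participants in a horizontal row; the rightmost participant speaks.
idx = partition_index(4, 3)
print(interchannel_delays(partition_centre_x(4, idx)))  # left channel delayed
```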
2. The method of claim 1, wherein the position arrangement information includes at least one of a first position arrangement, a second position arrangement, and a third position arrangement;
the first position arrangement comprises the participants being arranged sequentially along the horizontal direction of the preset interface;
the second position arrangement comprises the participants being arranged around a first target position serving as a center, the first target position being a position on the preset interface; and
the third position arrangement comprises the participants being arranged along a square table.
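As an illustrative sketch (not part of the claims) of how the three arrangements might be laid out on a normalised interface, the functions below compute seat coordinates for a horizontal row, a ring around a target position, and the perimeter of a square table; the coordinate convention and all numeric parameters are assumptions.

```python
# Illustrative sketch only; the normalised coordinate system and all numeric
# parameters are assumptions.
import math

def horizontal_layout(n: int) -> list[tuple[float, float]]:
    """First arrangement: participants placed sequentially in a row across the interface."""
    return [((i + 0.5) / n, 0.5) for i in range(n)]

def circular_layout(n: int, centre=(0.5, 0.5), radius=0.35) -> list[tuple[float, float]]:
    """Second arrangement: participants placed around a first target position."""
    return [(centre[0] + radius * math.cos(2 * math.pi * i / n),
             centre[1] + radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def square_table_layout(n: int) -> list[tuple[float, float]]:
    """Third arrangement: participants distributed along the perimeter of a square table."""
    seats, side, margin = [], 0.6, 0.2
    for i in range(n):
        t = 4.0 * i / n                 # position along the perimeter, measured in sides
        edge, frac = int(t), t - int(t)
        if edge == 0:
            seats.append((margin + frac * side, margin))                 # top edge
        elif edge == 1:
            seats.append((margin + side, margin + frac * side))          # right edge
        elif edge == 2:
            seats.append((margin + side - frac * side, margin + side))   # bottom edge
        else:
            seats.append((margin, margin + side - frac * side))          # left edge
    return seats

print(horizontal_layout(4))
print(circular_layout(6))
print(square_table_layout(8))
```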
3. The method of claim 1, wherein after the displaying of the position arrangement information of the participants participating in the online conference on the preset interface of the first electronic device, the method further comprises:
receiving a first input to a second participant, the second participant being any one or more of the participants;
and in response to the first input, moving the position of the second participant to a second target position, wherein the second target position is a position on the preset interface, and the phase of the audio information sent by the second participant after the move corresponds to the second target position.
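A further illustrative sketch, again outside the claims: handling the end of a drag gesture by moving a participant's seat to the target position and recomputing the pan applied to that participant's audio. The `Seat` class, `pan_from_x`, and the drag handler are assumptions.

```python
# Illustrative sketch only; the Seat class, pan mapping, and drag handler are assumptions.
from dataclasses import dataclass

@dataclass
class Seat:
    participant: str
    x: float  # normalised horizontal position on the preset interface
    y: float  # normalised vertical position

def pan_from_x(x: float) -> float:
    """Assumed mapping from horizontal position to stereo pan (-1.0 left .. 1.0 right)."""
    return 2.0 * x - 1.0

def on_drag_end(seat: Seat, target: tuple[float, float]) -> float:
    """Handle the first input ending: move the second participant's seat to the second
    target position and return the pan to apply to that participant's audio."""
    seat.x, seat.y = target
    return pan_from_x(seat.x)

seat = Seat("second participant", x=0.1, y=0.5)
new_pan = on_drag_end(seat, (0.9, 0.5))   # dragged to the right side of the interface
print(seat, new_pan)
```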
4. The method of claim 1, wherein after the displaying of the position arrangement information of the participants participating in the online conference on the preset interface of the first electronic device, the method further comprises:
receiving a second input to a third participant, the third participant being any one or more of the participants;
and in response to the second input, interchanging the position of the third participant and the position of the fourth participant, wherein the position of the fourth participant on the preset interface and the end position of the second input satisfy a preset positional relationship, and the phases of the audio information sent by the third participant and the fourth participant after the interchange correspond to their respective interchanged positions.
5. The method of claim 1, wherein after outputting the audio information from the first participant, the method further comprises:
the method comprises the steps of displaying a file identification and a first directory of a first file, wherein the first file comprises videos of all the participants and audio information sent by the participants, the videos are recorded in a conference process, the number of the first directory corresponds to the number of the participants, and the video and audio information of the participants are stored in a second file corresponding to the first directory.
6. The method of claim 5, wherein after outputting the audio information from the first participant, the method further comprises:
receiving a third input to the file identification;
in response to the third input, displaying a first interface, the first interface comprising a first control;
receiving a fourth input to the first control;
in response to the fourth input, playing the video and the audio information of the participant.
7. The method of claim 6, wherein after playing the video and audio information of the participant in response to the fourth input, the method further comprises:
receiving a fifth input to the first control;
displaying a second control in response to the fifth input;
receiving a sixth input to the second control;
in response to the sixth input, playing target audio information of a target participant, the target participant being a participant associated with the second control.
8. An apparatus for processing an online conference, applied to a first electronic device, the apparatus comprising:
a display module, configured to display the position arrangement information of the participants participating in the online conference on a preset interface of the first electronic device;
a determining module, configured to determine, when a first participant makes a sound, a first partition in which the first participant is located, wherein the first partition is at least one of a plurality of partitions; the partitions are obtained by dividing the preset interface according to the position arrangement information and the number of the participants, and the partitions correspond to the participants one to one; and the first participant is any one or more of the participants; and
an output module, configured to output audio information sent by the first participant, wherein the phase of the audio information corresponds to the position of the first partition.
9. An electronic device, comprising: a display screen, a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the processing method for an online conference according to any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer program instructions, which, when executed by a processor, carry out the steps of the method of processing an online conference according to any one of claims 1-7.
11. A computer program product, characterized in that instructions in the computer program product, when executed by a processor of an electronic device, cause the electronic device to perform the steps of the method of processing an online conference according to any of claims 1-7.
CN202210409716.1A 2022-04-19 2022-04-19 Processing method, device and equipment for online conference and storage medium Pending CN114816317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210409716.1A CN114816317A (en) 2022-04-19 2022-04-19 Processing method, device and equipment for online conference and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210409716.1A CN114816317A (en) 2022-04-19 2022-04-19 Processing method, device and equipment for online conference and storage medium

Publications (1)

Publication Number Publication Date
CN114816317A true CN114816317A (en) 2022-07-29

Family

ID=82504822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210409716.1A Pending CN114816317A (en) 2022-04-19 2022-04-19 Processing method, device and equipment for online conference and storage medium

Country Status (1)

Country Link
CN (1) CN114816317A (en)

Similar Documents

Publication Publication Date Title
US9961119B2 (en) System and method for managing virtual conferencing breakout groups
CN111866539B (en) Live broadcast interface switching method and device, terminal and storage medium
US11470127B2 (en) Method, system, and non-transitory computer-readable record medium for displaying reaction during VoIP-based call
US20180067641A1 (en) Social networking application for real-time selection and sorting of photo and video content
US20150089394A1 (en) Meeting interactions via a personal computing device
CN112243583B (en) Multi-endpoint mixed reality conference
CN105653167A (en) Online live broadcast-based information display method and client
CN112261481B (en) Interactive video creating method, device and equipment and readable storage medium
CN106105174A (en) Automatic camera selects
CN204721476U (en) Immersion and interactively video conference room environment
CN109729367A (en) The method, apparatus and electronic equipment of live media content information are provided
US9813255B2 (en) Collaboration environments and views
WO2022116033A1 (en) Collaborative operation method and apparatus, and terminal and storage medium
TW201917556A (en) Multi-screen interaction method and apparatus, and electronic device
US11040278B2 (en) Server device distributing video data and replay data and storage medium used in same
JP2016063477A (en) Conference system, information processing method and program
WO2019104533A1 (en) Video playing method and apparatus
WO2022183967A1 (en) Video picture display method and apparatus, and device, medium and program product
CN114816317A (en) Processing method, device and equipment for online conference and storage medium
JP2023554031A (en) Video conference display methods, equipment, terminal devices and storage media
JP2020144725A (en) Information processing system and control method thereof
JP5994898B2 (en) Information processing apparatus, information processing apparatus control method, and program
CN115348240B (en) Voice call method, device, electronic equipment and storage medium for sharing document
US20230199037A1 (en) Virtual relocation during network conferences
WO2022188145A1 (en) Method for interaction between display device and terminal device, and storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination