CN115396404B - Synchronous screen projection method and related device for a presenter's explanation position in a cloud conference scene - Google Patents


Info

Publication number
CN115396404B
Application number
CN202210945934.7A
Authority
CN (China)
Prior art keywords
explanation; content; page; presenter; conference
Legal status
Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Original language
Chinese (zh)
Other versions
CN115396404A
Inventor
唐串串
Assignee, original and current (the listed assignee may be inaccurate)
Shenzhen Happycast Technology Co Ltd
Filing and publication
Application CN202210945934.7A filed by Shenzhen Happycast Technology Co Ltd; published as CN115396404A, then granted and published as CN115396404B.


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40: Support for services or applications
    • H04L65/403: Arrangements for multi-party communication, e.g. for conferences
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to an output unit, e.g. interface arrangements
    • G06F3/14: Digital output to a display device; cooperation and interconnection of the display device with other functional units
    • G06F3/1454: Digital output to a display device involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay

Abstract

The embodiment of the application discloses a synchronous screen projection method and a related device for a presenter's explanation position in a cloud conference scene. The method comprises the following steps: acquiring audio information of the presenter in the cloud conference; acquiring the explanation page corresponding to the audio information of the presenter; determining the presenter's explanation position according to the audio information of the presenter and the page content of the explanation page; updating the page content of the explanation page according to the presenter's explanation position; and sending the updated explanation page to the local devices participating in the cloud conference. The method and the device can synchronously project the presenter's explanation position to local devices in a cloud conference scene, improving user experience.

Description

Synchronous screen projection method and related device for a presenter's explanation position in a cloud conference scene
Technical Field
The application relates to the technical field of data processing, and in particular to a synchronous screen projection method and related device for a presenter's explanation position in a cloud conference scene.
Background
In current applications with a cloud conference function, only conference participants in the same physical space as the presenter can follow the explanation position, which the presenter indicates through gestures or physical tools such as a laser pointer. Participants who are not in the same space as the presenter cannot determine the position in the explanation page corresponding to the content the presenter is explaining, resulting in a poor user experience.
Disclosure of Invention
The embodiment of the application provides a synchronous screen projection method and a related device for a presenter's explanation position in a cloud conference scene, which aim to synchronously project the presenter's explanation position to local devices in the cloud conference scene and improve user experience.
In a first aspect, an embodiment of the present application provides a method for synchronously projecting a presenter's explanation position in a cloud conference scene, applied to a server, the method comprising:
acquiring audio information of a presenter in a cloud conference, wherein the cloud conference refers to a conference group created at the cloud end by a conference creator through a first local device, and the presenter refers to a conference participant who has acquired control authority over the conference desktop of the cloud conference through a second local device;
acquiring an explanation page corresponding to the audio information of the presenter, wherein the page content of the explanation page is content information included in shared content, and the shared content is content information uploaded to the cloud space of the cloud conference by conference participants through a third local device;
determining the presenter's explanation position according to the audio information of the presenter and the page content of the explanation page;
updating the page content of the explanation page according to the presenter's explanation position;
and sending the updated explanation page to the local devices participating in the cloud conference.
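As a rough illustration only, and not the patent's implementation, the five steps of the first aspect can be sketched in Python. The `ExplanationPage` class, the use of an already-transcribed text string in place of raw audio information, and the naive word-overlap matcher are all assumptions introduced here for clarity:

```python
from dataclasses import dataclass, field

@dataclass
class ExplanationPage:
    page_id: int
    contents: list[str]                    # the page's text blocks ("first contents")
    highlighted: set[int] = field(default_factory=set)

def determine_explanation_position(transcript: str, page: ExplanationPage) -> set[int]:
    """Indices of text blocks whose words overlap the presenter's transcript."""
    spoken = set(transcript.lower().split())
    return {i for i, block in enumerate(page.contents)
            if spoken & set(block.lower().split())}

def sync_cast(transcript: str, page: ExplanationPage,
              participant_devices: list[str]) -> dict:
    """Steps 3-5: locate the explanation position, update the page's
    highlight set, and fan the updated page out to every device."""
    page.highlighted = determine_explanation_position(transcript, page)
    return {device: page for device in participant_devices}
```

For example, `sync_cast("revenue grew sharply", page, ["tv-1"])` would mark the text block containing "revenue grew" as highlighted and map each participating device to the updated page.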
In a second aspect, an embodiment of the present application provides a method for synchronously projecting a presenter's explanation position in a cloud conference scene, applied to a second local device, the method comprising:
receiving audio information of the presenter from a server;
receiving an updated explanation page from the server, wherein the updated explanation page refers to an explanation page whose page content has been updated according to the presenter's explanation position, and the presenter's explanation position refers to the position, within the page content of the explanation page, corresponding to the audio information of the presenter;
displaying the updated explanation page;
and playing the audio information of the presenter.
In a third aspect, an embodiment of the present application provides a synchronous screen projection device for a presenter's explanation position in a cloud conference scene, the device comprising:
a sending unit, used to send the updated explanation page to the local devices participating in the cloud conference;
a receiving unit, used to receive the audio information of the presenter and the shared content of the cloud conference, wherein the cloud conference is a conference group created at the cloud end by a conference creator through a first local device, the presenter is a conference participant who has obtained control authority over the conference desktop of the cloud conference through a second local device, and the shared content is content information uploaded to the cloud space of the cloud conference by conference participants through a third local device;
and a processing unit, used to acquire the explanation page corresponding to the audio information of the presenter, wherein the page content of the explanation page is content information included in the shared content; the processing unit is also used to determine the presenter's explanation position according to the audio information of the presenter and the page content of the explanation page, and to update the page content of the explanation page according to the presenter's explanation position.
In a fourth aspect, embodiments of the present application provide a server comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the first aspect of embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.
It can be seen that the synchronous screen projection method and related device for a presenter's explanation position in a cloud conference scene described in this embodiment can acquire the presenter's audio information in the cloud conference; acquire the explanation page corresponding to that audio information; determine the presenter's explanation position from the audio information and the page content of the explanation page; update the page content according to that position; and send the updated explanation page to the local devices participating in the cloud conference. The explanation page updated with the presenter's explanation position can therefore show that position to the conference participants, so that they can quickly locate the content the presenter is explaining within the page, improving their participation experience.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of a network architecture according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a synchronous screen projection method for a presenter's explanation position in a cloud conference scene, applied to a server, according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an explanation page according to an embodiment of the present application;
fig. 5 is a schematic diagram of an updated explanation page corresponding to the explanation page in fig. 4;
fig. 6 is a block diagram of the functional units of a synchronous screen projection device for a presenter's explanation position in a cloud conference scene provided by an embodiment of the application;
fig. 7 is a block diagram of the functional units of another synchronous screen projection device for a presenter's explanation position in a cloud conference scene provided by an embodiment of the application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application are clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The key concepts and terms involved in the present application include, but are not limited to, the following:
(1) Shared content is content information uploaded to the cloud space of the cloud conference by conference participants through a third local device; the content information includes files, real-time screen-recording pictures, web pages, and similar information. Files include document files (txt, doc, docx, ppt, pptx, pdf, etc.), CAD files, audio files, picture files, video files, and the like; real-time screen-recording pictures include split-screen mirroring pictures, full-screen pictures, and other pictures of the local device, which are not limited herein.
(2) An explanation page is a page created from the shared content selected by the presenter of the cloud conference. At least one explanation page may be created from one piece of shared content. The page content of an explanation page is at least part of the content information included in the shared content.
(3) A cloud conference is a conference group created in the cloud space by a conference creator through a first local device. The conference creator may participate as one of the conference participants of the cloud conference, may not participate, or may exit the cloud conference while it is in progress, which is not limited herein. If the conference creator is also a conference participant, the third local device through which that participant uploads shared content and the first local device through which the creator created the conference group may be the same local device.
(4) The presenter is the conference participant who has obtained control authority over the conference desktop of the cloud conference through a second local device. Any participant in the cloud conference may obtain this control authority.
(5) Cloud space is the resource space used by the cloud end to run the cloud conference and store its data. The cloud end may correspond to a server cluster or a single server under a cloud technology architecture, and is used to support users in creating conference groups on the cloud server and to provide cloud conference services.
(6) The local equipment comprises terminal equipment and conference equipment.
The terminal device is a device connected with the server and capable of performing information interaction with the server, for example, sending information to the server, receiving information pushed by the server, and the like. The terminal device may be a smart phone, a portable computer, a desktop computer, a smart television, or may be a device capable of performing information interaction with a server, such as a smart watch, a smart bracelet, etc., which is not limited herein. Herein, the first local device for creating the cloud conference and the third local device for uploading the shared content are both terminal devices.
The conference device is a device connected to the server and capable of performing information interaction with the server, for example, sending information to the server, receiving information pushed by the server, and the like. The conference device may be a large screen device, a projection device, etc., without limitation.
The information sent by a conference device to the server is used to connect that conference device to the cloud conference, while the information sent by a terminal device to the server can be used not only to connect the terminal device to the cloud conference, but also to feed back to the server the operations performed on the shared content in the cloud space and/or on the conference desktop.
In current applications with a cloud conference function, only conference participants in the same physical space as the presenter can follow the explanation position, which the presenter indicates through gestures or physical tools such as a laser pointer. Participants who are not in the same space as the presenter cannot determine the position in the explanation page corresponding to the content the presenter is explaining, resulting in a poor user experience.
To solve the above problems, the present application provides a synchronous screen projection method and related device for a presenter's explanation position in a cloud conference scene, described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a network architecture according to an embodiment of the present application. As shown in fig. 1, the network architecture may include a server 100 and local devices 200; the local devices 200 may include one or more terminal devices 200a and conference devices 200b, and the numbers of terminal devices 200a and conference devices 200b are not limited herein. Each terminal device 200a and each conference device 200b shown in fig. 1 may establish a network connection with the server 100, through which it performs data interaction with the server 100.
The server 100 shown in fig. 1 may be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, web services, cloud communications, middleware services, domain name services, security services, CDN, big data, and artificial-intelligence platforms. Servers include, but are not limited to, servers carrying an IOS system, an Android system, a Microsoft system, or other operating systems, which are not limited herein.
The composition structure of the server 100 in the present application may be shown in fig. 2, and fig. 2 is a schematic structural diagram of a server according to an embodiment of the present application. The server 100 may comprise a processor 110, a memory 120, a communication interface 130, and one or more programs 121, wherein the one or more programs 121 are stored in the memory 120 and configured to be executed by the processor 110, and wherein the one or more programs 121 comprise instructions for performing any of the method embodiments described below.
The communication interface 130 is used to support communication between the server 100 and other devices. The processor 110 may be, for example, a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logical blocks, units, and circuits described in connection with the disclosure of the embodiments of the application. The processor may also be a combination that performs computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 120 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM) which acts as an external cache. By way of example but not limitation, many forms of random access memory (random access memory, RAM) are available, such as Static RAM (SRAM), dynamic Random Access Memory (DRAM), synchronous Dynamic Random Access Memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced Synchronous Dynamic Random Access Memory (ESDRAM), synchronous Link DRAM (SLDRAM), and direct memory bus RAM (DR RAM).
In particular implementations, the processor 110 is configured to perform any of the steps performed by the server in the method embodiments described below, and when performing a data transmission operation, such as sending an explanation page, the communication interface 130 may be selectively invoked to complete the corresponding operation.
It should be noted that the above schematic structural diagram of the server is merely an example; the server may specifically include more or fewer components, which is not limited herein.
Referring to fig. 3, fig. 3 is a flowchart of a method for synchronously projecting a presenter's explanation position in a cloud conference scene according to an embodiment of the present application. The method may be executed by a server and applied to the server 100 shown in fig. 1 or fig. 2. As shown in fig. 3, the method includes:
step S101, audio information of a speaker in a cloud conference is acquired.
The cloud conference and the presenter refer to the foregoing description, and are not described herein.
The server can acquire, from the cloud space, the audio information uploaded by the presenter's audio capture device, where the audio capture device is used to capture the presenter's audio information. The audio capture device may be the same device as the presenter's second local device, or a different one.
Step S102, acquiring an explanation page corresponding to the audio information of the presenter.
The shared content and the explanation page may refer to the foregoing description, and are not described herein.
The explanation page corresponding to the presenter's audio information may be the explanation page currently in use, that is, the explanation page on which the presenter, controlling the cloud conference through the second local device, is staying in the display interface of the second local device. Alternatively, it may be several explanation pages corresponding to the shared content the presenter is explaining. The specific number of explanation pages corresponding to the presenter's audio information is not further limited and can be set according to actual requirements.
In a specific implementation, when it is detected that the presenter's second local device has issued a page-change instruction, the acquired explanation page is the page shown after the instruction has been executed, so a single explanation page corresponding to the presenter's audio information is acquired. Alternatively, when the shared content has multiple topics, at least one explanation page of the next topic may be acquired before the page is switched to that topic; in this case the acquired explanation pages corresponding to the presenter's audio information include at least one page. Alternatively, the explanation page may be acquired periodically, for example once every preset time interval whose duration can be set as required; in this case, too, at least one explanation page is acquired. The scheme does not limit the specific form of the explanation page, which can be set according to actual requirements.
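A minimal sketch of how the three acquisition strategies described above (page-change instruction, pre-fetch of an upcoming topic, periodic polling) could be combined. The function name, its parameters, and the "page_changed" event string are illustrative assumptions, not details from the patent:

```python
def pages_to_fetch(event, upcoming_topic_pages, last_fetch, now, period, current_page):
    """Decide which explanation pages the server pulls from cloud space.
    Priority order: a page-change instruction, then pre-fetching the next
    topic's pages, then periodic re-fetch of the current page every
    `period` seconds."""
    if event == "page_changed":
        return [current_page]              # the page shown after the instruction ran
    if upcoming_topic_pages:
        return list(upcoming_topic_pages)  # at least one page of the next topic
    if now - last_fetch >= period:
        return [current_page]
    return []                              # nothing is due yet
```

Calling this once per tick of a server loop would reproduce the periodic case; the event and topic parameters would come from whatever signaling the second local device actually uses.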
Step S103, the presenter's explanation position is determined according to the presenter's audio information and the page content of the explanation page.
In a specific implementation, the server may periodically acquire from the cloud space the audio information uploaded by the presenter's audio capture device, and the audio used to determine the explanation position is this periodically acquired audio, so that the presenter's explanation position is continuously updated and its accuracy is ensured. The duration of the audio acquired each time equals the period, so no content explained by the presenter is missed and the method executes reliably.
Step S104, the page content of the explanation page is updated according to the presenter's explanation position.
Updating the content of the explanation page highlights the content corresponding to the determined explanation position, so that it is easy to recognize by eye.
In a specific implementation, if the content corresponding to the presenter's explanation position is on the explanation page currently in use, the update operation is: highlight that content within the current page. If there are several explanation pages corresponding to the presenter's audio information and the matched content is not on the page currently in use, the update operation is: highlight the content on its own explanation page.
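The two highlighting cases above can be captured in a few lines. This is a sketch under the assumption that pages are stored as lists of block dicts keyed by page number; the function name and data layout are invented here:

```python
def apply_explanation_highlight(pages, current_page, hit_page, hit_index):
    """Highlight the matched block on whichever page it belongs to.
    Returns True when the match lies on a page other than the one
    currently on screen (the second case above), so the caller knows
    the highlight was applied on that other page."""
    pages[hit_page][hit_index]["highlight"] = True
    return hit_page != current_page
```

A caller could use the returned flag to tell clients to switch to the page that now carries the highlight.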
Step S105, the updated explanation page is sent to the local devices participating in the cloud conference.
The server may send the updated explanation page to local devices other than the presenter's, so that participants not in the same space as the presenter can quickly locate the explanation position through the updated page shown on their display interfaces. Alternatively, the server may send the updated page to the local devices of all participants in the cloud conference, so that even people in the same space as the presenter can quickly locate the explanation position when the presenter does not point it out.
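Both delivery policies above reduce to choosing a recipient set. A hypothetical helper, with names assumed for illustration:

```python
def recipients(all_devices, presenter_device, include_presenter=False):
    """Local devices that receive the updated explanation page:
    by default everyone except the presenter's own device, or,
    with include_presenter=True, every participating device."""
    if include_presenter:
        return list(all_devices)
    return [d for d in all_devices if d != presenter_device]
```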
It can be seen that the synchronous screen projection method and related device for a presenter's explanation position in a cloud conference scene described in this embodiment can acquire the presenter's audio information in the cloud conference; acquire the explanation page corresponding to that audio information; determine the presenter's explanation position from the audio information and the page content of the explanation page; update the page content according to that position; and send the updated explanation page to the local devices participating in the cloud conference. The updated explanation page thus shows the presenter's explanation position to the conference participants, letting them quickly locate the content being explained within the page and improving their participation experience. In addition, operations such as creating the explanation page are completed on the server; compared with having the local device capture and upload video frames of the displayed content in real time, the amount of data the local device must process is smaller and the configuration requirements on it are lower, so the scheme provided by the embodiment of the application has a wider application range.
In one possible example, determining the presenter's explanation position according to the presenter's audio information and the page content of the explanation page includes: comparing the audio information with the first contents in the page content of the explanation page to obtain a first comparison result, where the explanation content of one explanation page includes at least one first content; and determining the presenter's explanation position according to the first comparison result.
The first content may be text information or image information. When the first content is text information, one first content may correspond to a paragraph, a sentence, a line of text, or the like; or, when the text in the explanation page is laid out in regions, one first content may correspond to the text of one region. When the first content is image information, one first content may correspond to an independent image, to the content of a region within an image, or to a feature in an image. The specific form of the first content is not further limited here and can be set as needed. It will be appreciated that other forms of explanation content, such as tables, can be decomposed into corresponding text or image information.
The at least one first content included in the explanation content of one explanation page may all be text information; or may all be image information; or, when the explanation content of one explanation page includes at least two first contents, they may include both text information and image information.
In a specific implementation, the audio information of the presenter may be compared with each first content in sequence. Alternatively, the audio information of the presenter may be compared with all first contents in parallel, which improves efficiency and shortens the time needed to determine the presenter explanation position.
In this example, in order for the presenter explanation position to actually be highlighted, the number of first contents included in the first comparison result must be smaller than the number of first contents included in the explanation page. If the number of first contents in the first comparison result equals the number of first contents in the explanation page, nothing would stand out, and the explanation page is not updated.
In this example, if the explanation page corresponding to the audio information of the presenter includes at least two first contents, the audio information of the presenter may be compared with each first content separately to obtain the first comparison result. The first comparison result may indicate that the presenter explanation position is one first content in the explanation page, or, when the explanation page includes multiple first contents, that the presenter explanation position covers at least two of them. For example, if the explanation page includes at least two paragraphs of text, one paragraph may correspond to one first content. The audio information of the presenter is then compared with each paragraph to obtain the first comparison result, which may be a single paragraph at the presenter explanation position or, if the page includes multiple paragraphs, at least two paragraphs. For instance, if the explanation page includes content A, content B, and content C, the first comparison result may be that the content corresponding to the presenter explanation position is content A; alternatively, it may be that the corresponding contents are content A and content C. It can be understood that the content corresponding to the presenter explanation position may be the entire text of the first content matched in the first comparison result, or just a sentence or phrase within that text.
Thus, by comparing the audio information of the presenter with the first contents in the explanation page, the first content corresponding to the audio information can be located quickly and the presenter explanation position determined. When a page has a lot of content, this embodiment divides it into multiple first contents, reducing the workload of a single comparison on the server and hence the server's processing pressure.
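As a rough illustration of this comparison step, the sketch below (hypothetical helper names; the embodiments do not prescribe any particular segmentation or matching algorithm) treats each paragraph as one first content and scores the presenter's transcribed audio against each:

```python
from difflib import SequenceMatcher

def split_into_first_contents(page_text):
    # Treat each non-empty paragraph of the explanation page as one "first content".
    return [p.strip() for p in page_text.split("\n\n") if p.strip()]

def first_comparison(transcript, first_contents):
    # Score the transcript against every first content and return the best match.
    scores = [SequenceMatcher(None, transcript, c).ratio() for c in first_contents]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

page = "Introduction to the product.\n\nQuarterly sales figures.\n\nRoadmap for next year."
contents = split_into_first_contents(page)
idx, score = first_comparison("let us look at the quarterly sales figures", contents)
```

Here the index `idx` identifies the first content at the presenter explanation position; a real implementation could run the per-content comparisons sequentially or in parallel, as the embodiment notes.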
In one possible example, the page content of the explanation page includes at least two first contents, and comparing the audio information of the presenter with the first contents in the page content of the explanation page includes: determining whether the at least two first contents include any first content carrying an explained identifier, where a first content carrying an explained identifier is one corresponding to a historical presenter explanation position; and if so, comparing the audio information of the presenter with the first contents that do not carry the explained identifier.
A first content carrying the explained identifier is a first content that was confirmed as the presenter explanation position at some earlier point while the presenter was explaining the shared content; in other words, it is content the presenter has already covered.
In a specific implementation, each time a first comparison result is obtained, the server may set the explained identifier on the first content corresponding to that result, so that the contents can be classified by this identifier in subsequent steps.
In a specific implementation, the server may first determine whether any first content in the explanation content of the explanation page carries the explained identifier. If none does, the audio information of the presenter is compared with every first content. If some do, the audio information of the presenter is compared with each first content that does not carry the explained identifier. If one of these comparisons succeeds, the presenter explanation position is the position of the matching first content; if all of them fail, the audio information of the presenter is then compared with the first contents that carry the explained identifier.
It will be appreciated that a presenter usually moves from one content to the next rather than repeating material. Therefore, comparing the audio information of the presenter first with the contents the presenter has not yet covered improves the efficiency of obtaining the first comparison result, shortens the delay between obtaining the audio information and obtaining the result, and speeds up determination of the presenter explanation position.
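The preference for not-yet-explained content can be sketched as follows (hypothetical function and parameter names; `match` stands in for whichever comparison routine the embodiment uses):

```python
def locate_with_explained_marks(transcript, first_contents, explained, match):
    """Compare against not-yet-explained contents first; fall back to explained ones.

    `explained` is a set of indices already confirmed as historical explanation
    positions; `match(transcript, content)` returns True on a successful match.
    """
    unexplained = [i for i in range(len(first_contents)) if i not in explained]
    for i in unexplained:
        if match(transcript, first_contents[i]):
            explained.add(i)            # set the explained identifier
            return i
    for i in sorted(explained):         # presenter may be revisiting earlier content
        if match(transcript, first_contents[i]):
            return i
    return None                         # no match: speech unrelated to the page
```

In this sketch the fallback pass over already-explained contents mirrors the embodiment's "if the comparison fails" branch, so revisited material can still be located.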
In one possible example, if the page content of the explanation page includes at least two first contents, the audio information of the presenter may first be compared with the first contents adjacent to the most recently determined presenter explanation position. This shortens the delay between obtaining the audio information and obtaining the first comparison result and improves the accuracy of the determined presenter explanation position.
In one possible example, determining the presenter explanation position according to the first comparison result includes: determining the first content corresponding to the presenter explanation position according to the first comparison result; judging whether the number of third contents is greater than a preset number, where the third contents belong to that first content and the first content includes at least one third content; if so, comparing the audio information of the presenter with the second contents to obtain a second comparison result, where the second contents also belong to that first content and the first content includes at least one second content; and determining the presenter explanation position according to the second comparison result.
If the first content includes only text information, the second content is text information; if the first content includes only image information, the second content is image information; if the first content includes both text and image information, the second content is text information or image information. For example, when the first content includes multiple paragraphs of text, one second content may correspond to a paragraph, a sentence, or a line of text; when the first content includes only one paragraph, one second content may correspond to a sentence or a line within it. When the first content includes at least two images, one second content may correspond to an independent image, or to the content of a certain area of an image, or to a certain feature in an image.
When the first content is text information, the number of third contents may be the number of paragraphs, sentences, or characters; when the first content is image information, it may be the number of features, or the like. For example, if the first content is a paragraph of text and the second content is a sentence, the number of third contents may be the number of sentences in the paragraph, or the number of words in it.
The preset number may be one, two, or more, set according to actual requirements and not specifically limited here. For example, when the first content is a paragraph of text, the third content may be defined as words, so that the number of third contents is the number of words the paragraph contains and the preset number is some word count N.
In a specific implementation, if the number of third contents is greater than the preset number, the audio information of the presenter is further compared with each second content. If the number of third contents is not greater than the preset number, the first content corresponding to the first comparison result is taken as the finally determined presenter explanation position.
It will be appreciated that when the audio information of the presenter is compared with the second contents, each second content may be treated as a first content and the steps of the above embodiments applied to it.
Thus, the scheme provided by this embodiment improves the accuracy of the presenter explanation position and avoids the situation where a single comparison matches so much content that participants cannot accurately find the position of what the presenter is explaining. It also reduces the processing pressure of a single comparison on the server, lowering the performance requirement for such comparisons.
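One way this refinement might look, assuming words as third contents and sentences as second contents (both choices, and all names below, are illustrative; the embodiments leave the granularity and the preset number open):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def refine_position(transcript, first_content, preset_number=5):
    # Third contents = words: refine only when the matched first content is long.
    words = first_content.split()
    if len(words) <= preset_number:
        return first_content            # the coarse position is precise enough
    # Second contents = sentences: pick the best-matching one as the final position.
    sentences = [s.strip() for s in first_content.split(".") if s.strip()]
    return max(sentences, key=lambda s: similarity(transcript, s))

para = "The product ships in spring. Pricing follows in summer. Support is global."
pos = refine_position("pricing follows in summer", para)
```

For a short first content the function returns it unchanged, matching the embodiment's rule that no second comparison is needed when the third-content count does not exceed the preset number.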
In one possible example, comparing the audio information of the presenter with the first contents in the page content of the explanation page to obtain the first comparison result includes: parsing the audio information of the presenter to obtain text data; if a first content is text information, matching the text data with that text information to obtain a matching similarity; if a first content is image information, matching the text data with the preset feature tags of that image information to obtain a matching similarity; and comparing the matching similarities of all the first contents to obtain the first content with the highest matching similarity, which is the presenter explanation position.
The audio information of the presenter may be parsed by speech-to-text conversion or by semantic parsing, which is not limited here. Correspondingly, the text data may be the text obtained by directly transcribing the audio information of the presenter; alternatively, it may be a parse text obtained by semantic parsing of the audio information, whose parse results may be sentences or phrases.
The preset feature tags are text tags the server sets for the image content. They can be set before the presenter explains the corresponding shared content, which keeps determination of the presenter explanation position efficient.
In a specific implementation, if the first content is text information, the text data and the text information may be matched as follows: match the transcribed text against the first content and take the number of identical characters as the matching similarity, so that more shared characters mean a higher similarity. Alternatively, the matching may proceed as: obtain the character features corresponding to the first content, where a character feature may be key text or a generalized expression preset by the server for the shared content, then compare the parse text with each character feature and take the number of character features successfully matched as the matching similarity, so that more matched features mean a higher similarity. Alternatively, to further improve positioning accuracy, the two may be combined: weight the first matching similarity obtained by comparing the first content with the transcribed text and the second matching similarity obtained by comparing the first content's character features with the parse text to obtain the matching similarity of the first content, where the weight ratio between the two can be set as needed and is not further limited.
In a specific implementation, if the first content is image information, the text data may be matched as follows: match the parse text against the preset feature tags of the image information and take the number of successfully matched tags as the matching similarity, so that more matched tags mean a higher similarity. Specifically, the preset feature tags may be compared with the sentences in the parse text, or with the phrases in it.
In a specific implementation, to obtain the highest matching similarity conveniently, the matching similarities of all the first contents may be sorted, with earlier positions in the order corresponding to higher similarity.
Thus, computing a matching similarity for each first content and taking the first content with the highest similarity as the content at the presenter explanation position ensures the accuracy of the determined position. Moreover, converting the audio information of the presenter into text data and giving images corresponding preset feature tags makes the first comparison result faster to determine than first retrieving an image corresponding to the audio information and then comparing that image with the first contents.
Further, in an embodiment, before the matching similarities of all first contents are compared with one another, they may each be compared with a preset similarity. If all matching similarities are smaller than the preset similarity, matching has failed and the explanation page is not updated; in this case, what the audio information of the presenter describes is unrelated to the conference content. If exactly one matching similarity is greater than or equal to the preset similarity, the position of the corresponding explanation content is determined as the presenter explanation position. If at least two matching similarities are greater than or equal to the preset similarity, the earlier steps are executed: compare the matching similarities of all first contents and determine the first content with the highest similarity as the presenter explanation position. This further improves the accuracy of determining the presenter explanation position.
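A minimal sketch of this similarity-based matching with the preset-similarity check (illustrative names only; `difflib`'s ratio stands in for whatever text-matching measure an implementation chooses, and image contents are reduced to their preset feature tags):

```python
from difflib import SequenceMatcher

def match_similarity(text_data, content):
    # A content is (kind, payload): transcribed speech is matched directly
    # against text contents, and against the preset feature tags of images.
    kind, payload = content
    if kind == "text":
        return SequenceMatcher(None, text_data, payload).ratio()
    tags = payload  # image: fraction of preset feature tags found in the speech
    return sum(tag in text_data for tag in tags) / len(tags) if tags else 0.0

def locate(text_data, contents, preset_similarity=0.5):
    scores = [match_similarity(text_data, c) for c in contents]
    best = max(range(len(scores)), key=scores.__getitem__)
    if scores[best] < preset_similarity:
        return None  # all matches failed: leave the explanation page un-updated
    return best
```

Returning `None` corresponds to the embodiment's rule that speech unrelated to the conference content leaves the page unchanged; the preset similarity of 0.5 here is an arbitrary example value.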
In one possible example, updating the page content of the explanation page according to the presenter explanation position includes: setting an explanation position mark on the content information corresponding to the presenter explanation position in the page content of the explanation page, to obtain the updated explanation page.
The explanation position mark may be a pointing icon, such as a cursor or an icon of another form, placed at the presenter explanation position; alternatively, it may be a background color or a marker frame applied to the content at the presenter explanation position, or any other marking that clearly distinguishes that content from the rest of the page. The specific form of the explanation position mark is not further limited here and may be set as needed.
A participant viewing the explanation page may use full-screen mode or non-full-screen mode. Taking full-screen mode as an example, referring to fig. 4 and fig. 5: fig. 4 is a schematic structural diagram of an explanation page provided by an embodiment of the present application, and fig. 5 is a schematic diagram of the updated explanation page corresponding to the page in fig. 4. The explanation page may include an area for displaying the page content of the explanation page, as well as other functional components. The functional components may include a conference-duration recording component, a component for exiting full screen, a display component showing the current presenter and current speaker, a microphone component for speaking, and so on, set according to actual requirements. In this example, the first contents in the explanation page are text information, and the presenter explanation position finally determined by the method is a sentence of text. As shown in fig. 5, to highlight the presenter explanation position, the corresponding explanation position mark sets a background color at that position.
Thus, setting the explanation position mark on the presenter explanation position in the explanation page yields the updated explanation page, from which participants can quickly identify the presenter explanation position while still viewing the rest of the page content, helping them follow and understand the presenter's explanation in context.
In other embodiments, the page content of the explanation page may also be updated according to the presenter explanation position by highlighting the content at that position in a popup window on the explanation page, or by keeping only the content at that position as the explanation content of the updated page. In this way, the content the presenter is explaining is shown to the participants even more intuitively.
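For instance, a background-color mark could be rendered as in the hypothetical HTML sketch below (the embodiments equally allow a cursor icon or a marker frame; the markup and function name are assumptions):

```python
import html

def mark_explanation_position(first_contents, position_index):
    """Render the page with a background-color mark at the presenter's position."""
    parts = []
    for i, content in enumerate(first_contents):
        text = html.escape(content)
        if i == position_index:
            # The explanation position mark: a distinguishing background color.
            parts.append(f'<p style="background-color:#fff3a0">{text}</p>')
        else:
            parts.append(f"<p>{text}</p>")
    return "\n".join(parts)
```

The server would send the resulting updated page to every local device in the conference rather than re-encoding video frames.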
Corresponding to the embodiment shown in fig. 3, an embodiment of the application further provides a synchronous screen projection method for the presenter explanation position in a cloud conference scene, applied to a local device connected to the server. The method includes: receiving the audio information of the presenter from the server; receiving the updated explanation page from the server; displaying the updated explanation page; and playing the audio information of the presenter.
The local device may be the second local device of the presenter, or a local device of another participant. The second local device may receive a cloud conference interface, such as the conference desktop or an explanation interface, from the server and display it in its own interface.
The updated explanation page is the explanation page whose page content has been updated according to the presenter explanation position, where the presenter explanation position is the position in the page content corresponding to the audio information of the presenter. For details, refer to the description above, which is not repeated here.
With this local device, participants can intuitively identify the presenter explanation position, which improves the effectiveness of conference communication and the user experience. In addition, operations such as creating the explanation page are completed on the server; compared with having the local device capture and upload video frames of the displayed content in real time, the local device has less data to process and a lower configuration requirement. The scheme provided by this embodiment of the application therefore has a wider application range.
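On the local-device side, the receive-and-render steps might be dispatched as in this sketch (the message shape and handler names are assumptions; the patent does not define a wire protocol):

```python
def handle_server_message(message, display, audio_out):
    """Dispatch a message received from the server on the local device.

    Hypothetical message shape: {"type": "page" | "audio", "payload": ...}.
    The local device only renders what the server sends; it never captures
    or uploads video frames itself.
    """
    if message["type"] == "page":
        display.show(message["payload"])      # the updated explanation page
    elif message["type"] == "audio":
        audio_out.play(message["payload"])    # the presenter's audio
```

Because the heavy work happens server-side, `display` and `audio_out` can be thin wrappers over whatever UI and audio stack the local device already has.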
The present application may divide the functional units of the server according to the above-described method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
Fig. 6 is a block diagram of the functional units of a synchronous screen projection device for the presenter explanation position in a cloud conference scene provided by an embodiment of the application. The synchronous screen projection device 300 for the presenter explanation position in a cloud conference scene can be applied to the server 100 in the network architecture shown in fig. 1 and includes:
a sending unit 310, configured to send an updated explanation page to a local device participating in the cloud conference;
the receiving unit 320 is configured to receive audio information and shared content of a presenter in a cloud conference, where the cloud conference refers to a conference group created by a conference creator in a cloud end through a first local device, the presenter refers to a participant who obtains control rights of a conference desktop of the cloud conference through a second local device, and the shared content refers to content information uploaded by the participant of the cloud conference to a cloud space of the cloud conference through a third local device;
a processing unit 330, configured to obtain an explanation page corresponding to the audio information of the presenter, where page content of the explanation page is content information included in the shared content; the processing unit is also used for determining the explanation position of the presenter according to the audio information of the presenter and the page content of the explanation page; and updating the page content of the explanation page according to the explanation position of the host speaker.
In one possible example, in determining the lecturer location based on the audio information of the lecturer and the page content of the lecture page, the processing unit is specifically configured to: comparing the audio information of the speaker with the first content in the page content of the explanation page to obtain a first comparison result; the explanation content of one explanation page comprises at least one first content; and determining the explanation position of the presenter according to the first comparison result.
In one possible example, the page content of the explanation page includes at least two first contents; in the aspect of comparing the audio information of the presenter with the first content in the page content of the explanation page, the processing unit is specifically configured to: determining whether the at least two first contents comprise first contents provided with explanation marks, wherein the first contents provided with the explanation marks refer to first contents corresponding to the historic explanation positions of the main speaker; and if so, comparing the audio information of the speaker with the first content which is not provided with the explained mark.
In one possible example, in the aspect of determining the presenter explanation position according to the first comparison result, the processing unit is specifically configured to: determine the first content corresponding to the presenter explanation position according to the first comparison result; judge whether the number of third contents is greater than a preset number, where the third contents belong to that first content and the first content includes at least one third content; if so, compare the audio information of the presenter with the second contents to obtain a second comparison result, where the second contents also belong to that first content and the first content includes at least one second content; and determine the presenter explanation position according to the second comparison result.

In one possible example, in the aspect of comparing the audio information of the presenter with the first contents in the page content of the explanation page, the processing unit is specifically configured to: parse the audio information of the presenter to obtain text data; if a first content is text information, match the text data with that text information to obtain a matching similarity; if a first content is image information, match the text data with the preset feature tags of that image information to obtain a matching similarity; and compare the matching similarities of all the first contents to obtain the first content with the highest matching similarity, which is the presenter explanation position.
In one possible example, in updating the page content of the lecture page according to the lecturer lecture position, the processing unit is specifically configured to: and setting an explanation position mark for content information corresponding to the explanation position of the main speaker in the page content of the explanation page to obtain the updated explanation page.
The functional-unit block diagram of the synchronous screen projection device 300 for the presenter explanation position in a cloud conference scene provided by an embodiment of the application can also be as shown in fig. 7. In fig. 7, the synchronous screen projection device for the presenter explanation position in a cloud conference scene includes a processing module 350 and a communication module 340. The processing module 350 is configured to control and manage the actions of the synchronous screen projection device 300, for example the steps performed by the receiving unit 320, the processing unit 330, and the sending unit 310, and/or other processes of the techniques described herein. The communication module 340 is used to support interaction between the synchronous screen projection device 300 and other devices. As shown in fig. 7, the synchronous screen projection device 300 may further include a storage module 360, configured to store the program code and data of the synchronous screen projection device 300.
The processing module 350 may be a processor or controller, such as a central processing unit (Central Processing Unit, CPU), a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various exemplary logic blocks, modules and circuits described in connection with the disclosure of embodiments of the application. The processor may also be a combination that performs the function of a computation, e.g., a combination comprising one or more microprocessors, a combination of a DSP and a microprocessor, and the like. The communication module 340 may be a transceiver, a Radio Frequency (RF) circuit, a communication interface, or the like. The storage module 360 may be a memory.
All relevant content of each scenario involved in the above method embodiments can be referred to the functional descriptions of the corresponding functional modules and is not repeated here. The synchronous screen projection device 300 for the presenter explanation position in a cloud conference scene can execute the steps executed by the server in the synchronous screen projection method for the presenter explanation position in a cloud conference scene shown in fig. 3.
The embodiment of the present application also provides a computer storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes a server.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented as software functional units and sold or used as stand-alone products, they may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a memory and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware, the program being stored in a computer-readable memory, which may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present application have been described in detail above, with specific examples used to explain the principles and implementations of the application; the above description of the embodiments is intended only to facilitate understanding of the method and core concepts of the application. Meanwhile, those skilled in the art may make changes to the specific embodiments and scope of application in accordance with the ideas of the present application; in view of the above, the contents of this specification should not be construed as limiting the present application.

Claims (8)

1. A synchronous screen projection method for a presenter's explanation position in a cloud conference scene, characterized by being applied to a server, the method comprising:
acquiring audio information of a presenter in a cloud conference, wherein the cloud conference is a conference group created at a cloud end by a conference creator through a first local device, and the presenter is a conference participant who has obtained control authority over a conference desktop of the cloud conference through a second local device;
acquiring an explanation page corresponding to the audio information of the presenter, wherein the page content of the explanation page is content information contained in shared content, and the shared content is content information uploaded to a cloud space of the cloud conference by a conference participant of the cloud conference through a third local device;
comparing the audio information of the presenter with first content in the page content of the explanation page to obtain a first comparison result, and determining the presenter's explanation position according to the first comparison result; wherein the page content of one explanation page comprises at least one item of first content; when the page content of the explanation page comprises at least two items of first content, the comparing the audio information of the presenter with the first content in the page content of the explanation page comprises: determining whether the at least two items of first content include first content provided with an explained mark, wherein first content provided with an explained mark is first content corresponding to a historical explanation position of the presenter; and if so, comparing the audio information of the presenter with the first content not provided with an explained mark;
updating the page content of the explanation page according to the presenter's explanation position; and
sending the updated explanation page to the local devices participating in the cloud conference.
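The filtering step in claim 1 — skipping first content that already carries an explained mark — can be illustrated with a minimal sketch. The `FirstContent` type and its field names are hypothetical, invented here for illustration; the patent does not specify a data model:

```python
from dataclasses import dataclass

@dataclass
class FirstContent:
    """One item of 'first content' on an explanation page (hypothetical model)."""
    text: str
    explained: bool = False  # the claim's "explained mark" (historical explanation position)

def candidates_for_comparison(page: list[FirstContent]) -> list[FirstContent]:
    # Per claim 1: when a page holds at least two items of first content and
    # some already carry an explained mark, the presenter's audio is compared
    # only against the items NOT yet marked as explained.
    if len(page) >= 2 and any(item.explained for item in page):
        return [item for item in page if not item.explained]
    # Otherwise every item on the page remains a candidate.
    return list(page)
```

This simply narrows the search space so the presenter's speech is never re-matched against content whose explanation has already been tracked.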
2. The method of claim 1, wherein determining the presenter's explanation position according to the first comparison result comprises:
determining first content corresponding to the presenter's explanation position according to the first comparison result;
judging whether the number of items of third content is greater than a preset number, wherein the third content is content corresponding to the first content, and the first content comprises at least one item of third content;
if so, comparing the audio information of the presenter with second content to obtain a second comparison result, wherein the second content is content corresponding to the first content, and the first content comprises at least one item of second content; and
determining the presenter's explanation position according to the second comparison result.
3. The method of claim 1, wherein comparing the audio information of the presenter with the first content in the page content of the explanation page to obtain the first comparison result comprises:
parsing the audio information of the presenter to obtain text data;
if the first content is text information, matching the text data with the text information to obtain a matching similarity;
if the first content is image information, matching the text data with a preset feature tag of the image information to obtain a matching similarity; and
comparing the matching similarities of all items of first content to obtain the first content with the highest matching similarity, wherein the first content with the highest matching similarity corresponds to the presenter's explanation position.
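Claim 3's matching flow — transcript against text blocks directly, or against preset feature tags for images, taking the highest similarity — can be sketched as follows. The claim does not name a similarity metric, so `difflib.SequenceMatcher` from the Python standard library is used here purely as a stand-in; the dict keys are likewise illustrative assumptions:

```python
import difflib

def locate_explanation_position(speech_text: str, contents: list[dict]) -> int:
    """Return the index of the first-content item the presenter is most likely
    explaining. Each item is a dict with 'kind' ('text' or 'image'), plus
    'text' for text items or 'tags' (preset feature tags) for image items."""
    def matching_similarity(item: dict) -> float:
        if item["kind"] == "text":
            target = item["text"]             # text content: match transcript vs. the text itself
        else:
            target = " ".join(item["tags"])   # image content: match vs. its preset feature tags
        return difflib.SequenceMatcher(None, speech_text.lower(), target.lower()).ratio()

    similarities = [matching_similarity(item) for item in contents]
    # The item with the highest matching similarity marks the explanation position.
    return max(range(len(contents)), key=similarities.__getitem__)
```

In a real system the speech-to-text step and a more robust semantic similarity measure would replace the character-level ratio used here.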
4. The method of claim 1, wherein updating the page content of the explanation page according to the presenter's explanation position comprises:
setting an explanation position mark on the content information corresponding to the presenter's explanation position in the page content of the explanation page to obtain the updated explanation page.
5. A synchronous screen projection method for a presenter's explanation position in a cloud conference scene, characterized by being applied to a local device, the method comprising:
receiving audio information of a presenter from a server;
receiving an updated explanation page from the server, wherein the updated explanation page is an explanation page whose page content has been updated according to the presenter's explanation position, the presenter's explanation position being the position corresponding to the audio information of the presenter; the page content of one explanation page comprises at least one item of first content; when the page content of the explanation page comprises at least two items of first content and the at least two items include first content provided with an explained mark, the explanation position is the position, among the first content not provided with an explained mark, that corresponds to the audio information of the presenter, wherein first content provided with an explained mark is first content corresponding to a historical explanation position of the presenter;
displaying the updated explanation page; and
playing the audio information of the presenter.
6. A synchronous screen projection device for a presenter's explanation position in a cloud conference scene, characterized by being applied to a server, the device comprising:
a sending unit, configured to send an updated explanation page to the local devices participating in the cloud conference;
a receiving unit, configured to receive audio information of a presenter in the cloud conference and shared content, wherein the cloud conference is a conference group created at a cloud end by a conference creator through a first local device, the presenter is a conference participant who has obtained control authority over a conference desktop of the cloud conference through a second local device, and the shared content is content information uploaded to a cloud space of the cloud conference by a conference participant of the cloud conference through a third local device; and
a processing unit, configured to acquire an explanation page corresponding to the audio information of the presenter, wherein the page content of the explanation page is content information included in the shared content; the processing unit is further configured to compare the audio information of the presenter with first content in the page content of the explanation page to obtain a first comparison result, and to determine the presenter's explanation position according to the first comparison result, wherein the page content of one explanation page comprises at least one item of first content; when the page content of the explanation page comprises at least two items of first content, the comparing comprises: determining whether the at least two items of first content include first content provided with an explained mark, wherein first content provided with an explained mark is first content corresponding to a historical explanation position of the presenter, and if so, comparing the audio information of the presenter with the first content not provided with an explained mark; the processing unit is further configured to update the page content of the explanation page according to the presenter's explanation position.
7. A server, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any one of claims 1-5.
8. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the steps in the method of any one of claims 1-5.
CN202210945934.7A 2022-08-08 2022-08-08 Synchronous screen throwing method and related device for explanation positions of main speakers in cloud conference scene Active CN115396404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210945934.7A CN115396404B (en) 2022-08-08 2022-08-08 Synchronous screen throwing method and related device for explanation positions of main speakers in cloud conference scene


Publications (2)

Publication Number Publication Date
CN115396404A CN115396404A (en) 2022-11-25
CN115396404B true CN115396404B (en) 2023-09-05

Family

ID=84119044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210945934.7A Active CN115396404B (en) 2022-08-08 2022-08-08 Synchronous screen throwing method and related device for explanation positions of main speakers in cloud conference scene

Country Status (1)

Country Link
CN (1) CN115396404B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102209080A (en) * 2010-03-30 2011-10-05 刘盛举 Terminal system for synchronous teaching or conferences and control method thereof
CN107553505A (en) * 2017-10-13 2018-01-09 刘杜 Autonomous introduction system platform robot and explanation method
WO2018120821A1 (en) * 2016-12-26 2018-07-05 北京奇虎科技有限公司 Method and device for producing presentation
CN111933131A (en) * 2020-05-14 2020-11-13 联想(北京)有限公司 Voice recognition method and device
CN112988315A (en) * 2021-05-19 2021-06-18 全时云商务服务股份有限公司 Method, system and readable storage medium for personalized viewing of shared desktop
CN114679437A (en) * 2022-03-11 2022-06-28 阿里巴巴(中国)有限公司 Teleconference method, data interaction method, device, and computer storage medium
CN114827102A (en) * 2022-06-30 2022-07-29 深圳乐播科技有限公司 Information security control method based on cloud conference and related device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180205797A1 (en) * 2017-01-15 2018-07-19 Microsoft Technology Licensing, Llc Generating an activity sequence for a teleconference session
US10439835B2 (en) * 2017-08-09 2019-10-08 Adobe Inc. Synchronized accessibility for client devices in an online conference collaboration


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and application of a VC-based remote simultaneous interpretation system in international conferences of the Beijing Winter Olympics Organizing Committee; Hua Kan; Audio Engineering (Issue 06); full text *

Also Published As

Publication number Publication date
CN115396404A (en) 2022-11-25

Similar Documents

Publication Publication Date Title
US9712569B2 (en) Method and apparatus for timeline-synchronized note taking during a web conference
US20200106813A1 (en) Method and system for sharing annotated conferencing content among conference participants
CN108292301B (en) Contextual note taking
US10630734B2 (en) Multiplexed, multimodal conferencing
CN104951546B (en) Method and device for subscribing message in instant messaging software
US10613825B2 (en) Providing electronic text recommendations to a user based on what is discussed during a meeting
US20170280099A1 (en) Automatic expansion and derivative tagging
US20160329050A1 (en) Meeting assistant
CN109274999A (en) A kind of video playing control method, device, equipment and medium
US9525896B2 (en) Automatic summarizing of media content
US11824647B2 (en) Promotion of users in collaboration sessions
US9811594B2 (en) Automatic explanation of presented abbreviations
CN117356082A (en) Enhancing control of user interface formats for message threads based on device form factor or topic priority
KR20170126667A (en) Method for generating conference record automatically and apparatus thereof
US11558440B1 (en) Simulate live video presentation in a recorded video
CN115345126A (en) Contextual real-time content highlighting on shared screens
US20190146645A1 (en) Replaying event-based sessions between a user device and an agent device
CN115396404B (en) Synchronous screen throwing method and related device for explanation positions of main speakers in cloud conference scene
US20140222840A1 (en) Insertion of non-realtime content to complete interaction record
US20220383907A1 (en) Method for processing video, method for playing video, and electronic device
US11769504B2 (en) Virtual meeting content enhancement triggered by audio tracking
US11086592B1 (en) Distribution of audio recording for social networks
CN106992971B (en) Interactive terminal switching method and device and interactive recording and broadcasting system
US20190146807A1 (en) Establishing an event-based session between a user device and an agent device
CN115328381B (en) Page pushing method, device and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant