CN111405230A - Conference information processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111405230A
CN111405230A (application CN202010093489.7A)
Authority
CN
China
Prior art keywords
conference; text information; terminal; two-dimensional code; video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010093489.7A
Other languages
Chinese (zh)
Other versions
CN111405230B (en)
Inventor
钟文亮
徐力
袁占涛
王艳辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority claimed from application CN202010093489.7A
Publication of CN111405230A
Application granted
Publication of CN111405230B
Current legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75: Clustering; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783: Retrieval characterised by using metadata automatically derived from the content
    • G06F16/7844: Retrieval characterised by using original textual content or text extracted from visual content or transcript of audio data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/955: Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F16/9554: Retrieval from the web using information identifiers, by using bar codes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of the invention provide a conference information processing method, apparatus, electronic device, and storage medium. The method is applied to a terminal and comprises: acquiring a conference record comprising multiple segments of text information; extracting at least part of the text information from the conference record according to a preset condition; generating a two-dimensional code corresponding to the extracted text information; and displaying the two-dimensional code so that, when the code is scanned, the corresponding text information is pushed to the user. Because the conference record is filtered against the preset condition, the extracted text information is exactly the information that satisfies that condition; a user can therefore obtain it simply by scanning the two-dimensional code, which improves the efficiency of obtaining the text information.

Description

Conference information processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a conference information processing method and apparatus, an electronic device, and a storage medium.
Background
A video conference is a conference in which two or more individuals or groups in different locations exchange audio, video, and file data through transmission lines and multimedia equipment, enabling real-time, interactive remote communication. Participants or administrators generally can only watch the audio/video stream and hear the speaker's voice; they cannot obtain the speaker's content as text and can only preserve it by manual note-taking.
In the related art, a video conference is preserved by recording it, but restoring the conference from a recording has drawbacks: the recorded file is large and takes a long time to download, and a user who only wants the main content must drag the progress bar of the recorded video to find the relevant part. The related art therefore suffers from low efficiency in learning the content of a video conference.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a conference information processing method, apparatus, electronic device, and storage medium that overcome, or at least partially solve, the above problems.
In order to solve the above problem, a first aspect of the embodiments of the present invention discloses a conference information processing method, where the method is applied to a terminal, and includes:
acquiring a conference record, wherein the conference record comprises a plurality of sections of text information;
extracting at least part of text information from the conference record according to a preset condition;
generating a two-dimensional code corresponding to the at least part of the text information;
and displaying the two-dimensional code so as to push text information corresponding to the two-dimensional code to a user when the two-dimensional code is scanned.
Optionally, after displaying the two-dimensional code, the method further includes:
receiving request signaling sent by a smart device communicatively connected to the terminal when the smart device scans the displayed two-dimensional code;
and responding to the request signaling by sending the text information corresponding to the scanned two-dimensional code to the smart device.
Optionally, obtaining a conference record comprises:
acquiring a conference record in a video conference currently participated in;
after the two-dimensional code is displayed, the method further comprises:
storing the two-dimensional code in a preset list to obtain a two-dimensional code list;
sending the two-dimensional code list to the participant terminals in the video conference, and sending the text information corresponding to each two-dimensional code in the list to a control terminal in the video conference, so that when a user scans a two-dimensional code in the list, the user obtains the corresponding text information sent by the control terminal.
Optionally, the terminal is a video networking terminal that is communicatively connected to a conference management system and to a conference scheduling device, the conference management system being communicatively connected to the conference scheduling device; obtaining a conference record then includes:
receiving, in the current video conference, a conference record pushed by the conference management system, wherein the conference record is generated by the conference scheduling device from multiple received segments of text information and sent to the conference management system, the segments being the information corresponding to the audio data collected by the terminal currently speaking in the video conference.
Optionally, before extracting at least part of the text information from the conference record according to a preset condition, the method further includes:
and converting the multiple sections of text information into subtitles, and synchronously displaying the subtitles and the currently played video conference picture.
Optionally, the step of extracting at least part of the text information from the conference record according to the preset condition is performed in one of the following manners:
acquiring the selected subtitle based on the selection operation of the user on the currently displayed subtitle, and extracting text information corresponding to the selected subtitle from the conference record;
extracting at least part of the text information comprising preset keywords from the conference record;
and determining a speaking terminal in the video conference, and extracting at least part of text information corresponding to the speaking terminal from the conference record.
Optionally, the method further comprises adding a label to the two-dimensional code in one of the following ways:
determining a participant terminal corresponding to the at least part of text information in a plurality of terminals participating in the video conference, and taking the name of the participant terminal as a label of the two-dimensional code;
taking the speaking time period corresponding to at least part of the text information as a label of the two-dimensional code;
and taking the keywords included in the at least part of text information as the labels of the two-dimensional codes.
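The three labelling ways above can be sketched, outside the claim language, as a single selector function. This is a hedged Python illustration only; the `code_info` field names are hypothetical and not terms defined by the patent:

```python
def label_for(code_info, mode):
    """Choose a label for a two-dimensional code by one of the three ways above."""
    if mode == "terminal":
        # way 1: name of the participant terminal that produced the text
        return code_info["speaker_terminal"]
    if mode == "period":
        # way 2: the speaking time period of the extracted text
        return f'{code_info["start"]}-{code_info["end"]}s'
    if mode == "keyword":
        # way 3: a keyword included in the extracted text
        return code_info["keyword"]
    raise ValueError(f"unknown labelling mode: {mode}")

info = {"speaker_terminal": "terminal-2013", "start": 30, "end": 95, "keyword": "budget"}
label_for(info, "period")  # → '30-95s'
```

Any one of the three modes yields a short human-readable tag that can be shown next to the displayed code.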
In a second aspect of the embodiments of the present invention, there is provided a conference information processing apparatus, where the apparatus is applied to a terminal, and the apparatus includes:
the recording acquisition module is used for acquiring a conference record, wherein the conference record comprises a plurality of sections of text information;
the information extraction module is used for extracting at least part of text information from the conference record according to preset conditions;
the two-dimensional code generating module is used for generating a two-dimensional code corresponding to the at least part of text information;
and the display module is used for displaying the two-dimensional code so as to push text information corresponding to the two-dimensional code to a user when the two-dimensional code is scanned.
Optionally, the apparatus further comprises:
the signaling receiving module is used for receiving request signaling sent by a smart device communicatively connected to the terminal when the smart device scans the displayed two-dimensional code;
and the information sending module is used for responding to the request signaling by sending the text information corresponding to the scanned two-dimensional code to the smart device.
Optionally, the record obtaining module is specifically configured to obtain a conference record in a currently participating video conference;
the device further comprises:
the list obtaining module is used for storing the two-dimensional codes in a preset list to obtain a two-dimensional code list;
the list sending module is used for sending the two-dimensional code list to the participant terminals in the video conference and sending the text information corresponding to each two-dimensional code in the list to the control terminal in the video conference, so that when a user scans a two-dimensional code in the list, the user obtains the corresponding text information sent by the control terminal.
Optionally, the terminal is a video networking terminal that is communicatively connected to a conference management system and to a conference scheduling device, the conference management system being communicatively connected to the conference scheduling device. The record acquisition module is specifically configured to receive, in the current video networking conference, a conference record pushed by the conference management system, the conference record being generated by the conference scheduling device from multiple received segments of text information and sent to the conference management system, where the segments are the information corresponding to the audio data collected by the terminal currently speaking in the video conference.
Optionally, the apparatus further comprises:
and the subtitle display module is used for converting the multiple sections of text information into subtitles and synchronously displaying the subtitles and the currently played video conference picture.
Optionally, the information extraction module includes at least one of the following units:
the first extraction unit is used for acquiring the selected subtitle based on the selection operation of the user on the currently displayed subtitle and extracting text information corresponding to the selected subtitle from the conference record;
the second extraction unit is used for extracting at least part of the text information comprising preset keywords from the conference record;
and a third extraction unit, configured to determine the speaking terminal in the video conference and extract at least part of the text information corresponding to the speaking terminal from the conference record.
Optionally, the apparatus further includes a label adding module, which may be configured to add a label to the two-dimensional code in one of the following ways:
determining a participant terminal corresponding to the at least part of text information in a plurality of terminals participating in the video conference, and taking the name of the participant terminal as a label of the two-dimensional code;
taking the speaking time period corresponding to at least part of the text information as a label of the two-dimensional code;
and taking the keywords included in the at least part of text information as the labels of the two-dimensional codes.
In a third aspect of the embodiments of the present invention, an electronic device is further disclosed, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the electronic device to perform the conference information processing method described in the embodiments of the invention.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is further disclosed; the computer program stored therein causes a processor to execute the conference information processing method according to the embodiments of the present invention.
The embodiment of the invention has the following advantages:
in the embodiments of the invention, the terminal extracts part of the text information from the acquired conference record according to a preset condition, generates a two-dimensional code corresponding to that text, and displays it, so that when the code is scanned the corresponding text information is pushed to the user. Because the conference record is filtered against the preset condition, the extracted text is exactly the text that satisfies the condition; a user can therefore obtain it simply by scanning the two-dimensional code, which improves the efficiency of obtaining the video conference content.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The following drawings show only some embodiments of the present application; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a diagram of an implementation environment of an embodiment of the present invention;
FIG. 2 is a diagram of yet another environment for implementing an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the steps of a conference information processing method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an example of a conference information processing method in the implementation environment shown in FIG. 2;
fig. 5 is a schematic structural diagram of a conference information processing apparatus according to an embodiment of the present invention;
FIG. 6 is a networking schematic of a video network of the present invention;
FIG. 7 is a diagram of a hardware architecture of a node server according to the present invention;
fig. 8 is a schematic diagram of a hardware architecture of an access switch of the present invention;
fig. 9 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
First, an implementation environment of the present invention is described:
referring to fig. 1, which shows an implementation environment diagram of a conference information processing method according to an embodiment of the present invention, as shown in fig. 1, a terminal 1011 may be located in the internet, and may participate in a video conference performed in the internet, where the video conference may be a video conference performed in the internet, the video conference includes a streaming server 102, the streaming server 102 may transmit audio, video, and file data collected by a plurality of terminals (3 terminals are shown in fig. 1), and the terminal 1011 may communicate with the streaming server 102 to obtain a conference record in the video conference from the streaming server 102, where the conference record may be a text corresponding to audio collected by a terminal 1012 or a terminal 1013, and is used to record content of a speaker in the video conference.
Referring to fig. 2, which shows a further implementation-environment diagram of the conference information processing method according to an embodiment of the present invention: a terminal 2011 may be located in a video network and act as a video networking terminal participating in a video conference of that network. A conference management system 202 and a conference scheduling device 203 are also deployed in the video network; the video networking terminal 2011 is communicatively connected to both, and the conference management system 202 is communicatively connected to the conference scheduling device 203.
The conference management system 202 may be configured to initiate a video conference in the video network and exchange audio, video, and related files among the participating video networking terminals. The conference scheduling device 203 may forward the text corresponding to the audio of those terminals to any terminal that has enabled subtitle display, so that during the conference the participating terminals can follow the current speaker's content through the displayed subtitles.
For example, the video network terminal 2013 may push text information corresponding to the collected audio of the speaker to the conference management system 202, the conference management system 202 pushes the text information to the conference scheduling device 203, the conference scheduling device 203 generates a conference record, and then sends the conference record to the conference management system 202 again, and the video network terminal 2011 may obtain the conference record corresponding to the video network terminal 2013 from the conference management system 202 in the video conference.
In this implementation environment, the conference can be convened through the video networking conference scheduling device. After the conference starts, the scheduling device itself joins the conference as a virtual terminal, so it can acquire the conference's video and audio streams. The conference scheduling device may be an Android-based tablet computer that connects to the web front end of the conference management system for data transmission.
A conference information processing method according to the present application will be described in detail with reference to the implementation environments shown in fig. 1 and fig. 2.
Referring to fig. 3, a flowchart illustrating steps of a conference information processing method according to an embodiment of the present invention is shown, where the method may be applied to a terminal, and specifically may include the following steps:
step S301: and acquiring a conference record, wherein the conference record comprises a plurality of sections of text information.
In particular, in the implementation environment shown in fig. 1, the terminal 1011 may obtain the conference record of a video conference from the streaming media server 102. In the implementation environment shown in fig. 2, terminal 2011 may receive a conference record pushed by the conference management system 202, which may be generated by the conference scheduling device 203 and pushed to the conference management system 202.
In this embodiment, the conference record of an ongoing video conference can be obtained while the conference is in progress, or the record of the video conference can be obtained after it has ended.
Specifically, when the conference record of an ongoing video conference is obtained in the implementation environment shown in fig. 2, the record may be generated by the conference scheduling device 203 from multiple received segments of text information and sent to the conference management system 202, where the segments correspond to the audio data collected by the terminal currently speaking in the video conference. That is, the currently speaking terminal may recognize its own collected audio data as multiple segments of text information and send them to the conference scheduling device 203 through the conference management system 202.
In this embodiment, the conference record may include multiple pieces of text information. In particular, the plurality of pieces of text information may correspond to the audio of a speaker currently speaking in the video conference when the conference recording is a conference recording in an ongoing video conference. For example, if the speaker makes a 2-minute utterance, the plurality of pieces of text information are information of the 2-minute utterance. When the conference record is a conference record after the video conference is ended, the plurality of pieces of text information may be information corresponding to the audios of all speakers speaking in the video conference. For example, if 10 speakers speak in a video conference, the plurality of pieces of text information may be the speech information of the 10 speakers.
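The record described above — multiple segments of text, each tied to a speaker and an utterance time — can be modelled with a minimal sketch. This is an illustrative assumption about the data shape; the patent does not prescribe class or field names such as `TextSegment` or `speaker_terminal`:

```python
from dataclasses import dataclass, field

@dataclass
class TextSegment:
    speaker_terminal: str  # terminal whose collected audio was recognised as text
    start_time: float      # utterance start, in seconds from conference start
    end_time: float
    text: str

@dataclass
class ConferenceRecord:
    conference_id: str
    segments: list = field(default_factory=list)  # ordered TextSegment entries

# A 2-minute utterance would simply contribute several TextSegment entries;
# a finished conference would hold the segments of every speaker.
record = ConferenceRecord("meeting-001")
record.segments.append(TextSegment("terminal-2013", 0.0, 12.5, "Opening remarks."))
```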
Step S302: and extracting at least part of text information from the conference record according to a preset condition.
In this embodiment, the preset condition may be a condition set in advance by the user, for example a time period or a sensitive word. Extraction then means: extracting at least part of the text information from the conference record according to a certain rule, so that the extracted text is the information that satisfies the preset condition.
For example, if the preset condition is a sensitive word, at least part of the extracted text information is information including the sensitive word. For another example, when the preset condition is a time period, at least part of the extracted text information is information in the time period, that is, speech content in a video conference in a certain time period is extracted.
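The two example conditions — a sensitive word and a time period — amount to simple filters over the record's segments. A hedged sketch, assuming each segment is a dict with hypothetical `time` and `text` keys:

```python
def extract_by_keyword(segments, keyword):
    """Keep the segments whose text contains the preset sensitive word."""
    return [s for s in segments if keyword in s["text"]]

def extract_by_period(segments, start, end):
    """Keep the segments uttered inside the preset time period [start, end]."""
    return [s for s in segments if start <= s["time"] <= end]

segments = [
    {"time": 30, "text": "budget review for the third quarter"},
    {"time": 95, "text": "action items and deadlines"},
]
extract_by_keyword(segments, "budget")  # matches the first segment only
extract_by_period(segments, 60, 120)   # matches the second segment only
```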
Step S303: and generating a two-dimensional code corresponding to at least part of the text information.
In this embodiment, a corresponding two-dimensional code may be generated for each piece of the extracted text information; the extracted text is saved, and a correspondence between the saved text and the two-dimensional codes is established.
In a specific implementation, when the conference record belongs to an ongoing video conference, the terminal may, each time a record is received, extract part of the text information according to the preset condition and generate a corresponding two-dimensional code. Multiple two-dimensional codes can thus accumulate as the conference proceeds; each newly generated code is added to a list, so that the generated codes are stored in a two-dimensional code list.
In another embodiment, the conference record is the record of a finished video conference. In this case several preset conditions may be set; for each of them the terminal extracts the corresponding text information from the conference record and generates a corresponding two-dimensional code, so that the text behind different codes is selected by different conditions. For example, if the preset conditions are a sensitive word and a time period, one two-dimensional code is generated from the text extracted by the sensitive word and another from the text extracted by the time period. The conference record is thereby extracted and stored by category, and a user can consult it according to need.
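Step S303's "generate a code and keep the correspondence" can be sketched as below. The `conf-record://` payload scheme and the `QRCodeStore` name are invented for illustration; turning the payload string into an actual two-dimensional-code image is assumed to be done by a separate QR library and is not shown:

```python
import uuid

class QRCodeStore:
    """Keeps the correspondence between saved text and two-dimensional codes."""

    def __init__(self):
        self._texts = {}

    def generate(self, extracted_text, condition):
        # One payload per extracted piece of text; the payload string is what
        # a QR image encoder would actually render as a two-dimensional code.
        payload = f"conf-record://{uuid.uuid4().hex}"
        self._texts[payload] = {"text": extracted_text, "condition": condition}
        return payload

    def lookup(self, payload):
        entry = self._texts.get(payload)
        return entry["text"] if entry else None

store = QRCodeStore()
code = store.generate("Speaker A: budget approved.", condition="keyword:budget")
```

Scanning a code then reduces to looking the payload up in the store.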
Step S304: and displaying the two-dimensional code so as to push text information corresponding to the two-dimensional code to a user when the two-dimensional code is scanned.
In this embodiment, the terminal may display the generated two-dimensional code on its display screen. When multiple two-dimensional codes have been generated, all of them can be displayed on the screen.
In a specific implementation, in an application scenario a, a user may scan the two-dimensional code through a terminal displaying the two-dimensional code, for example, pressing a right mouse button for a long time, and scanning the two-dimensional code by using a jumped-out two-dimensional code scanning toolbar, so as to display text information corresponding to the two-dimensional code on the terminal.
In another application scenario B, as shown in fig. 1 and fig. 2, the user may scan the two-dimensional code with a third-party smart device so that the corresponding text information is displayed on that device; in this way the conference record can be obtained by multiple users.
In the embodiment of the invention, because the conference record is filtered against the preset condition, the extracted text information is the information that satisfies that condition. Whether the video conference is still in progress or already finished, part of the conference record can be obtained by scanning the generated two-dimensional code; and because the displayed text is an extract of the record, the time the user spends obtaining it is shortened. Since the preset conditions can be set in advance according to need, the user can quickly retrieve the content of interest from the conference record, improving the efficiency with which the record is obtained.
One embodiment discloses the process by which a user acquires the text information by scanning the two-dimensional code in application scenario B. Specifically, after displaying the two-dimensional code, the terminal may perform the following steps:
step S305: and receiving a request signaling sent by intelligent equipment in communication connection with the terminal when the intelligent equipment scans the two-dimensional code in the display.
In this embodiment, the smart device may be, but is not limited to, a device with scanning capability such as a mobile phone or a tablet computer. The smart device can be communicatively connected to the terminal so that data can be transmitted between them.
In a specific implementation, the user may scan the displayed two-dimensional code with the smart device, for example by opening a scanning applet on a mobile phone. When the smart device scans the code, it generates request signaling, which the terminal receives.
Step S306: and responding to the request signaling, and sending text information corresponding to the scanned two-dimensional code to the intelligent equipment.
In this embodiment, the request signaling is used to request the text information corresponding to the two-dimensional code from the terminal. When the terminal receives the request signaling, the text information corresponding to the scanned two-dimensional code can be extracted, and the text information is sent to the intelligent device to be checked by the user.
By adopting the method, more users can acquire the conference record in a two-dimensional code scanning mode, and the application range is expanded.
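To make the request-and-response exchange of steps S305 and S306 concrete, the following minimal Python sketch models the terminal side: a mapping from two-dimensional code identifiers to extracted text information, queried when a scanning smart device sends a request signaling. The class and field names (`Terminal`, `RequestSignaling`, `code_id`) are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of steps S305/S306: the terminal keeps a mapping from
# two-dimensional code identifiers to extracted text information, and answers
# a request signaling from a scanning smart device with a lookup.

from dataclasses import dataclass


@dataclass
class RequestSignaling:
    """Signaling generated by the smart device when it scans a code."""
    code_id: str


class Terminal:
    def __init__(self):
        self._code_to_text = {}  # two-dimensional code id -> text information

    def register_code(self, code_id, text):
        self._code_to_text[code_id] = text

    def on_request_signaling(self, signaling):
        # Step S306: respond with the text information for the scanned code,
        # or None if the code is unknown to this terminal.
        return self._code_to_text.get(signaling.code_id)


terminal = Terminal()
terminal.register_code("qr-001", "Water level rose 2m at 14:00.")
reply = terminal.on_request_signaling(RequestSignaling("qr-001"))
```

In practice the lookup key would be carried in the two-dimensional code payload itself; the sketch only shows the terminal-side control flow.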
In one embodiment, the acquired conference record may be a real-time conference record in an ongoing video conference, and then in step S301, a conference record in a currently participating video conference is acquired. Accordingly, after step S304, the following steps may also be performed:
step S305': and storing the two-dimension code to a preset list to obtain a two-dimension code list.
In this embodiment, the conference record may be a record of an ongoing video conference. Each time a conference record is received, the terminal may extract part of the text information from it according to the preset condition and generate a corresponding two-dimensional code, so that a plurality of two-dimensional codes may be obtained while the video conference is in progress. Each time a two-dimensional code is generated, the terminal may add it to the preset list, so that the generated two-dimensional codes are stored in the two-dimensional code list.
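As a rough illustration of step S305', the sketch below appends an entry to a preset list each time a text segment is turned into a code. The "code" is represented only by a derived identifier string; rendering an actual two-dimensional code image (e.g. with a QR library) is outside this sketch, and all names are assumptions.

```python
# Illustrative sketch of step S305': each time a two-dimensional code is
# generated for an extracted text segment, it is appended to a preset list.

import hashlib


class CodeList:
    def __init__(self):
        self.codes = []  # the two-dimensional code list kept by the terminal

    def add_code_for(self, text):
        # Derive a stable identifier for the code from its text information;
        # a real terminal would encode this id (or the text) into a QR image.
        code_id = hashlib.sha256(text.encode("utf-8")).hexdigest()[:8]
        self.codes.append({"id": code_id, "text": text})
        return code_id


code_list = CodeList()
code_list.add_code_for("Speaker A: water level rising")
code_list.add_code_for("Speaker B: two persons missing")
```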
Step S306': sending the two-dimensional code list to the conference terminals participating in the video conference, and sending the text information corresponding to each two-dimensional code in the two-dimensional code list to the control terminal in the video conference, so that when a user scans a two-dimensional code in the two-dimensional code list, the user acquires the text information, sent by the control terminal, that corresponds to the scanned two-dimensional code.
In this embodiment, the terminal may send the two-dimensional code list in real time to the other participating terminals in the video conference, and send the text information corresponding to each two-dimensional code to the control terminal in the video conference. Specifically, in the implementation environment shown in fig. 1, the control terminal may be a streaming media server; in the implementation environment shown in fig. 2, the control terminal may be a conference management system.
Specifically, each time a two-dimensional code is added to the preset list, the resulting two-dimensional code list is sent to the participating terminals, and the text information corresponding to each two-dimensional code is sent to the control terminal, which may store the text information corresponding to each two-dimensional code. The conference terminals participating in the video conference may be distributed in different places, and each conference terminal may be in communication connection with the control terminal.
Therefore, the participating terminal can also display the two-dimension codes in the two-dimension code list, when users in different places scan the two-dimension codes displayed on the participating terminal through the smart phone, text information corresponding to the scanned two-dimension codes can be requested to the control terminal, and the control terminal can send the text information corresponding to the scanned two-dimension codes to the smart phone, so that the users can know the conference content.
For example, as shown in fig. 1, the terminal 1011 sends the two-dimensional code list to the participating terminal 1012 and the participating terminal 1013, and sends text information corresponding to the two-dimensional code to the streaming media server 102, and the user scans the two-dimensional code displayed on the participating terminal 1012 through the smart phone, so that the smart phone sends a request to the streaming media server 102, and the streaming media server 102 can send the text information corresponding to the two-dimensional code to the smart phone used by the user.
For another example, as shown in fig. 2, the video network terminal 2021 sends the two-dimensional code list to the conference terminal 2012 and sends the text information corresponding to the two-dimensional codes to the conference management system 202. When a user scans a two-dimensional code displayed on the conference terminal 2012 through a smart phone, the smart phone sends a request to the conference management system 202, and the conference management system 202 can send the text information corresponding to the scanned two-dimensional code to the smart phone used by the user.
In one embodiment, the conference recording may be a conference recording in an ongoing video conference, and before extracting at least part of the text information, the terminal may further perform the following steps:
step S302': and converting the multiple sections of text information into subtitles, and synchronously displaying the subtitles and the currently played video conference picture.
In this embodiment, the terminal may convert the text information into subtitles according to a preset subtitle conversion standard. Because the text information is the speech content of the current speaker in the video conference, the subtitles can be synchronized in time with the current video picture and displayed synchronously with the currently played video picture.
Specifically, the video frame is a video frame acquired by each participant terminal in the current video conference on the conference site, so that when the video conference is performed, each participant terminal can hear the voice of a speaker and see the content spoken by the speaker on the video frame.
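A minimal sketch (our assumption, not the patent's stated algorithm) of the synchronization in step S302': each text segment carries the time span of its audio, and the subtitle shown is the segment whose span covers the current playback position.

```python
# Illustrative subtitle synchronization: pick the text segment whose time
# span covers the current playback position of the video conference picture.

def subtitle_at(segments, now):
    """segments: list of (start, end, text) in seconds; now: playback time."""
    for start, end, text in segments:
        if start <= now < end:
            return text
    return None  # no speech at this moment, show no subtitle


segments = [(0.0, 4.0, "Good morning, everyone."),
            (4.0, 9.0, "The water level is still rising.")]
```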
In combination with this embodiment, in a specific implementation, at least part of the text information may be extracted from the conference record in one of the following three ways:
mode 01: and acquiring the selected subtitle based on the selection operation of the user on the currently displayed subtitle, and extracting text information corresponding to the selected subtitle from the conference record.
In this embodiment, the preset condition may be a selection operation performed by the user on the displayed subtitles. The displayed subtitles may be selectable, that is, they may be set so that they can be selected with a mouse or the like; the user can thus select all or part of the currently displayed subtitles, and the terminal can acquire the subtitles selected by the user.
When the method is adopted, the text information can be extracted according to the selection of the user, so that the user can store the concerned conference content in a two-dimensional code mode in the process of carrying out the video conference.
Mode 02: extracting at least part of text information comprising preset keywords from the meeting records.
In this way, in the process of carrying out the video conference, at least part of text information including preset keywords can be extracted from the currently received conference record; or at the end of the video conference, at least part of text information including preset keywords is extracted from the conference record of the video conference.
There may be multiple keywords; for example, in an emergency command video conference for a torrential flood, the keywords may be set to "water level", "loss", "missing", and so on. Thus, at the end of the video conference, the parts of the text information including "water level", the parts including "loss", and the parts including "missing" can each be extracted from the conference record, realizing classification and arrangement of the conference record by keyword and improving the efficiency of acquiring the conference content of the video conference.
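A hedged sketch of mode 02: group the conference record's text segments by preset keywords, so that each keyword collects every segment mentioning it (one segment may fall under several keywords). The case-insensitive substring match is our simplification.

```python
# Illustrative keyword classification for mode 02: each preset keyword
# collects the record segments that contain it (case-insensitively).

def classify_by_keywords(record, keywords):
    grouped = {kw: [] for kw in keywords}
    for segment in record:
        lowered = segment.lower()
        for kw in keywords:
            if kw in lowered:
                grouped[kw].append(segment)
    return grouped


record = ["Water level reached 5m", "Three persons missing",
          "Losses estimated later", "Water level and losses both rising"]
groups = classify_by_keywords(record, ["water level", "loss", "missing"])
```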
Mode 03: and determining a speaking terminal in the video conference, and extracting at least part of text information corresponding to the speaking terminal from the conference record.
In this way, the preset condition may be a speaking terminal, each piece of text information may carry a terminal identifier of the speaking terminal, and when receiving a conference record, the terminal may obtain the speaking terminal speaking in the video conference from the streaming media server (or from the conference control system), and further extract at least part of text information corresponding to the speaking terminal from the conference record according to the terminal identifier of each piece of text information.
When the video conference is finished, at least part of text information corresponding to the plurality of speaking terminals can be respectively extracted from the conference records, and then the conference records are sorted according to speakers, so that the efficiency of acquiring the conference content of the video conference is improved.
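For mode 03, the following sketch assumes each text segment carries the terminal identifier of its speaking terminal, and filters the record by that identifier; the field names are our assumptions, not the patent's data format.

```python
# Illustrative extraction for mode 03: keep the text segments whose carried
# terminal identifier matches the speaking terminal of interest.

def extract_by_terminal(record, terminal_id):
    return [seg["text"] for seg in record if seg["terminal_id"] == terminal_id]


record = [
    {"terminal_id": "T1", "text": "Report from site A"},
    {"terminal_id": "T2", "text": "Report from site B"},
    {"terminal_id": "T1", "text": "Site A update"},
]
```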
With reference to the foregoing embodiment, in an implementation manner, after the two-dimensional code is generated and before the two-dimensional code is displayed, a tag may be further added to the two-dimensional code, where the tag may be used to prompt the content of text information corresponding to the two-dimensional code, or the generation time of the text information, or prompt a terminal from which the text information comes. Specifically, the two-dimensional code may be tagged in one of the following three ways:
the method a: and determining a participant terminal corresponding to the at least part of text information in a plurality of terminals participating in the video conference, and taking the name of the participant terminal as the label of the two-dimensional code.
In this embodiment, at least part of the text information is extracted according to the preset condition, and the participant terminal corresponding to it can then be determined. The name of the participant terminal may be the name of the speaker who speaks using that terminal; that is, the speaker of the at least part of the text information is determined, and the name of the participant terminal can then be used as the tag of the two-dimensional code.
Mode b: and taking the speaking time period corresponding to at least part of the text information as a label of the two-dimensional code.
In this embodiment, each piece of text information may carry a timestamp, and the timestamp may represent speaking time of an audio corresponding to the piece of text information in a video conference, so that a time difference between an earliest timestamp and a latest timestamp in at least part of text information may be determined, and a speaking time period corresponding to the time difference is used as a tag of a two-dimensional code.
Mode c: and taking the keywords included in the at least part of text information as the labels of the two-dimensional codes.
In this embodiment, the keywords may be determined from at least part of the text information, wherein a plurality of keywords may be preset, for example, water level, casualties, and missing, and the keywords included in at least part of the text information may be used as tags. For example, if at least part of the text information includes keywords casualties and missing, the casualties and the missing can be used as tags of the two-dimensional codes.
In practice, when the preset condition is a selection operation on a subtitle by the user, the two-dimensional code may be tagged in any one of the three ways. When the preset condition is a keyword or a speaking terminal, that is, when the text information is extracted according to a keyword or a speaking terminal, the tag may be consistent with the preset condition; in other words, the tag may be added to the two-dimensional code according to the preset condition. For example, if the preset condition is the keyword "water level", then when text information including "water level" is extracted from the conference record, the keyword "water level" can be used as the tag of the two-dimensional code.
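The three tagging modes a, b, and c can be sketched together as follows. The segment fields (`terminal_name`, `timestamp`, `text`) and the tag formats are illustrative assumptions; mode b uses the earliest-to-latest timestamp span described above.

```python
# Illustrative tag selection for modes a-c: terminal name, speaking time
# period (earliest to latest timestamp), or matched keywords.

def make_tag(segments, mode, keywords=()):
    if mode == "terminal":      # mode a: name of the participant terminal
        return segments[0]["terminal_name"]
    if mode == "period":        # mode b: earliest-to-latest timestamp span
        times = [s["timestamp"] for s in segments]
        return f"{min(times)}-{max(times)}"
    if mode == "keyword":       # mode c: keywords found in the text
        found = [kw for kw in keywords
                 if any(kw in s["text"] for s in segments)]
        return ",".join(found)
    raise ValueError(mode)


segments = [
    {"terminal_name": "HQ", "timestamp": 120, "text": "water level at 5m"},
    {"terminal_name": "HQ", "timestamp": 180, "text": "two missing"},
]
```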
In one embodiment, the terminal may further perform the following steps:
step S307: in a video conference, a speech recognition system is activated when audio data is collected.
In this embodiment, during the video conference the terminal may detect whether audio data is being collected; if so, it may activate its own voice recognition system, which may be configured in the terminal.
Step S308: and identifying the collected audio data by adopting the voice identification system to obtain text information corresponding to the audio data.
In this embodiment, the terminal may send the collected audio data to the voice recognition system, and the voice recognition system may recognize the audio data as text information.
Step S309: and sending the text information to the terminals participating in the video conference.
In a specific implementation, in the implementation environment shown in fig. 1, the terminal may send the text information to the streaming server, so that the streaming server sends the text information to the terminals participating in the video conference. In the implementation environment shown in fig. 2, the terminal may send the text information to the conference scheduling device via the conference management system, and the conference scheduling device may determine, among a plurality of terminals participating in the video conference, terminals that start subtitle display, and form a terminal list, and further send the terminal list and the text information to the conference management system, and the conference management system sends the text information to the terminals that start subtitle display included in the terminal list.
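In the fig. 2 environment, the scheduling device's screening step can be sketched as below: from the states of all participating terminals, keep only those with subtitle display enabled as the push list. The data shape is an assumption for illustration.

```python
# Illustrative screening of the push list: the conference scheduling device
# keeps only terminals that have subtitle display enabled.

def build_push_list(terminal_states):
    """terminal_states: dict terminal_id -> subtitle display on/off."""
    return [tid for tid, subtitles_on in terminal_states.items() if subtitles_on]


states = {"T1": True, "T2": False, "T3": True}
push_list = build_push_list(states)  # text is pushed only to this list
```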
Referring to fig. 4, a schematic diagram of an example of implementing the conference information processing method in the implementation environment shown in fig. 2 is shown. As shown in fig. 4, the video network includes a plurality of terminals, and the plurality of terminals constitute a video conference on the video network. The video conference is initiated by the conference management system and can be controlled by it. Audio spoken by a speaker in the video conference can be recognized as text, and the text is pushed by the conference scheduling device, for display, to those terminals among the plurality of terminals that have subtitle display enabled.
The conference management system comprises a web end and a scheduling end, and the conference scheduling equipment is in communication connection with the web end of the conference management system.
Taking one terminal 1 and one terminal 2 in fig. 4 as an example, the present example will be explained:
First, after the conference is successfully convened by the conference scheduling device through the conference management system, the voice recognition system of terminal 1 is in a standby state. When a speaker speaks, terminal 1 activates its voice recognition system, recognizes the voice in real time, and pushes the recognized text information to the conference management system. The text information is sent to the conference scheduling device through the web end of the conference management system. The conference scheduling device judges the state of each terminal participating in the conference, screens out a list of terminals that have subtitle display enabled, generates a conference record based on the text information, and sends the screened terminal list, the conference record, and the id of the video conference to the web end of the conference management system. The conference management system then pushes the text information through the scheduling end to the terminals in the terminal list, for example terminal 2, for display.
As shown in fig. 4, when the conference management system pushes the conference record to the terminals with subtitle display enabled, it may also verify whether each terminal in the terminal list is legal, and then push the information only to the terminals verified as legal. Specifically, a terminal in the terminal list is verified as legal if it was registered in the network management system (OMC) of the video network before use and the video conference was reserved for it in advance; otherwise, it is verified as illegal.
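The legality check just described reduces to a conjunction of two set memberships; the sketch below states it directly, with set names as illustrative assumptions.

```python
# Illustrative legality verification: a terminal is legal only if it is both
# registered with the OMC network management system and reserved in advance
# for this video conference.

def is_legal(terminal_id, omc_registered, reserved):
    return terminal_id in omc_registered and terminal_id in reserved


omc_registered = {"T1", "T2", "T3"}  # registered before use
reserved = {"T2", "T3", "T4"}        # reserved the video conference
```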
Then, the terminal 2 converts the text information into a subtitle and displays the subtitle in synchronization with the video picture of the video conference.
Then, the terminal 2 detects the selection operation of the user on the displayed subtitle, and obtains the subtitle selected by the user.
Next, the terminal 2 extracts text information corresponding to the subtitle selected by the user from the conference record, and generates a two-dimensional code corresponding to the text information.
After that, the terminal 2 may add the name of the terminal 1 carried by the text information as a tag to the two-dimensional code.
Next, the terminal 2 adds the two-dimensional code added with the tag into the list to obtain a two-dimensional code list, and at this time, the two-dimensional code list may include a plurality of historically generated two-dimensional codes.
Then, the terminal 2 displays each two-dimensional code in the two-dimensional code list, and at this time, the user can scan the two-dimensional codes in the two-dimensional code list through the mobile phone.
And then, when receiving a request signaling sent by the mobile phone after scanning the two-dimensional code, the terminal 2 sends the text information corresponding to the scanned two-dimensional code to the mobile phone so as to be conveniently checked by the user.
It should be noted that: in practice, the conference records may be processed according to the method shown in each of the above embodiments according to specific requirements, for example, when the video conference is ended, the conference records are classified according to keywords, names of speaking terminals, or the like, and corresponding two-dimensional codes are generated.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 5, a conference information processing apparatus according to an embodiment of the present invention is shown, where the apparatus may be applied to a terminal, and specifically includes the following modules:
a record obtaining module 501, configured to obtain a meeting record, where the meeting record includes multiple pieces of text information;
an information extraction module 502, configured to extract at least part of text information from the meeting record according to a preset condition;
a two-dimensional code generating module 503, configured to generate a two-dimensional code corresponding to the at least part of text information;
the display module 504 is configured to display the two-dimensional code, so that when the two-dimensional code is scanned, text information corresponding to the two-dimensional code is pushed to a user.
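As a rough illustration only (not the patent's implementation), the four modules above can be sketched as one small pipeline, with a keyword standing in for the preset condition and a payload string standing in for a rendered two-dimensional code; all names are assumptions.

```python
# Illustrative pipeline mirroring modules 501-504: obtain the record,
# extract by a preset keyword, generate a code per extracted segment, and
# record what is displayed for later scan lookups.

class ConferenceInfoProcessor:
    def __init__(self, keyword):
        self.keyword = keyword      # preset condition: a single keyword
        self.displayed = {}         # code payload -> text information

    def obtain_record(self):        # record obtaining module 501
        return ["water level at 5m", "lunch break at noon"]

    def extract(self, record):      # information extraction module 502
        return [t for t in record if self.keyword in t]

    def generate_code(self, text):  # two-dimensional code module 503
        return "qr:" + text         # placeholder for a rendered QR image

    def run(self):                  # display module 504
        for text in self.extract(self.obtain_record()):
            self.displayed[self.generate_code(text)] = text
        return self.displayed


result = ConferenceInfoProcessor("water level").run()
```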
Optionally, the apparatus may further specifically include the following modules:
the signaling receiving module is used for receiving a request signaling sent by intelligent equipment in communication connection with the terminal when the intelligent equipment scans the two-dimensional code in the display;
and the information sending module is used for responding to the request signaling and sending the text information corresponding to the scanned two-dimensional code to the intelligent equipment.
Optionally, the record obtaining module 501 may be specifically configured to obtain a conference record in a currently participating video conference;
accordingly, the apparatus may further include the following modules:
the list obtaining module is used for storing the two-dimension codes to a preset list to obtain a two-dimension code list;
the list sending module is used for sending the two-dimension code list to the conference participating terminals participating in the video conference and sending text information corresponding to each two-dimension code in the two-dimension code list to the control terminal in the video conference; and when scanning the two-dimensional codes in the two-dimensional code list, the user acquires text information which is sent by the control terminal and corresponds to the scanned two-dimensional codes.
Optionally, the terminal may be a video networking terminal that is in communication connection with a conference management system and a conference scheduling device, respectively, the conference management system also being in communication connection with the conference scheduling device. The record obtaining module is specifically configured to receive a conference record pushed by the conference management system in a currently ongoing video networking conference; the conference record is generated by the conference scheduling device according to multiple pieces of received text information and sent to the conference management system, where the multiple pieces of text information correspond to audio data collected by the currently speaking terminal in the video networking video conference.
Optionally, the apparatus may further specifically include the following modules:
and the subtitle display module is used for converting the multiple sections of text information into subtitles and synchronously displaying the subtitles and the currently played video conference picture.
Optionally, the information extraction module includes at least one of the following units:
the first extraction unit is used for acquiring the selected subtitle based on the selection operation of the user on the currently displayed subtitle and extracting text information corresponding to the selected subtitle from the conference record;
the second extraction unit is used for extracting at least part of text information comprising preset keywords from the conference records;
and a third extraction unit, configured to determine a speaking terminal in the video conference and extract at least part of the text information corresponding to the speaking terminal from the conference record.
Optionally, the apparatus may further include a tag adding module, and the tag adding module may be configured to add a tag to the two-dimensional code in one of the following manners:
determining a participant terminal corresponding to the at least part of text information in a plurality of terminals participating in the video conference, and taking the name of the participant terminal as a label of the two-dimensional code;
taking the speaking time period corresponding to at least part of the text information as a label of the two-dimensional code;
and taking the keywords included in the at least part of text information as the labels of the two-dimensional codes.
for the embodiment of the conference information processing device, since it is basically similar to the embodiment of the conference information processing method, the description is relatively simple, and relevant points can be referred to the partial description of the embodiment of the conference information processing method.
An embodiment of the present invention further provides an electronic device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the electronic device to perform one or more conference information processing methods as described in the embodiments of the invention.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program that causes a processor to execute the conference information processing method according to the embodiments of the invention.
To facilitate an understanding of the implementation environment shown in fig. 2 of this embodiment, the video network shown in fig. 2 is described in detail as follows:
The video network is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing numerous internet applications toward high-definition video and realizing high-definition face-to-face communication.
The video network adopts real-time high-definition video switching technology and can integrate dozens of required services (such as video, voice, pictures, text, communication, and data) on a single network platform and system platform, for example high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, delayed television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control, and information distribution, realizing high-definition-quality video broadcast through a television or a computer.
To better understand the embodiments of the present invention, the video network is described below:
some of the technologies applied in the video networking are as follows:
network technology (network technology)
Network technology innovation in video networking improves the traditional Ethernet to cope with the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, video networking technology adopts Packet Switching in a way that satisfies streaming requirements. Video networking technology has the flexibility, simplicity, and low cost of packet switching while also providing the quality and security guarantees of circuit switching, realizing seamless connection of whole-network switched virtual circuits and data formats.
Switching Technology (Switching Technology)
The video network adopts two advantages of asynchronism and packet switching of the Ethernet, eliminates the defects of the Ethernet on the premise of full compatibility, has end-to-end seamless connection of the whole network, is directly communicated with a user terminal, and directly bears an IP data packet. The user data does not require any format conversion across the entire network. The video networking is a higher-level form of the Ethernet, is a real-time exchange platform, can realize the real-time transmission of the whole-network large-scale high-definition video which cannot be realized by the existing Internet, and pushes a plurality of network video applications to high-definition and unification.
Server Technology (Server Technology)
The server technology on the video networking and unified video platform is different from the traditional server, the streaming media transmission of the video networking and unified video platform is established on the basis of connection orientation, the data processing capacity of the video networking and unified video platform is independent of flow and communication time, and a single network layer can contain signaling and data transmission. For voice and video services, the complexity of video networking and unified video platform streaming media processing is much simpler than that of data processing, and the efficiency is greatly improved by more than one hundred times compared with that of a traditional server.
Storage Technology (Storage Technology)
The super-high speed storage technology of the unified video platform adopts the most advanced real-time operating system in order to adapt to the media content with super-large capacity and super-large flow, the program information in the server instruction is mapped to the specific hard disk space, the media content is not passed through the server any more, and is directly sent to the user terminal instantly, and the general waiting time of the user is less than 0.2 second. The optimized sector distribution greatly reduces the mechanical motion of the magnetic head track seeking of the hard disk, the resource consumption only accounts for 20% of that of the IP internet of the same grade, but concurrent flow which is 3 times larger than that of the traditional hard disk array is generated, and the comprehensive efficiency is improved by more than 10 times.
Network Security Technology (Network Security Technology)
The structural design of the video network completely eliminates the network security problem troubling the internet structurally by the modes of independent service permission control each time, complete isolation of equipment and user data and the like, generally does not need antivirus programs and firewalls, avoids the attack of hackers and viruses, and provides a structural carefree security network for users.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services and transmission; whether for a single user, a private-network user, or a network aggregate, only a single automatic connection is required. The user terminal, set-top box, or PC connects directly to the unified video platform to obtain a rich variety of multimedia video services. The unified video platform adopts a menu-style configuration table instead of traditional complex application programming, allowing complex applications to be realized with very little code and enabling unlimited new service innovation.
Networking of the video network is as follows:
The video network is a network structure with centralized control; the network can be a tree network, a star network, a ring network, or the like, and on this basis the whole network is controlled by a centralized control node in the network.
As shown in fig. 6, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 Devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: server, switch (including Ethernet protocol conversion gateway), and terminal (including various set-top boxes, coding boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 The devices of the access network part can be mainly classified into 3 types: node server, access switch (including Ethernet protocol conversion gateway), and terminal (including various set-top boxes, coding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 7, the node server mainly includes a network interface module 701, a switching engine module 702, a CPU module 703, and a disk array module 704;
the network interface module 701, the CPU module 703, and the disk array module 704 all feed into the switching engine module 702. The switching engine module 702 looks up the address table 705 for each incoming packet to obtain the packet's direction information, and stores the packet in the corresponding queue of the packet buffer 706 according to that information; if the queue is nearly full, the packet is discarded. The switching engine module 702 polls all packet buffer queues and forwards a packet if the following conditions are met: 1) the port send buffer is not full; 2) the queue's packet counter is greater than zero. The disk array module 704 mainly implements control over the hard disks, including initialization, read-write, and other operations. The CPU module 703 is mainly responsible for protocol processing with the access switch and terminals (not shown in the figure), for configuring the address table 705 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and for configuring the disk array module 704.
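The polling and discard behaviour described above can be modelled with a short sketch (illustrative Python only; identifiers such as `PacketQueue` are invented for this sketch and do not appear in the patent):

```python
from collections import deque

class PacketQueue:
    """Illustrative model of one queue in the packet buffer (706)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = deque()

    def enqueue(self, packet):
        # "if the queue of the packet buffer is nearly full, discard"
        if len(self.packets) >= self.capacity:
            return False              # packet dropped
        self.packets.append(packet)
        return True

def poll_queues(queues, send_buffer_full):
    """Poll all packet-buffer queues; a queue forwards one packet only if
    1) the port send buffer is not full and 2) its packet counter > 0."""
    forwarded = []
    for q in queues:
        if not send_buffer_full and len(q.packets) > 0:
            forwarded.append(q.packets.popleft())
    return forwarded
```

Under this model a queue silently drops packets once it is full, and each polling pass forwards at most one packet per queue while the send buffer has room.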
The access switch:
as shown in fig. 8, the access switch mainly includes a network interface module (a downlink network interface module 801 and an uplink network interface module 802), a switching engine module 803, and a CPU module 804;
wherein, a packet (uplink data) coming from the downlink network interface module 801 enters the packet detection module 805. The packet detection module 805 checks whether the Destination Address (DA), Source Address (SA), packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 803, otherwise the packet is discarded. Packets (downlink data) from the uplink network interface module 802 enter the switching engine module 803, as do packets from the CPU module 804. The switching engine module 803 looks up the address table 806 for each incoming packet to obtain its direction information. If a packet entering the switching engine module 803 is going from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 807 in association with its stream-id; otherwise, it is stored in the queue of the corresponding packet buffer 807 according to its direction information. In either case, if the queue of the packet buffer 807 is nearly full, the packet is discarded.
The switching engine module 803 polls all packet buffer queues, which may include two cases:
if the queue is from the downlink network interface to the uplink network interface, the packet is forwarded when the following conditions are met: 1) the port send buffer is not full; 2) the queue's packet counter is greater than zero; 3) a token generated by the rate control module is obtained;
if the queue is not from the downlink network interface to the uplink network interface, the packet is forwarded when the following conditions are met: 1) the port send buffer is not full; 2) the queue's packet counter is greater than zero.
The rate control module 808 is configured by the CPU module 804, and generates tokens for packet buffer queues from all downlink network interfaces to uplink network interfaces at programmable intervals to control the rate of uplink forwarding.
The CPU module 804 is mainly responsible for protocol processing with the node server, configuration of the address table 806, and configuration of the code rate control module 808.
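The three forwarding conditions for downlink-to-uplink queues, together with the token generation performed by the rate control module 808, can be sketched as follows (a hedged model; the class and function names are illustrative, not from the patent):

```python
class RateController:
    """Illustrative model of the rate control module (808): the CPU module
    configures how many tokens are generated per programmable interval."""
    def __init__(self, tokens_per_interval):
        self.tokens_per_interval = tokens_per_interval
        self.tokens = 0

    def tick(self):
        # called once per programmable interval
        self.tokens += self.tokens_per_interval

    def take_token(self):
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

def can_forward_uplink(send_buffer_full, queue_len, rate_ctrl):
    """All three conditions must hold for a downlink-to-uplink queue:
    1) send buffer not full, 2) packet counter > 0, 3) a token is obtained."""
    return (not send_buffer_full) and queue_len > 0 and rate_ctrl.take_token()
```

Because tokens are only granted at the configured interval, the uplink forwarding rate is capped regardless of how quickly packets arrive, which is the stated purpose of the rate control module.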
Ethernet protocol gateway:
as shown in fig. 9, the Ethernet protocol conversion gateway mainly includes a network interface module (a downlink network interface module 901 and an uplink network interface module 902), a switching engine module 903, a CPU module 904, a packet detection module 905, a rate control module 908, an address table 906, a packet buffer 907, a MAC adding module 909, and a MAC deleting module 910.
Wherein, a data packet coming from the downlink network interface module 901 enters the packet detection module 905. The packet detection module 905 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address (DA), video network source address (SA), video network packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), the MAC deletion module 910 strips the MAC DA, MAC SA, and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise, the packet is discarded;
the downlink network interface module 901 detects the sending buffer of the port; if there is a packet, it obtains the Ethernet MAC DA of the corresponding terminal according to the destination address (DA) of the packet, prepends the terminal's Ethernet MAC DA, the MAC SA of the Ethernet protocol conversion gateway, and the Ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
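The MAC stripping and re-attachment performed by the gateway can be illustrated at the byte level (a generic sketch assuming a standard 14-byte Ethernet header; the default frame type 0x0800 is an assumption for illustration, not a value given in the patent):

```python
ETH_HEADER_LEN = 6 + 6 + 2   # MAC DA (6 bytes) + MAC SA (6 bytes) + length/frame type (2 bytes)

def strip_mac_header(frame: bytes) -> bytes:
    """MAC deletion: remove the Ethernet header, keeping the video-network packet."""
    return frame[ETH_HEADER_LEN:]

def add_mac_header(packet: bytes, terminal_mac: bytes, gateway_mac: bytes,
                   eth_type: bytes = b"\x08\x00") -> bytes:
    """MAC addition: prepend the terminal's MAC DA, the gateway's MAC SA,
    and the Ethernet length/frame type before sending the packet."""
    assert len(terminal_mac) == 6 and len(gateway_mac) == 6 and len(eth_type) == 2
    return terminal_mac + gateway_mac + eth_type + packet
```

The two operations are exact inverses, which is why the gateway can move video-network packets transparently across an Ethernet segment.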
A terminal:
a terminal mainly comprises a network interface module, a service processing module, and a CPU module. For example, a set-top box mainly comprises a network interface module, a video/audio codec engine module, and a CPU module; a coding board mainly comprises a network interface module, a video/audio coding engine module, and a CPU module; a memory mainly comprises a network interface module, a CPU module, and a disk array module.
1.3 Devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch, and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module, and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA SA Reserved Payload CRC
wherein:
the Destination Address (DA) is composed of 8 bytes: the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), allowing at most 256 types; the second through sixth bytes are the metropolitan area network address; and the seventh and eighth bytes are the access network address;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved field consists of 2 bytes;
the payload has a different length depending on the type of datagram: 64 bytes for the various protocol packets, and 32+1024 = 1056 bytes for unicast packets; of course, the length is not limited to these 2 types;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
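Under the field definitions above, an access-network packet can be assembled as in the following sketch (field order as listed in the table; big-endian encoding, and CRC-32 as the "standard Ethernet CRC algorithm", are reasonable assumptions rather than statements of the patent):

```python
import zlib

def build_access_packet(pkt_type: int, metro_addr: bytes, access_addr: bytes,
                        sa: bytes, payload: bytes) -> bytes:
    """Assemble DA (8B) + SA (8B) + Reserved (2B) + Payload + CRC (4B)."""
    assert 0 <= pkt_type <= 255            # first DA byte: at most 256 packet types
    assert len(metro_addr) == 5            # DA bytes 2-6: metropolitan area network address
    assert len(access_addr) == 2           # DA bytes 7-8: access network address
    assert len(sa) == 8                    # SA has the same layout as DA
    da = bytes([pkt_type]) + metro_addr + access_addr
    body = da + sa + b"\x00\x00" + payload          # 2 reserved bytes
    crc = zlib.crc32(body).to_bytes(4, "big")       # CRC-32, as used by Ethernet
    return body + crc
```

A protocol packet with its fixed 64-byte payload therefore totals 8 + 8 + 2 + 64 + 4 = 86 bytes.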
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more connections between two devices, i.e., more than 2 connections between a node switch and a node server, or between two node switches. However, the metropolitan area network address of each metro device is unique, so in order to accurately describe the connection relationships between metro devices, the embodiment of the present invention introduces a parameter: a label, which uniquely describes each connection.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label. Assuming that there are two connections between device A and device B, a packet going from device A to device B has 2 possible labels, and a packet going from device B to device A likewise has 2 possible labels. Labels are divided into incoming labels and outgoing labels: assuming the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet leaving device A (the outgoing label) may become 0x0001.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA SA Reserved Label Payload CRC
Namely: Destination Address (DA), Source Address (SA), reserved bytes (Reserved), label, payload (PDU), and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; its position is between the reserved bytes and the payload of the packet.
The above is the introduction to the video network and each hardware device in the video network, wherein in the implementation environment shown in fig. 2, the video network terminal may refer to the terminal described in section 1.1.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The conference information processing method, apparatus, device, and storage medium provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention; the descriptions of the above examples are only intended to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation on the present invention.

Claims (10)

1. A conference information processing method is applied to a terminal and comprises the following steps:
acquiring a conference record, wherein the conference record comprises a plurality of sections of text information;
extracting at least part of text information from the conference record according to a preset condition;
generating a two-dimensional code corresponding to the at least part of the text information;
and displaying the two-dimensional code so as to push text information corresponding to the two-dimensional code to a user when the two-dimensional code is scanned.
2. The method of claim 1, wherein after displaying the two-dimensional code, the method further comprises:
receiving request signaling sent by an intelligent device in communication connection with the terminal when the intelligent device scans the displayed two-dimensional code;
and responding to the request signaling, and sending text information corresponding to the scanned two-dimensional code to the intelligent equipment.
3. The method of claim 1, wherein obtaining a meeting record comprises:
acquiring a conference record in a video conference currently participated in;
after the two-dimensional code is displayed, the method further comprises:
storing the two-dimension code to a preset list to obtain a two-dimension code list;
sending the two-dimension code list to a conference terminal participating in the video conference, and sending text information corresponding to each two-dimension code in the two-dimension code list to a control terminal in the video conference; and when scanning the two-dimensional codes in the two-dimensional code list, the user acquires text information which is sent by the control terminal and corresponds to the scanned two-dimensional codes.
4. The method according to any one of claims 1 to 3, wherein the terminal is a video network terminal, the video network terminal is respectively connected with a conference management system and a conference scheduling device in a communication manner, and the conference management system is connected with the conference scheduling device in a communication manner; acquiring a conference record, comprising:
and in the current video conference, receiving a conference record pushed by the conference management system, wherein the conference record is generated by the conference scheduling equipment according to the received multiple sections of text information and is sent to the conference management system, and the multiple sections of text information are information corresponding to audio data collected by a terminal which speaks currently in the video conference.
5. The method of claim 1, wherein: before extracting at least part of text information from the conference record according to a preset condition, the method further comprises:
and converting the multiple sections of text information into subtitles, and synchronously displaying the subtitles and the currently played video conference picture.
6. The method of claim 5, wherein the step of extracting at least part of the text information from the conference record according to the preset condition is performed in one of the following manners:
acquiring the selected subtitle based on the selection operation of the user on the currently displayed subtitle, and extracting text information corresponding to the selected subtitle from the conference record;
extracting at least part of text information comprising preset keywords from the conference records;
and determining a speaking terminal in the video conference, and extracting at least part of text information corresponding to the speaking terminal from the conference record.
7. The method of claim 1, further comprising: adding a label to the two-dimensional code in one of the following ways:
determining a participant terminal corresponding to the at least part of text information in a plurality of terminals participating in the video conference, and taking the name of the participant terminal as a label of the two-dimensional code;
taking the speaking time period corresponding to at least part of the text information as a label of the two-dimensional code;
and taking the keywords included in the at least part of text information as the labels of the two-dimensional codes.
8. A conference information processing apparatus, applied to a terminal, comprising:
the recording acquisition module is used for acquiring a conference record, wherein the conference record comprises a plurality of sections of text information;
the information extraction module is used for extracting at least part of text information from the conference record according to preset conditions;
the two-dimensional code generating module is used for generating a two-dimensional code corresponding to the at least part of text information;
and the display module is used for displaying the two-dimensional code so as to push text information corresponding to the two-dimensional code to a user when the two-dimensional code is scanned.
9. An electronic device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the apparatus to perform the meeting information processing method of any of claims 1 to 7.
10. A computer-readable storage medium characterized by storing a computer program causing a processor to execute the conference information processing method according to any one of claims 1 to 7.
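For illustration only, the method of claim 1 can be sketched end-to-end in a few lines, assuming keyword matching as the preset condition (one of the options listed in claim 6). A hash digest stands in for the rendered two-dimensional code; an actual terminal would encode the same payload into a scannable image. All identifiers here are hypothetical, not from the patent:

```python
import hashlib

def extract_text(conference_record: list[str], keywords: set[str]) -> list[str]:
    """Extract the sections of text information that contain a preset keyword."""
    return [seg for seg in conference_record
            if any(kw in seg for kw in keywords)]

def generate_code(segments: list[str]):
    """Generate a 'two-dimensional code' (here: a short digest) plus the
    code-to-text mapping used to push the text information to a user
    who scans the displayed code."""
    payload = "\n".join(segments)
    code = hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]
    return code, {code: payload}

# Usage: acquire the record, extract per the preset condition, generate and
# "display" the code; scanning the code looks up the pushed text information.
record = ["Alice: budget approved", "Bob: lunch at noon"]
code, lookup = generate_code(extract_text(record, {"budget"}))
```

This is only a structural sketch of the claimed steps (acquire, extract, generate, push on scan); the actual encoding, display, and signaling between terminal and intelligent device are as described in claims 1 and 2.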
CN202010093489.7A 2020-02-14 2020-02-14 Conference information processing method and device, electronic equipment and storage medium Active CN111405230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010093489.7A CN111405230B (en) 2020-02-14 2020-02-14 Conference information processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111405230A true CN111405230A (en) 2020-07-10
CN111405230B CN111405230B (en) 2023-06-09

Family

ID=71413854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010093489.7A Active CN111405230B (en) 2020-02-14 2020-02-14 Conference information processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111405230B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007199866A (en) * 2006-01-24 2007-08-09 Ricoh Co Ltd Meeting recording system
CN106600212A (en) * 2016-11-24 2017-04-26 南京九致信息科技有限公司 Conference record system and method for automatically generating conference record
CN107911646A (en) * 2016-09-30 2018-04-13 阿里巴巴集团控股有限公司 The method and device of minutes is shared, is generated in a kind of meeting
CN109243484A (en) * 2018-10-16 2019-01-18 上海庆科信息技术有限公司 A kind of generation method and relevant apparatus of conference speech record
CN109547728A (en) * 2018-10-23 2019-03-29 视联动力信息技术股份有限公司 A kind of method and system of recorded broadcast source membership and recorded broadcast of session
CN109803111A (en) * 2019-01-17 2019-05-24 视联动力信息技术股份有限公司 A kind of method for watching after the meeting and device of video conference
CN110134756A (en) * 2019-04-15 2019-08-16 深圳壹账通智能科技有限公司 Minutes generation method, electronic device and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672099A (en) * 2020-12-31 2021-04-16 深圳市潮流网络技术有限公司 Subtitle data generation and presentation method, device, computing equipment and storage medium
CN112672099B (en) * 2020-12-31 2023-11-17 深圳市潮流网络技术有限公司 Subtitle data generating and presenting method, device, computing equipment and storage medium
CN113923066A (en) * 2021-09-22 2022-01-11 苏州科天视创信息科技有限公司 Appointment control method, system and readable storage medium for network conference

Also Published As

Publication number Publication date
CN111405230B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN108574688B (en) Method and device for displaying participant information
CN110166728B (en) Video networking conference opening method and device
CN108965224B (en) Video-on-demand method and device
CN109803111B (en) Method and device for watching video conference after meeting
CN110049271B (en) Video networking conference information display method and device
CN110493554B (en) Method and system for switching speaking terminal
CN110572607A (en) Video conference method, system and device and storage medium
CN110557597A (en) video conference sign-in method, server, electronic equipment and storage medium
CN111541859A (en) Video conference processing method and device, electronic equipment and storage medium
CN109743524B (en) Data processing method of video network and video network system
CN110049273B (en) Video networking-based conference recording method and transfer server
CN109788235B (en) Video networking-based conference recording information processing method and system
CN109191808B (en) Alarm method and system based on video network
CN110457575B (en) File pushing method, device and storage medium
CN109963108B (en) One-to-many talkback method and device
CN109768957B (en) Method and system for processing monitoring data
CN111405230B (en) Conference information processing method and device, electronic equipment and storage medium
CN110769179B (en) Audio and video data stream processing method and system
CN108965930B (en) Video data processing method and device
CN110611639A (en) Audio data processing method and device for streaming media conference
CN111131747B (en) Method and device for determining data channel state, electronic equipment and storage medium
CN110381285B (en) Conference initiating method and device
CN110049275B (en) Information processing method and device in video conference and storage medium
CN109544879B (en) Alarm data processing method and system
CN110519546B (en) Method and device for pushing business card information based on video conference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant