CN114422745A - Method and device for rapidly arranging conference summary of audio and video conference and computer equipment

Info

Publication number
CN114422745A
Authority
CN
China
Prior art keywords
information
topic
conclusion
video
module
Prior art date
Legal status
Pending
Application number
CN202210079441.XA
Other languages
Chinese (zh)
Inventor
郭皓月
齐潇
段祥
Current Assignee
Youmi Technology Shenzhen Co ltd
Original Assignee
Youmi Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Youmi Technology Shenzhen Co ltd
Priority to CN202210079441.XA
Publication of CN114422745A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems
    • H04N7/155: Conference systems involving storage of or access to video conference sessions

Abstract

The application relates to a method, an apparatus, a computer device, a storage medium and a computer program product for quickly organizing the meeting summary of an audio/video conference. The method comprises the following steps: setting a topic area corresponding to the audio/video playing area, the topic area comprising a topic module, an important viewpoint module and a conclusion module; when a video is played in the audio/video playing area, inputting topic information, important viewpoint information and conclusion information in the topic area; and after the topic information, the important viewpoint information and the conclusion information are all input, separating video clips related to the topic information, the important viewpoint information and the conclusion information from the playing video and associating the separated video clips with the topic information, the important viewpoint information and the conclusion information. The method associates the content of the audio/video conference with the content of the meeting summary and improves the user's interactive experience.

Description

Method and device for rapidly arranging conference summary of audio and video conference and computer equipment
Technical Field
The present application relates to the field of network video technologies, and in particular, to a method and an apparatus for quickly organizing meeting summary of an audio/video conference, a computer device, a storage medium, and a computer program product.
Background
To slow the spread of the novel coronavirus, face-to-face contact has had to be reduced around the world, and many people have turned to working from home. Educational, medical and scientific organizations, as well as governments at every level and bodies such as the World Health Organization, have had to communicate, collaborate and teach by way of video conferencing, and the meeting summary is an important part of this.
During a video conference, the speech at the conference site is usually captured by audio recording, or an on-site recorder takes notes manually on a dedicated recording device. With audio recording, however, a written record of the meeting cannot be obtained in time: someone must listen to the recording after the meeting and transcribe it manually. In current audio/video conference products, the conference record can only be reviewed as video; a written summary has to be produced in other software, and although some conference software supports writing the summary inside the application, it is plain text and is not combined with the audio and video. How to associate the content of an audio/video conference with the content of its meeting summary and improve the user's interactive experience has therefore become an urgent problem to be solved.
Disclosure of Invention
In view of the foregoing, there is a need to provide a method, an apparatus, a computer device, a computer-readable storage medium and a computer program product for quickly organizing the meeting summary of an audio/video conference, which can associate the content of the audio/video conference with the content of the meeting summary.
In a first aspect, the application provides a method for quickly organizing the meeting summary of an audio/video conference. The method comprises the following steps:
setting a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
when the video is played in the audio/video playing area, topic information, important viewpoint information and conclusion information are input in the topic area;
and after the topic information, the important viewpoint information and the conclusion information are all input, video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video, and the separated video clips are associated with the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the entering of the topic information, the important viewpoint information and the conclusion information in the topic area comprises:
if the subtitle information corresponding to the played video is complete, dragging the topic information from the subtitle information to the topic module, dragging the important viewpoint information from the subtitle information to the important viewpoint module and dragging the conclusion information from the subtitle information to the conclusion module;
if the subtitle information corresponding to the played video is incomplete, manually inputting topic information to the topic module, manually inputting important viewpoint information to the important viewpoint module and manually inputting conclusion information to the conclusion module.
In one embodiment, the step of separating the video segment related to the topic information, the important viewpoint information and the conclusion information from the playing video comprises:
acquiring time information of the topic information, the important viewpoint information and the conclusion information appearing in a playing video;
and independently obtaining a corresponding video clip from the playing video based on the time information.
In one embodiment, the associating of the independent video segments with the topic information, the important viewpoint information and the conclusion information further comprises:
if the topic information, the important viewpoint information and the conclusion information change, judging whether time information corresponding to the topic information, the important viewpoint information and the conclusion information changes or not;
and if the corresponding time information changes, correspondingly adjusting the video clips corresponding to the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the entering of the topic information, the important viewpoint information and the conclusion information in the topic area further comprises:
judging whether the input topic information, important viewpoint information and conclusion information are in the target language, and if not, translating the input topic information, important viewpoint information and conclusion information into the target language through a preset translation interface.
In one embodiment, the method for quickly organizing the meeting summary of the audio/video conference further comprises:
when the video clips corresponding to the topic information, the important viewpoint information and the conclusion information need to be viewed, the topic module, the important viewpoint module and the conclusion module in the topic area are clicked, and the video clips corresponding to the topic information, the important viewpoint information and the conclusion information are played.
In a second aspect, the application further provides a device for quickly organizing the meeting summary of an audio/video conference. The device comprises:
the setting module is used for setting a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
the entry module is used for entering topic information, important viewpoint information and conclusion information in the topic area when a video is played in the audio/video playing area;
and the association module is used for separating the video clips related to the topic information, the important viewpoint information and the conclusion information from the playing video after the topic information, the important viewpoint information and the conclusion information are all input, and associating the separated video clips with the topic information, the important viewpoint information and the conclusion information.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor that implements the following steps when executing the computer program:
setting a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
when the video is played in the audio/video playing area, topic information, important viewpoint information and conclusion information are input in the topic area;
and after the topic information, the important viewpoint information and the conclusion information are all input, video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video, and the separated video clips are associated with the topic information, the important viewpoint information and the conclusion information.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has a computer program stored thereon which, when executed by a processor, performs the following steps:
setting a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
when the video is played in the audio/video playing area, topic information, important viewpoint information and conclusion information are input in the topic area;
and after the topic information, the important viewpoint information and the conclusion information are all input, video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video, and the separated video clips are associated with the topic information, the important viewpoint information and the conclusion information.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, performs the following steps:
setting a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
when the video is played in the audio/video playing area, topic information, important viewpoint information and conclusion information are input in the topic area;
and after the topic information, the important viewpoint information and the conclusion information are all input, video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video, and the separated video clips are associated with the topic information, the important viewpoint information and the conclusion information.
According to the method, the device, the computer equipment, the storage medium and the computer program product for quickly organizing the meeting summary of an audio/video conference, a topic area corresponding to the audio/video playing area is set, and topic information, important viewpoint information and conclusion information are input in the topic area while a video is played in the audio/video playing area. After the topic information, the important viewpoint information and the conclusion information are all input, the video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video and associated with the topic information, the important viewpoint information and the conclusion information. The content of the audio/video conference is thereby associated with the content of the meeting summary, and the user's interactive experience is improved.
Drawings
Fig. 1 is an application environment diagram of a method for quickly organizing the meeting summary of an audio/video conference in one embodiment;
Fig. 2 is a schematic flow chart of a method for quickly organizing the meeting summary of an audio/video conference in one embodiment;
Fig. 3 is a schematic flow chart of the step of separating a video clip in one embodiment;
Fig. 4 is a block diagram of a device for quickly organizing the meeting summary of an audio/video conference in one embodiment;
Fig. 5 is a diagram illustrating the internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for quickly organizing the meeting summary of an audio/video conference provided by the embodiments of the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on the cloud or on another network server.
The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices; the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, and the like. The portable wearable devices may be smart watches, smart bracelets, head-mounted devices, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a method for quickly organizing the meeting summary of an audio/video conference is provided. The method is described here by taking its application to the server in fig. 1 as an example, and includes the following steps:
step 202, setting an issue area corresponding to the audio and video playing area; the topic area comprises a topic module, an important point module and a conclusion module.
Specifically, the audio/video playing area is the module that plays audio and video in video conference software or video playing software, and a topic area corresponding to the audio/video playing area is set; the topic area comprises a topic module, an important viewpoint module and a conclusion module. The topic module is used for adding topics to the played video, the important viewpoint module is used for adding important viewpoints to the played video, and the conclusion module is used for adding conclusions to the played video. In addition, the program interface also shows a recording list module and a conference subtitle module: the recording list module displays a list of the recorded audio/video conferences, and the conference subtitle module displays the subtitles of the played conference video.
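To make this layout concrete, the following is a minimal sketch of the interface elements described above modelled as plain data structures, written in TypeScript. The type and field names (SubtitleEntry, TopicArea and so on) are illustrative assumptions and are not terms taken from this application.

    // Illustrative data model for the interface described above.
    interface SubtitleEntry {
      text: string;     // one line of the conference subtitles
      startSec: number; // time at which the line appears in the played video
      endSec: number;
    }

    type ModuleKind = "topic" | "importantViewpoint" | "conclusion";

    interface SummaryEntry {
      kind: ModuleKind;
      text: string;     // text dragged from the subtitles or entered manually
      timeSec?: number; // time in the played video, known for dragged entries
    }

    interface TopicArea {
      topic: SummaryEntry[];              // topic module
      importantViewpoint: SummaryEntry[]; // important viewpoint module
      conclusion: SummaryEntry[];         // conclusion module
    }

    interface ConferenceView {
      videoUrl: string;           // source for the audio/video playing area
      recordingList: string[];    // recording list module
      subtitles: SubtitleEntry[]; // conference subtitle module
      topicArea: TopicArea;
    }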
Step 204, when the video is played in the audio/video playing area, the topic information, the important viewpoint information and the conclusion information are input in the topic area.
Specifically, when a video is played in the audio/video playing area, the video to be played can be selected from the recording list module, and topic information, important viewpoint information and conclusion information are input in the topic area. All three kinds of information may be input, or only one or two of them. For example, the user can watch the played conference video and drag the required text from the conference subtitle module into the corresponding conference topic. When a topic is finished, the time period corresponding to the text dragged into that topic can be separated out as an independent clip, which makes it easy to review that single topic.
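One way to realize this drag-and-drop entry, reusing the types from the sketch above, is to copy the dragged subtitle line together with its timestamp into the chosen module, while manually typed entries simply carry no timestamp. This is an assumed implementation for illustration only.

    // Drop a subtitle line into one of the three modules, keeping its time in the video.
    function dropSubtitleIntoModule(
      area: TopicArea,
      kind: ModuleKind,
      subtitle: SubtitleEntry
    ): void {
      area[kind].push({
        kind,
        text: subtitle.text,
        timeSec: subtitle.startSec, // time at which the dragged text appears
      });
    }

    // Manual entry when the subtitle information is incomplete: no timestamp is available.
    function enterTextManually(area: TopicArea, kind: ModuleKind, text: string): void {
      area[kind].push({ kind, text });
    }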
Step 206, after the topic information, the important viewpoint information and the conclusion information are all input, the video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video, and the separated video clips are associated with the topic information, the important viewpoint information and the conclusion information.
Specifically, after the topic information, the important viewpoint information and the conclusion information are all input, the video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video. The clips are cut out of the played video according to the points in time at which the topic information, the important viewpoint information and the conclusion information appear in it, and the separated clips are associated with the topic information, the important viewpoint information and the conclusion information, so that the corresponding video clip can be found quickly from the topic information, the important viewpoint information and the conclusion information. For example, the clip for a topic can be calculated as the time period from the earliest piece of content added under that topic to the last piece of content added, which then exists as an independent video; a video button appears next to the topic's title, and clicking that button plays the video of that topic. If further content is added, the time period changes and the corresponding video changes with it.
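The time-window calculation in the example above amounts to taking the earliest and the latest timestamp among the entries of a module. A minimal sketch under the same assumed data model:

    interface ClipRange {
      startSec: number;
      endSec: number;
    }

    // The clip associated with a module runs from the earliest to the latest
    // timestamp of the entries dragged into it; manually typed entries, which
    // carry no timestamp, do not affect the range.
    function clipRangeFor(entries: SummaryEntry[]): ClipRange | null {
      const times = entries
        .map((e) => e.timeSec)
        .filter((t): t is number => t !== undefined);
      if (times.length === 0) return null; // nothing with a known time yet
      return { startSec: Math.min(...times), endSec: Math.max(...times) };
    }

Adding more content later simply yields a new, wider range, which matches the behaviour described above where the associated video changes as content is added.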
According to the above method for quickly organizing the meeting summary of an audio/video conference, a topic area corresponding to the audio/video playing area is set, topic information, important viewpoint information and conclusion information are input in the topic area while a video is played in the audio/video playing area, and after all of this information is input, the video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video and associated with the topic information, the important viewpoint information and the conclusion information. The content of the audio/video conference is thereby associated with the content of the meeting summary, and the user's interactive experience is improved.
In one embodiment, the entering of the topic information, the important viewpoint information and the conclusion information in the topic area comprises:
if the subtitle information corresponding to the played video is complete, dragging the topic information from the subtitle information to the topic module, dragging the important viewpoint information from the subtitle information to the important viewpoint module and dragging the conclusion information from the subtitle information to the conclusion module;
if the subtitle information corresponding to the played video is incomplete, manually inputting topic information to the topic module, manually inputting important viewpoint information to the important viewpoint module and manually inputting conclusion information to the conclusion module.
Specifically, when topic information, important viewpoint information and conclusion information are input in the topic area, the relevant content can be dragged from the subtitle information corresponding to the playing video into the topic module, the important viewpoint module and the conclusion module. When the subtitle information corresponding to the played video is complete, the topic information is dragged from the subtitle information to the topic module, the important viewpoint information is dragged to the important viewpoint module and the conclusion information is dragged to the conclusion module; when the subtitle information corresponding to the played video is incomplete, the topic information is manually input to the topic module, the important viewpoint information is manually input to the important viewpoint module and the conclusion information is manually input to the conclusion module. In this way, the text of the conference subtitles can be reused directly without manually typing up the topics, and no other software such as Word is needed to organize the conference topics.
In this embodiment, if the subtitle information corresponding to the played video is complete, the topic information is dragged from the subtitle information to the topic module, the important viewpoint information is dragged to the important viewpoint module and the conclusion information is dragged to the conclusion module; if the subtitle information corresponding to the played video is incomplete, the topic information is manually input to the topic module, the important viewpoint information is manually input to the important viewpoint module and the conclusion information is manually input to the conclusion module. The topic information, the important viewpoint information and the conclusion information can thus be entered into the topic module, the important viewpoint module and the conclusion module directly from the subtitle information, which improves the user's interactive experience.
In one embodiment, the step of separating the video segment related to the topic information, the important viewpoint information and the conclusion information from the playing video comprises:
acquiring time information of the topic information, the important viewpoint information and the conclusion information appearing in a playing video;
and independently obtaining a corresponding video clip from the playing video based on the time information.
Specifically, fig. 3 is a schematic flow chart of the video clip separation step in one embodiment. As shown in fig. 3, when a corresponding video clip is to be separated from the played video, the time information of the topic information, the important viewpoint information and the conclusion information appearing in the played video is obtained. The time information comprises the points in time at which this information first and last appears in the played video, and the corresponding video clip is separated from the played video as an independent clip based on this time information.
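How the independent clip is materialised is not specified here; one common possibility, assumed purely for illustration, is to cut the recording on the server with a tool such as ffmpeg using the start and end times obtained above. The sketch below runs ffmpeg from Node.js and performs a stream copy so the clip is produced without re-encoding.

    import { execFile } from "node:child_process";

    // Cut [startSec, endSec] out of the recording as an independent clip file.
    // Assumes an ffmpeg binary is available on the server.
    function extractClip(
      recordingPath: string,
      startSec: number,
      endSec: number,
      clipPath: string
    ): Promise<void> {
      return new Promise((resolve, reject) => {
        const args = [
          "-i", recordingPath,
          "-ss", String(startSec),
          "-to", String(endSec),
          "-c", "copy",
          clipPath,
        ];
        execFile("ffmpeg", args, (err) => (err ? reject(err) : resolve()));
      });
    }

An equally valid reading of this embodiment is to store only the time range and have the player seek within the full recording; the playback sketch given further below takes that approach.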
In this embodiment, the time information of the topic information, the important viewpoint information and the conclusion information appearing in the playing video is obtained, and the corresponding video clip is separated from the playing video as an independent clip based on that time information. The video clips corresponding to the topic information, the important viewpoint information and the conclusion information thus become independent of the playing video, the user can quickly reach the corresponding video clip from the topic information, the important viewpoint information and the conclusion information, and the user's interactive experience is improved.
In one embodiment, the associating of the independent video segments with the topic information, the important viewpoint information and the conclusion information further comprises:
if the topic information, the important viewpoint information and the conclusion information change, judging whether time information corresponding to the topic information, the important viewpoint information and the conclusion information changes or not;
and if the corresponding time information changes, correspondingly adjusting the video clips corresponding to the topic information, the important viewpoint information and the conclusion information.
Specifically, if the topic information, the important viewpoint information and the conclusion information change, it is determined whether the time information corresponding to them changes; if that time information changes, the corresponding video clip may also change. Therefore, if the corresponding time information changes, the content of the video clips corresponding to the topic information, the important viewpoint information and the conclusion information is adjusted according to the changed time information, so that the correspondence between the topic information, the important viewpoint information and the conclusion information and their video clips is maintained.
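A minimal sketch of this adjustment under the assumed data model: whenever the entries of a module change, the time range is recomputed and the associated clip is rebuilt (or its stored range replaced) only if the range actually changed.

    // Recompute the clip range after a module's content changes and report
    // whether the associated clip needs to be adjusted.
    function adjustClipOnChange(
      entries: SummaryEntry[],
      currentRange: ClipRange | null
    ): { range: ClipRange | null; changed: boolean } {
      const range = clipRangeFor(entries);
      const changed =
        (range === null) !== (currentRange === null) ||
        (range !== null &&
          currentRange !== null &&
          (range.startSec !== currentRange.startSec ||
            range.endSec !== currentRange.endSec));
      return { range, changed };
    }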
In this embodiment, when the topic information, the important viewpoint information, and the conclusion information change, it is determined whether time information corresponding to the topic information, the important viewpoint information, and the conclusion information changes, and when the corresponding time information changes, video segments corresponding to the topic information, the important viewpoint information, and the conclusion information are correspondingly adjusted, so that correspondence between the topic information, the important viewpoint information, and the conclusion information and the video segments is realized, and association between content of the audio/video conference and conference summary content is improved.
In one embodiment, the entering of the topic information, the important viewpoint information and the conclusion information in the topic area further comprises:
judging whether the input topic information, important viewpoint information and conclusion information are in the target language, and if not, translating the input topic information, important viewpoint information and conclusion information into the target language through a preset translation interface.
Specifically, in order to support language switching, after the topic information, the important viewpoint information and the conclusion information are input in the topic area, it is determined whether the input topic information, important viewpoint information and conclusion information are in the target language. If not, the input topic information, important viewpoint information and conclusion information are translated into the target language through a preset translation interface, so that conversion to the target language is achieved and the user experience is improved.
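The "preset translation interface" is left abstract here; the sketch below only shows the control flow (detect the language of an entry and translate it if it is not in the target language), with detectLanguage and translate standing in as hypothetical functions for whatever translation service is actually configured.

    // Hypothetical wrapper around whatever translation service is preset.
    interface TranslationInterface {
      detectLanguage(text: string): Promise<string>;        // e.g. "zh", "en"
      translate(text: string, targetLang: string): Promise<string>;
    }

    // Ensure an entered text is in the target language before it is stored.
    async function normalizeLanguage(
      entry: SummaryEntry,
      targetLang: string,
      translator: TranslationInterface
    ): Promise<SummaryEntry> {
      const lang = await translator.detectLanguage(entry.text);
      if (lang === targetLang) return entry;
      return { ...entry, text: await translator.translate(entry.text, targetLang) };
    }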
In this embodiment, after the topic information, the important viewpoint information and the conclusion information are input in the topic area, it is determined whether the input information is in the target language; if not, the input topic information, important viewpoint information and conclusion information are translated into the target language through the preset translation interface, so that the language in which the information was input can be converted and the user experience is improved.
In one embodiment, the method for quickly organizing the meeting summary of the audio/video conference further comprises:
when the video clips corresponding to the topic information, the important viewpoint information and the conclusion information need to be viewed, the topic module, the important viewpoint module and the conclusion module in the topic area are clicked, and the video clips corresponding to the topic information, the important viewpoint information and the conclusion information are played.
Specifically, after the topic information, the important viewpoint information and the conclusion information are input in the topic area and the independent video clips are associated with the topic information, the important viewpoint information and the conclusion information, when a user needs to view the video clips corresponding to the topic information, the important viewpoint information and the conclusion information, the topic module, the important viewpoint module and the conclusion module in the topic area are clicked, and the video clips corresponding to the topic information, the important viewpoint information and the conclusion information are played.
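On the playback side, clicking a module can simply seek the player in the audio/video playing area to the associated range (or load the independent clip file if one was produced). The sketch below, again only an assumed illustration, uses the browser HTMLVideoElement API together with the ClipRange type from the earlier sketch.

    // Play the clip associated with a module by seeking the main player.
    function playClip(player: HTMLVideoElement, range: ClipRange): void {
      player.currentTime = range.startSec;
      void player.play();

      // Pause when the end of the associated clip is reached.
      const onTimeUpdate = () => {
        if (player.currentTime >= range.endSec) {
          player.pause();
          player.removeEventListener("timeupdate", onTimeUpdate);
        }
      };
      player.addEventListener("timeupdate", onTimeUpdate);
    }

A click handler on a module's video button would then look up the range computed for that module (for example with clipRangeFor) and call playClip when a range exists.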
In this embodiment, when a user needs to view the video clips corresponding to the topic information, the important viewpoint information and the conclusion information, the video clips corresponding to the topic information, the important viewpoint information and the conclusion information are played by clicking the topic module, the important viewpoint module and the conclusion module in the topic area, so that the interactive experience of the user is improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts related to the embodiments described above may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the execution order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the application also provides a device for quickly organizing the meeting summary of an audio/video conference, which is used to implement the method for quickly organizing the meeting summary of an audio/video conference described above. The implementation scheme the device provides for solving the problem is similar to the scheme described for the method, so for the specific limitations in the embodiments of the one or more such devices provided below, reference may be made to the limitations on the method above; details are not repeated here.
In one embodiment, as shown in fig. 4, there is provided a device for quickly organizing the meeting summary of an audio/video conference, including: a setting module 401, an entry module 402 and an association module 403, wherein:
a setting module 401, configured to set a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
an entry module 402, configured to enter topic information, important viewpoint information and conclusion information in the topic area when a video is played in the audio/video playing area;
and an association module 403, configured to, after the topic information, the important viewpoint information and the conclusion information are all entered, separate the video clips related to the topic information, the important viewpoint information and the conclusion information from the playing video, and associate the separated video clips with the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the entry module 402 is specifically configured to: if the subtitle information corresponding to the played video is complete, dragging the topic information from the subtitle information to the topic module, dragging the important viewpoint information from the subtitle information to the important viewpoint module and dragging the conclusion information from the subtitle information to the conclusion module; if the subtitle information corresponding to the played video is incomplete, manually inputting topic information to the topic module, manually inputting important viewpoint information to the important viewpoint module and manually inputting conclusion information to the conclusion module.
In one embodiment, the association module 403 is specifically configured to: acquiring time information of the topic information, the important viewpoint information and the conclusion information appearing in a playing video; and independently obtaining a corresponding video clip from the playing video based on the time information.
In one embodiment, the association module 403 is further configured to: if the topic information, the important viewpoint information and the conclusion information change, judging whether time information corresponding to the topic information, the important viewpoint information and the conclusion information changes or not; and if the corresponding time information changes, correspondingly adjusting the video clips corresponding to the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the entry module 402 is further configured to: judging whether the input topic information, important viewpoint information and conclusion information are in the target language, and if not, translating the input topic information, important viewpoint information and conclusion information into the target language through a preset translation interface.
In one embodiment, the association module 403 is further configured to: when the video clips corresponding to the topic information, the important viewpoint information and the conclusion information need to be viewed, the topic module, the important viewpoint module and the conclusion module in the topic area are clicked, and the video clips corresponding to the topic information, the important viewpoint information and the conclusion information are played.
The above device for quickly organizing the meeting summary of an audio/video conference sets a topic area corresponding to the audio/video playing area, enters topic information, important viewpoint information and conclusion information in the topic area while a video is played in the audio/video playing area, and, after the topic information, the important viewpoint information and the conclusion information are all entered, separates the video clips related to that information from the playing video and associates the separated video clips with the topic information, the important viewpoint information and the conclusion information. The content of the audio/video conference is thereby associated with the content of the meeting summary, and the user's interactive experience is improved.
All of the modules in the above device for quickly organizing the meeting summary of an audio/video conference can be implemented wholly or partly by software, by hardware or by a combination of the two. The modules can be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize a method for quickly arranging the conference summary of the audio and video conference.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
setting a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
when the video is played in the audio/video playing area, topic information, important viewpoint information and conclusion information are input in the topic area;
and after the topic information, the important viewpoint information and the conclusion information are all input, video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video, and the separated video clips are associated with the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the processor, when executing the computer program, further performs the steps of: if the subtitle information corresponding to the played video is complete, dragging the topic information from the subtitle information to the topic module, dragging the important viewpoint information from the subtitle information to the important viewpoint module and dragging the conclusion information from the subtitle information to the conclusion module; if the subtitle information corresponding to the played video is incomplete, manually inputting topic information to the topic module, manually inputting important viewpoint information to the important viewpoint module and manually inputting conclusion information to the conclusion module.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring time information of the topic information, the important viewpoint information and the conclusion information appearing in a playing video; and independently obtaining a corresponding video clip from the playing video based on the time information.
In one embodiment, the processor, when executing the computer program, further performs the steps of: if the topic information, the important viewpoint information and the conclusion information change, judging whether time information corresponding to the topic information, the important viewpoint information and the conclusion information changes or not; and if the corresponding time information changes, correspondingly adjusting the video clips corresponding to the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the processor, when executing the computer program, further performs the steps of: judging whether the input topic information, important viewpoint information and conclusion information are in the target language, and if not, translating the input topic information, important viewpoint information and conclusion information into the target language through a preset translation interface.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the video clips corresponding to the topic information, the important viewpoint information and the conclusion information need to be viewed, the topic module, the important viewpoint module and the conclusion module in the topic area are clicked, and the video clips corresponding to the topic information, the important viewpoint information and the conclusion information are played.
With the above computer device, a topic area corresponding to the audio/video playing area is set, topic information, important viewpoint information and conclusion information are input in the topic area while a video is played in the audio/video playing area, and, after the topic information, the important viewpoint information and the conclusion information are all input, the video clips related to that information are separated from the playing video and associated with the topic information, the important viewpoint information and the conclusion information. The content of the audio/video conference is thereby associated with the content of the meeting summary, and the user's interactive experience is improved.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
setting a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
when the video is played in the audio/video playing area, topic information, important viewpoint information and conclusion information are input in the topic area;
and after the topic information, the important viewpoint information and the conclusion information are all input, video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video, and the separated video clips are associated with the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the computer program when executed by the processor further performs the steps of: if the subtitle information corresponding to the played video is complete, dragging the topic information from the subtitle information to the topic module, dragging the important viewpoint information from the subtitle information to the important viewpoint module and dragging the conclusion information from the subtitle information to the conclusion module; if the subtitle information corresponding to the played video is incomplete, manually inputting topic information to the topic module, manually inputting important viewpoint information to the important viewpoint module and manually inputting conclusion information to the conclusion module.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring time information of the topic information, the important viewpoint information and the conclusion information appearing in a playing video; and independently obtaining a corresponding video clip from the playing video based on the time information.
In one embodiment, the computer program when executed by the processor further performs the steps of: if the topic information, the important viewpoint information and the conclusion information change, judging whether time information corresponding to the topic information, the important viewpoint information and the conclusion information changes or not; and if the corresponding time information changes, correspondingly adjusting the video clips corresponding to the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the computer program when executed by the processor further performs the steps of: judging whether the input topic information, important viewpoint information and conclusion information are in the target language, and if not, translating the input topic information, important viewpoint information and conclusion information into the target language through a preset translation interface.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the video clips corresponding to the topic information, the important viewpoint information and the conclusion information need to be viewed, the topic module, the important viewpoint module and the conclusion module in the topic area are clicked, and the video clips corresponding to the topic information, the important viewpoint information and the conclusion information are played.
With the above storage medium, a topic area corresponding to the audio/video playing area is set, topic information, important viewpoint information and conclusion information are input in the topic area while a video is played in the audio/video playing area, and, after the topic information, the important viewpoint information and the conclusion information are all input, the video clips related to that information are separated from the playing video and associated with the topic information, the important viewpoint information and the conclusion information. The content of the audio/video conference is thereby associated with the content of the meeting summary, and the user's interactive experience is improved.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
setting a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
when the video is played in the audio/video playing area, topic information, important viewpoint information and conclusion information are input in the topic area;
and after the topic information, the important viewpoint information and the conclusion information are all input, video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video, and the separated video clips are associated with the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the computer program when executed by the processor further performs the steps of: if the subtitle information corresponding to the played video is complete, dragging the topic information from the subtitle information to the topic module, dragging the important viewpoint information from the subtitle information to the important viewpoint module and dragging the conclusion information from the subtitle information to the conclusion module; if the subtitle information corresponding to the played video is incomplete, manually inputting topic information to the topic module, manually inputting important viewpoint information to the important viewpoint module and manually inputting conclusion information to the conclusion module.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring time information of the topic information, the important viewpoint information and the conclusion information appearing in a playing video; and independently obtaining a corresponding video clip from the playing video based on the time information.
In one embodiment, the computer program when executed by the processor further performs the steps of: if the topic information, the important viewpoint information and the conclusion information change, judging whether time information corresponding to the topic information, the important viewpoint information and the conclusion information changes or not; and if the corresponding time information changes, correspondingly adjusting the video clips corresponding to the topic information, the important viewpoint information and the conclusion information.
In one embodiment, the computer program when executed by the processor further performs the steps of: judging whether the input topic information, important viewpoint information and conclusion information are in the target language, and if not, translating the input topic information, important viewpoint information and conclusion information into the target language through a preset translation interface.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the video clips corresponding to the topic information, the important viewpoint information and the conclusion information need to be viewed, the topic module, the important viewpoint module and the conclusion module in the topic area are clicked, and the video clips corresponding to the topic information, the important viewpoint information and the conclusion information are played.
With the above computer program product, a topic area corresponding to the audio/video playing area is set, topic information, important viewpoint information and conclusion information are input in the topic area while a video is played in the audio/video playing area, and, after the topic information, the important viewpoint information and the conclusion information are all input, the video clips related to that information are separated from the playing video and associated with the topic information, the important viewpoint information and the conclusion information. The content of the audio/video conference is thereby associated with the content of the meeting summary, and the user's interactive experience is improved.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described, but any such combination should be considered to be within the scope of this specification as long as the combined technical features do not contradict one another.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these fall within the scope of protection of the application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method for quickly organizing the meeting summary of an audio/video conference, characterized by comprising the following steps:
setting a topic area corresponding to the audio/video playing area; the topic area comprises a topic module, an important viewpoint module and a conclusion module;
when the video is played in the audio/video playing area, topic information, important viewpoint information and conclusion information are input in the topic area;
and after the topic information, the important viewpoint information and the conclusion information are all input, video clips related to the topic information, the important viewpoint information and the conclusion information are separated from the playing video, and the separated video clips are associated with the topic information, the important viewpoint information and the conclusion information.
2. The method of claim 1, wherein the entering of the topic information, the important viewpoint information and the conclusion information in the topic area comprises:
if the subtitle information corresponding to the played video is complete, dragging the topic information from the subtitle information to the topic module, dragging the important viewpoint information from the subtitle information to the important viewpoint module and dragging the conclusion information from the subtitle information to the conclusion module;
if the subtitle information corresponding to the played video is incomplete, manually inputting topic information to the topic module, manually inputting important viewpoint information to the important viewpoint module and manually inputting conclusion information to the conclusion module.
3. The method of claim 2, wherein the separating of the video segments related to the topic information, the important viewpoint information and the conclusion information from the playing video comprises:
acquiring time information of the topic information, the important viewpoint information and the conclusion information appearing in a playing video;
and independently obtaining a corresponding video clip from the playing video based on the time information.
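Claim 3 cuts clips out of the recording by timestamp. A minimal sketch of that step, assuming the meeting recording is an ordinary video file and that ffmpeg is available on the system (the patent does not name a tool).

```python
import subprocess


def separate_clip(source: str, start_s: float, end_s: float, out_path: str) -> str:
    """Cut the [start_s, end_s] window out of the recorded meeting video."""
    subprocess.run(
        ["ffmpeg", "-y",
         "-ss", str(start_s),          # seek to where the topic text first appears
         "-i", source,
         "-t", str(end_s - start_s),   # keep only the duration of that discussion
         "-c", "copy",                 # stream copy: fast, but cuts on keyframes
         out_path],
        check=True,
    )
    return out_path


# e.g. separate_clip("meeting.mp4", 125.0, 310.5, "topic_1_clip.mp4")
```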
4. The method of claim 3, wherein associating the separated video clips with the topic information, the important viewpoint information and the conclusion information further comprises:
if the topic information, the important viewpoint information or the conclusion information changes, determining whether the time information corresponding to the topic information, the important viewpoint information and the conclusion information has changed; and
if the corresponding time information has changed, adjusting the video clips corresponding to the topic information, the important viewpoint information and the conclusion information accordingly.
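The adjustment in claim 4 can be read as "re-cut only when the time window moved". A sketch of that check, reusing the separate_clip helper from the previous sketch; the dictionary keys are invented for illustration.

```python
def adjust_clip_if_needed(entry: dict, new_start_s: float, new_end_s: float,
                          source: str) -> dict:
    """Re-cut an entry's clip only when its time window actually changed."""
    if entry.get("start_s") == new_start_s and entry.get("end_s") == new_end_s:
        return entry  # text was edited but the timing is identical: keep the old clip
    entry["start_s"], entry["end_s"] = new_start_s, new_end_s
    entry["clip_path"] = separate_clip(source, new_start_s, new_end_s,
                                       entry.get("clip_path") or "adjusted_clip.mp4")
    return entry
```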
5. The method of claim 1, wherein entering the topic information, the important viewpoint information and the conclusion information in the topic area further comprises:
determining whether the entered topic information, important viewpoint information and conclusion information are in a target language, and if not, translating them into the target language through a preset translation interface.
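Claim 5 only states that a preset translation interface is invoked. In the sketch below, the detect and translate callables are placeholders for whatever interface is wired up, not a real API.

```python
from typing import Callable


def ensure_target_language(text: str, target_lang: str,
                           detect: Callable[[str], str],
                           translate: Callable[[str, str], str]) -> str:
    """Return the text in the target language, translating it only when needed."""
    if detect(text) == target_lang:
        return text                      # already in the target language
    return translate(text, target_lang)  # route through the preset translation interface
```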
6. The method of claim 1, wherein the method for quickly organizing the conference summary of the audio and video conference further comprises:
when the video clips corresponding to the topic information, the important viewpoint information and the conclusion information need to be viewed, playing the corresponding video clips in response to clicking the topic module, the important viewpoint module or the conclusion module in the topic area.
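The interaction in claim 6 reduces to a lookup from a clicked module to its associated clip. A trivial sketch, with the clip_index mapping and the play callback invented for the example.

```python
from typing import Callable, Dict, Optional


def on_module_clicked(module_id: str,
                      clip_index: Dict[str, str],
                      play: Callable[[str], None]) -> Optional[str]:
    """Play the clip associated with a clicked topic/viewpoint/conclusion module."""
    clip_path = clip_index.get(module_id)
    if clip_path is not None:
        play(clip_path)   # hand the clip to whatever player the meeting UI uses
    return clip_path
```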
7. A device for quickly organizing a conference summary of an audio and video conference, characterized by comprising:
a setting module, configured to set a topic area corresponding to the audio and video playing area, wherein the topic area comprises a topic module, an important viewpoint module and a conclusion module;
an input module, configured to enter topic information, important viewpoint information and conclusion information in the topic area while a video is played in the audio and video playing area; and
an association module, configured to, after the topic information, the important viewpoint information and the conclusion information have all been entered, separate the video clips related to the topic information, the important viewpoint information and the conclusion information from the playing video and associate the separated video clips with the topic information, the important viewpoint information and the conclusion information.
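Mirroring the three functional modules of claim 7, again only as an illustrative sketch with invented names and a plain dictionary standing in for the topic area.

```python
from typing import Dict, List


class SettingModule:
    """Creates the topic area shown alongside the audio/video playing area."""
    def create_topic_area(self) -> Dict[str, object]:
        return {"topic": "", "viewpoints": [], "conclusion": "", "clips": []}


class InputModule:
    """Records text entered into the topic, important viewpoint and conclusion modules."""
    def enter(self, area: Dict[str, object], module: str, text: str) -> None:
        if module == "viewpoints":
            area["viewpoints"].append(text)   # the viewpoint module can hold several items
        else:
            area[module] = text               # "topic" or "conclusion"


class AssociationModule:
    """Attaches the separated clips once all three kinds of information are present."""
    def associate(self, area: Dict[str, object], clip_paths: List[str]) -> None:
        if area["topic"] and area["viewpoints"] and area["conclusion"]:
            area["clips"] = clip_paths
```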
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN202210079441.XA 2022-01-24 2022-01-24 Method and device for rapidly arranging conference summary of audio and video conference and computer equipment Pending CN114422745A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210079441.XA CN114422745A (en) 2022-01-24 2022-01-24 Method and device for rapidly arranging conference summary of audio and video conference and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210079441.XA CN114422745A (en) 2022-01-24 2022-01-24 Method and device for rapidly arranging conference summary of audio and video conference and computer equipment

Publications (1)

Publication Number Publication Date
CN114422745A true CN114422745A (en) 2022-04-29

Family

ID=81277515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210079441.XA Pending CN114422745A (en) 2022-01-24 2022-01-24 Method and device for rapidly arranging conference summary of audio and video conference and computer equipment

Country Status (1)

Country Link
CN (1) CN114422745A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114928713A (en) * 2022-07-18 2022-08-19 广州市保伦电子有限公司 Voice analysis system for user remote video conference

Similar Documents

Publication Publication Date Title
JP6939037B2 (en) How to represent meeting content, programs, and equipment
US10594749B2 (en) Copy and paste for web conference content
US10656782B2 (en) Three-dimensional generalized space
US20150012840A1 (en) Identification and Sharing of Selections within Streaming Content
US20170019715A1 (en) Media production system with scheduling feature
US20150127643A1 (en) Digitally displaying and organizing personal multimedia content
US9055193B2 (en) System and method of a remote conference
DE202014011461U1 (en) Display device
US9402050B1 (en) Media content creation application
US20170041730A1 (en) Social media processing with three-dimensional audio
US10381043B2 (en) Media-production system with social media content interface feature
JP2017162434A (en) Method, program and device for generating web-based copy of document
CN109040779A (en) Caption content generation method, device, computer equipment and storage medium
US20150088513A1 (en) Sound processing system and related method
CN112287168A (en) Method and apparatus for generating video
KR101618084B1 (en) Method and apparatus for managing minutes
Markham et al. Experimenting with algorithms and memory-making: Lived experience and future-oriented ethics in critical data science
US11665406B2 (en) Verbal queries relative to video content
CN114422745A (en) Method and device for rapidly arranging conference summary of audio and video conference and computer equipment
US20150111189A1 (en) System and method for browsing multimedia file
WO2019146466A1 (en) Information processing device, moving-image retrieval method, generation method, and program
EP4099711A1 (en) Method and apparatus and storage medium for processing video and timing of subtitles
US11152031B1 (en) System and method to compress a time frame of one or more videos
US11355155B1 (en) System and method to summarize one or more videos based on user priorities
US20210241643A1 (en) Information processing apparatus, information processing system, and non-transitory computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination