CN111770300B - Conference information processing method and virtual reality head-mounted equipment


Info

Publication number
CN111770300B
CN111770300B (application CN202010587825.3A)
Authority
CN
China
Prior art keywords
information
audio
display
module
virtual reality
Prior art date
Legal status
Active
Application number
CN202010587825.3A
Other languages
Chinese (zh)
Other versions
CN111770300A (en)
Inventor
王珂晟
黄劲
黄钢
许巧龄
郝缘
Current Assignee
Oook Beijing Education Technology Co ltd
Original Assignee
Oook Beijing Education Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Oook Beijing Education Technology Co ltd filed Critical Oook Beijing Education Technology Co ltd
Priority to CN202010587825.3A priority Critical patent/CN111770300B/en
Publication of CN111770300A publication Critical patent/CN111770300A/en
Application granted granted Critical
Publication of CN111770300B publication Critical patent/CN111770300B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/157Conference systems defining a virtual conference space and using avatars or agents
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/172Processing image signals image signals comprising non-image signal components, e.g. headers or format information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure provides a conference information processing method and a virtual reality head-mounted device. The method comprises: acquiring first audio information of a speaker at the current conference site by using the audio acquisition module; analyzing the first audio information to generate corresponding first text information; acquiring first display information associated with the current conference, the first display information comprising first display images arranged in sequence; adding the first text information to the first display images of the first display information based on the time sequence to generate virtual reality video information; and displaying the virtual reality video information on the display module. The disclosure converts the speaker's audio information into text information and projects the text onto the display module of the virtual reality head-mounted device using virtual reality technology, so that participants can conveniently browse the text, the problem of missing the speaker's key information while taking meeting notes is avoided, and meeting efficiency is improved.

Description

Conference information processing method and virtual reality head-mounted equipment
Technical Field
The disclosure relates to the technical field of computers, in particular to a conference information processing method and virtual reality head-mounted equipment.
Background
A conference is a human social activity for socializing, ceremony, politics, exchanging opinions, disseminating information and communicating, in which two or more people take part.
A wide variety of conferences arise in business activities. Typically, a presentation is played on a projector during a conference for work reporting, enterprise publicity, introduction of technical schemes, product recommendation, management consulting, and education and training. However, this form of conference usually requires participants to record the conference content while watching the presentation, and key points of the speaker are often missed as a result.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The present disclosure is directed to a method for processing conference information and a virtual reality headset, which can solve at least one of the above-mentioned problems. The specific scheme is as follows:
According to a specific implementation manner of the present disclosure, in a first aspect, the present disclosure provides a conference information processing method applied to a virtual reality head-mounted device, where the virtual reality head-mounted device includes an audio acquisition module, a video acquisition module, a wireless communication module and a display module, and the method includes:
acquiring first audio information of a speaker in a current conference site by using the audio acquisition module;
analyzing the first audio information to generate corresponding first text information;
acquiring first display information associated with a current conference; the first display information comprises first display images which are sequentially arranged;
adding the first text information to a first display image of the first display information based on a time sequence to generate virtual reality video information;
and displaying the virtual reality video information on the display module.
According to a second aspect thereof, the present disclosure provides a virtual reality headset, comprising: an audio acquisition module, a display module, and,
a memory for storing at least a conference information processing instruction set;
a processor, configured to invoke and execute the conference information processing instruction set to perform the following operations:
acquiring first audio information of a speaker in a current conference site by using the audio acquisition module;
analyzing the first audio information to generate corresponding first text information;
acquiring first display information associated with a current conference; the first display information comprises first display images which are sequentially arranged;
adding the first text information to a first display image of the first display information based on a time sequence to generate virtual reality video information;
and displaying the virtual reality video information on the display module.
Compared with the prior art, the scheme of the embodiment of the disclosure at least has the following beneficial effects:
the disclosure provides a conference information processing method and virtual reality head-mounted equipment. The method comprises the following steps: acquiring first audio information of a speaker in a current conference site by using the audio acquisition module; analyzing the first audio information to generate corresponding first character information; acquiring first display information associated with a current conference; the first display information comprises first display images which are sequentially arranged; adding the first text information to a first display image of the first display information based on a time sequence to generate virtual reality video information; and displaying the virtual reality video information on the display module. This is disclosed converts the audio information of speaker into literal information to utilize the virtual reality technique to project literal information in virtual reality head-mounted device's display module, made things convenient for the participant to browse the literal information, avoided the record to participate in meeting notes and missed the problem of speaker key information, improved participation efficiency.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale. In the drawings:
fig. 1 shows a schematic diagram of a virtual reality headset according to an embodiment of the disclosure.
FIG. 2 shows a flow diagram of a meeting information processing method according to an embodiment of the present disclosure;
fig. 3 shows a block schematic diagram of a virtual reality headset according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Alternative embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The first embodiment provided by the present disclosure is an embodiment of a conference information processing method.
The embodiments of the present disclosure will be described in detail below with reference to fig. 1 and 2.
The method of the embodiment of the present disclosure is applied to a virtual reality headset. As shown in fig. 1, the virtual reality headset includes an audio acquisition module, a video acquisition module, a wireless communication module and a display module. The method comprises the following steps:
and S101, acquiring first audio information of a speaker in the current conference site by using the audio acquisition module.
The audio acquisition module is generally used for acquiring the live audio information of the current conference site as the first audio information. However, when the conference site environment is complex, the live audio information may include background noise in addition to the first audio information. Optionally, the acquiring, by using the audio acquisition module, of the first audio information of a speaker at the current conference site includes:
and S101-1, acquiring the on-site audio information of the conference site by using the audio acquisition module.
And S101-2, performing noise filtering processing on the live audio information to acquire first audio information of the speaker.
The embodiment of the disclosure adopts the preset filter to carry out noise filtering processing on the field audio information, so that the rising edge and the falling edge of the sudden change of the pulse are gentle, and the attenuation of the frequency outside the frequency band is accelerated. The first audio information obtained after the noise filtering treatment improves the accuracy of converting the audio into the corresponding characters.
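For illustration only, the following Python sketch shows one way such a band-limiting noise filter could be realized; the Butterworth design, pass band and filter order are assumptions of this sketch and are not prescribed by the disclosure.

```python
# Illustrative noise filtering for step S101-2 (assumed Butterworth band-pass;
# cut-off frequencies and order are examples, not values from the disclosure).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_live_audio(samples: np.ndarray, sample_rate: int,
                      low_hz: float = 300.0, high_hz: float = 3400.0,
                      order: int = 4) -> np.ndarray:
    """Attenuate energy outside the speech band to obtain the first audio information."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass",
                 fs=sample_rate, output="sos")
    # Zero-phase filtering smooths abrupt pulse edges without shifting the signal in time.
    return sosfiltfilt(sos, samples)
```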
Step S102, analyzing the first audio information to generate corresponding first text information.
The process of converting audio information into text information is not described in detail in this embodiment; it may be implemented with reference to various implementations in the prior art.
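Since the disclosure leaves the speech-to-text step to existing implementations, the following sketch merely illustrates one possible off-the-shelf path using the open-source openai-whisper package; the model size and language setting are assumptions of the sketch.

```python
# One possible speech-to-text path for step S102, using the openai-whisper
# package as an example recognizer (the disclosure does not prescribe one).
import whisper

def audio_to_text(wav_path: str, language: str = "zh") -> str:
    model = whisper.load_model("base")              # assumed model size
    result = model.transcribe(wav_path, language=language)
    return result["text"]                           # the first text information
```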
Step S103, acquiring first display information associated with the current conference.
The first display information includes first display images arranged in sequence. For example, the first display information may be video information made up of continuously captured frames, or presentation information made up of discrete slide images.
For this step of acquiring the first display information, the embodiments of the present disclosure provide two application scenarios.
Scene one:
Scene one mainly applies to video information.
The method for acquiring the first display information associated with the current conference comprises the following steps:
and S103-11, acquiring live-action video information of a conference site by using the video acquisition module as the first display information.
Scene two:
and the second scene is the optimization of the first scene and is mainly applied to the presentation information.
Presentation, which means to make a static file into a dynamic file for browsing, wherein a slide consists of one image. Typically a user makes a presentation on a projector or computer.
Optionally, the virtual reality headset further includes a unidirectional signal receiving module.
As shown in fig. 1, the unidirectional signal receiving module is used for receiving a presentation playing signal sent by a transmitting device at the conference site. The unidirectional signal receiving module is embedded in a hole at the front end of the virtual reality headset, and the hole limits the direction from which the unidirectional signal receiving module can receive the presentation playing signal. The transmitting device is generally installed near the screen on which the presentation is displayed and transmits the presentation playing signal while the presentation is being shown. A participant wearing the virtual reality headset can therefore receive the presentation playing signal only when the participant's head is turned toward the screen displaying the presentation.
The method for acquiring the first display information associated with the current conference further comprises the following steps:
Step S103-21, when the unidirectional signal receiving module acquires a presentation playing signal, receiving the information of the currently played presentation by using the wireless communication module as the first display information.
That is, when the unidirectional signal receiving module cannot acquire a presentation playing signal, the video acquisition module acquires live-action video information of the conference site as the first display information; when the unidirectional signal receiving module acquires a presentation playing signal, the currently played presentation information is used as the first display information.
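The source-selection logic of the two scenarios can be summarized in a short sketch; the provider callables stand in for the video acquisition module and the wireless communication module and are hypothetical names introduced only for this illustration.

```python
# Illustrative selection of the first display information (steps S103-11 / S103-21).
from typing import Any, Callable, List

def select_first_display_info(presentation_signal_detected: bool,
                              fetch_current_presentation: Callable[[], List[Any]],
                              capture_live_frames: Callable[[], List[Any]]) -> List[Any]:
    if presentation_signal_detected:
        # Unidirectional receiver saw a playing signal: take the slides over Wi-Fi.
        return fetch_current_presentation()
    # Otherwise fall back to live-action video of the conference site.
    return capture_live_frames()
```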
Step S104, adding the first text information to the first display images of the first display information based on the time sequence to generate virtual reality video information.
For live-action video information, the first text information generated within a certain time period is added to the video frame images of the corresponding time period.
For presentation information, the first text information generated within a certain time period is added to the slide of the presentation shown in the corresponding time period.
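As an illustrative sketch of this step, time-aligned captions can be drawn onto the display images with OpenCV; the caption tuple format and drawing parameters are assumptions of the sketch, not details from the disclosure.

```python
# Illustrative overlay of time-aligned text onto display images (step S104).
# Caption format: (start_seconds, end_seconds, text). OpenCV's Hershey fonts
# cover only ASCII; CJK captions would need a TTF renderer such as PIL instead.
import cv2

def add_captions(frames, fps, captions):
    """frames: list of BGR images; returns new frames with captions burned in."""
    out = []
    for i, frame in enumerate(frames):
        t = i / fps
        frame = frame.copy()
        for start_s, end_s, text in captions:
            if start_s <= t < end_s:               # caption active in this time period
                cv2.putText(frame, text, (40, frame.shape[0] - 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2, cv2.LINE_AA)
        out.append(frame)
    return out
```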
Step S105, displaying the virtual reality video information on the display module.
The speaker's audio information is converted into text information, and the text is projected onto the display module of the virtual reality headset by using virtual reality technology, so that participants can conveniently browse the text, the problem of missing the speaker's key information while taking meeting notes is avoided, and meeting efficiency is improved.
However, the first text information displayed on the display module is only the text converted from the audio and does not indicate which speaker expressed which viewpoint. Optionally, before the first audio information of the speaker at the current conference site is acquired by using the audio acquisition module, the method further includes the following steps:
and S100-11, receiving the audio feature information sets of the participants by using the wireless communication module.
The audio feature information set comprises participant feature information and audio feature information corresponding to the participant feature information.
The participant characteristic information is unique information that distinguishes a participant from other persons, for example a name, an employee number or an ID number.
Likewise, the audio feature information is audio information that uniquely distinguishes the participant from other persons.
The audio feature information set is collected and stored in advance.
Further, for step S102, the analyzing the first audio information to generate corresponding first text information includes the following specific steps:
and step S102-1, performing audio characteristic analysis on the first audio information to acquire second audio information and second audio characteristic information of a second speaker behind the first speaker.
Firstly, the audio information of different speakers in the first audio information is separated, the second audio information of the latter speaker (i.e. the second speaker) is obtained, and the second audio feature information is extracted from the second audio information.
Step S102-2, matching the second audio feature information with the audio feature information in the audio feature information set, acquiring the participant characteristic information corresponding to the matched audio feature information, and using it as the second speaker characteristic information.
Step S102-3, analyzing the second audio information to generate corresponding second text information.
Step S102-4, adding the second speaker characteristic information in front of the second text information to generate the first text information.
Adding the second speaker characteristic information in front of the second text information makes it convenient for the person wearing the virtual reality headset to browse the text, and allows the specific speaker of each sentence to be distinguished when a conference summary is generated automatically.
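A minimal sketch of the matching and labelling steps follows, assuming that each item of audio feature information is a fixed-length speaker-embedding vector; how such embeddings are extracted is left to the prior art and is not shown.

```python
# Illustrative matching of the second audio feature against the registered set
# (steps S102-2 and S102-4), assuming fixed-length speaker-embedding vectors.
from typing import Dict
import numpy as np

def match_speaker(second_feature: np.ndarray,
                  feature_set: Dict[str, np.ndarray],
                  threshold: float = 0.75) -> str:
    """Return the participant characteristic info (e.g. a name) of the best match."""
    best_name, best_score = "unknown speaker", threshold
    for name, registered in feature_set.items():
        score = float(np.dot(second_feature, registered) /
                      (np.linalg.norm(second_feature) * np.linalg.norm(registered)))
        if score > best_score:                      # cosine similarity above threshold
            best_name, best_score = name, score
    return best_name

def label_second_text(speaker_name: str, second_text: str) -> str:
    # Step S102-4: add the speaker characteristic information in front of the text.
    return f"[{speaker_name}] {second_text}"
```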
In order to improve the security of the conference, optionally, before the receiving, by using the wireless communication module, of the audio feature information sets of the participants, the method further includes the following steps:
Step S100-01, within a preset report time range, acquiring conference participation report audio information by using the audio acquisition module of the virtual reality headset.
The participation report audio information is the check-in audio that the participant speaks and that is captured by the audio acquisition module.
Step S100-02, sending the participation report audio information by using the wireless communication module.
Step S100-03, receiving report confirmation information sent in response to the participation report audio information, and turning on the display module based on the report confirmation information.
These steps prevent unrelated persons from joining the conference at will, improving the security and reliability of the conference.
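Purely as an illustration of this check-in flow, the sketch below wires the three steps together; the record/send/confirm callables are placeholders for the audio acquisition and wireless communication modules, and the time window is an assumed value.

```python
# Illustrative check-in flow (steps S100-01 to S100-03); record/send/confirm are
# placeholder callables for the audio acquisition and wireless communication modules.
import time
from typing import Callable

def check_in(record_audio: Callable[[], bytes],
             send_over_wifi: Callable[[bytes], None],
             confirmation_received: Callable[[], bool],
             report_window_s: float = 60.0) -> bool:
    """Return True (the display module may be turned on) if confirmation arrives
    within the preset report time range."""
    deadline = time.monotonic() + report_window_s
    clip = record_audio()                 # participant speaks a check-in message
    send_over_wifi(clip)                  # forward the report audio to the conference host
    while time.monotonic() < deadline:
        if confirmation_received():
            return True                   # report confirmation information received
        time.sleep(0.5)
    return False
```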
Optionally, the method further includes the following steps:
and S106, storing the first character information and generating a conference summary.
The conference summary is automatically generated, so that the time for recording notes by the participants is reduced, and more energy can be concentrated in the conference.
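A minimal sketch of persisting the speaker-labelled text as a conference summary; the file name and line format are assumptions made for this illustration.

```python
# Illustrative generation of the conference summary (step S106).
from datetime import datetime
from typing import Iterable, Tuple

def save_conference_summary(entries: Iterable[Tuple[float, str, str]],
                            path: str = "conference_summary.txt") -> None:
    """entries: (seconds_from_start, speaker, text), e.g. the labelled first text information."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"Conference summary generated {datetime.now():%Y-%m-%d %H:%M}\n\n")
        for t, speaker, text in sorted(entries):
            f.write(f"[{int(t) // 60:02d}:{int(t) % 60:02d}] {speaker}: {text}\n")
```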
The speaker's audio information is converted into text information, and the text is projected onto the display module of the virtual reality headset by using virtual reality technology, so that participants can conveniently browse the text, the problem of missing key information while taking meeting notes is avoided, and meeting efficiency is improved.
Corresponding to the first embodiment provided by the present disclosure, the present disclosure also provides a second embodiment, namely a virtual reality headset. Since the second embodiment is basically similar to the first embodiment, the description here is brief; for relevant details, refer to the corresponding description of the first embodiment. The virtual reality headset embodiment described below is merely illustrative.
Fig. 3 illustrates an embodiment of a virtual reality headset provided by the present disclosure.
As shown in fig. 3, the present disclosure provides a virtual reality headset, comprising: an audio acquisition module 303, a display module 305, and,
a memory 302 for storing at least a conference information processing instruction set;
a processor 301, configured to invoke and execute the conference information processing instruction set, so as to complete the following operations:
acquiring first audio information of a speaker in a current conference site by using the audio acquisition module 303;
analyzing the first audio information to generate corresponding first text information;
acquiring first display information associated with a current conference; the first display information comprises first display images which are sequentially arranged;
adding the first text information to a first display image of the first display information based on a time sequence to generate virtual reality video information;
displaying the virtual reality video information on the display module 305.
Optionally, the virtual reality headset further includes a video capture module 304;
the processor 301 obtains first display information associated with a current conference, including:
and acquiring the live-action video information of the conference site as the first display information by using the video acquisition module 304.
Optionally, the virtual reality headset further includes a unidirectional signal receiving module 306 and a wireless communication module 307;
the acquiring first display information associated with a current conference further comprises:
when the unidirectional signal receiving module 306 obtains a presentation file playing signal, the wireless communication module 307 is used to receive the currently played presentation file information as the first display information.
Optionally, before the processor 301 obtains the first audio information of the speaker at the current conference site by using the audio acquisition module 303, the method further includes:
receiving the audio feature information set of the participant by using the wireless communication module 307; the audio characteristic information set comprises participant characteristic information and audio characteristic information corresponding to the participant characteristic information;
the analyzing the first audio information to generate corresponding first text information includes:
performing audio characteristic analysis on the first audio information to acquire second audio information and second audio characteristic information of a second speaker who speaks after the first speaker;
matching the second audio characteristic information with the audio characteristic information in the audio characteristic information set to obtain participant characteristic information corresponding to the matched audio characteristic information, and using the participant characteristic information as second speaker characteristic information;
analyzing the second audio information to generate corresponding second text information;
and adding the second speaker characteristic information in front of the second text information to generate first text information.
Optionally, before the processor 301 receives the audio feature information sets of the participants by using the wireless communication module 307, the method further includes:
acquiring report audio information by using an audio acquisition module 303 of the virtual reality headset within a preset report time range;
sending the report audio information by using the wireless communication module 307;
receiving report confirmation information in response to the report audio information, and turning on the display module 305 based on the report confirmation information.
Optionally, the obtaining, by the processor 301, of the first audio information of the speaker at the current conference site by using the audio acquisition module 303 includes:
acquiring field audio information of a conference field by using the audio acquisition module 303;
and carrying out noise filtering processing on the field audio information to obtain first audio information.
Optionally, the processor 301 further performs the following operations:
and storing the first text information and generating a conference summary.
The embodiment of the disclosure converts the speaker's audio information into text information and projects the text onto the display module 305 of the virtual reality headset by using virtual reality technology, so that participants can conveniently browse the text, the problem of missing key information while taking meeting notes is avoided, and meeting efficiency is improved.
The foregoing description is merely an explanation of the preferred embodiments of the disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (6)

1. A conference information processing method applied to a virtual reality head-mounted device, the virtual reality head-mounted device comprising: an audio acquisition module, a video acquisition module, a unidirectional signal receiving module, a wireless communication module and a display module, characterized in that the method comprises:
acquiring first audio information of a speaker in a current conference site by using the audio acquisition module;
analyzing the first audio information to generate corresponding first text information;
when the unidirectional signal receiving module cannot acquire a presentation playing signal, acquiring live-action video information of the conference site by using the video acquisition module as first display information, wherein the first display information comprises first display images which are sequentially arranged;
when the unidirectional signal receiving module acquires a presentation playing signal, receiving the information of the currently played presentation by using the wireless communication module as the first display information;
adding the first text information to a first display image of the first display information based on a time sequence to generate virtual reality video information;
and displaying the virtual reality video information on the display module.
2. The method of claim 1, further comprising, before the obtaining, by the audio acquisition module, of the first audio information of the speaker at the current conference site, the steps of:
receiving the audio characteristic information sets of the participants by using the wireless communication module; the audio characteristic information set comprises participant characteristic information and audio characteristic information corresponding to the participant characteristic information;
the analyzing the first audio information to generate corresponding first text information includes:
performing audio characteristic analysis on the first audio information to acquire second audio information and second audio characteristic information of a second speaker who speaks after the first speaker;
matching the second audio characteristic information with the audio characteristic information in the audio characteristic information set to obtain participant characteristic information corresponding to the matched audio characteristic information, and using the participant characteristic information as second speaker characteristic information;
analyzing the second audio information to generate corresponding second text information;
and adding the second speaker characteristic information in front of the second text information to generate first text information.
3. The method of claim 2, wherein prior to said receiving the set of audio feature information of the participant with the wireless communication module, further comprising:
in a preset report time range, acquiring report audio information by using an audio acquisition module of virtual reality head-mounted equipment;
sending the report audio information by using the wireless communication module;
receiving report confirmation information in response to the report audio information, and turning on the display module based on the report confirmation information.
4. The method of claim 1, wherein the obtaining, by the audio acquisition module, of the first audio information of the speaker at the current conference site comprises:
acquiring field audio information of a conference field by using the audio acquisition module;
and carrying out noise filtering processing on the field audio information to obtain first audio information.
5. The method of claim 1, further comprising:
and storing the first text information to generate a conference summary.
6. A virtual reality headset, comprising: an audio acquisition module, a display module, a video acquisition module, a one-way signal receiving module, a wireless communication module, and,
a memory for storing at least a conference information processing instruction set;
a processor for invoking and executing the conference information processing instruction set to accomplish the following operations:
acquiring first audio information of a speaker in a current conference site by using the audio acquisition module;
analyzing the first audio information to generate corresponding first text information;
when the unidirectional signal receiving module cannot acquire a presentation playing signal, acquiring live-action video information of the conference site by using the video acquisition module as first display information, wherein the first display information comprises first display images which are sequentially arranged;
when the unidirectional signal receiving module acquires a presentation playing signal, receiving the information of the currently played presentation by using the wireless communication module as the first display information;
adding the first text information to a first display image of the first display information based on a time sequence to generate virtual reality video information;
and displaying the virtual reality video information on the display module.
CN202010587825.3A 2020-06-24 2020-06-24 Conference information processing method and virtual reality head-mounted equipment Active CN111770300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010587825.3A CN111770300B (en) 2020-06-24 2020-06-24 Conference information processing method and virtual reality head-mounted equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010587825.3A CN111770300B (en) 2020-06-24 2020-06-24 Conference information processing method and virtual reality head-mounted equipment

Publications (2)

Publication Number Publication Date
CN111770300A CN111770300A (en) 2020-10-13
CN111770300B true CN111770300B (en) 2022-07-05

Family

ID=72722286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010587825.3A Active CN111770300B (en) 2020-06-24 2020-06-24 Conference information processing method and virtual reality head-mounted equipment

Country Status (1)

Country Link
CN (1) CN111770300B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114153316B (en) * 2021-12-15 2024-03-29 天翼电信终端有限公司 AR-based conference summary generation method, device, server and storage medium
CN118658487A (en) * 2024-08-16 2024-09-17 青岛歌尔视界科技有限公司 Intelligent glasses control method, intelligent glasses, storage medium and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580986A (en) * 2015-02-15 2015-04-29 王生安 Video communication system combining virtual reality glasses
CN104732969A (en) * 2013-12-23 2015-06-24 鸿富锦精密工业(深圳)有限公司 Voice processing system and method
CN108986826A (en) * 2018-08-14 2018-12-11 中国平安人寿保险股份有限公司 Automatically generate method, electronic device and the readable storage medium storing program for executing of minutes
CN109660368A (en) * 2018-12-24 2019-04-19 广州维速信息科技有限公司 A kind of cloud conference system and method
CN110677614A (en) * 2019-10-15 2020-01-10 广州国音智能科技有限公司 Information processing method, device and computer readable storage medium
CN110825224A (en) * 2019-10-25 2020-02-21 北京威尔文教科技有限责任公司 Interaction method, interaction system and display device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7996792B2 (en) * 2006-09-06 2011-08-09 Apple Inc. Voicemail manager for portable multifunction device
JP6582626B2 (en) * 2015-07-02 2019-10-02 富士通株式会社 Transmission control method, display terminal, and transmission control program
US20170103577A1 (en) * 2015-10-12 2017-04-13 Cinova Media Method and apparatus for optimizing video streaming for virtual reality
US20170124762A1 (en) * 2015-10-28 2017-05-04 Xerox Corporation Virtual reality method and system for text manipulation
CN105867626A (en) * 2016-04-12 2016-08-17 京东方科技集团股份有限公司 Head-mounted virtual reality equipment, control method thereof and virtual reality system
CN106210703B (en) * 2016-09-08 2018-06-08 北京美吉克科技发展有限公司 The utilization of VR environment bust shot camera lenses and display methods and system
CN106485787A (en) * 2016-10-21 2017-03-08 安徽协创物联网技术有限公司 A kind of many people on-line virtual reality all-in-one based on the Internet
CN107071334A (en) * 2016-12-24 2017-08-18 深圳市虚拟现实技术有限公司 3D video-meeting methods and equipment based on virtual reality technology
CN109426342B (en) * 2017-08-29 2022-04-01 深圳市掌网科技股份有限公司 Document reading method and device based on augmented reality

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732969A (en) * 2013-12-23 2015-06-24 鸿富锦精密工业(深圳)有限公司 Voice processing system and method
CN104580986A (en) * 2015-02-15 2015-04-29 王生安 Video communication system combining virtual reality glasses
CN108986826A (en) * 2018-08-14 2018-12-11 中国平安人寿保险股份有限公司 Automatically generate method, electronic device and the readable storage medium storing program for executing of minutes
CN109660368A (en) * 2018-12-24 2019-04-19 广州维速信息科技有限公司 A kind of cloud conference system and method
CN110677614A (en) * 2019-10-15 2020-01-10 广州国音智能科技有限公司 Information processing method, device and computer readable storage medium
CN110825224A (en) * 2019-10-25 2020-02-21 北京威尔文教科技有限责任公司 Interaction method, interaction system and display device

Also Published As

Publication number Publication date
CN111770300A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
TWI482108B (en) To bring virtual social networks into real-life social systems and methods
WO2013039062A1 (en) Facial analysis device, facial analysis method, and memory medium
US20050209848A1 (en) Conference support system, record generation method and a computer program product
Temkar et al. Internet of things for smart classrooms
US20140136626A1 (en) Interactive Presentations
US10468051B2 (en) Meeting assistant
CN111770300B (en) Conference information processing method and virtual reality head-mounted equipment
JP5751143B2 (en) Minutes creation support device, minutes creation support system, and minutes creation program
JP2020126645A (en) Information processing method, terminal device and information processing device
JP2006085440A (en) Information processing system, information processing method and computer program
CN104639777A (en) Conference control method, conference control device and conference system
US20080255840A1 (en) Video Nametags
JP2001510671A (en) Communication method and terminal
US20130215214A1 (en) System and method for managing avatarsaddressing a remote participant in a video conference
CN109068088A (en) Meeting exchange method, apparatus and system based on user's portable terminal
CN109151642A (en) A kind of intelligent earphone, intelligent earphone processing method, electronic equipment and storage medium
Son et al. Evaluation of work-as-done in information management of multidisciplinary incident management teams via Interaction Episode Analysis
CN111479124A (en) Real-time playing method and device
Sung et al. Mobile‐IT Education (MIT. EDU): m‐learning applications for classroom settings
CN106412485A (en) Remote office method, device and system
CN104135638A (en) Optimized video snapshot
Losada et al. GroupAnalyzer: A system for dynamic analysis of group interaction
JP2004199547A (en) Reciprocal action analysis system, and reciprocal action analysis program
US20050131697A1 (en) Speech improving apparatus, system and method
JP2012156965A (en) Digest generation method, digest generation device, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 1202, 12 / F, building 1, yard 54, Shijingshan Road, Shijingshan District, Beijing

Applicant after: Oook (Beijing) Education Technology Co.,Ltd.

Address before: 100041 1202, 12 / F, building 1, yard 54, Shijingshan Road, Shijingshan District, Beijing

Applicant before: Beijing Anbo chuangying Education Technology Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant