CN111245629B - Conference control method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111245629B
CN111245629B (application CN202010030152.1A)
Authority
CN
China
Prior art keywords
conference
control instruction
control
equipment
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010030152.1A
Other languages
Chinese (zh)
Other versions
CN111245629A (en)
Inventor
卞同同
陈孝良
李智勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd filed Critical Beijing SoundAI Technology Co Ltd
Priority to CN202010030152.1A priority Critical patent/CN111245629B/en
Publication of CN111245629A publication Critical patent/CN111245629A/en
Application granted granted Critical
Publication of CN111245629B publication Critical patent/CN111245629B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L12/1827 Network arrangements for conference optimisation or adaptation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems
    • H04N7/152 Multipoint control units therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Telephonic Communication Services (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a conference control method, apparatus, device, and storage medium, belonging to the technical field of conference control. The method comprises the following steps: when a first control instruction sent by a first device is received, the conference information bound to the first device is queried, at least one conference device associated with that conference information is found, and a second control instruction is sent to the at least one conference device to instruct it to perform a corresponding operation. The first control instruction may, for example, be initiated by a user's voice, so that automatic voice control of the conference devices can be implemented. With this conference control method and apparatus, the user does not need to operate the conference devices manually, the intelligence of conference control is improved, and conference control efficiency is high.

Description

Conference control method, device, equipment and storage medium
Technical Field
The present application relates to the field of conference control technologies, and in particular, to a conference control method, apparatus, device, and storage medium.
Background
Multimedia conferences, such as voice or video conferences, are a new conference mode in which participants in different locations exchange information in real time. They offer advantages such as timely communication and high efficiency, and are increasingly popular in enterprises.
In the related art, a host typically reserves a conference through a conference application on a terminal, then enters the conference room on time and manually starts the conference machine. Invitations are sent to the participants of all parties on the conference machine, and the participants join the conference after performing manual operations on their own conference machines.
In this technique, manually operating the conference machine wastes time when starting a conference, control of the conference machine is not intelligent enough, and conference control efficiency is low.
Disclosure of Invention
The embodiments of the present application provide a conference control method, apparatus, device, and storage medium that can improve conference control efficiency. The technical solution is as follows:
in a first aspect, a conference control method is provided, including:
acquiring a first control instruction, wherein the first control instruction is used for controlling target conference equipment to execute corresponding operation;
determining target conference information bound to the first device corresponding to the first control instruction, wherein the first device is the device that sends the first control instruction;
and taking at least one conference device corresponding to the target conference information as the target conference device, and sending a second control instruction to the at least one conference device, wherein the second control instruction is used for instructing the at least one conference device to execute the operation.
In one possible implementation, the obtaining the first control instruction includes:
receiving a voice control instruction from the first device, the voice control instruction being triggered by voice collected by the first device;
and performing speech recognition on the voice control instruction to obtain a first text control instruction, and taking the first text control instruction as the first control instruction.
In one possible implementation, the obtaining the first control instruction includes:
and receiving a second text control instruction from the first device, wherein the second text control instruction is used as the first control instruction and is triggered by the first device at a scheduled time.
In one possible implementation, before sending the second control instruction to the at least one conference device, the method further includes:
acquiring conference control skill information corresponding to the first control instruction, wherein the conference control skill information comprises at least one conference control intention and a corresponding slot position;
when the first control instruction comprises a slot value of at least one slot corresponding to any intention in the conference control skill information, taking a slot value combination result of the at least one slot as a conference control intention corresponding to the first control instruction;
and generating the second control instruction according to the conference control intention corresponding to the first control instruction.
In a possible implementation manner, the obtaining conference control skill information corresponding to the first control instruction includes:
and acquiring, according to the device group identifier bound to the first device, the conference control skill information bound to that device group identifier as the conference control skill information corresponding to the first control instruction, wherein the device group identifier is used to identify a group formed by at least one device.
In a possible implementation manner, the determining target conference information bound to the first device corresponding to the first control instruction includes:
determining at least one piece of conference information corresponding to the first device from the stored conference information, wherein the at least one piece of conference information comprises a conference room identifier of a conference room where the first device is located;
and according to the storage time of the at least one piece of conference information, taking the conference information with the storage time closest to the current time as the target conference information.
In a possible implementation manner, the first device is a smart speaker or a conference device in any conference room.
In a second aspect, there is provided a conference control apparatus comprising:
the acquisition module is used for acquiring a first control instruction, and the first control instruction is used for controlling target conference equipment to execute corresponding operation;
a determining module, configured to determine target conference information bound to the first device corresponding to the first control instruction, where the first device is the device that sends the first control instruction;
and the sending module is used for sending a second control instruction to the at least one conference device by taking the at least one conference device corresponding to the target conference information as the target conference device, wherein the second control instruction is used for instructing the at least one conference device to execute the operation.
In one possible implementation, the obtaining module is configured to:
receiving a voice control instruction from the first device, the voice control instruction being triggered by voice collected by the first device;
and performing speech recognition on the voice control instruction to obtain a first text control instruction, and taking the first text control instruction as the first control instruction.
In one possible implementation, the obtaining module is configured to:
and receiving a second text control instruction from the first device, wherein the second text control instruction is used as the first control instruction and is triggered by the first device at a scheduled time.
In one possible implementation, the apparatus further includes:
the acquisition module is further configured to acquire conference control skill information corresponding to the first control instruction, where the conference control skill information includes at least one conference control intention and a corresponding slot position;
the obtaining module is further configured to, when the first control instruction includes a slot value of at least one slot corresponding to any intention in the conference control skill information, use a slot value combination result of the at least one slot as a conference control intention corresponding to the first control instruction;
and the generating module is used for generating the second control instruction according to the conference control intention corresponding to the first control instruction.
In one possible implementation, the obtaining module is configured to:
and acquiring, according to the device group identifier bound to the first device, the conference control skill information bound to that device group identifier as the conference control skill information corresponding to the first control instruction, wherein the device group identifier is used to identify a group formed by at least one device.
In one possible implementation, the determining module is configured to:
determining at least one piece of conference information corresponding to the first device from the stored conference information, wherein the at least one piece of conference information comprises a conference room identifier of a conference room where the first device is located;
and according to the storage time of the at least one piece of conference information, taking the conference information with the storage time closest to the current time as the target conference information.
In a possible implementation manner, the first device is a smart speaker or a conference device in any conference room.
In a third aspect, an electronic device is provided, which includes one or more processors and one or more memories, and at least one program code is stored in the one or more memories, and the at least one program code is loaded and executed by the one or more processors to implement the method steps of any one of the implementations of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which at least one program code is stored, which is loaded and executed by a processor to implement the method steps of any of the implementations of the first aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
by receiving the first control instruction, the conference information bound to the device that sent the first control instruction is automatically obtained, and a second control instruction is sent to the at least one conference device corresponding to that conference information, so that the at least one conference device performs the corresponding operation. The user does not need to operate the conference devices manually, the intelligence of conference control is improved, and conference control efficiency is high.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic architecture diagram of a conference control system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a deployment service provided by an embodiment of the present application;
fig. 3 is a flowchart of a conference control method according to an embodiment of the present application;
fig. 4 is a flowchart of a conference control method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a conference control apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal 600 according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a server 700 according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In order to facilitate understanding of the technical solutions of the embodiments of the present application, some terms referred to in the embodiments of the present application are explained below:
ASR (Automatic Speech Recognition): ASR converts human speech signals into text information that a computer can process, enabling the computer to understand human language.
NLP (Natural Language Processing): NLP performs semantic analysis on text information to obtain the intention the text expresses.
IOT (Internet of Things): a network that, based on information carriers such as the internet and traditional telecommunication networks, interconnects ordinary objects capable of performing independent functions.
MQTT (Message Queuing Telemetry Transport): a lightweight publish/subscribe messaging protocol that supports virtually all platforms and can connect almost any networked device.
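To make the MQTT term concrete, the following minimal sketch builds the kind of topic/payload pair a server might publish as a control instruction. The topic layout (`conference/<room>/control`) and the payload field names are illustrative assumptions, not taken from the patent.

```python
import json


def build_control_message(room_id: str, operation: str):
    """Build an MQTT-style (topic, payload) pair for a control instruction.

    The topic scheme "conference/<room>/control" and the payload field
    names are hypothetical, chosen only to illustrate the idea.
    """
    topic = f"conference/{room_id}/control"
    payload = json.dumps({"op": operation})
    return topic, payload
```

A real deployment would hand such a pair to an MQTT client library's publish call with an appropriate quality-of-service level.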
Fig. 1 is a schematic structural diagram of a conference control system according to an embodiment of the present application, and as shown in fig. 1, the conference control system includes a conference device 101 and a server 102.
The conference device 101 may be a conference machine, or may be a terminal such as a mobile phone or a PC (Personal Computer), and the terminal may have a conference application installed thereon and perform a conference based on the conference application. The number of the conference devices 101 may be one or more. The server 102 may be a single server or a server cluster composed of a plurality of servers, and a plurality of services may be deployed on the server 102.
In an example, as shown in fig. 1, the conference control system may further include a smart speaker 103, where the smart speaker 103 is capable of collecting a voice of a user and sending a control instruction to the server 102. The number of the smart speakers 103 may be one or more.
Referring to fig. 2, fig. 2 is a schematic diagram of deployed services provided in an embodiment of the present application. As shown in fig. 2, the server 102 may be deployed with a master control cloud service, an ASR (Automatic Speech Recognition) service, an NLP (Natural Language Processing) service, a skill cloud service, an authority management cloud service, an IOT service, and the like. These services may be deployed on one server or on different servers, which is not limited in this embodiment of the present application. The master control cloud service interfaces with the smart speaker 103 and receives the control instructions sent by the smart speaker 103; it can call the ASR service to convert the collected speech into text and call the NLP service to obtain the intention, and it can instruct the skill cloud service to perform the corresponding business logic processing and authentication, where authentication can be implemented through the authority management cloud service. The skill cloud service can send MQTT control instructions to each conference device through the IOT service, so that each conference device performs the corresponding operation. The server 102 may also include a database for storing conference information.
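The service flow just described (the master control service receives the instruction, ASR converts speech to text, NLP yields an intention, and the skill service fans instructions out to the conference devices through the IOT service) can be sketched with stub functions. The stub behaviours below are placeholders standing in for the actual cloud services, not their real implementations.

```python
def asr(voice: bytes) -> str:
    # Stub ASR service: a real one would transcribe the audio signal.
    return "open the conference machine"


def nlp(text: str) -> str:
    # Stub NLP service: map recognized text to a conference-control intention.
    return "open_conference" if "open" in text else "unknown"


def handle_voice_instruction(voice: bytes, room_devices):
    """Master-control flow: ASR -> NLP -> one instruction per conference device."""
    intention = nlp(asr(voice))
    return [(device, intention) for device in room_devices]
```

In the deployed system, the final fan-out step would publish an MQTT control instruction per device rather than returning a list.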
Fig. 3 is a flowchart of a conference control method according to an embodiment of the present application. Referring to fig. 3, the method includes:
301. and acquiring a first control instruction, wherein the first control instruction is used for controlling the target conference equipment to execute corresponding operation.
302. And determining target conference information bound to first equipment corresponding to the first control instruction, wherein the first equipment is equipment for sending the first control instruction.
303. And taking at least one conference device corresponding to the target conference information as the target conference device, and sending a second control instruction to the at least one conference device, wherein the second control instruction is used for instructing the at least one conference device to execute the operation.
According to the method provided by the embodiment of the present application, by receiving the first control instruction, the conference information bound to the device that sent the first control instruction is automatically obtained, and a second control instruction is sent to the at least one conference device corresponding to that conference information, so that the at least one conference device performs the corresponding operation. The user does not need to operate the conference devices manually, the intelligence of conference control is improved, and conference control efficiency is high.
In one possible implementation, the obtaining the first control instruction includes:
receiving a voice control instruction from the first device, wherein the voice control instruction is triggered by voice collected by the first device;
and performing speech recognition on the voice control instruction to obtain a first text control instruction, and taking the first text control instruction as the first control instruction.
In one possible implementation, the obtaining the first control instruction includes:
and receiving a second text control instruction from the first device, wherein the second text control instruction is used as the first control instruction and is triggered by the first device at a scheduled time.
In one possible implementation, before sending the second control instruction to the at least one conference device, the method further includes:
acquiring conference control skill information corresponding to the first control instruction, wherein the conference control skill information comprises at least one conference control intention and a corresponding slot position;
when the first control instruction comprises the slot value of at least one slot position corresponding to any intention in the conference control skill information, taking the slot value combination result of the at least one slot position as the conference control intention corresponding to the first control instruction;
and generating the second control instruction according to the conference control intention corresponding to the first control instruction.
In a possible implementation manner, the obtaining of the conference control skill information corresponding to the first control instruction includes:
and acquiring conference control skill information bound with the equipment group identifier as conference control skill information corresponding to the first control instruction according to the equipment group identifier bound with the first equipment, wherein the equipment group identifier is used for identifying a group formed by at least one piece of equipment.
In a possible implementation manner, the determining target conference information bound to the first device corresponding to the first control instruction includes:
determining at least one piece of conference information corresponding to the first device from the stored conference information, wherein the at least one piece of conference information comprises a conference room identifier of a conference room where the first device is located;
and according to the storage time of the at least one piece of conference information, taking the conference information with the storage time closest to the current time as the target conference information.
In one possible implementation, the first device is any one of a smart speaker or a conference device in any conference room.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
Fig. 4 is a flowchart of a conference control method according to an embodiment of the present application. In this example, the method is performed through interaction among a first device, a server, and a conference device, where the server may be the server 102 in the conference control system shown in fig. 1, the first device may be the smart speaker 103 in that system, and the conference device may be the conference device 101 in that system. Referring to fig. 4, the method includes:
401. the server acquires and stores conference control skill information, wherein the conference control skill information comprises at least one conference control intention and corresponding slots.
A conference control intention is an intention to perform conference control, such as opening a conference, pausing a conference, or exiting a conference. Each intention may include a plurality of slots: each piece of information needed to express the intention precisely can serve as a slot, and each slot can contain a slot value. For example, the intention of opening a conference may include a time slot expressing when to open the conference, a place slot expressing where to open it, a conference device slot expressing which conference device to open, and the like. In addition to conference control intentions and slots, the conference control skill information can include corpora: each intention corresponds to a plurality of corpora, and each way of phrasing the intention can serve as one corpus. For example, the corpora corresponding to the intention of opening a conference may include "please help to open the conference machine", "open the conference machine now immediately", and so on.
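As a sketch of the intention/slot structure just described, the snippet below defines one hypothetical intention with two slots and fills slots by simple substring matching. The skill definition, slot vocabularies, and matching rule are all illustrative assumptions; a production NLP engine would do far more than substring lookup.

```python
# Hypothetical conference-control skill: one intention, two slots, corpora.
SKILL = {
    "open_conference": {
        "slots": {
            "time": ["now", "at 3 pm"],
            "device": ["conference machine", "all devices"],
        },
        "corpora": [
            "please help to open the conference machine",
            "open the conference machine now immediately",
        ],
    },
}


def match_intent(command: str):
    """Return (intention, filled slots) if the command contains a slot value
    for at least one slot of some intention; otherwise (None, {})."""
    for intention, spec in SKILL.items():
        filled = {}
        for slot, values in spec["slots"].items():
            for value in values:
                if value in command:
                    filled[slot] = value
                    break
        if filled:
            return intention, filled
    return None, {}
```

The combined slot values then stand for the conference control intention from which the second control instruction is generated.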
The conference control skill information may be created on the server by a developer or B-end user so that the server may obtain the conference control skill information. After the server acquires the conference control skill information, the conference control skill information can be stored in an NLP engine, and the NLP engine is used for providing NLP services.
402. The server acquires a device group identifier, and establishes a binding relationship between the device group identifier and the conference control skill information, wherein the device group identifier is used for identifying a group formed by at least one device.
The device group identifier may be created on the server by a developer or a B-end user, so that the server can obtain it. For example, the developer may be the developer of a smart speaker used for conference control, and the B-end user may be a group user of the smart speakers, such as a company or an organization. Accordingly, the device group identifier may identify a group formed by at least one smart speaker used for conference control.
The server may bind the device group identifier obtained in step 402 with the conference control skill information obtained in step 401. For example, the server may create a robot in the NLP engine, bind the device group identifier to the robot, and bind the conference control skill information to the robot, such that a binding relationship between the device group identifier and the conference control skill information is established by the robot. Of course, the server may also directly establish the binding relationship between the device group identifier and the conference control skill information in the NLP engine, which is not limited in this embodiment of the present application.
403. The server establishes a binding relationship between the device identifier of at least one smart speaker and the device group identifier, and a binding relationship between the device identifier of each smart speaker and the conference room identifier of the conference room in which that speaker is located.
The at least one smart speaker may be all of the smart speakers used for conference control.
The device identifier of the at least one smart speaker, and the conference room identifier of the room each speaker is located in, may be provided by the developer or the B-end user. The server may bind the device identifier of the at least one smart speaker to the device group identifier from step 402, so that the device group identifier identifies the group consisting of the at least one smart speaker whose device identifier is bound to it. The server can also bind the conference room identifier of each conference room with the device identifier of the smart speaker in that room. In one possible embodiment, the server may further establish a binding relationship between the device identifier of at least one conference device and the conference room identifier of the respective conference room, where the device identifier of the at least one conference device may be provided by the developer or the B-end user.
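The bindings of steps 402 and 403 can be pictured as a small in-memory registry: group to skill, speaker to group, and speaker to conference room. The dictionary layout and identifier strings below are hypothetical, used only to show how a control instruction from a given speaker can be resolved to its skill and room.

```python
# Hypothetical in-memory bindings mirroring steps 402-403.
bindings = {
    "group_skill": {},   # device group id  -> conference control skill
    "device_group": {},  # speaker device id -> device group id
    "device_room": {},   # speaker device id -> conference room id
}


def bind_speaker(device_id, group_id, room_id, skill="conference_control"):
    """Register one smart speaker: group<->skill, speaker<->group, speaker<->room."""
    bindings["group_skill"][group_id] = skill
    bindings["device_group"][device_id] = group_id
    bindings["device_room"][device_id] = room_id


def skill_for_device(device_id):
    """Resolve the conference-control skill via the device's group binding."""
    return bindings["group_skill"][bindings["device_group"][device_id]]
```

This is the lookup the server performs when it receives a first control instruction: the sending device's group identifier leads to the bound conference control skill information.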
404. The server receives a conference reservation request and stores the conference information carried in the request.
A user hosting a conference may reserve it on a designated conference reservation platform, whose service may be provided by the server. When reserving a conference, the user can fill in conference information such as the conference room identifier, the conference time, and the participants on the platform, and then submit the conference information to the server by sending a conference reservation request. After receiving the request, the server can extract the conference information from it and store it, for example in a database. In one possible embodiment, the conference information may further include the device identifiers of the conference devices in the conference room. Because the smart speaker in each conference room is bound to that room, and the conference information includes the room, the speaker in the conference room is thereby also bound to the conference information.
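The storage step, together with the later retrieval rule ("the conference information whose storage time is closest to the current time", i.e. the most recently stored record for the room), can be sketched as follows. The record fields are hypothetical, and a monotonically increasing counter stands in for the storage timestamp to keep the sketch deterministic.

```python
from itertools import count

_stamp = count()        # monotonic stand-in for the storage time
conference_store = []   # records of (storage_time, conference_info)


def reserve(room_id, start, participants):
    """Step 404: extract conference info from a reservation request and store it."""
    info = {"room": room_id, "start": start, "participants": participants}
    conference_store.append((next(_stamp), info))
    return info


def target_conference(room_id):
    """Among stored records for this room, take the one whose storage time is
    closest to the current time, i.e. the latest stored reservation."""
    candidates = [(t, i) for t, i in conference_store if i["room"] == room_id]
    return max(candidates, key=lambda c: c[0])[1] if candidates else None
```

This mirrors how the server later determines the target conference information bound to the first device's conference room.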
It should be noted that steps 401 to 404 are optional preparation steps: they need to be executed before the conference devices are controlled, but not every time the conference devices are controlled; it only needs to be ensured that they have already been executed by the time the conference devices need to be controlled.
405. The first device sends a voice control instruction to the server, where the voice control instruction is used to control the target conference device to perform a corresponding operation, the first device is the smart speaker in any conference room, and the voice control instruction is triggered by voice collected by the first device.
The first device may be the smart speaker in the conference room where the user hosting the conference is located.
After arriving at the conference room, the user hosting the conference may wake up the smart speaker in the room by voice, and the smart speaker may send the collected voice to the server as the voice control instruction. For example, the smart speaker may send the voice control instruction to the master control cloud service of the server.
406. The server receives the voice control instruction from the first device, performs voice recognition on the voice control instruction to obtain a first text control instruction, and uses the first text control instruction as the first control instruction.
After the server receives the voice control instruction, it can perform voice recognition on the voice control instruction using ASR (Automatic Speech Recognition) technology and use the recognized text as the first text control instruction. For example, after receiving the voice control instruction, the master control cloud service of the server may call the ASR service to convert the voice control instruction into a text control instruction.
Step 406 is one possible implementation of the server obtaining the first control instruction, where the first control instruction is used to control the target conference device to perform a corresponding operation. In this mode, the user hosting the conference wakes up the speaker by voice. In another possible implementation, the server obtains the first control instruction by receiving a second text control instruction from the first device, where the second text control instruction, used as the first control instruction, is triggered by the first device on a timed basis. In this mode, the speaker wakes up automatically: the first device may wake up when the meeting time arrives and send the second text control instruction to the server. For example, a timer service may be written into the first device to trigger it to send the second text control instruction to the server at the scheduled time, so that the server takes the received second text control instruction as the first control instruction. The timed trigger may fire when the meeting start time is reached, or at a target time before the meeting start time, where the interval from the target time to the meeting start time is a target duration, for example 15 minutes, which may be set by the user. The timed trigger may also fire when an event of someone, or specifically a participant, entering the conference room is detected; for example, a camera may be disposed in the conference room, and whether the event occurs may be determined by recognizing the pictures taken by the camera. The embodiment of the present application does not specifically limit the timed triggering manner.
407. According to the device group identifier bound to the first device, the server acquires the conference control skill information bound to that device group identifier as the conference control skill information corresponding to the first control instruction.
After obtaining the first control instruction, the server may query the device group identifier bound to the first device, for example by looking up the binding relationship established in step 403 according to the device identifier of the first device, and then query the binding relationship established in step 402 according to that device group identifier to obtain the conference control skill information bound to it. For example, this process may be implemented by the master control cloud service: after acquiring the first control instruction, the master control cloud service may query the device group identifier bound to the first device and call the NLP engine with that identifier, and the NLP engine returns the conference control skill information bound to the device group identifier.
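The two-step lookup in step 407 reduces to two dictionary accesses over the pre-established bindings. The dictionary names and the intent strings below are illustrative assumptions, not part of the patent.

```python
device_group = {"speaker-1": "group-A"}   # step 403 binding: device id -> group id
group_skills = {                          # step 402 binding: group id -> skill info
    "group-A": {"intents": ["start meeting", "pause meeting", "exit meeting"]},
}

def skill_info_for_device(device_id):
    group_id = device_group[device_id]    # first lookup (step 403 binding)
    return group_skills[group_id]         # second lookup (step 402 binding)

info = skill_info_for_device("speaker-1")
```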
Step 407 is one possible implementation of obtaining the conference control skill information corresponding to the first control instruction. Because the first device is bound to the device group identifier in advance, and the device group identifier is bound to the conference control skill information, the server can find the corresponding conference control skill information through these pre-established binding relationships after receiving the first control instruction.
408. When the first control instruction includes the slot value of at least one slot corresponding to any conference control intention in the conference control skill information, the server takes the combination of the slot values of the at least one slot as the conference control intention corresponding to the first control instruction.
After the server acquires the conference control skill information corresponding to the first control instruction, it may determine whether the first control instruction includes a slot value for each slot corresponding to any conference control intention, that is, whether the first control instruction hits the slot value of each slot. If so, the combination of the slot values is used as the conference control intention corresponding to the first control instruction. If not, the server may conduct multiple rounds of interaction with the user, prompting the user to reissue the control instruction until a control instruction hits the slot value of each slot corresponding to some conference control intention.
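The slot-matching logic of step 408 can be sketched as below: an intention matches only when the text instruction contains a value for every one of its slots, and the matched slot values together form the conference control intention. The slot layout and candidate values are assumptions for illustration.

```python
def match_intent(instruction, intents):
    """intents: list of (intent_name, {slot_name: [candidate slot values]}).
    Return (intent_name, matched slot values) when every slot is hit,
    else None, in which case the server would prompt the user again."""
    for name, slots in intents:
        hits = {}
        for slot, candidates in slots.items():
            hit = next((v for v in candidates if v in instruction), None)
            if hit is None:
                break          # this intention misses a slot; try the next one
            hits[slot] = hit
        else:
            return name, hits  # every slot was hit
    return None

intents = [("control", {"action": ["start", "pause", "exit"],
                        "object": ["meeting"]})]
result = match_intent("please start the meeting", intents)
```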
409. The server determines the target conference information bound to the first device corresponding to the first control instruction, where the first device is the device that sent the first control instruction.
In a possible implementation manner, the determining the target conference information bound to the first device corresponding to the first control instruction includes: determining at least one piece of conference information corresponding to the first device from the stored conference information, wherein the at least one piece of conference information comprises a conference room identifier of a conference room where the first device is located; and according to the storage time of the at least one piece of conference information, taking the conference information with the storage time closest to the current time as the target conference information.
The server may query which conference information the first device is bound to. For example, the server may query the stored conference information in a database: according to the conference room identifier of the conference room where the first device is located, it finds at least one piece of conference information that includes that conference room identifier, takes this as the at least one piece of conference information corresponding to the first device, and obtains the most recently stored one as the target conference information, for example the conference information stored in step 404. As for the conference room identifier of the conference room where the first device is located, the server may query the binding relationship established in step 403 according to the device identifier of the first device to obtain the conference room identifier bound to the first device, that is, the conference room identifier of the conference room where the first device is located.
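The selection in step 409 can be sketched as a filter on the room identifier followed by a maximum over storage time. Field names such as `room` and `stored_at` are hypothetical; the patent only requires that stored conference information carries a storage time.

```python
def target_conference_info(stored, room_id):
    """Pick, among the stored conference info entries for this room,
    the one whose storage time is closest to (i.e. latest before) now."""
    candidates = [c for c in stored if c["room"] == room_id]
    return max(candidates, key=lambda c: c["stored_at"]) if candidates else None

stored = [
    {"room": "room-101", "stored_at": 1, "topic": "weekly"},
    {"room": "room-101", "stored_at": 5, "topic": "review"},
    {"room": "room-202", "stored_at": 9, "topic": "other"},
]
target = target_conference_info(stored, "room-101")
```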
It should be noted that before determining the target conference information bound to the first device, the server may further perform corresponding business logic processing and authentication. For example, it may find the corresponding skill cloud service according to the conference control skill information, and the skill cloud service performs the corresponding business logic processing and authentication (authentication may be implemented by a rights management cloud service, such as a network cloud service).
410. The server generates a second control instruction according to the conference control intention corresponding to the first control instruction. The server may generate an MQTT (Message Queuing Telemetry Transport) control instruction, that is, the second control instruction, according to the conference control intention corresponding to the first control instruction and the format specified by the MQTT protocol, where the second control instruction is used to instruct the receiving device to perform a corresponding operation, so as to realize the conference control intention.
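The second control instruction can be pictured as an MQTT message built from the matched intention. The topic layout and JSON fields below are assumptions for illustration; the patent specifies only that the instruction follows the MQTT protocol format, not the payload schema.

```python
import json

def build_second_instruction(intent, room_id):
    """Build a (topic, payload) pair for the second control instruction.
    The topic scheme 'conference/<room>/control' is hypothetical."""
    topic = f"conference/{room_id}/control"
    payload = json.dumps({"intent": intent})  # operation the devices should perform
    return topic, payload

topic, payload = build_second_instruction("start meeting", "room-101")
```

A real implementation would publish this payload to the conference devices through an MQTT broker, e.g. with a client library such as paho-mqtt.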
It should be noted that the numbering of step 409 and step 410 does not imply an execution order; step 410 may be executed before step 409, which is not limited in this embodiment of the present application.
411. The server takes at least one conference device corresponding to the target conference information as the target conference device and sends a second control instruction to the at least one conference device, where the second control instruction is used to instruct the at least one conference device to perform the corresponding operation.
For the case in which the target conference information includes the device identifiers of the conference devices in the conference room, the server may directly take the conference devices to which the at least one device identifier included in the target conference information belongs as the at least one conference device corresponding to the target conference information.
For the case in which the target conference information includes the conference room name of a conference room, and the server has established in step 403 a binding relationship between the device identifier of at least one conference device and the conference room identifier of its conference room, in this step 411 the server may take the at least one conference device bound to the at least one conference room name included in the target conference information as the at least one conference device corresponding to the target conference information.
The server may send the second control instruction generated in step 410 to the at least one conferencing device.
412. Any one of the at least one conference device receives the second control instruction and performs the corresponding operation.
After receiving the second control instruction, the at least one conference device may perform the corresponding operation, such as starting the conference, pausing the conference, or exiting the conference.
It should be noted that the embodiment of the present application takes the first device being a smart speaker in any conference room as an example and provides an overall process for controlling a conference through the speaker. In one possible embodiment, the first device may also be a conference device in any conference room; in this case, the functional chip of the smart speaker may be integrated into the conference device so that the conference device implements the functions of the smart speaker.
The embodiment of the present application makes conference devices more intelligent: the conference devices can be controlled by voice without manual operation, which improves the control efficiency of the conference devices and saves meeting time.
According to the method provided by the embodiment of the present application, upon receiving the first control instruction, the server automatically obtains the conference information bound to the device that sent the first control instruction, and sends the second control instruction to the at least one conference device corresponding to that conference information so that the at least one conference device performs the corresponding operation. The user does not need to operate the conference devices manually, which improves the intelligence and efficiency of conference control.
Fig. 5 is a schematic structural diagram of a conference control apparatus according to an embodiment of the present application. Referring to fig. 5, the apparatus includes:
an obtaining module 501, configured to obtain a first control instruction, where the first control instruction is used to control a target conference device to execute a corresponding operation;
a determining module 502, configured to determine target conference information bound to a first device corresponding to the first control instruction, where the first device is a device that sends the first control instruction;
a sending module 503, configured to take at least one conference device corresponding to the target conference information as the target conference device and send a second control instruction to the at least one conference device, where the second control instruction is used to instruct the at least one conference device to perform the operation.
In one possible implementation, the obtaining module 501 is configured to:
receiving a voice control instruction from the first device, wherein the voice control instruction is triggered by the voice collected by the first device;
and performing voice recognition on the voice control instruction to obtain a first text control instruction, and using the first text control instruction as the first control instruction.
In one possible implementation, the obtaining module 501 is configured to:
receiving a second text control instruction from the first device, where the second text control instruction, used as the first control instruction, is triggered by the first device on a timed basis.
In one possible implementation, the apparatus further includes:
the obtaining module 501 is further configured to obtain conference control skill information corresponding to the first control instruction, where the conference control skill information includes at least one conference control intention and a corresponding slot position;
the obtaining module 501 is further configured to, when the first control instruction includes a slot value of at least one slot corresponding to any intention in the conference control skill information, use a slot value combination result of the at least one slot as a conference control intention corresponding to the first control instruction;
and the generating module is used for generating the second control instruction according to the conference control intention corresponding to the first control instruction.
In one possible implementation, the obtaining module 501 is configured to:
and acquiring conference control skill information bound with the equipment group identifier as conference control skill information corresponding to the first control instruction according to the equipment group identifier bound with the first equipment, wherein the equipment group identifier is used for identifying a group formed by at least one piece of equipment.
In one possible implementation, the determining module 502 is configured to:
determining at least one piece of conference information corresponding to the first device from the stored conference information, wherein the at least one piece of conference information comprises a conference room identifier of a conference room where the first device is located;
and according to the storage time of the at least one piece of conference information, taking the conference information with the storage time closest to the current time as the target conference information.
In one possible implementation, the first device is a smart speaker or a conference device in any conference room.
It should be noted that when the conference control apparatus provided in the above embodiment performs conference control, the division into the above functional modules is merely an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the conference control apparatus and the conference control method provided by the above embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
Fig. 6 is a schematic structural diagram of a terminal 600 according to an embodiment of the present application. The terminal 600 may be a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 600 includes: one or more processors 601 and one or more memories 602.
The processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 602 is used to store at least one instruction for execution by processor 601 to implement the conference control method provided by the method embodiments of the present application.
In some embodiments, the terminal 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a display 605, a camera assembly 606, an audio circuit 607, a positioning assembly 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 605 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to capture touch signals on or above the surface of the display screen 605. The touch signal may be input to the processor 601 as a control signal for processing. At this point, the display 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 605 may be one, providing the front panel of the terminal 600; in other embodiments, the display 605 may be at least two, respectively disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 600. Even more, the display 605 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 605 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 606 is used to capture images or video. Optionally, camera assembly 606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 606 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp and can be used for light compensation under different color temperatures.
Audio circuitry 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting the electrical signals to the processor 601 for processing or to the radio frequency circuit 604 to realize voice communication. For stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 600. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert the electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a conventional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, audio circuitry 607 may also include a headphone jack.
The positioning component 608 is used for positioning the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the United States' GPS (Global Positioning System), China's BeiDou system, Russia's GLONASS system, or the European Union's Galileo system.
Power supply 609 is used to provide power to the various components in terminal 600. The power supply 609 may be ac, dc, disposable or rechargeable. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the display screen 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the terminal 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the terminal 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization while shooting, game control, and inertial navigation.
Pressure sensors 613 may be disposed on the side bezel of terminal 600 and/or underneath display screen 605. When the pressure sensor 613 is disposed on the side frame of the terminal 600, a user's holding signal of the terminal 600 can be detected, and the processor 601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the display screen 605, the processor 601 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or vendor Logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of display screen 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the display screen 605 is increased; when the ambient light intensity is low, the display brightness of the display screen 605 is adjusted down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
A proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front face of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front face of the terminal 600 gradually decreases, the processor 601 controls the display 605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 616 detects that the distance between the user and the front face of the terminal 600 gradually increases, the processor 601 controls the display 605 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is not limiting of terminal 600 and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 7 is a schematic structural diagram of a server 700 according to an embodiment of the present application, where the server 700 may generate relatively large differences due to different configurations or performances, and may include one or more processors (CPUs) 701 and one or more memories 702, where the memory 702 stores at least one program code, and the at least one program code is loaded and executed by the processors 701 to implement the methods provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the server may also include other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, there is also provided a computer readable storage medium, such as a memory, storing at least one program code, which is loaded and executed by a processor, to implement the conference control method in the above embodiments. For example, the computer readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (7)

1. A conference control method, the method comprising:
acquiring conference control skill information, and storing the conference control skill information, wherein the conference control skill information comprises at least one conference control intention and a corresponding slot position;
acquiring a device group identifier, and establishing a binding relationship between the device group identifier and the conference control skill information, wherein the device group identifier is used for identifying a group formed by at least one device;
establishing a binding relationship between the device identifier of at least one smart speaker and the device group identifier, and a binding relationship between the device identifier of the at least one smart speaker and the conference room identifier of the conference room in which the smart speaker is located;
receiving a conference reservation request, and storing conference information in the conference reservation request;
acquiring a first control instruction, wherein the first control instruction is used for controlling a target conference device to execute a conference operation, the first control instruction is sent by a first device, and the first device is a smart speaker in any conference room;
acquiring the conference control skill information bound to the device group identifier as the conference control skill information corresponding to the first control instruction, according to the device group identifier bound to the first device;
when the first control instruction comprises a slot value of at least one slot corresponding to any intention in the conference control skill information, taking a slot value combination result of the at least one slot as a conference control intention corresponding to the first control instruction;
generating a second control instruction according to the conference control intention corresponding to the first control instruction;
determining at least one piece of conference information corresponding to the first device from stored conference information based on the conference room identifier of the conference room where the first device is located, wherein the at least one piece of conference information comprises the conference room identifier of the conference room where the first device is located;
according to the storage time of the at least one piece of conference information, taking the conference information with the storage time closest to the current time as the target conference information;
taking at least one conference device corresponding to the target conference information as the target conference device, and sending the second control instruction to the at least one conference device, wherein the second control instruction is used for instructing the at least one conference device to execute the conference operation.
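The server-side flow of claim 1 (matching slot values to an intent, then routing a second control instruction to the most recently stored conference in the first device's room) can be sketched as follows. This is a minimal illustration only: every class, field, and function name is an assumption for exposition, not the patent's disclosed implementation.

```python
from dataclasses import dataclass

# Hypothetical data model for claim 1; all names are illustrative assumptions.
@dataclass
class ConferenceInfo:
    room_id: str        # conference room identifier
    device_ids: list    # conference devices bound to this conference
    stored_at: float    # storage time of the conference information

@dataclass
class SkillInfo:
    intents: dict       # intent name -> list of required slot names

def handle_first_instruction(slot_values, skill, room_id, conferences):
    """Match slot values to an intent, then route a second control
    instruction to the most recently stored conference in the room."""
    # Claim 1: if the instruction carries values for every slot of some
    # intent, the combined slot values act as the conference control intent.
    intent = None
    for slots in skill.intents.values():
        if all(s in slot_values for s in slots):
            intent = " ".join(slot_values[s] for s in slots)
            break
    if intent is None:
        return None
    second_instruction = {"intent": intent}
    # Select the conference info of this room whose storage time is
    # closest to the current time (i.e. the most recently stored one).
    candidates = [c for c in conferences if c.room_id == room_id]
    if not candidates:
        return None
    target = max(candidates, key=lambda c: c.stored_at)
    # The second instruction goes to every device of the target conference.
    return [(dev, second_instruction) for dev in target.device_ids]
```

Note the design choice implied by the claim: when several conferences are stored for the same room, the one stored most recently wins, so a fresh reservation supersedes an older one without any explicit cleanup.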
2. The method of claim 1, wherein said obtaining a first control instruction comprises:
receiving a voice control instruction from the first device, the voice control instruction being triggered by the voice collected by the first device;
and performing speech recognition on the voice control instruction to obtain a first text control instruction, and taking the first text control instruction as the first control instruction.
3. The method of claim 1, wherein said obtaining a first control instruction comprises:
and receiving a second text control instruction from the first device, wherein the second text control instruction is triggered by the first device on a timer, and taking the second text control instruction as the first control instruction.
4. The method of claim 1, wherein the first device is either a smart speaker or a conference device in any conference room.
5. A conference control apparatus, characterized in that the apparatus comprises a plurality of functional modules for performing the conference control method of any one of claims 1 to 4.
6. An electronic device, comprising one or more processors and one or more memories having at least one program code stored therein, the at least one program code being loaded and executed by the one or more processors to implement a conference control method as claimed in any one of claims 1 to 4.
7. A computer-readable storage medium having stored therein at least one program code, the at least one program code being loaded and executed by a processor, to implement the conference control method as claimed in any one of claims 1 to 4.
CN202010030152.1A 2020-01-13 2020-01-13 Conference control method, device, equipment and storage medium Active CN111245629B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010030152.1A CN111245629B (en) 2020-01-13 2020-01-13 Conference control method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111245629A CN111245629A (en) 2020-06-05
CN111245629B true CN111245629B (en) 2022-05-10

Family

ID=70876247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010030152.1A Active CN111245629B (en) 2020-01-13 2020-01-13 Conference control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111245629B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115580700A (en) * 2022-09-15 2023-01-06 海南视联通信技术有限公司 Terminal quitting method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107370610A (en) * 2017-08-30 2017-11-21 百度在线网络技术(北京)有限公司 Meeting synchronous method and device
CN109218654A (en) * 2018-10-19 2019-01-15 视联动力信息技术股份有限公司 A kind of view networking conference control method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206921471U (en) * 2017-06-16 2018-01-23 青岛爱上办公集成有限公司 A kind of intelligent meeting control system based on speech recognition
CN110505431A (en) * 2018-05-17 2019-11-26 视联动力信息技术股份有限公司 A kind of control method and device of terminal
CN109951519A (en) * 2019-01-22 2019-06-28 视联动力信息技术股份有限公司 A kind of control method and device of convention business
CN110381285B (en) * 2019-07-19 2021-05-28 视联动力信息技术股份有限公司 Conference initiating method and device
CN110602432B (en) * 2019-08-23 2021-01-26 苏州米龙信息科技有限公司 Conference system based on biological recognition and conference data transmission method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant