CN113014852A - Information prompting method, device and equipment - Google Patents

Information prompting method, device and equipment

Info

Publication number
CN113014852A
Authority
CN
China
Prior art keywords
conference
conference room
user
room
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911320524.8A
Other languages
Chinese (zh)
Other versions
CN113014852B (en)
Inventor
管赛南
李进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Banma Zhixing Network Hongkong Co Ltd
Original Assignee
Banma Zhixing Network Hongkong Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Banma Zhixing Network Hongkong Co Ltd filed Critical Banma Zhixing Network Hongkong Co Ltd
Priority to CN201911320524.8A
Publication of CN113014852A
Application granted
Publication of CN113014852B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74 Browsing; Visualisation therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present application provide an information prompting method, apparatus and device. The method includes: determining the respective face image of at least one conference user corresponding to each of at least one conference room, so as to obtain at least one face image corresponding to each of the at least one conference room; generating the head portrait prompt information of each of the at least one conference room based on the at least one face image corresponding to that conference room; and displaying the head portrait prompt information of each conference room on a conference interface. The technical solution provided by the embodiments of the present application improves prompting efficiency.

Description

Information prompting method, device and equipment
Technical Field
The embodiment of the application relates to the technical field of artificial intelligence application, in particular to an information prompting method, device and equipment.
Background
With the rapid development of science and technology, demand for intelligent conference systems is growing. An intelligent conference system can solve the problem of synchronizing information between different conference sites over a network. For example, conference rooms at three locations A, B and C can hold a network conference simultaneously through the intelligent conference system, and a user at location A can watch the progress of the conference at location B or location C.
In the prior art, in order to let users in different conference sites browse the user information of other sites and to prompt the face images of the users participating in a conference, an intelligent conference system displays the face images of all participating users in one display interface. The face images are mainly head portraits, and information such as the user's name or the name of the conference room the user belongs to can be displayed below each head portrait. Generally, the head portraits of all participating users are displayed simultaneously in the display interface in a fixed arrangement; for example, one interface may display two rows and four columns, showing the head portraits of 8 participating users, and if there are more conference participants, the number of display interfaces has to be increased.
However, because the intelligent conference system directly displays the head portraits of all participating users in the display interface at the same time, a user who wants to look up users in a different conference site has to browse the head portraits of all participating users in the display interface to find the target head portrait and then obtain the detailed information of the user from it, which easily leads to low query efficiency and low prompting efficiency.
Disclosure of Invention
The embodiment of the application provides an information prompting method, device and equipment, and aims to solve the technical problem of low prompting efficiency caused by directly displaying face images of all participating users in the prior art.
In a first aspect, an embodiment of the present application provides an information prompting method, including:
determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively;
respectively generating head portrait prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room;
and displaying the head portrait prompt information of each conference room on a conference interface.
In a second aspect, an embodiment of the present application provides an information prompting method, including:
acquiring respective face images of at least one conference user corresponding to at least one conference room respectively to obtain at least one face image corresponding to the at least one conference room respectively;
respectively generating head portrait prompt information of each conference room based on at least one face image acquired from each conference room;
and displaying the head portrait prompt information of each conference room on a conference interface.
In a third aspect, an embodiment of the present application provides an information prompting apparatus, where the apparatus provides a display interface for at least one conference room respectively;
the display interface of each conference room is used for displaying a conference interface;
the conference interface displays head portrait prompt information of at least one conference room; and generating the head portrait prompt information of each conference room based on the face image of each conference user corresponding to each conference room.
In a fourth aspect, an embodiment of the present application provides an information prompting apparatus, including:
the image determining module is used for determining respective face images of at least one conference user corresponding to at least one conference room respectively and obtaining at least one face image corresponding to the at least one conference room respectively;
the first generation module is used for respectively generating head portrait prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room;
and the first display module is used for displaying the head portrait prompt information of each conference room in the conference interface.
In a fifth aspect, an embodiment of the present application provides an information prompting apparatus, including:
the image acquisition module is used for acquiring respective face images of at least one conference user corresponding to at least one conference room respectively so as to obtain at least one face image corresponding to the at least one conference room respectively;
the second generation module is used for respectively generating head portrait prompt information of the at least one conference room based on at least one face image acquired from the at least one conference room;
and the second display module is used for displaying the head portrait prompt information of each conference room on the conference interface.
In a sixth aspect, an embodiment of the present application provides an information prompting device, including: a storage component, a processing component and a display component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are to be called and executed by the processing component;
the processing component is to:
determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively; respectively generating head portrait prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room; and displaying the head portrait prompt information of each conference room on a conference interface.
In a seventh aspect, an embodiment of the present application provides an information prompting device, including: a storage component, a processing component and a display component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are to be called and executed by the processing component;
the processing component is to:
acquiring respective face images of at least one conference user corresponding to at least one conference room respectively to obtain at least one face image corresponding to the at least one conference room respectively; respectively generating head portrait prompt information of each conference room based on at least one face image acquired from each conference room; and displaying the head portrait prompt information of each conference room on a conference interface.
In the embodiments of the present application, the respective face image of the at least one conference user corresponding to each of the at least one conference room is determined, that is, at least one face image corresponding to each conference room is obtained, so that the head portrait prompt information of each of the at least one conference room can be generated based on the at least one face image corresponding to that conference room; in other words, the head portrait prompt information of any conference room is generated from the at least one face image corresponding to that conference room. Thereafter, the head portrait prompt information of each of the at least one conference room may be presented in the conference interface. The head portrait prompt information formed by combining the at least one face image corresponding to any conference room directly prompts the participating users of that conference room. Taking the conference room as the unit, the head portrait prompt information formed from the participating users of each conference room is presented to the participating users of the other conference rooms, so that the participating users of multiple conference rooms can be prompted at the same time, which improves prompting efficiency.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 illustrates an exemplary diagram of a display of participating users provided herein;
FIG. 2 is a flow diagram illustrating one embodiment of an information prompting method provided herein;
FIG. 3 is a diagram illustrating an example of avatar prompt information provided by the present application;
FIG. 4 is a flow chart illustrating a further embodiment of an information prompting method provided by the present application;
FIG. 5 is a flow chart illustrating a further embodiment of an information prompting method provided by the present application;
FIG. 6 illustrates an exemplary diagram of a conferencing interface provided by the present application;
FIG. 7 is a flow chart illustrating a further embodiment of an information prompting method provided by the present application;
FIG. 8 illustrates an exemplary view of a panoramic image provided herein;
FIG. 9 is a diagram illustrating an example display of a master-slave video interface provided by the present application;
FIG. 10 is a flow chart illustrating a further embodiment of an information prompting method provided by the present application;
FIG. 11 is a flow chart illustrating a further embodiment of an information prompting method provided by the present application;
FIG. 12 is a schematic structural diagram illustrating an embodiment of an information prompt apparatus provided by the present application;
FIG. 13 is a schematic structural diagram illustrating an embodiment of an information prompt apparatus provided by the present application;
FIG. 14 is a schematic structural diagram illustrating an embodiment of an information prompt apparatus provided by the present application;
FIG. 15 is an exemplary diagram illustrating an information prompting device provided herein;
FIG. 16 is a schematic structural diagram illustrating an embodiment of an information prompt apparatus provided herein;
FIG. 17 is a schematic structural diagram illustrating an embodiment of an information prompt apparatus provided herein;
FIG. 18 is a schematic structural diagram illustrating an embodiment of an information prompt system provided herein;
fig. 19 is a schematic structural diagram illustrating a further embodiment of an information prompt system provided by the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In some of the flows described in the specification, the claims and the above figures, a number of operations appearing in a particular order are included, but it should be clearly understood that these operations may be performed out of the order in which they appear herein or in parallel. The operation numbers, such as 101 and 102, are merely used to distinguish different operations, and the numbers themselves do not represent any order of execution. In addition, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should be noted that the terms "first", "second" and the like herein are used to distinguish different messages, devices, modules and so on; they do not represent a sequence, nor do they require that the "first" and "second" be of different types.
The technical solution of the present application can be applied to a network conference scenario. In a remote network conference scenario with multiple conference rooms, head portrait prompt information is built from the face images of the participating users, and the identity of the participating users in each conference room is prompted directly through the head portrait prompt information, so that each user can directly determine the participating users in each conference room from the head portrait prompt information, which improves prompting efficiency.
In the prior art, a service system that provides network conferences may generally be referred to as an intelligent conference system, and the intelligent conference system can provide a face image prompting service for the participants. Generally, the intelligent conference system displays the face images of all participating users in one display interface; the face images are mainly head portraits, and the user name, user account or the name of the conference room to which the user belongs may also be displayed below each head portrait.
As shown in fig. 1, when the participating users are prompted in the display interface 100, the head portraits 101 of all participating users may be displayed directly in two rows and four columns as shown in fig. 1, so that the head portraits 101 of 8 participating users are displayed in the display interface. In addition, the account 102 of each user may be displayed below the head portrait 101 of the participating user in fig. 1; the account names in fig. 1 are only exemplary. When there are many participating users, the number of display interfaces can be increased, and when a user performs an interface switching operation, the current display interface is switched to the next display interface, so that multiple display interfaces can be prompted. As shown in fig. 1, avatar switching prompt information 103 is displayed; the avatar switching prompt information 103 includes three prompt controls 1031, each prompt control 1031 may represent one display interface 100, and when a user performs an interface switching operation the prompt controls 1031 switch accordingly.
However, when all the participating users are displayed on the display interfaces, it is necessary to browse the head portraits of all participating users in the display interface and obtain information such as the conference room and location of a user through the head portraits, so the prompting efficiency for multiple participating users is low.
To solve the above technical problem, the inventors propose the technical solution of the present application. In the embodiments of the present application, the face image of each of the at least one conference user corresponding to each of the at least one conference room may first be determined to obtain at least one face image for each conference room, so that the avatar prompt information of each conference room can be generated based on its at least one face image. When the avatar prompt information of each conference room is displayed in the conference interface, the at least one participant in each conference room can be prompted as a whole. The avatar prompt information is generated separately for each of the at least one conference room; taking the conference room as the unit, the avatar prompt information formed from the participating users of each conference room is prompted to the participating users of the other conference rooms, so that the participating users of multiple conference rooms can be prompted at the same time, each user can directly determine the participating users of each conference room from the avatar prompt information, and prompting efficiency is improved.
The technical solutions of the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
As shown in fig. 2, a flowchart of an embodiment of an information prompting method provided in the embodiment of the present application is provided, where the method may include the following steps:
201: determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively.
The embodiment of the application can be applied to a network conference scene, wherein the network conference scene refers to a video conference held by utilizing the Internet.
The first application scenario provided by the application may include a plurality of conference terminals and a server connected to the plurality of conference terminals, and the information prompting method provided by the embodiment of the application may be applied to the server, and the server executes a corresponding information prompting step. Optionally, the server may refer to a computing device having functions of providing resources, processing data or information, for example, a computer, a cloud server, and the like, and a specific existence form of the server is not limited in this embodiment. The plurality of conference terminals can be configured with display screens, and the server can control the display screens of the plurality of conference terminals to display the conference interface.
The second application scenario provided by the present application may include a plurality of conference terminals, each conference terminal may correspond to one display screen, any conference terminal may serve as a master conference terminal, and the other conference terminals among the plurality of conference terminals may serve as slave conference terminals. The information prompting method provided by the embodiments of the present application can be applied to the master conference terminal, which performs the corresponding information prompting steps. In this application scenario with multiple conference terminals, any conference terminal may initiate a conference request, and the conference terminal that initiates the conference request may serve as the master conference terminal. The master conference terminal can display the conference interface on its own display screen and can also control the display of the conference interface on the display screens of the slave conference terminals.
The third application scenario provided by the present application may include a server and a plurality of display screens, where the server may be configured with the information prompting method provided by the embodiment of the present application, and execute a corresponding information prompting step. The server can control the conference interface displayed in the plurality of display screens.
The conference rooms described in the embodiments of the present application are indoor locations, and each conference room may be configured with a display screen or a conference terminal.
At least one participating user is present in each conference room. By collecting a face image for each of the at least one participating user in a conference room, at least one face image corresponding to that conference room can be obtained, with each participating user corresponding to one face image. One or more cameras may be provided in any conference room to capture the face images of participating users as they enter the conference room.
202: and respectively generating head portrait prompt information of at least one conference room based on the face image of at least one conference room corresponding to at least one conference user.
The face images of at least one conference user corresponding to any conference room form at least one face image corresponding to the conference room, and the at least one face image corresponding to the conference room can be combined to generate the avatar prompt information of the conference room. And the head portrait prompt information of any meeting room is obtained by combining at least one face image corresponding to the meeting room. The head portrait prompt information of the conference room can be used for prompting the respective face image of at least one conference user in the conference room, and the face image of each conference room can be prompted more intuitively.
203: And displaying the head portrait prompt information of each conference room in a conference interface.
The current conference content can be displayed in the conference interface. A conference interface may be displayed on a display screen in the intelligent conference system or the network conference system. The avatar prompt information can be displayed in a position or area of the conference interface where it does not obscure the conference content, so that the conference content and the avatar prompt information can both be displayed.
The avatar prompt information of at least one conference room may be displayed in the conference interface according to a certain display order, for example, the prompt name corresponding to the avatar prompt information of at least one conference room may be named as a location of each conference room, and the avatar prompt information of at least one conference room is sorted according to an initial letter in the prompt name, so as to display the avatar prompt information of at least one conference room in the conference interface respectively.
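As a minimal, non-authoritative sketch of this display ordering, the Python snippet below sorts each conference room's avatar prompt information by the initial letter of its prompt name (here the conference room's location); the data shapes are assumptions for illustration only.

```python
def order_prompts_for_display(room_prompts):
    """room_prompts: dict mapping a prompt name (e.g. the conference room's
    location) to that room's avatar prompt information (any object).
    Returns (prompt_name, prompt_info) pairs in display order, sorted by the
    initial letter of the prompt name."""
    return sorted(room_prompts.items(), key=lambda item: item[0][:1].lower())

# Example: rooms named after their locations are displayed alphabetically.
ordered = order_prompts_for_display({"Shanghai": "...", "Beijing": "...", "Hangzhou": "..."})
# -> Beijing, Hangzhou, Shanghai
```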
For ease of understanding, avatar prompt information 301 for each of the four conference rooms presented in one conference interface 300 is shown in fig. 3, with avatar prompt information 301 being displayed in conference interface 300.
In the embodiment of the application, the respective face image of the at least one conference user corresponding to the at least one conference room may be determined first to obtain the respective at least one face image of the at least one conference room, so that the avatar prompt information of each conference room may be generated based on the respective at least one face image of the at least one conference room. When the head portrait prompt information of each conference room is displayed in the conference interface, the integral prompt of at least one participant in each conference room can be realized. The head portrait prompt information is respectively generated for at least one conference room, the users participating in the conference take the conference room as a unit, the head portrait prompt information formed by the users participating in the conference in each conference room is respectively prompted to the users participating in the different conference rooms, the respective users participating in the conference rooms can be simultaneously prompted, the users can directly determine the users participating in the conference rooms through the head portrait prompt information, and the prompt efficiency is improved.
In the foregoing embodiment, the avatar prompt information of any conference room is generated based on the respective face images of at least one participant user determined from the conference room, and each participant user may have a corresponding prompt sequence when generating the prompt information. As a possible implementation, the method may further include:
determining the respective prompt sequence of at least one face image corresponding to any meeting room;
the generating respective avatar prompt information of the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room includes:
and generating the head portrait prompt information of the conference room aiming at the prompt sequence of any conference room corresponding to at least one face image so as to obtain the head portrait prompt information of at least one conference room.
In practical application, at least one user can be included in one conference room to participate in a conference, the conference users in the same conference room enter the conference room in different sequences, and avatar prompt information of the conference room can be generated according to the sequence of the conference users entering the conference room. Therefore, the confirming of the prompting sequence of any meeting room corresponding to at least one face image can include:
aiming at any meeting room, determining the sequence of at least one conference user corresponding to the meeting room entering the meeting room respectively;
and determining a prompt sequence corresponding to the respective face images of at least one participant in the conference room according to the sequence in which the at least one participant corresponding to the conference room enters the conference room.
In addition to using the sequence of each participating user entering the conference room as the prompt sequence, because the names of different participating users are different, after the face image of at least one participating user in each conference room is determined, at least one participating user in each conference room can be sequenced according to the sequence of the arrangement of the initials of the names, and the prompt sequence of each participating user is obtained.
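A brief sketch of this step is shown below; it orders a room's participants by the time they entered the conference room, with the alternative of sorting by the initial letter of the name mentioned above. The record fields are illustrative assumptions, not part of the embodiment.

```python
from datetime import datetime

def prompt_sequence(participants, by_entry_time=True):
    """participants: list of dicts with assumed fields 'name', 'face_image' and
    'entered_at' (a datetime recorded when the user entered the conference room).
    Returns the participants in the order their face images should be prompted."""
    if by_entry_time:
        key = lambda p: p["entered_at"]          # earlier entry -> earlier prompt
    else:
        key = lambda p: p["name"][:1].lower()    # alternative: initial of the name
    return sorted(participants, key=key)

users = [
    {"name": "Li", "face_image": "li.png", "entered_at": datetime(2019, 12, 19, 9, 2)},
    {"name": "Guan", "face_image": "guan.png", "entered_at": datetime(2019, 12, 19, 9, 0)},
]
print([u["name"] for u in prompt_sequence(users)])   # ['Guan', 'Li']
```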
After the avatar prompt information of at least one conference room is displayed in the conference interface, the participating users can simultaneously browse the avatar prompt information of at least one conference room when watching the conference interface, however, because the avatar prompt information is only a simple prompt for the identity of at least one corresponding participating user, if a detailed face image corresponding to at least one participating user needs to be obtained, the corresponding avatar prompt information can be triggered to display the detailed information of at least one participating user corresponding to the avatar prompt information, so as to provide the detailed face image of the participating user. As shown in fig. 4, which is a flowchart of another embodiment of an information prompting method provided in the embodiment of the present application, the method may include the following steps:
401: the method comprises the steps of collecting respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively.
402: and respectively generating head portrait prompt information of the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room.
403: and displaying the head portrait prompt information of each conference room in a conference interface.
404: and extracting the respective face features of at least one conference user aiming at the respective face image of at least one conference user corresponding to any conference room.
405: and identifying the identity information corresponding to the at least one conference user respectively corresponding to the conference room according to the respective face characteristics of the at least one conference user corresponding to the conference room.
406: and detecting selection operation aiming at any avatar prompt message, and outputting the identity information of at least one conference user in a conference room corresponding to the selected avatar prompt message.
The identity information of any one of the participating users may be used to distinguish different users, and may specifically include information related to the user identity, for example, information such as a facial image, a head portrait, an account number or a name, a position, and the like of the user may be included. To ensure that different users can be quickly distinguished, the identity information of the user may be a combination of the face image and any other one or more kinds of information.
When the selected avatar prompt information is output to correspond to the respective identity information of at least one participating user, the identity information corresponding to each participating user can be displayed in detail, for example, the identity information of each participating user can be displayed in a mode of combining a face image with text description of information such as names and positions, so that the detailed information of the participating users can be displayed conveniently.
In a first application scenario provided by the present application, a plurality of conference terminals and a server connected to the plurality of conference terminals may be included. The selection operation for any avatar prompt information may be detected by any conference terminal, the conference terminal may include a display screen, the display screen may be a touch screen, any participating user may perform a selection operation triggered for any avatar prompt information on the display screen of a conference room in which the participating user is located, the conference terminal may send the selection operation for the avatar prompt information to the server after detecting that the selection operation for any avatar prompt information is triggered by the user on the display screen, and the server may respond to the selection operation for the avatar prompt information to show respective identity information of at least one participating user corresponding to the selected avatar prompt information. The detecting a selection operation for any avatar prompt information, and outputting the identity information of each of at least one participating user in a conference room corresponding to the selected avatar prompt information may include: and receiving selection operation aiming at any avatar prompt information sent by the conference terminal, and responding to the selection operation aiming at the avatar prompt information to display the respective identity information of at least one conference user in a conference room corresponding to the selected avatar prompt information.
In the second application scenario provided by the application, a plurality of conference terminals may be included, any conference terminal may detect a selection operation for any avatar prompt information, and for the master conference terminal, when detecting a selection operation triggered by the participant user for any avatar prompt information, the master conference terminal and the plurality of slave conference terminals may be controlled to display the identity information of at least one participant user corresponding to the selected avatar prompt information; for the slave conference terminal, when detecting that any participant user triggers the selection operation aiming at any avatar prompt information, the slave conference terminal can send the selection operation to the master conference terminal, and the master conference terminal responds to the selection operation aiming at any avatar prompt information to display the identity information of at least one participant user in a conference room corresponding to the selected avatar prompt information.
In a third application scenario provided by the application, the server may control the plurality of display screens to display a conference interface, and at this time, the server may detect that any participant user triggers a selection operation for any avatar prompt information, and display at least one face image corresponding to the selected avatar prompt information in response to the selection operation for the avatar prompt information. The responding to the selection operation of any avatar prompt message, and the displaying of at least one face image corresponding to the selected avatar prompt message may include: and detecting selection operation aiming at any avatar prompt information, obtaining the selected avatar prompt information, and displaying at least one face image corresponding to the selected avatar prompt information.
In the embodiment of the application, when the selection operation aiming at any avatar prompt information is detected, the selected avatar prompt information can be determined, the identity information of at least one participant corresponding to the selected avatar prompt information is displayed, the detailed identity information of the corresponding participant in the conference room is prompted, and the prompt efficiency is improved.
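A minimal sketch of this selection handling is given below; the mapping from conference room id to identity records and the field names are assumptions used only to illustrate the lookup.

```python
def on_avatar_prompt_selected(selected_room, room_participants):
    """room_participants: dict mapping a conference room id to a list of identity
    records, each an assumed dict with 'face_image', 'name' and 'title' fields.
    Returns the detailed identity information to be shown when that room's
    avatar prompt information is selected on the conference interface."""
    return [
        {"face_image": p["face_image"], "name": p["name"], "title": p.get("title", "")}
        for p in room_participants.get(selected_room, [])
    ]
```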
In the conference interface, when the whiteboard content of the conference is displayed, the avatar prompt information can be displayed in the conference interface. When the at least one avatar prompt message is displayed in the conference interface, the avatar prompt message may be displayed in a certain area of the conference interface, and when the avatar prompt message is displayed, a prompt interface may be generated by using the at least one avatar prompt message, and the prompt interface may be a sub-interface of the conference interface, so as to facilitate the display. As shown in fig. 5, which is a flowchart of another embodiment of an information prompting method provided in the embodiment of the present application, the method may include the following steps:
501: determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively.
502: and combining the at least one conference room corresponding to the at least one face image respectively to obtain a composite image corresponding to the at least one conference room respectively.
The head portrait prompt information of any conference room may comprise a composite image formed by combining the at least one face image corresponding to that conference room. The size of the composite image can be set, each of the at least one face image corresponding to the conference room fills a partial image region of the composite image, and the size of the face images within the composite image can also be set. For example, the size of the composite image may be set to 128 × 128; if there are four conference users in a conference room, the composite image may be divided into 4 image regions of 64 × 64 each, and each image region is filled with the face image of one conference user. When the face image of a participating user is filled into the composite image, the shape of the face image may be determined based on different design requirements; for example, the face image may be set to be circular, so as to reduce boundary conflicts with other images and achieve a more independent display. In addition, the face image may also be rectangular, polygonal, or the like; the shape of the face images in the composite image is not particularly limited here.
Optionally, the combining the at least one conference room corresponding to the at least one face image respectively to obtain the composite image corresponding to the at least one conference room respectively includes: initializing a composite image; dividing the composite image into image areas with the number of prompts based on the number of prompts corresponding to any meeting room; determining respective image areas of at least one face image corresponding to the conference room; and sequentially filling at least one face image of the conference room into the corresponding image area according to the respective prompt sequence to obtain the composite image.
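The Pillow-based sketch below illustrates one way to carry out these sub-steps under the sizes mentioned above (a 128 × 128 composite split into 64 × 64 regions, with circular face thumbnails filled in prompt order); the library choice and parameters are assumptions, not a definitive implementation.

```python
from PIL import Image, ImageDraw

def build_composite(face_paths, size=128, cols=2):
    """Combine the face images of one conference room into a single composite.
    face_paths: image paths, already arranged in the room's prompt order."""
    rows = (len(face_paths) + cols - 1) // cols
    cell = size // max(cols, rows)                    # e.g. 64x64 regions for 4 faces
    composite = Image.new("RGBA", (size, size), (255, 255, 255, 0))  # initialize
    mask = Image.new("L", (cell, cell), 0)
    ImageDraw.Draw(mask).ellipse((0, 0, cell, cell), fill=255)       # circular faces
    for i, path in enumerate(face_paths):
        face = Image.open(path).convert("RGBA").resize((cell, cell))
        x, y = (i % cols) * cell, (i // cols) * cell  # fill regions in prompt order
        composite.paste(face, (x, y), mask)
    return composite
```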
503: and generating the head portrait prompt information of the conference room according to the composite image corresponding to any conference room and the conference room identification so as to obtain the head portrait prompt information of at least one conference room.
In some possible designs, a meeting room identifier may be set for each meeting room to identify the meeting room, the location of the meeting room may be used as the identifier of the meeting room, and the meeting room may be identified according to a certain numbering rule.
504: and displaying the head portrait prompt information of each conference room on a conference interface.
The conference interface is a conference white board of the network conference and can display conference video, conference key information, conference records and other information. The head portrait prompt information of each conference room can be displayed in a partial area of the conference interface. The head portrait prompt information of at least one conference room can be displayed through the prompt interface in the conference interface. The prompt interface is a sub-interface of the conference interface. When the prompt interface is displayed in the conference interface, the prompt interface may be displayed in a predetermined display area. And displaying at least one head portrait prompt message in the prompt interface.
In order to present the conference interface, in one possible design, the presenting the avatar prompt information of each of the at least one conference room in the conference interface may include:
generating a prompt interface according to the head portrait prompt information respectively corresponding to the at least one conference room;
and displaying the prompt interface in the conference interface.
As a possible implementation manner, the displaying the prompt interface in the conference interface may include:
determining a prompt area of the prompt interface in the conference interface;
and drawing the prompt interface in a prompt area in the conference interface.
The prompting interface comprises at least one prompting sub-interface of each meeting room, and each prompting sub-interface is used for prompting head portrait prompting information in the corresponding meeting room. In some embodiments, the generating a prompt interface according to the avatar prompt information respectively corresponding to the at least one conference room may include:
generating blank prompt sub-interfaces of the at least one conference room respectively;
drawing the head portrait prompt information of any conference room in a blank prompt sub-interface corresponding to the conference room, and obtaining the prompt sub-interface of the conference room so as to obtain the prompt sub-interface of each conference room;
and generating the prompt interface according to the prompt sub-interfaces respectively corresponding to the at least one conference room.
In the embodiment of the application, the head portrait prompt information is displayed in a mode of generating the prompt interface for the head portrait prompt information, so that the display effect of the head portrait prompt information is visual, and a user can browse the head portrait prompt information of each meeting room in a more visual mode conveniently.
For ease of understanding, fig. 6 shows an example diagram of a conference interface. In the conference interface 600, a prompt interface 601 may be displayed; the prompt interface 601 may include a plurality of prompt sub-interfaces 602, each prompt sub-interface draws the face images 603 of at least one conference user in the corresponding conference room, and each prompt sub-interface also draws a conference room identifier 604, here the conference address of the conference room. In the example of fig. 6, the face image of each user is drawn in a circular shape, but in practical applications the face image of each participating user may be drawn in any shape, for example a square, a polygon, and so on.
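To make a layout like the one in fig. 6 concrete, the sketch below computes a prompt area and one prompt sub-interface rectangle per conference room inside the conference interface; all dimensions are illustrative assumptions rather than values from the embodiment.

```python
def layout_prompt_interface(iface_w, room_count, strip_h=160, margin=12):
    """Reserve a horizontal strip of the conference interface as the prompt area
    and split it into one prompt sub-interface per conference room.
    Returns (prompt_area, sub_interfaces), each rectangle as (x, y, w, h)."""
    prompt_area = (0, 0, iface_w, strip_h)
    sub_w = (iface_w - (room_count + 1) * margin) // max(room_count, 1)
    sub_interfaces = [
        (margin + i * (sub_w + margin), margin, sub_w, strip_h - 2 * margin)
        for i in range(room_count)
    ]
    return prompt_area, sub_interfaces

# Example: four conference rooms on a 1280-pixel-wide conference interface.
area, subs = layout_prompt_interface(1280, 4)
```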
In a possible design, if the number of the participating users in any conference room is too large, the head portrait prompt information of the conference room may not be prompted to all users in the conference room during prompting, and therefore, the prompting number of at least one face image in each conference room can be controlled. The generating respective avatar prompt information of the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room may include:
determining the target prompt quantity respectively corresponding to the at least one conference room;
aiming at least one conference user corresponding to any conference room, selecting a target prompt user with the target prompt quantity corresponding to the conference room, and acquiring a target prompt user corresponding to the conference room;
generating the head portrait prompt information of the conference room based on the face images of the target prompt users corresponding to any conference room, so as to obtain the head portrait prompt information of the at least one conference room.
The target prompting user is determined from at least one participant user in a corresponding conference room, and the number of the participant users for prompting in each conference room is certain, so that the pertinence of head portrait prompting information prompting is realized, unnecessary prompting is reduced, and the prompting effect is improved.
As an embodiment, for at least one conference user corresponding to any conference room, selecting a target number of prompt users corresponding to the conference room, and obtaining the target number of prompt users corresponding to the conference room may include:
aiming at the sequence of at least one conference user entering the conference room corresponding to any conference room, determining the respective prompt sequence of at least one conference user corresponding to the conference room;
and sequentially selecting target prompt users with the target prompt quantity corresponding to the conference room according to the arrangement sequence of the prompt sequence of at least one conference user corresponding to the conference room from high to low, and obtaining the target prompt users corresponding to the conference room.
The selection of the target prompt users is carried out according to the sequence of the participant users entering the meeting room, and the head portraits can be sorted according to the priority of the participant users entering the meeting room to form head portraits prompt information so as to obtain the sequential prompt effect.
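A short sketch of this selection is shown below, assuming each participant record carries the time at which the user entered the conference room; the field name is illustrative only.

```python
def select_target_prompt_users(participants, target_count):
    """Keep only the first `target_count` participants of a conference room,
    ordered by when they entered the room; their face images then form the
    room's head portrait prompt information."""
    ordered = sorted(participants, key=lambda p: p["entered_at"])
    return ordered[:target_count]
```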
In the conference interface, when the whiteboard content of the conference is displayed, the avatar prompt information can be displayed in the conference interface. As shown in fig. 7, which is a flowchart of another embodiment of an information prompting method provided in the embodiment of the present application, the method may include the following steps:
701: determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively.
702: and respectively generating head portrait prompt information of at least one conference room based on that the at least one conference room respectively corresponds to the at least one face image.
703: and determining a publishing conference room which currently publishes conference content in the at least one conference room.
The currently published meeting content may be displayed in a meeting interface. And displaying the currently published conference content of the conference users in the publishing conference room in the conference interface.
704: and preferentially displaying the head portrait prompt information corresponding to the release meeting room in a meeting interface.
705: and displaying the avatar prompt information corresponding to the conference rooms except the publishing conference room in the at least one conference room after the avatar prompt information of the publishing conference room in the conference interface.
Preferentially displaying the head portrait prompt information of the publishing conference room in the conference interface means that the display order of the head portrait prompt information of the publishing conference room is set to be first and the display order of the other conference rooms of the at least one conference room is set to be after it. For example, the first prompt sub-interface 602 shown in fig. 6 (the Shanghai conference room) may be the publishing conference room that currently publishes the conference content.
In the embodiments of the present application, the publishing conference room that currently publishes the conference content can be determined. Since the publishing conference room is the source of the currently published conference content, its display order when the conference interface is presented can be set to be first, which makes it convenient for users to browse the at least one conference user in the conference room that currently publishes conference content, achieves targeted prompting, and improves prompting efficiency.
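The reordering of steps 704 and 705 can be sketched as follows; it simply moves the publishing conference room's avatar prompt information to the front of the display order (the data shapes are hypothetical).

```python
def display_order_with_publisher_first(room_ids, publishing_room):
    """room_ids: conference rooms in their current display order.
    The publishing conference room is shown first; the remaining rooms keep
    their relative order after it."""
    others = [r for r in room_ids if r != publishing_room]
    return ([publishing_room] if publishing_room in room_ids else []) + others

print(display_order_with_publisher_first(["Beijing", "Hangzhou", "Shanghai"], "Shanghai"))
# ['Shanghai', 'Beijing', 'Hangzhou']
```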
In the process of meeting, the sound signal can be collected in the meeting room which distributes the meeting content, but the sound signal can not be collected in the meeting room which does not distribute the meeting content, therefore, whether the sound signal is collected can be used as the judgment standard for distributing the meeting room. As an embodiment, the determining a publishing conference room of the at least one conference room that currently publishes the conference content may include:
respectively collecting the current sound signals of the at least one conference room;
and if the current sound signal of any conference room is not empty, determining that the conference room is a publishing conference room.
The sound signal may be emitted by any participating user in the conference room. Any conference room may be equipped with a sound collection device, for example a microphone or a smart speaker; the sound signal in the conference room can be collected through the sound collection device and sent to the other conference rooms for sound playback.
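One way to decide that a room's current sound signal is "not empty" is an energy threshold over recent audio samples, as in the sketch below; the NumPy representation and the threshold value are assumptions for illustration only.

```python
import numpy as np

SILENCE_RMS = 0.01   # assumed threshold; would be tuned to the room's microphones

def find_publishing_room(room_audio):
    """room_audio: dict mapping a conference room id to a 1-D numpy array of its
    most recent audio samples (normalized to [-1, 1]). A room whose signal has
    RMS energy above the threshold is treated as the publishing conference room."""
    for room_id, samples in room_audio.items():
        if samples.size and float(np.sqrt(np.mean(samples ** 2))) > SILENCE_RMS:
            return room_id
    return None
```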
In the process of a conference, conference images in conference rooms of users may be collected, and the participant users in the conference images may be subjected to motion recognition, limb recognition, and the like to determine whether any participant user in one conference room is publishing conference content, and thus, as a further embodiment, the determining a publishing conference room in the at least one conference room, where the conference content is currently published, may include:
respectively collecting current conference images of the at least one conference room;
and if the current conference image of any conference room meets the publishing condition, determining that the conference room is a publishing conference room.
The current conference image in any conference room may be acquired by an image acquisition device, such as a camera, and the conference image may be an image containing the at least one participant in the conference room, or images acquired separately for the at least one participant in the conference room. Gesture recognition processing can be performed on the current conference image of any conference room, which contains the images of all participating users in that conference room, to obtain the motion gesture of each participating user in the conference room; when a motion gesture meets a certain gesture criterion, the conference room is determined to be the publishing conference room.
After the publishing conference room is determined, the publishing user for publishing the conference content can be determined, and the head portrait prompt information of the publishing user is displayed at the first time to highlight the publishing user and achieve the aim of improving the efficiency. In some embodiments, after determining a publishing conference room of the at least one conference room that currently publishes conference content, the method may further comprise:
determining a publishing user who currently publishes the conference content in the publishing conference room;
the head portrait prompt message of the release conference room can be determined by the following method:
and generating the head portrait prompt information of the publishing conference room according to the prior display of the face images of the publishing users and the display rules that the face images of the other participating users except the publishing users in the publishing conference room are displayed behind the publishing users.
The sound signals collected in the publishing conference room can be regarded as the sound of the participant who publishes the conference content, and the sound signals in the publishing conference room can be subjected to identity recognition to obtain the publishing user. As a possible implementation manner, the determining a publishing user who currently publishes the conference content in the publishing conference room may include:
collecting a release sound signal of the release conference room;
extracting the release sound characteristics of the release sound signals corresponding to the release conference room;
and performing identity recognition processing by using the publishing sound characteristics to obtain a publishing user which outputs the current conference content in the publishing conference room.
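A hedged sketch of this identification step follows. It assumes a hypothetical extract_voice_features function (not defined in this application) that maps an audio clip to a fixed-length embedding, and compares that embedding against previously enrolled voiceprints by cosine similarity.

```python
import numpy as np

def identify_publishing_user(publishing_audio, enrolled_voiceprints, extract_voice_features):
    """enrolled_voiceprints: dict mapping a participant's user id to an enrolled
    voice embedding (1-D numpy array). Returns the user id whose voiceprint is
    closest to the features extracted from the publishing room's sound signal."""
    probe = extract_voice_features(publishing_audio)   # hypothetical feature extractor

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    return max(enrolled_voiceprints, key=lambda uid: cosine(probe, enrolled_voiceprints[uid]))
```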
In addition, action postures of different conference users in the conference room may be different, for example, a user who publishes conference content may stand, there may be gesture actions, and the actions of the conference users may be recognized by capturing the published panoramic image in the conference room, and the recognized actions of the conference users are used to determine the publishing users. As another possible implementation manner, the determining a publishing user who currently publishes the conference content in the publishing conference room may further include:
collecting a release panoramic image in the release conference room;
identifying a motion gesture of at least one participant user in the published panoramic image;
and determining the participant users with the motion gestures in the publishing state as publishing users outputting the current conference content based on the motion gestures respectively corresponding to at least one participant user in the publishing panoramic image.
The publishing panoramic image may be a plurality of conference images of a publishing conference room collected in real time, may include images of all participating users in the publishing conference room, and may be acquired by arranging image collecting devices at different positions in the conference room to collect images in the whole conference room. In addition, an image capturing device capable of 360-degree image capturing may be employed to capture a panoramic image in a conference room. In addition, a mode of configuring one image acquisition device for each participating user can be adopted, so that images including each user can be acquired conveniently, and the acquired multiple images can form a release panoramic image.
To briefly explain the process of identifying the publishing user through motion gestures, fig. 8 shows an example publishing panoramic image containing 5 participating users 801 to 805. Motion gesture recognition shows that participating users 801, 802, 804 and 805 are all in a posture of watching participating user 803, while participating user 803 is standing and continuously moving his hands. It can thus be determined that the motion gesture of participating user 803 differs from those of the other participating users, and therefore participating user 803 may be determined to be the publishing user.
On the basis of the above embodiment, a video display control can be set in the conference interface, and any participant user participating in the conference can trigger the video display control, so that the conference video can be displayed for the participant user in response to the triggering operation. As an embodiment, the method may further include:
and responding to the triggering operation of the video display control in the conference interface, and displaying the conference video of each conference room.
When the conference video of each of the at least one conference room is displayed, the conference video of the master conference room can occupy the whole interface, where the master conference room may be the conference room currently publishing conference content, that is, the publishing conference room. As a possible design, in order to enable a participating user to timely see the conference videos of the other conference rooms and thus grasp the overall conference content, the displaying a conference video of each of the at least one conference room in response to a triggering operation of a video display control in the conference interface may include the following steps (a layout sketch is given after these steps):
responding to the triggering operation of a video display control in the conference interface, and respectively acquiring the conference video of each conference room;
determining a master conference room and at least one slave conference room of the at least one conference room;
generating a master video interface and at least one slave video interface which is covered on the master video interface for display;
and playing the conference video of the main conference room in the main video interface, and respectively playing the conference video of the at least one slave conference room in the at least one slave video interface.
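A minimal layout sketch for the master and slave video interfaces described in these steps is given below; the window sizes, the bottom-right placement, and the Rect structure are illustrative assumptions rather than values taken from this embodiment.

```python
# Layout sketch: the master interface fills the conference window and the slave
# interfaces are small rectangles overlaid along its bottom edge.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def layout_video_interfaces(screen_w, screen_h, slave_count, slave_scale=0.2, margin=10):
    master = Rect(0, 0, screen_w, screen_h)              # full-window master interface
    sw, sh = int(screen_w * slave_scale), int(screen_h * slave_scale)
    slaves = []
    for i in range(slave_count):                         # slaves overlaid from bottom-right
        x = screen_w - (i + 1) * (sw + margin)
        y = screen_h - sh - margin
        slaves.append(Rect(x, y, sw, sh))
    return master, slaves

master, slaves = layout_video_interfaces(1920, 1080, slave_count=4)
```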
The master video interface may be generated based on a video output component, and each slave video interface may likewise be generated based on a video output component. The video output component may be, for example, a plug-in or application capable of outputting video, such as a Flash component.
So that the conference video of the master conference room remains clearly visible, the size and occupied area of each slave video interface are generally set to be far smaller than those of the master video interface. Typically, the slave video interfaces may be located at the lower left or lower right of the master video interface.
To illustrate the overlay display relationship between the master video interface and the at least one slave video interface, the video playing interface in fig. 9 includes one master video interface 901 and four slave video interfaces 902, where the master video interface displays the conference content of the master conference room and the slave video interfaces display the conference content of the slave conference rooms.
In some embodiments, the number of slave video interfaces is fixed. When the number of slave video interfaces is larger than the number of slave conference rooms, the video content of a slave conference room may be output simultaneously in a plurality of slave video interfaces; when the number of slave video interfaces is smaller than the number of slave conference rooms, target slave conference rooms equal in number to the slave video interfaces may be selected from the at least one slave conference room, and the conference video of each target slave conference room is displayed in its own slave video interface.
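The selection rule in the preceding paragraph might be sketched as follows; repeating a room's video across spare interfaces and truncating to the first N rooms are assumptions used only to make the example concrete.

```python
# Assigning slave conference rooms to a fixed number of slave video interfaces.
def assign_rooms_to_interfaces(slave_rooms, interface_count):
    if not slave_rooms:
        return []
    if interface_count >= len(slave_rooms):
        # More interfaces than rooms: cycle so every interface shows some room's video.
        return [slave_rooms[i % len(slave_rooms)] for i in range(interface_count)]
    # Fewer interfaces than rooms: pick a target subset equal to the interface count.
    return slave_rooms[:interface_count]

print(assign_rooms_to_interfaces(["room_b", "room_c"], 4))           # a room repeated
print(assign_rooms_to_interfaces(["room_b", "room_c", "room_d"], 2))  # subset shown
```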
As shown in fig. 10, which is a flowchart of another embodiment of an information prompting method provided in the embodiment of the present application, the method may include the following steps:
1001: determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively.
1002: and respectively generating head portrait prompt information of the at least one conference room based on that the at least one conference room respectively corresponds to the at least one face image.
1003: and displaying the head portrait prompt information of each conference room in a conference interface.
1004: newly added and/or disappeared changed users of any conference room are detected.
1005: and determining the facial image of the changed user.
1006: and regenerating the head portrait prompt information of the conference room based on the changed user and the respective face image of the at least one conference user corresponding to the conference room.
That is, the head portrait prompt information of the conference room where the changed user is located is regenerated based on the changed user and the face images of the at least one conference user corresponding to that conference room.
1007: and updating the head portrait prompt information of each conference room in the conference interface.
Here, the at least one participant user refers to the original participant users in the conference room.
The changed users are the participant users newly added to and/or disappeared from the conference room: newly added participant users increase the head count, disappeared participant users decrease it, and added and disappeared users may exist at the same time. In order to improve the timeliness of the avatar prompt information, the avatar prompt information of the conference room can be regenerated based on the respective face images of the changed users and the at least one participant user corresponding to the conference room.
When there is a newly added changed user, the regenerating of the avatar prompt information of the conference room based on the changed user and the face image of the at least one conference user corresponding to the conference room may include: and taking the changed user and the original at least one conference user in the conference room as at least one new conference user in the conference room, and regenerating the head portrait prompt information of the conference room by using the face image of the changed user and the respective face image of the at least one conference user.
When there is a disappearing changed user, the regenerating the avatar prompt information of the conference room based on the changed user and the face image of the at least one conference user corresponding to the conference room may include: deleting the changed user from the at least one participant user based on the face image of the changed user and the respective face image of the at least one participant user to obtain a new participant user, and regenerating the head portrait prompt information of the conference room by using the face image of the new participant user.
When added changed users and disappeared changed users exist at the same time, the added changed users can be added into the conference users in the conference room, the disappeared users are deleted from the conference users in the conference room, new conference users are obtained, and the head portrait prompt information of the conference room is regenerated by using the face images of the new conference users.
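A compact sketch of this update logic is given below, assuming participants are tracked by identifier and face images are looked up per identifier; both data structures and the function names are illustrative assumptions.

```python
# New participants are appended, disappeared ones removed, and the avatar prompt
# is rebuilt from the resulting participant list.
def update_participants(current, added, removed):
    """current/added/removed are lists of participant ids; order is preserved."""
    kept = [uid for uid in current if uid not in set(removed)]
    return kept + [uid for uid in added if uid not in kept]

def regenerate_avatar_prompt(room_id, participants, face_images):
    # face_images maps participant id -> face image (placeholder paths here).
    return {"room": room_id, "avatars": [face_images[uid] for uid in participants]}

participants = update_participants(["u1", "u2", "u3"], added=["u4"], removed=["u2"])
prompt = regenerate_avatar_prompt("room_a", participants,
                                  {u: f"face_{u}.png" for u in participants})
```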
In the embodiment of the application, after the avatar prompt information of the at least one conference room is displayed in the conference interface, newly added and/or disappeared users in any conference room, that is, changed users, can be detected. Changed users are the people whose presence in the conference room has changed: a participant may leave the conference room, or a new participant may enter it. By monitoring the conference users in each conference room and regenerating the avatar prompt information of a conference room whenever its conference users change, based on the changed users and the face images of the at least one conference user corresponding to that room, the participants indicated by the avatar prompt information are kept up to date, which improves the prompting efficiency.
As an embodiment, when the changed user is a newly added user, the regenerating the avatar prompt information of the conference room based on the changed user and the face image of the at least one conference user corresponding to the conference room may include:
if the changed user is the user who currently publishes the conference content, the face image of the changed user is preferentially displayed, and the head portrait prompt information of the conference room is generated according to a display rule in which the face images of the at least one conference participating user in the conference room are displayed after the changed user;
if the changed user is not the user who currently publishes the conference content, the face images of the at least one conference user in the conference room are preferentially displayed, and the head portrait prompt information of the conference room is generated according to a display rule in which the face image of the changed user is displayed after the face images of the at least one conference user.
When a newly added user is the one publishing the conference content, the newly added user can be placed first in the prompting order, making the prompt more targeted, keeping the information prompt closer to the actual progress of the conference, improving the timeliness of the avatar prompt information, and thereby improving the prompting efficiency.
As an embodiment, when the changed user is a disappeared user, the regenerating of the avatar prompt information of the conference room based on the changed user and the respective face images of the at least one participating user corresponding to the conference room may include:
and regenerating the head portrait prompt information of the conference room by using the conference users, other than the changed user, among the at least one conference user corresponding to the conference room.
When a participant user disappears, the disappeared participant user can be deleted to obtain the updated at least one participant user of the conference room, and the head portrait prompt information of the conference room is regenerated using the participant users thus obtained.
As a possible implementation manner, the detecting of the changed users newly added or disappeared in any conference room may include:
collecting a face image of at least one current conference user corresponding to any conference room;
comparing the face image of at least one current conference user acquired by the conference room with the face image of at least one conference user acquired at the last time to obtain a comparison result;
and determining newly added and/or disappeared changed users in the conference room according to the comparison result.
Whether conference users in any conference room have disappeared or been added can be judged by acquiring the face images of the conference users in the conference room in real time and comparing them with the face images from the previous acquisition. A face image that exists among the current conference users but not among the previously acquired ones indicates a newly added conference user; a face image that does not exist among the current conference users but did exist among the previously acquired ones indicates a disappeared conference user. The comparison result between the two acquisitions thus yields the newly added and/or disappeared changed users of the conference room.
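Reduced to identifiers, the two-snapshot comparison described above can be sketched as a pair of set differences; matching face images to stable identifiers (for example via face embeddings) is assumed to have been done beforehand and is not shown.

```python
# Participants present in the current capture but not the previous one are newly
# added; those present previously but missing now have disappeared.
def detect_changed_users(previous_ids, current_ids):
    previous, current = set(previous_ids), set(current_ids)
    added = current - previous
    disappeared = previous - current
    return added, disappeared

added, disappeared = detect_changed_users({"u1", "u2", "u3"}, {"u1", "u3", "u5"})
# added == {"u5"}, disappeared == {"u2"}
```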
In some embodiments, the determining the respective facial images of the at least one conference user corresponding to the at least one conference room respectively, and obtaining the at least one facial image corresponding to the at least one conference room respectively may include:
and acquiring a conference initiating request.
And responding to the conference initiating request, and acquiring respective face images of at least one conference user respectively corresponding to the at least one conference room to obtain at least one face image respectively corresponding to the at least one conference room.
Before the conference begins, a user may initiate the network conference, for example by triggering a conference initiation control or performing other operations on the display screen. When the server receives the request operation triggered by the user, it can start acquiring the face images of the participating users. When the conference initiation request is detected by any one of the plurality of conference terminals, the request may be sent to the other conference terminals; when it is detected by the master conference terminal, the request can be sent to the plurality of slave conference terminals. Each conference room that receives the conference initiation request may display it on a display screen.
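The request fan-out described above might look like the following sketch, in which the terminal that detects the conference initiation operation forwards the request to every other conference terminal; the ConferenceTerminal class and its receive method are assumptions for illustration, not part of this specification.

```python
# Forwarding the conference initiation request from the detecting terminal
# to all other conference terminals.
class ConferenceTerminal:
    def __init__(self, room_id):
        self.room_id = room_id
        self.inbox = []

    def receive(self, message):
        self.inbox.append(message)      # e.g. display the initiation request on screen

def broadcast_initiation_request(initiator, terminals, request):
    for terminal in terminals:
        if terminal is not initiator:   # skip the terminal that detected the request
            terminal.receive(request)

terminals = [ConferenceTerminal(r) for r in ("room_a", "room_b", "room_c")]
broadcast_initiation_request(terminals[0], terminals,
                             {"type": "conference_init", "from": "room_a"})
```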
Optionally, the method may further include: and sending the conference interface to other conference rooms except the conference room which initiates the conference initiating request in at least one conference room, so that the other conference rooms can respectively display the conference interface. Specifically, the conference interface may be sent to a conference terminal in each conference room, so that the conference terminal outputs the conference interface on a display screen.
As an embodiment, a conference place where a conference room is located may be used as a prompt name of the conference room, and generating respective avatar prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room includes:
respectively determining conference room identifiers of the at least one conference room;
generating the avatar prompt information of any conference room based on at least one face image and the conference room identification corresponding to the conference room so as to obtain the avatar prompt information of the at least one conference room.
Optionally, the generating of the avatar prompt information of the conference room based on the at least one face image and the conference room identifier corresponding to any conference room to obtain the avatar prompt information of each of the at least one conference room may include: combining at least one face image corresponding to any meeting room to obtain a composite image corresponding to the meeting room; and generating the head portrait prompt information of the conference room by using the composite image corresponding to the conference room and the conference room identification so as to obtain the head portrait prompt information of the at least one conference room.
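As one hypothetical way to realize the compositing described above, the following sketch tiles a room's face images into a strip and draws the conference room identifier beneath it using Pillow; the tile size, caption layout, and room label are illustrative assumptions.

```python
# Compose a room's avatar prompt: face images side by side plus the room identifier.
from PIL import Image, ImageDraw

def build_avatar_prompt(face_images, room_label, tile=(64, 64), caption_h=20):
    w = tile[0] * max(len(face_images), 1)
    canvas = Image.new("RGB", (w, tile[1] + caption_h), "white")
    for i, face in enumerate(face_images):
        canvas.paste(face.resize(tile), (i * tile[0], 0))   # faces in a row
    ImageDraw.Draw(canvas).text((4, tile[1] + 2), room_label, fill="black")
    return canvas

# Usage with placeholder face images and an illustrative room identifier.
faces = [Image.new("RGB", (128, 128), c) for c in ("gray", "lightblue", "beige")]
prompt_image = build_avatar_prompt(faces, "Conference Room A")
```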
The conference room identification can be the conference address where the conference room is located, and the conference address is used as a part of the avatar prompt information, so that the location of each conference room is more definite, and the prompt efficiency is improved.
The face image of a participating user can also be obtained by identifying the user's identity through voiceprint recognition and then retrieving the face image from the user's identity information. As another embodiment, the determining the respective facial images of the at least one conference user corresponding to the at least one conference room respectively, and the obtaining the at least one facial image corresponding to the at least one conference room respectively may include:
and respectively collecting the respective sound signals of at least one conference user corresponding to the at least one conference room.
And for any conference room, identifying the identity information of at least one participant corresponding to the conference room based on the respective sound signals of the at least one participant corresponding to the conference room.
And respectively extracting the face images in the identity information of at least one conference user corresponding to the conference room, and obtaining at least one face image corresponding to the conference room so as to obtain at least one face image corresponding to the at least one conference room.
The respective sound signals of at least one of the participating users in any conference room may be obtained by capturing the sound of each of the participating users in the conference room. The respective sound signals of at least one participant can be acquired after the participants have entered the conference room.
Optionally, for any conference room, the identifying, based on the respective sound signal of the at least one conference user corresponding to the conference room, the identity information of the at least one conference user corresponding to the conference room may include:
aiming at any meeting room, extracting the sound characteristics of each meeting participating user based on the respective sound signals of the meeting participating users corresponding to the meeting room;
searching a target sound characteristic matched with the sound characteristic of each participating user from a sound characteristic library;
and determining the face image associated with the target sound characteristic corresponding to each conference user as the face image of the conference user so as to obtain the face image of at least one conference user corresponding to the conference room.
When extracting the sound feature of each conference user, a voiceprint or speech recognition algorithm can be applied to the sound signal of that conference user.
The sound feature library is built in advance and contains, for each user who may participate in the network conference, the user's sound signal and the features extracted from it. Each sound signal may be associated with a user identity.
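The sound feature library might be organized as sketched below, with each entry associating a stored voice feature with a user identity and that identity's face image; the dot-product matching, the threshold, and the file paths are assumptions for illustration only.

```python
# A pre-built library entry ties a voice feature to identity information,
# so matching an incoming feature yields the face image for the avatar prompt.
from dataclasses import dataclass
import numpy as np

@dataclass
class VoiceEntry:
    user_id: str
    feature: np.ndarray      # unit-norm voice feature
    face_image_path: str     # identity information associated with the voice

def lookup_face_image(query_feature, library, threshold=0.7):
    best = max(library, key=lambda e: float(np.dot(query_feature, e.feature)))
    score = float(np.dot(query_feature, best.feature))
    return best.face_image_path if score >= threshold else None

rng = np.random.default_rng(1)
def unit(v): return v / np.linalg.norm(v)
library = [VoiceEntry("u1", unit(rng.normal(size=64)), "faces/u1.png"),
           VoiceEntry("u2", unit(rng.normal(size=64)), "faces/u2.png")]
face = lookup_face_image(library[0].feature, library)   # -> "faces/u1.png"
```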
As shown in fig. 11, which is a flowchart of another embodiment of the information prompting method provided in the embodiment of the present application, the method may include the following steps:
1101: the method comprises the steps of collecting respective face images of at least one conference user corresponding to at least one conference room respectively to obtain at least one face image corresponding to the at least one conference room respectively.
Some steps of the embodiments of the present application are the same as those of the embodiments described above, and are not described herein again.
1102: and respectively generating head portrait prompt information of the at least one conference room based on at least one face image acquired from the at least one conference room respectively.
1103: and displaying the head portrait prompt information of each conference room on a conference interface.
In the embodiment of the application, the respective face images of the at least one conference user corresponding to each of the at least one conference room can be collected to obtain the at least one face image corresponding to each conference room. Each face image serves as the head portrait of a conference user of the corresponding conference room. After the head portrait prompt information of each conference room is generated based on the face images acquired from that room and displayed in the conference interface, a unified prompt of the conference users of all conference rooms is achieved, users viewing the conference interface can quickly and clearly identify the conference users in each conference room, and the prompting efficiency is improved.
In the foregoing embodiment, since a plurality of participating users may exist in one conference room, a corresponding prompt sequence may be set for the face image of each participating user, so that the avatar prompt information may prompt the participating users in the conference room according to a certain rule. As an embodiment, the method may further include:
determining the respective prompt sequence of at least one face image corresponding to any meeting room;
the generating respective avatar prompt information of the at least one conference room based on at least one face image acquired from the at least one conference room respectively comprises:
and generating, for any conference room, the head portrait prompt information of the conference room according to the prompt sequence of the at least one face image corresponding to that conference room, so as to obtain the head portrait prompt information of the at least one conference room.
The face images of each participating user in the conference room can be collected at the beginning of the conference. As another embodiment, the acquiring the respective facial images of the at least one conference user respectively corresponding to the at least one conference room to obtain the at least one facial image respectively corresponding to the at least one conference room may include:
acquiring a conference initiating request;
and responding to the conference initiating request, and acquiring respective face images of at least one conference user corresponding to at least one conference room respectively so as to obtain at least one face image corresponding to the at least one conference room respectively.
In some embodiments, in order to distinguish different conference rooms, a corresponding conference room identifier may be set for each conference room, and when generating avatar prompt information corresponding to each conference room, the conference room identifier of each conference room may be used as a part of the avatar prompt information in order to improve the prompt effect. Therefore, the generating respective avatar prompt information of the at least one conference room based on the at least one face image acquired from the at least one conference room respectively may include:
respectively determining conference room identifiers of the at least one conference room;
generating avatar prompt information of the conference room based on at least one face image acquired from any conference room and the conference room identification of the conference room, so as to acquire the avatar prompt information of each conference room.
The generation process of the avatar information is the same as that in the foregoing embodiment, and the specific steps and implementation thereof may refer to the description in the foregoing embodiment, which is not repeated herein.
In some embodiments, the number of participating users differs between conference rooms: in one possible application, one conference room may have only a single participating user while another has 10. Because the conference interface is of limited size, the avatar prompt information of each conference room can only occupy a limited display area. If a conference room has many participating users and the face images of all of them were used to generate the avatar prompt information, the display area for each participating user could become too small to distinguish all of them. The number of users contributing to the generation of each conference room's avatar prompt information can therefore be limited (see the sketch after the steps below). In this case, the generating the avatar prompt information of each of the at least one conference room based on the at least one face image acquired from each conference room may include:
determining the target prompt quantity respectively corresponding to the at least one conference room;
for the at least one conference user corresponding to any conference room, selecting target prompt users whose number equals the target prompt quantity corresponding to the conference room, so as to obtain the target prompt users corresponding to the conference room;
and generating the head portrait prompt information of any conference room based on the face images of the target prompt users corresponding to that conference room, so as to obtain the head portrait prompt information of the at least one conference room.
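A minimal sketch of the target-prompt-quantity rule follows; ordering participants by entry time before truncation is an assumption consistent with the prompt-order rule described earlier in the text, not a requirement stated here.

```python
# Limit the number of participants whose face images contribute to a room's avatar prompt.
def select_target_prompt_users(participants_by_entry_time, target_count):
    """participants_by_entry_time: participant ids already sorted by entry time."""
    return participants_by_entry_time[:target_count]

room_participants = [f"u{i}" for i in range(1, 11)]        # 10 participants
target_users = select_target_prompt_users(room_participants, target_count=5)
```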
A user's voice can be used to identify the user: by collecting the user's voice and identifying the user's identity, the user's face image can be obtained. As another embodiment, the acquiring the respective facial images of the at least one conference user respectively corresponding to the at least one conference room to obtain the at least one facial image respectively corresponding to the at least one conference room may include:
respectively collecting respective sound signals of at least one conference user corresponding to the at least one conference room;
for any conference room, identifying the identity information of at least one participant corresponding to the conference room based on the respective sound signals of the at least one participant corresponding to the conference room;
and respectively extracting the face images in the identity information of at least one conference user corresponding to the conference room, and obtaining at least one face image corresponding to the conference room so as to obtain at least one face image corresponding to the at least one conference room.
The parts not described in detail in the embodiment shown in fig. 11 may refer to the relevant descriptions of the embodiments or examples shown in fig. 2 to 10, and the implementation process and technical effects of the technical solution may refer to the descriptions in the embodiments or examples shown in fig. 2 to 10, which are not described again here.
As shown in fig. 12, which is a schematic structural diagram of an embodiment of a prompting device provided in the present application, the device may provide a display interface 1201 for at least one conference room respectively;
the display interface 1201 of each conference room is used for displaying a conference interface;
wherein the conference interface displays the head portrait prompt information of the at least one conference room, and the head portrait prompt information of each conference room is generated based on the face images of the conference users corresponding to that conference room.
The display interface refers to the on-screen layout used for human-computer interaction and for displaying content; it can be a window or a panel.
By providing a display interface for each conference room, the conference interface containing the head portrait prompt information of the at least one conference room can be output in every conference room, thereby realizing synchronous prompting in each conference room and improving both the display efficiency and the comprehensiveness of the prompt.
The manner of obtaining the avatar prompt information of at least one conference room displayed in the conference interface may refer to the embodiments shown in fig. 2 to 11, and details are not repeated here.
As shown in fig. 13, a schematic structural diagram of an embodiment of an information prompting device provided in the embodiment of the present application is shown, where the device may include:
the image determining module 1301 is configured to determine respective face images of at least one conference user corresponding to at least one conference room, and obtain at least one face image corresponding to the at least one conference room.
A first generating module 1302, configured to generate avatar prompt information for each conference room based on the face image of at least one conference user corresponding to the at least one conference room respectively.
And the first display module 1303 is configured to display the avatar prompt information of each of the at least one conference room in the conference interface.
As an embodiment, the apparatus may further include:
and the sequence determining module is used for determining the respective prompt sequence of at least one face image corresponding to any meeting room.
The first generating module may include:
the first generating unit is used for generating the head portrait prompt information of any conference room according to the prompt sequence of at least one face image corresponding to the conference room so as to obtain the head portrait prompt information of the conference room.
In some embodiments, the order determination module may include:
the first determining unit is used for determining the sequence of at least one conference user entering the conference room corresponding to any conference room.
And the second determining unit is used for determining a prompting sequence corresponding to the face image of at least one conference user in the conference room according to the sequence that the at least one conference user corresponding to the conference room enters the conference room respectively.
As an embodiment, the first generating module may include:
and the image generating unit is used for combining the at least one conference room corresponding to the at least one face image respectively to obtain a composite image corresponding to the at least one conference room respectively.
And the information generating unit is used for generating the head portrait prompt information of the conference room according to the composite image corresponding to any conference room and the conference room identification so as to obtain the head portrait prompt information of each conference room.
In some embodiments, the first display module may include:
the interface generating unit is used for generating a prompt interface according to the head portrait prompt information respectively corresponding to the at least one conference room;
and the interface display unit is used for displaying the prompt interface in the conference interface.
As a possible implementation manner, the interface generating unit may include:
and the first generating subunit is used for respectively generating blank prompt sub-interfaces of the at least one conference room.
And the interface obtaining sub-unit is used for drawing the head portrait prompt information of the conference room in the blank prompt sub-interface corresponding to any conference room, obtaining the prompt sub-interface of the conference room and obtaining the prompt sub-interface of each conference room.
And the interface generating subunit is used for generating the prompt interface according to the prompt sub-interfaces respectively corresponding to the at least one conference room.
As another possible implementation manner, the interface display unit may include:
and the area determining subunit is used for determining a prompt area of the prompt interface in the conference interface.
And the interface drawing subunit is used for drawing the prompt interface in the prompt area of the conference interface.
As an embodiment, the first display module may include:
the release determining unit is used for determining a release conference room which currently releases the conference content in the at least one conference room;
and the first display unit is used for preferentially displaying the avatar prompt information corresponding to the release conference room in a conference interface.
And the second display unit is used for displaying the avatar prompt information corresponding to the conference rooms except the publishing conference room in the at least one conference room after the avatar prompt information of the publishing conference room in the conference interface.
In some embodiments, the issue determination unit may include:
and the sound acquisition subunits are used for respectively acquiring the current sound signals of the at least one conference room.
The first determining subunit is configured to determine that the conference room is the publishing conference room if the current sound signal of any conference room is not empty.
In some embodiments, the issue determination unit may include:
and the image acquisition subunit is used for respectively acquiring the current conference image of the at least one conference room.
And the second determining subunit is used for determining that the conference room is the publishing conference room if the current conference image of any conference room meets the publishing condition.
In some embodiments, the apparatus may further comprise:
and the user determining module is used for determining the publishing user who currently publishes the conference content in the publishing conference room.
As a possible implementation manner, the avatar prompt information of the publishing conference room may be determined by:
and generating the head portrait prompt information of the publishing conference room according to a display rule in which the face image of the publishing user is displayed first and the face images of the other participating users in the publishing conference room are displayed after the publishing user.
In one possible design, the user determination module may include:
and the sound acquisition unit is used for acquiring the publishing sound signal of the publishing conference room.
The first extraction unit is used for extracting the distribution sound characteristics of the distribution sound signals corresponding to the distribution conference room.
And the first identification unit is used for carrying out identity identification processing by utilizing the publishing sound characteristics to obtain a publishing user who publishes the current conference content in the publishing conference room.
In yet another possible design, the user determination module may include:
and the image acquisition unit is used for acquiring the published panoramic image in the publishing conference room.
And the gesture recognition unit is used for recognizing the motion gesture of at least one participant user in the published panoramic image.
And the second identification unit is used for determining the participant users with the motion gestures in the publishing state as the publishing users who publish the conference content at present based on the motion gestures respectively corresponding to at least one participant user in the publishing panoramic image.
As an embodiment, the apparatus may further include:
and the user detection module is used for detecting the newly added and/or disappeared changed users of any conference room.
And the image determining module is used for determining the facial image of the changed user.
And the information updating module is used for regenerating the head portrait prompt information of the conference room based on the changed user and the respective face images of the at least one conference user corresponding to the conference room.
And the interface updating module is used for updating the head portrait prompt information of each conference room in the conference interface.
In some embodiments, when the changed user is a newly added user, the information updating module may include:
and the second generating unit is used for generating the head portrait prompt information of the conference room according to the display rule that the face image of the changed user is preferentially displayed and the face image of at least one conference user in the conference room is displayed after the changed user if the changed user is the user who currently issues the conference content.
And a third generating unit, configured to, if the change user is not the user who currently issues the conference content, preferentially display the face image of the at least one conference-participating user in the conference room, and generate avatar prompt information in the conference room according to a display rule that the face image of the change user is displayed behind the face image of the at least one conference-participating user.
In some embodiments, when the changed user is a disappearing user, the information updating module may include:
and a fourth generating unit, configured to regenerate the avatar prompt information of the conference room by using the other conference users except the change user among the at least one conference user corresponding to the conference room.
As a possible implementation manner, the user detection module may include:
the first acquisition unit is used for acquiring the face image of at least one current conference user corresponding to any conference room.
And the image comparison unit is used for comparing the face image of at least one current conference user acquired by the conference room with the face image of at least one conference user acquired at the last time to obtain a comparison result.
And the change determining unit is used for determining newly added and/or disappeared changed users in the conference room according to the comparison result.
As an embodiment, the image determination module may include:
and the request acquisition unit is used for acquiring the conference initiation request.
And the request response unit is used for responding to the conference initiating request, acquiring the respective facial images of at least one conference user corresponding to the at least one conference room respectively, and acquiring the at least one facial image corresponding to the at least one conference room respectively.
As still another embodiment, the first generating module may include:
an identifier determining unit, configured to determine conference room identifiers of the at least one conference room respectively.
And the fifth generating unit is used for generating the head portrait prompt information of the conference room based on at least one face image and the conference room identification corresponding to any conference room so as to obtain the head portrait prompt information of at least one conference room.
As still another embodiment, the first generating module may include:
and the quantity determining unit is used for determining the target prompt quantity respectively corresponding to the at least one conference room.
And the user determining unit is used for selecting the target prompt users with the target prompt quantity corresponding to the conference room aiming at least one conference user corresponding to any conference room, and acquiring the target prompt users corresponding to the conference room.
And the sixth generating unit is used for generating the head portrait prompt information of any conference room based on the face images of the target prompt users corresponding to that conference room, so as to obtain the head portrait prompt information of each conference room.
In some embodiments, the user determination unit may include:
and the third determining subunit is used for determining the respective prompt sequence of the at least one participant corresponding to any conference room according to the sequence of the at least one participant entering the conference room corresponding to any conference room.
And the first obtaining subunit is configured to sequentially select target prompt users, up to the target prompt quantity corresponding to the conference room, according to the prompt order of the at least one conference user corresponding to the conference room from first to last, so as to obtain the target prompt users corresponding to the conference room.
As an embodiment, the image determination module may include:
and the second acquisition unit is used for acquiring the respective face image of at least one conference user corresponding to the at least one conference room respectively to obtain at least one face image corresponding to the at least one conference room respectively.
As yet another embodiment, the apparatus may further include:
and the feature extraction module is used for extracting the respective face features of at least one conference user aiming at the respective face image of at least one conference user corresponding to any conference room.
And the identity identification module is used for identifying the identity information corresponding to the at least one conference user respectively corresponding to the conference room according to the respective face features of the at least one conference user corresponding to the conference room.
And the identity output module is used for detecting the selection operation aiming at any avatar prompt message and outputting the respective identity information of at least one conference user in the conference room corresponding to the selected avatar prompt message.
As yet another embodiment, the image determination module may include:
and the third acquisition unit is used for respectively acquiring the respective sound signals of at least one conference user corresponding to the at least one conference room.
And the voice identification unit is used for identifying the identity information of at least one participant corresponding to the conference room based on the respective voice signal of at least one participant corresponding to the conference room aiming at any conference room.
And the image determining unit is used for respectively extracting the face images in the identity information of at least one conference user corresponding to the conference room, and obtaining at least one face image corresponding to the conference room so as to obtain at least one face image corresponding to the at least one conference room.
As a possible implementation manner, the voice recognition unit may include:
and the feature extraction subunit is used for extracting the sound features of each participant according to the sound signals of the participants corresponding to the conference room in any conference room.
And the characteristic matching subunit is used for searching the target sound characteristic matched with the sound characteristic of each participating user from the sound characteristic library.
And the identity determining subunit is used for determining the identity information associated with the target sound feature corresponding to each conference user as the identity information of the conference user so as to obtain the identity information of at least one conference user corresponding to the conference room.
As an embodiment, the apparatus may further include:
and the video display module is used for responding to the triggering operation of the video display control in the conference interface and displaying the conference video of each conference room.
As a possible implementation, the video display module may include:
and the trigger response unit is used for responding to the trigger operation aiming at the video display control in the conference interface and respectively acquiring the conference video of the at least one conference room.
A third determining unit, configured to determine a master conference room and at least one slave conference room of the at least one conference room.
And the video interface unit is used for generating a main video interface and at least one slave video interface which is covered on the main video interface for display.
And the video display unit is used for playing the conference video of the main conference room in the main video interface and playing the conference video of the at least one slave conference room in the at least one slave video interface respectively.
The information prompting device shown in fig. 13 may execute the information prompting method shown in the embodiment of fig. 1; its implementation principle and technical effect are not described again. The specific manner in which each module, unit, and subunit of the information prompting device performs its operations has been described in detail in the embodiments related to the method and is not repeated here.
In some possible designs, the embodiment shown in fig. 13 may be configured as an information prompting device; for example, the information prompting device may be a server, any conference terminal, or the master conference terminal. The apparatus may include a storage component 1401 and a processing component 1402; the storage component is used for storing one or more computer instructions, and the one or more computer instructions are called and executed by the processing component.
the processing component 1402 is configured to:
determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively; respectively generating head portrait prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room; and displaying the head portrait prompt information of each conference room on a conference interface.
The device may further include a presentation component 1403 configured to present the conference interface, where the conference interface includes the avatar prompt information of the at least one conference room. The information prompting device can simultaneously control a plurality of presentation components to output the conference interface; as shown in fig. 15, an information prompting device 1501 can connect with multiple presentation components 1502 at the same time and control each presentation component 1502 to output the conference interface. The presentation component, that is, the display screen, may be a display whiteboard, a computer screen, a projection screen, or the like; its type is not limited here.
As an embodiment, the processing component may be further to: and determining the respective prompt sequence of at least one face image corresponding to any meeting room.
The specific examples of the processing component generating the respective avatar prompt information of the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room may be:
and generating, for any conference room, the head portrait prompt information of the conference room according to the prompt sequence of the at least one face image corresponding to that conference room, so as to obtain the head portrait prompt information of the at least one conference room.
In some embodiments, the processing component may specifically determine that the prompting sequence of at least one face image corresponding to any conference room is as follows:
aiming at any meeting room, determining the sequence of at least one conference user corresponding to the meeting room entering the meeting room respectively; and determining a prompt sequence corresponding to the respective face images of at least one participant in the conference room according to the sequence in which the at least one participant corresponding to the conference room enters the conference room.
As another embodiment, the generating, by the processing component, the avatar prompt information of each of the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room may specifically be:
combining the at least one conference room with at least one face image respectively to obtain composite images respectively corresponding to the at least one conference room;
and generating the head portrait prompt information of the conference room according to the composite image corresponding to any conference room and the conference room identification so as to obtain the head portrait prompt information of at least one conference room.
In some embodiments, the presenting, by the processing component, the avatar prompt information of each of the at least one conference room in the conference interface may specifically be: generating a prompt interface according to the head portrait prompt information respectively corresponding to the at least one conference room; and displaying the prompt interface in the conference interface.
As another embodiment, the generating, by the processing component, a prompt interface according to the avatar prompt information respectively corresponding to the at least one conference room may specifically be:
generating blank prompt sub-interfaces of the at least one conference room respectively;
drawing the head portrait prompt information of any conference room in a blank prompt sub-interface corresponding to the conference room, and obtaining the prompt sub-interface of the conference room so as to obtain the prompt sub-interface of each conference room;
and generating the prompt interface according to the prompt sub-interfaces respectively corresponding to the at least one conference room.
In some embodiments, the displaying, by the processing component, the prompt interface in the conference interface may specifically be: determining a prompt area of the prompt interface in the conference interface; and drawing the prompt interface in a prompt area of the conference interface.
As an embodiment, the displaying, by the processing component, the avatar prompt information of each of the at least one conference room in the conference interface may specifically be:
determining a publishing conference room which currently publishes conference content in the at least one conference room; preferentially displaying the head portrait prompt information corresponding to the release meeting room in a meeting interface; and displaying the avatar prompt information corresponding to the conference rooms except the publishing conference room in the at least one conference room after the avatar prompt information of the publishing conference room in the conference interface.
As another embodiment, the determining, by the processing component, that the publishing conference room currently publishing the conference content in the at least one conference room may specifically be:
respectively collecting the current sound signals of the at least one conference room; and if the current sound signal of any conference room is not empty, determining that the conference room is the publishing conference room.
As a possible implementation manner, the determining, by the processing component, a publishing conference room in the at least one conference room, where the conference content is currently published may specifically be:
respectively collecting current conference images of the at least one conference room; and if the current conference image of any conference room meets the publishing condition, determining that the conference room is the publishing conference room.
As another possible implementation, the processing component may be further configured to:
determining a publishing user who currently publishes the conference content in the publishing conference room;
the head portrait prompt message of the release conference room is determined by the following method:
and generating the head portrait prompt information of the publishing conference room according to a display rule in which the face image of the publishing user is displayed first and the face images of the other participating users in the publishing conference room are displayed after the publishing user.
Optionally, the determining, by the processing component, that the publishing user currently publishing the conference content in the publishing conference room may specifically be:
collecting a release sound signal of the release conference room;
extracting the release sound characteristics of the release sound signals corresponding to the release conference room;
and performing identity recognition processing by using the publishing sound characteristics to obtain the publishing user who outputs the current conference content in the publishing conference room.
Optionally, the determining, by the processing component, that the publishing user currently publishing the conference content in the publishing conference room may specifically be:
collecting a release panoramic image in the release conference room;
identifying a motion gesture of at least one participant user in the published panoramic image;
and determining the participant users with the motion gestures in the publishing state as the publishing users who publish the conference content at present based on the motion gestures respectively corresponding to at least one participant user in the publishing panoramic image.
As yet another embodiment, the processing component may be further to:
detecting newly added and/or disappeared changed users of any conference room; determining a face image of the change user; regenerating the head portrait prompt information of the conference room based on the changed user and the respective face images of at least one conference participating user corresponding to the conference room; and updating the head portrait prompt information of each conference room in the conference interface.
As an embodiment, when the processing component changes the user to a newly added user, the regenerating of the avatar prompt information of the conference room based on the changed user and the face image of the at least one conference user corresponding to the conference room may specifically be:
if the change user is the user who currently issues the conference content, the face image of the change user is preferentially displayed, and the head portrait prompt information of the conference room is generated according to the display rule that the face image of at least one conference participating user in the conference room is displayed after the change user;
if the changed user is not the user who currently publishes the conference content, the face images of the at least one conference user in the conference room are preferentially displayed, and the head portrait prompt information of the conference room is generated according to a display rule in which the face image of the changed user is displayed after the face images of the at least one conference user.
In some embodiments, when the processing component changes the user to a disappearing user, the regenerating of the avatar prompt information of the conference room based on the changed user and the face image of the at least one conference-participating user corresponding to the conference room may specifically be:
and regenerating the head portrait prompt information of the conference room by using the conference users, other than the changed user, among the at least one conference user corresponding to the conference room.
In some embodiments, the modified user whose processing component detects new addition and/or disappearance of any conference room may specifically be:
collecting a face image of at least one current conference user corresponding to any conference room;
comparing the face image of at least one current conference user acquired by the conference room with the face image of at least one conference user acquired at the last time to obtain a comparison result;
and determining newly added and/or disappeared changed users in the conference room according to the comparison result.
As an embodiment, the determining, by the processing component, respective face images of at least one conference user corresponding to at least one conference room respectively, and the obtaining at least one face image corresponding to at least one conference room respectively may specifically be:
acquiring a conference initiating request; and responding to the conference initiating request, acquiring respective face images of at least one conference user corresponding to the at least one conference room respectively, and acquiring at least one face image corresponding to the at least one conference room respectively.
As another embodiment, the generating, by the processing component, the avatar prompt information of each of the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room may specifically be:
respectively determining conference room identifiers of the at least one conference room;
generating the avatar prompt information of any conference room based on at least one face image and the conference room identification corresponding to the conference room so as to obtain the avatar prompt information of the at least one conference room.
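A simple data-structure sketch of this step (the AvatarPrompt shape is an assumption used only to make the idea concrete):

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class AvatarPrompt:
        room_id: str                      # conference room identifier, e.g. a room name
        face_images: List[bytes] = field(default_factory=list)  # ordered face thumbnails

    def build_avatar_prompts(room_faces: Dict[str, List[bytes]]) -> List[AvatarPrompt]:
        """room_faces maps each conference room identifier to its ordered face images."""
        return [AvatarPrompt(room_id, faces) for room_id, faces in room_faces.items()]

    # Example with two conference rooms.
    prompts = build_avatar_prompts({"Room-A": [b"f1", b"f2"], "Room-B": [b"f3"]})
    print([(p.room_id, len(p.face_images)) for p in prompts])  # [('Room-A', 2), ('Room-B', 1)]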
In some embodiments, the generating, by the processing component, the avatar prompt information of each of the at least one conference room based on the at least one face image corresponding to each of the at least one conference room may specifically be:
determining the target prompt quantity respectively corresponding to the at least one conference room; for the at least one conference user corresponding to any conference room, selecting target prompt users of the target prompt quantity corresponding to the conference room, and obtaining the target prompt users corresponding to the conference room; and generating the head portrait prompt information of the conference room based on the face images of the target prompt users corresponding to any conference room, so as to obtain the head portrait prompt information of the at least one conference room.
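One way this selection might look in code, assuming (as the next paragraphs describe) that the prompt order follows the order in which users entered the conference room; the tuple format is illustrative:

    def select_target_prompt_users(participants, target_count):
        """participants: list of (user_id, entry_time) pairs for one conference room.

        Sort by the time each user entered the room (earliest first) and keep only
        as many users as the room's target prompt quantity allows.
        """
        by_entry_order = sorted(participants, key=lambda item: item[1])
        return [user_id for user_id, _ in by_entry_order[:target_count]]

    # Example: a room whose target prompt quantity is 2.
    room = [("u1", 10), ("u2", 5), ("u3", 20)]
    print(select_target_prompt_users(room, 2))  # ['u2', 'u1']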
As a possible implementation manner, for at least one conference user corresponding to any conference room, the processing component selects a target number of prompt users corresponding to the conference room, and the obtaining of the target prompt users corresponding to the conference room may specifically be:
determining, for any conference room, the respective prompt sequence of the at least one conference user corresponding to the conference room according to the order in which the at least one conference user entered the conference room;
and sequentially selecting, in the arrangement order of that prompt sequence, target prompt users of the target prompt quantity corresponding to the conference room, so as to obtain the target prompt users corresponding to the conference room.
As another embodiment, the determining, by the processing component, of the respective face images of at least one conference user corresponding to at least one conference room respectively, and the obtaining of at least one face image corresponding to the at least one conference room respectively may specifically be:
and acquiring the respective face image of at least one conference user corresponding to the at least one conference room respectively to obtain at least one face image corresponding to the at least one conference room respectively.
In some embodiments, the processing component may be further operative to:
and extracting the respective face features of at least one conference user aiming at the respective face image of at least one conference user corresponding to any conference room.
And identifying the identity information corresponding to the at least one conference user respectively corresponding to the conference room according to the respective face characteristics of the at least one conference user corresponding to the conference room.
And detecting selection operation aiming at any avatar prompt message, and outputting the identity information of at least one conference user in a conference room corresponding to the selected avatar prompt message.
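To make the face-feature matching and the selection handling concrete, a rough sketch follows; the cosine-similarity matcher and the callback signature are assumptions, not something the embodiment prescribes:

    import math
    from typing import Dict, List

    def cosine_similarity(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def identify_users(face_features: Dict[str, List[float]],
                       feature_library: Dict[str, List[float]],
                       threshold: float = 0.8) -> Dict[str, str]:
        """Match each participant's face feature vector against a feature library."""
        identities = {}
        for key, feature in face_features.items():
            best_id, best_score = "unknown", threshold
            for identity, reference in feature_library.items():
                score = cosine_similarity(feature, reference)
                if score >= best_score:
                    best_id, best_score = identity, score
            identities[key] = best_id
        return identities

    def on_avatar_prompt_selected(room_id: str,
                                  room_identities: Dict[str, List[str]]) -> List[str]:
        """Output the identity information of the participants in the selected room."""
        return room_identities.get(room_id, [])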
As an embodiment, the determining, by the processing component, respective face images of at least one conference user corresponding to at least one conference room respectively, and the obtaining at least one face image corresponding to at least one conference room respectively may specifically be:
respectively collecting respective sound signals of at least one conference user corresponding to the at least one conference room;
for any conference room, identifying the identity information of at least one participant corresponding to the conference room based on the respective sound signals of the at least one participant corresponding to the conference room;
and respectively extracting the face images in the identity information of at least one conference user corresponding to the conference room, and obtaining at least one face image corresponding to the conference room so as to obtain at least one face image corresponding to the at least one conference room.
As another embodiment, for any conference room, based on the respective sound signal of the at least one participant corresponding to the conference room, the identification information for identifying the at least one participant corresponding to the conference room may specifically be:
aiming at any meeting room, extracting the sound characteristics of each meeting participating user based on the respective sound signals of the meeting participating users corresponding to the meeting room; searching a target sound characteristic matched with the sound characteristic of each participating user from a sound characteristic library; and determining the identity information associated with the target sound characteristics corresponding to each participating user as the identity information of the participating user so as to obtain the identity information of at least one participating user corresponding to the conference room.
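A hedged sketch of the voiceprint lookup described above; Euclidean distance and the 0.5 cut-off are placeholders for whatever matching criterion is actually used:

    from typing import Dict, List, Optional

    def match_voiceprint(sound_feature: List[float],
                         voiceprint_library: Dict[str, List[float]],
                         max_distance: float = 0.5) -> Optional[str]:
        """Find the library voiceprint closest to one participant's sound feature.

        voiceprint_library maps identity information (e.g. a user name) to a
        reference sound feature vector; None is returned when nothing matches.
        """
        def euclidean(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        best_identity, best_distance = None, float("inf")
        for identity, reference in voiceprint_library.items():
            d = euclidean(sound_feature, reference)
            if d < best_distance:
                best_identity, best_distance = identity, d
        return best_identity if best_distance <= max_distance else None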
As an embodiment, the processing component may be further configured to:
and in response to the triggering operation of the video display control in the conference interface, displaying the conference video of each conference room.
In some embodiments, the processing component, in response to a triggering operation for the video display control in the conference interface, may specifically display the conference video of each of the at least one conference room by:
responding to the triggering operation of a video display control in the conference interface, and respectively acquiring the conference videos of the at least one conference room;
determining a master conference room and at least one slave conference room of the at least one conference room;
generating a master video interface and at least one slave video interface overlaid on the master video interface for display;
and playing the conference video of the main conference room in the main video interface, and respectively playing the conference video of the at least one slave conference room in the at least one slave video interface.
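For illustration, the master/slave video layout could be computed as simple rectangles, with each slave video interface overlaid along the bottom of the master video interface (all sizes are arbitrary assumptions):

    def layout_video_interfaces(screen_w: int, screen_h: int, slave_count: int,
                                slave_w: int = 320, slave_h: int = 180, margin: int = 16):
        """Return the master video rectangle and one overlaid rectangle per slave room.

        Each rectangle is (x, y, width, height); the master interface fills the
        whole conference interface and the slave interfaces are drawn on top of it.
        """
        master = (0, 0, screen_w, screen_h)
        slaves = [(margin + i * (slave_w + margin), screen_h - slave_h - margin,
                   slave_w, slave_h)
                  for i in range(slave_count)]
        return master, slaves

    # Example: a 1920x1080 conference interface with two slave conference rooms.
    master_rect, slave_rects = layout_video_interfaces(1920, 1080, 2)
    print(slave_rects)  # [(16, 884, 320, 180), (352, 884, 320, 180)]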
The information prompting device shown in fig. 14 may execute the information prompting method of the embodiment shown in fig. 1; its implementation principles and technical effects are not described again here. The specific manner in which the processing component of the information prompting device performs its operations in the above embodiments has been described in detail in the method embodiments and will not be elaborated here.
As shown in fig. 16, a schematic structural diagram of another embodiment of an information prompting device provided in the embodiment of the present application is shown, where the device may include:
the image acquisition module 1601 is configured to acquire respective face images of at least one conference user corresponding to at least one conference room, so as to obtain at least one face image corresponding to the at least one conference room.
A second generating module 1602, configured to generate respective avatar prompt information of the at least one conference room based on at least one facial image acquired from the at least one conference room respectively.
A second presenting module 1603, configured to present the avatar prompt information of each of the at least one conference room in the conference interface.
As an embodiment, the apparatus may further include:
the first determining module is used for determining the respective prompt sequence of at least one face image corresponding to any meeting room.
The second generating module may include:
and the seventh generating unit is used for generating the head portrait prompting information of the conference room aiming at the prompting sequence of at least one face image corresponding to any conference room so as to obtain the head portrait prompting information of at least one conference room.
As an embodiment, the image acquisition module may include:
the first acquisition unit is used for acquiring the conference initiation request.
And the first response unit is used for responding to the conference initiating request, and acquiring the respective facial image of at least one conference user corresponding to at least one conference room respectively so as to obtain at least one facial image corresponding to the at least one conference room respectively.
In some embodiments, the second generating module may include:
a conference determining unit, configured to determine conference room identifiers of the at least one conference room respectively.
The information obtaining unit is used for generating the head portrait prompt information of the conference room based on at least one face image acquired from any conference room and the conference room identification of the conference room so as to obtain the head portrait prompt information of each conference room.
In some embodiments, the second generating module may include:
and the fourth determining unit is used for determining the target prompt quantity respectively corresponding to the at least one conference room.
And the target selection unit is used for selecting the target prompt users with the target prompt quantity corresponding to the conference room aiming at least one conference user corresponding to any conference room, and acquiring the target prompt users corresponding to the conference room.
And the target generating unit is used for generating the head portrait prompt information of the conference room based on the face images of the target prompt users corresponding to any conference room, so as to obtain the head portrait prompt information of the at least one conference room.
In some embodiments, the image acquisition module may include:
and the signal acquisition unit is used for respectively acquiring the respective sound signals of at least one conference user corresponding to the at least one conference room.
And the signal identification unit is used for identifying the identity information of at least one participant corresponding to the conference room based on the respective sound signal of at least one participant corresponding to the conference room aiming at any conference room.
And the eighth generating unit is configured to extract the face images in the respective identity information of the at least one conference user corresponding to the conference room, and obtain the at least one face image corresponding to the conference room, so as to obtain the at least one face image corresponding to the at least one conference room.
The information prompting device shown in fig. 16 may execute the information prompting method of the embodiment shown in fig. 11; its implementation principles and technical effects are not described again here. The specific manner in which each module, unit, and sub-unit of the information prompting device performs its operations in the above embodiments has been described in detail in the method embodiments and will not be described in detail here.
As shown in fig. 17, a schematic structural diagram of an embodiment of an information prompting device provided in the embodiment of the present application is shown, where the device may include: a storage component 1701 and a processing component 1702; the storage component 1701 is configured to store one or more computer instructions, wherein the one or more computer instructions are to be invoked and executed by the processing component 1702;
the processing component 1702 is configured to:
acquiring respective face images of at least one conference user corresponding to at least one conference room respectively to obtain at least one face image corresponding to the at least one conference room respectively; respectively generating head portrait prompt information of each conference room based on at least one face image acquired from each conference room; and displaying the head portrait prompt information of each conference room on a conference interface.
The processing component may be further configured to:
and determining the respective prompt sequence of at least one face image corresponding to any meeting room.
The generating, by the processing component, of the avatar prompt information of each of the at least one conference room based on the at least one face image acquired from each conference room may specifically be:
and generating the head portrait prompt information of the conference room aiming at the prompt sequence of any conference room corresponding to at least one face image so as to obtain the head portrait prompt information of at least one conference room.
As an embodiment, the acquiring, by the processing component, the respective face images of at least one conference user corresponding to at least one conference room respectively to obtain the at least one face image corresponding to the at least one conference room respectively may specifically be:
acquiring a conference initiating request; and responding to the conference initiating request, and acquiring respective face images of at least one conference user corresponding to at least one conference room respectively so as to obtain at least one face image corresponding to the at least one conference room respectively.
In some embodiments, the generating, by the processing component, the respective avatar prompt information of the at least one conference room based on the at least one facial image acquired from the at least one conference room respectively may specifically be:
respectively determining conference room identifiers of the at least one conference room; generating avatar prompt information of the conference room based on at least one face image acquired from any conference room and the conference room identification of the conference room, so as to acquire the avatar prompt information of each conference room.
As a possible implementation manner, the generating, by the processing component, respective avatar prompt information of the at least one conference room based on the at least one face image acquired from the at least one conference room respectively may specifically be:
determining the target prompt quantity respectively corresponding to the at least one conference room; for the at least one conference user corresponding to any conference room, selecting target prompt users of the target prompt quantity corresponding to the conference room, and obtaining the target prompt users corresponding to the conference room; and generating the head portrait prompt information of the conference room based on the face images of the target prompt users corresponding to any conference room, so as to obtain the head portrait prompt information of the at least one conference room.
In some embodiments, the acquiring, by the processing component, the respective facial images of at least one conference user corresponding to at least one conference room respectively to obtain the at least one facial image corresponding to the at least one conference room respectively may specifically be:
respectively collecting respective sound signals of at least one conference user corresponding to the at least one conference room; for any conference room, identifying the identity information of at least one participant corresponding to the conference room based on the respective sound signals of the at least one participant corresponding to the conference room;
and respectively extracting the face images in the identity information of at least one conference user corresponding to the conference room, and obtaining at least one face image corresponding to the conference room so as to obtain at least one face image corresponding to the at least one conference room.
The information prompting device shown in fig. 17 may execute the information prompting method of the embodiment shown in fig. 11; its implementation principles and technical effects are not described again here. The specific manner in which the processing component of the information prompting device performs its operations in the above embodiments has been described in detail in the method embodiments and will not be described in detail here.
As shown in fig. 18, a schematic structural diagram of an embodiment of an information prompting system provided in the embodiment of the present application is shown, where the system may include:
a server 1801, and at least one conference terminal 1802;
the server 1801 is configured to: determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively; respectively generating head portrait prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room; displaying the head portrait prompt information of each conference room on a conference interface; sending the conference interface to each conference terminal;
any of the conference terminals 1802 is configured to: receiving a conference interface which is sent by the server and shows the head portrait prompt information of each conference room; and displaying the conference interface containing the head portrait prompt information of each conference room.
Each conference terminal comprises a display component, namely a display screen. The display screen may be, for example, the whiteboard screen of a conference television or conference computer, a computer screen, a projection screen, or a mobile phone screen; the type of the display component is not particularly limited here. The conference terminal in fig. 18 is merely exemplary and should not be construed as a specific limitation on the conference terminal of the embodiment of the present application. The server is configured with the information prompting apparatus shown in fig. 13, whose components are the same as those of the information prompting device shown in fig. 14 and are not described herein again.
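As a rough sketch of the interaction in fig. 18 (the JSON message shape and function names are assumptions, not defined by the embodiment), the server might serialize the conference interface and each conference terminal might render what it receives:

    import json
    from typing import Dict, List

    def build_interface_message(room_avatars: Dict[str, List[str]]) -> str:
        """Server side: serialize the avatar prompt information of every conference room."""
        return json.dumps({
            "type": "conference_interface",
            "rooms": [{"room_id": room_id, "avatars": avatars}
                      for room_id, avatars in room_avatars.items()],
        })

    def handle_interface_message(message: str) -> None:
        """Terminal side: display the received conference interface."""
        interface = json.loads(message)
        for room in interface["rooms"]:
            # A real terminal would draw the face thumbnails on its display screen.
            print(f"room {room['room_id']}: showing {len(room['avatars'])} avatar(s)")

    # Example round trip between the server and one conference terminal.
    handle_interface_message(build_interface_message({"Room-A": ["u1.png", "u2.png"]}))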
As shown in fig. 19, an intelligent conference system may be composed of a master conference terminal 1901 and at least one slave conference terminal 1902.
The main conference terminal 1901 is configured to: determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively; respectively generating head portrait prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room; displaying the head portrait prompt information of each conference room on a conference interface; and sending the conference interface to each slave conference terminal.
Any of the slave conference terminals 1902 is configured to: receiving a conference interface which is sent by the main conference terminal and shows the head portrait prompt information of each conference room; and displaying the conference interface containing the head portrait prompt information of each conference room.
The conference terminal may be a conference television or an electronic device comprising a host computer and a display screen. The master conference terminal and the slave conference terminals shown in fig. 19 are merely illustrative and should not be construed as a specific limitation on the conference terminals of the present application. The main conference terminal is configured with the information prompting apparatus shown in fig. 13, whose components are the same as those of the information prompting device shown in fig. 14; details are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (38)

1. An information prompting method, comprising:
determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively;
respectively generating head portrait prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room;
and displaying the head portrait prompt information of each conference room on a conference interface.
2. The method of claim 1, further comprising:
determining the respective prompt sequence of at least one face image corresponding to any meeting room;
the generating respective avatar prompt information of the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room includes:
and generating the head portrait prompt information of the conference room aiming at the prompt sequence of any conference room corresponding to at least one face image so as to obtain the head portrait prompt information of at least one conference room.
3. The method of claim 2, wherein the determining the respective prompt sequence of the at least one face image corresponding to any one of the conference rooms comprises:
aiming at any meeting room, determining the sequence of at least one conference user corresponding to the meeting room entering the meeting room respectively;
and determining a prompt sequence corresponding to the respective face images of at least one participant in the conference room according to the sequence in which the at least one participant corresponding to the conference room enters the conference room.
4. The method of claim 1, wherein the generating respective avatar prompt information for the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room comprises:
performing composition processing on the at least one face image respectively corresponding to the at least one conference room to obtain composite images respectively corresponding to the at least one conference room;
and generating the head portrait prompt information of the conference room according to the composite image corresponding to any conference room and the conference room identification so as to obtain the head portrait prompt information of at least one conference room.
5. The method of claim 4, wherein presenting the avatar prompt information of each of the at least one conference room in the conference interface comprises:
generating a prompt interface according to the head portrait prompt information respectively corresponding to the at least one conference room;
and displaying the prompt interface in the conference interface.
6. The method of claim 5, wherein the generating a prompt interface according to the avatar prompt messages respectively corresponding to the at least one conference room comprises:
generating blank prompt sub-interfaces of the at least one conference room respectively;
drawing the head portrait prompt information of any conference room in a blank prompt sub-interface corresponding to the conference room, and obtaining the prompt sub-interface of the conference room so as to obtain the prompt sub-interface of each conference room;
and generating the prompt interface according to the prompt sub-interfaces respectively corresponding to the at least one conference room.
7. The method of claim 5, wherein displaying the prompt interface in the conference interface comprises:
determining a prompt area of the prompt interface in the conference interface;
and drawing the prompt interface in a prompt area of the conference interface.
8. The method of claim 1, wherein presenting the avatar prompt information of each of the at least one conference room in the conference interface comprises:
determining a publishing conference room which currently publishes conference content in the at least one conference room;
preferentially displaying the head portrait prompt information corresponding to the release meeting room in a meeting interface;
and displaying the avatar prompt information corresponding to the conference rooms except the publishing conference room in the at least one conference room after the avatar prompt information of the publishing conference room in the conference interface.
9. The method of claim 8, wherein determining a publishing conference room of the at least one conference room that currently publishes conference content comprises:
respectively collecting the current sound signals of the at least one conference room;
and if the current sound signal of any conference room is not empty, determining that the conference room is the publishing conference room.
10. The method of claim 8, wherein determining a publishing conference room of the at least one conference room that currently publishes conference content comprises:
respectively collecting current conference images of the at least one conference room;
and if the current conference image of any conference room meets the publishing condition, determining that the conference room is the publishing conference room.
11. The method of claim 8, wherein after determining a publishing conference room of the at least one conference room that currently publishes conference content, the method further comprises:
determining a publishing user who currently publishes the conference content in the publishing conference room;
the head portrait prompt message of the release conference room is determined by the following method:
and generating the head portrait prompt information of the publishing conference room according to a display rule in which the face image of the publishing user is displayed first and the face images of the participating users in the publishing conference room other than the publishing user are displayed after the publishing user.
12. The method of claim 11, wherein determining the publishing user currently publishing the meeting content in the publishing meeting room comprises:
collecting a release sound signal of the release conference room;
extracting the release sound characteristics of the release sound signals corresponding to the release conference room;
and carrying out identity recognition processing by using the publishing sound characteristics to obtain a publishing user who publishes the current conference content in the publishing conference room.
13. The method of claim 11, wherein determining the publishing user currently publishing the meeting content in the publishing meeting room comprises:
collecting a release panoramic image in the release conference room;
identifying a motion gesture of at least one participant user in the published panoramic image;
and determining the participant users with the motion gestures in the publishing state as the publishing users who publish the conference content at present based on the motion gestures respectively corresponding to at least one participant user in the publishing panoramic image.
14. The method of claim 1, further comprising:
detecting newly added and/or disappeared changed users of any conference room;
determining a face image of the change user;
regenerating the head portrait prompt information of the conference room based on the changed user and the respective face images of at least one conference participating user corresponding to the conference room;
and updating the head portrait prompt information of each conference room in the conference interface.
15. The method of claim 14, wherein when the changed user is a newly added user, the regenerating of the avatar prompt information of the conference room based on the changed user and the face image of the at least one conference-participating user corresponding to the conference room comprises:
if the changed user is the user currently publishing the conference content, generating the head portrait prompt information of the conference room according to a display rule in which the face image of the changed user is displayed first and the face images of the at least one participating user in the conference room are displayed after the changed user;
if the changed user is not the user currently publishing the conference content, generating the head portrait prompt information of the conference room according to a display rule in which the face images of the at least one participating user in the conference room are displayed first and the face image of the changed user is displayed after them.
16. The method of claim 14, wherein, when the changed user is a lost user, the regenerating the avatar prompt information of the conference room based on the respective face images of the changed user and the at least one conference user corresponding to the conference room comprises:
and regenerating the head portrait prompt information of the conference room using the conference users, other than the changed user, among the at least one conference user corresponding to the conference room.
17. The method of claim 14, wherein the detecting newly added and/or disappeared changed users of any conference room comprises:
collecting a face image of at least one current conference user corresponding to any conference room;
comparing the face image of at least one current conference user acquired by the conference room with the face image of at least one conference user acquired at the last time to obtain a comparison result;
and determining newly added and/or disappeared changed users in the conference room according to the comparison result.
18. The method of claim 1, wherein the determining the respective facial images of the at least one conference room respectively corresponding to the at least one conference user, and the obtaining the respective at least one facial image of the at least one conference room respectively corresponding to the at least one conference user comprises:
acquiring a conference initiating request;
and responding to the conference initiating request, acquiring respective face images of at least one conference user corresponding to the at least one conference room respectively, and acquiring at least one face image corresponding to the at least one conference room respectively.
19. The method of claim 1, wherein the generating respective avatar prompt information for the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room comprises:
respectively determining conference room identifiers of the at least one conference room;
generating the avatar prompt information of any conference room based on at least one face image and the conference room identification corresponding to the conference room so as to obtain the avatar prompt information of the at least one conference room.
20. The method of claim 1, wherein the generating respective avatar prompt information for the at least one conference room based on the at least one face image respectively corresponding to the at least one conference room comprises:
determining the target prompt quantity respectively corresponding to the at least one conference room;
aiming at least one conference user corresponding to any conference room, selecting a target prompt user with the target prompt quantity corresponding to the conference room, and acquiring a target prompt user corresponding to the conference room;
and generating the head portrait prompt information of the conference room based on the face image of the prompt user corresponding to the target of any conference room so as to obtain the head portrait prompt information of at least one conference room.
21. The method according to claim 20, wherein the selecting, for at least one conference user corresponding to any one conference room, a target number of the target prompt users corresponding to the conference room, and the obtaining the target number of the target prompt users corresponding to the conference room comprises:
aiming at the sequence of at least one conference user entering the conference room corresponding to any conference room, determining the respective prompt sequence of at least one conference user corresponding to the conference room;
and sequentially selecting, in the arrangement order of the prompt sequence of the at least one conference user corresponding to the conference room, target prompt users of the target prompt quantity corresponding to the conference room, so as to obtain the target prompt users corresponding to the conference room.
22. The method of claim 1, wherein the determining the respective facial images of the at least one conference room respectively corresponding to the at least one conference user, and the obtaining the respective at least one facial image of the at least one conference room respectively corresponding to the at least one conference user comprises:
and acquiring the respective face image of at least one conference user corresponding to the at least one conference room respectively to obtain at least one face image corresponding to the at least one conference room respectively.
23. The method of claim 22, further comprising:
extracting respective face features of at least one conference user aiming at respective face images of at least one conference user corresponding to any conference room;
according to the respective face features of at least one conference user corresponding to the conference room, identifying identity information corresponding to the at least one conference user corresponding to the conference room respectively;
after the head portrait prompt information of each conference room is displayed on the conference interface, the method further comprises the following steps:
and detecting selection operation aiming at any avatar prompt message, and outputting the identity information of at least one conference user in a conference room corresponding to the selected avatar prompt message.
24. The method of claim 1, wherein the determining the respective facial images of the at least one conference room respectively corresponding to the at least one conference user, and the obtaining the respective at least one facial image of the at least one conference room respectively corresponding to the at least one conference user comprises:
respectively collecting respective sound signals of at least one conference user corresponding to the at least one conference room;
for any conference room, identifying the identity information of at least one participant corresponding to the conference room based on the respective sound signals of the at least one participant corresponding to the conference room;
and respectively extracting the face images in the identity information of at least one conference user corresponding to the conference room, and obtaining at least one face image corresponding to the conference room so as to obtain at least one face image corresponding to the at least one conference room.
25. The method of claim 24, wherein the identifying, for any one of the conference rooms, the identity information of the at least one participant in the conference room based on the respective voice signals of the at least one participant in the conference room comprises:
aiming at any meeting room, extracting the sound characteristics of each meeting participating user based on the respective sound signals of the meeting participating users corresponding to the meeting room;
searching a target sound characteristic matched with the sound characteristic of each participating user from a sound characteristic library;
and determining the identity information associated with the target sound characteristics corresponding to each participating user as the identity information of the participating user so as to obtain the identity information of at least one participating user corresponding to the conference room.
26. The method of claim 1, further comprising:
and in response to the triggering operation of the video display control in the conference interface, displaying the conference video of each conference room.
27. The method of claim 26, wherein displaying the respective conference video for the at least one conference room in response to the triggering operation for the video display control in the conference interface comprises:
responding to the triggering operation of a video display control in the conference interface, and respectively acquiring the conference videos of the at least one conference room;
determining a master conference room and at least one slave conference room of the at least one conference room;
generating a master video interface and at least one slave video interface which is covered on the master video interface for display;
and playing the conference video of the main conference room in the main video interface, and respectively playing the conference video of the at least one slave conference room in the at least one slave video interface.
28. An information prompting method, comprising:
acquiring respective face images of at least one conference user corresponding to at least one conference room respectively to obtain at least one face image corresponding to the at least one conference room respectively;
respectively generating head portrait prompt information of each conference room based on at least one face image acquired from each conference room;
and displaying the head portrait prompt information of each conference room on a conference interface.
29. The method of claim 28, further comprising:
determining the respective prompt sequence of at least one face image corresponding to any meeting room;
the generating respective avatar prompt information of the at least one conference room based on at least one face image acquired from the at least one conference room respectively comprises:
and generating the head portrait prompt information of the conference room aiming at the prompt sequence of any conference room corresponding to at least one face image so as to obtain the head portrait prompt information of at least one conference room.
30. The method of claim 28, wherein the acquiring the respective facial images of the at least one conference user respectively corresponding to the at least one conference room to obtain the at least one facial image respectively corresponding to the at least one conference room comprises:
acquiring a conference initiating request;
and responding to the conference initiating request, and acquiring respective face images of at least one conference user corresponding to at least one conference room respectively so as to obtain at least one face image corresponding to the at least one conference room respectively.
31. The method of claim 28, wherein the generating respective avatar prompt information for the at least one conference room based on the at least one facial image respectively acquired from the at least one conference room comprises:
respectively determining conference room identifiers of the at least one conference room;
generating avatar prompt information of the conference room based on at least one face image acquired from any conference room and the conference room identification of the conference room, so as to acquire the avatar prompt information of each conference room.
32. The method of claim 28, wherein the generating respective avatar prompt information for the at least one conference room based on the at least one facial image respectively acquired from the at least one conference room comprises:
determining the target prompt quantity respectively corresponding to the at least one conference room;
aiming at least one conference user corresponding to any conference room, selecting a target prompt user with the target prompt quantity corresponding to the conference room, and obtaining a target prompt user corresponding to the conference room;
and generating the head portrait prompt information of the conference room based on the face image of the prompt user corresponding to the target of any conference room so as to obtain the head portrait prompt information of at least one conference room.
33. The method of claim 28, wherein the acquiring the respective facial images of the at least one conference user respectively corresponding to the at least one conference room to obtain the at least one facial image respectively corresponding to the at least one conference room comprises:
respectively collecting respective sound signals of at least one conference user corresponding to the at least one conference room;
for any conference room, identifying the identity information of at least one participant corresponding to the conference room based on the respective sound signals of the at least one participant corresponding to the conference room;
and respectively extracting the face images in the identity information of at least one conference user corresponding to the conference room, and obtaining at least one face image corresponding to the conference room so as to obtain at least one face image corresponding to the at least one conference room.
34. An information prompting device is characterized in that a display interface is respectively provided for at least one conference room;
the display interface of each conference room is used for displaying a conference interface;
the conference interface displays head portrait prompt information of at least one conference room; and generating the head portrait prompt information of each conference room based on the face image of each conference user corresponding to each conference room.
35. An information presentation device, comprising:
the image determining module is used for determining respective face images of at least one conference user corresponding to at least one conference room respectively and obtaining at least one face image corresponding to the at least one conference room respectively;
the first generation module is used for respectively generating head portrait prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room;
and the first display module is used for displaying the head portrait prompt information of each conference room in the conference interface.
36. An information presentation device, comprising:
the image acquisition module is used for acquiring respective face images of at least one conference user corresponding to at least one conference room respectively so as to obtain at least one face image corresponding to the at least one conference room respectively;
the second generation module is used for respectively generating head portrait prompt information of the at least one conference room based on at least one face image acquired from the at least one conference room;
and the second display module is used for displaying the head portrait prompt information of each conference room on the conference interface.
37. An information presentation device, comprising: the display device comprises a storage component, a processing component and a display component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for the processing component to call and execute;
the processing component is to:
determining respective face images of at least one conference user corresponding to at least one conference room respectively, and obtaining at least one face image corresponding to the at least one conference room respectively; respectively generating head portrait prompt information of the at least one conference room based on at least one face image respectively corresponding to the at least one conference room; and displaying the head portrait prompt information of each conference room on a conference interface.
38. An information presentation device, comprising: the display device comprises a storage component, a processing component and a display component; the storage component is used for storing one or more computer instructions, wherein the one or more computer instructions are used for the processing component to call and execute;
the processing component is to:
acquiring respective face images of at least one conference user corresponding to at least one conference room respectively to obtain at least one face image corresponding to the at least one conference room respectively; respectively generating head portrait prompt information of each conference room based on at least one face image acquired from each conference room; and displaying the head portrait prompt information of each conference room on a conference interface.
CN201911320524.8A 2019-12-19 2019-12-19 Information prompting method, device and equipment Active CN113014852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911320524.8A CN113014852B (en) 2019-12-19 2019-12-19 Information prompting method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911320524.8A CN113014852B (en) 2019-12-19 2019-12-19 Information prompting method, device and equipment

Publications (2)

Publication Number Publication Date
CN113014852A true CN113014852A (en) 2021-06-22
CN113014852B CN113014852B (en) 2024-08-27

Family

ID=76381388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911320524.8A Active CN113014852B (en) 2019-12-19 2019-12-19 Information prompting method, device and equipment

Country Status (1)

Country Link
CN (1) CN113014852B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110271192A1 (en) * 2010-04-30 2011-11-03 American Teleconferencing Services Ltd. Managing conference sessions via a conference user interface
US20120204119A1 (en) * 2011-02-08 2012-08-09 Lefar Marc P Systems and methods for conducting and replaying virtual meetings
WO2012109006A2 (en) * 2011-02-08 2012-08-16 Vonage Network, Llc Systems and methods for conducting and replaying virtual meetings
CN104408769A (en) * 2014-11-27 2015-03-11 苏州福丰科技有限公司 Virtual netmeeting method based on three-dimensional face recognition
CN104767963A (en) * 2015-03-27 2015-07-08 华为技术有限公司 Method and device for representing information of persons participating in video conference
WO2016165261A1 (en) * 2015-04-13 2016-10-20 中兴通讯股份有限公司 Video conference method, server and terminal
CN105893948A (en) * 2016-03-29 2016-08-24 乐视控股(北京)有限公司 Method and apparatus for face identification in video conference
WO2019008320A1 (en) * 2017-07-05 2019-01-10 Maria Francisca Jones Virtual meeting participant response indication method and system
CN109934082A (en) * 2018-11-08 2019-06-25 闽江学院 A kind of group technology and device based on head portrait identification

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023103740A1 (en) * 2021-12-08 2023-06-15 苏州景昱医疗器械有限公司 Picture display control method and device, remote consultation system and storage medium
CN114339130A (en) * 2021-12-31 2022-04-12 中国工商银行股份有限公司 Conference information monitoring method

Also Published As

Publication number Publication date
CN113014852B (en) 2024-08-27

Similar Documents

Publication Publication Date Title
CN107333087B (en) Information sharing method and device based on video session
JP6986187B2 (en) Person identification methods, devices, electronic devices, storage media, and programs
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN105611215A (en) Video call method and device
CN110472099B (en) Interactive video generation method and device and storage medium
US20240259627A1 (en) Same-screen interaction control method and apparatus, and electronic device and non-transitory storage medium
CN110798615A (en) Shooting method, shooting device, storage medium and terminal
CN113014852B (en) Information prompting method, device and equipment
CN106131291B (en) Information expands screen display method and device
US20240160331A1 (en) Audio and visual equipment and applied method thereof
CN111569436A (en) Processing method, device and equipment based on interaction in live broadcast fighting
CN113996053A (en) Information synchronization method, device, computer equipment, storage medium and program product
CN113784156A (en) Live broadcast method and device, electronic equipment and storage medium
CN111586432A (en) Method and device for determining air-broadcast live broadcast room, server and storage medium
CN114025185B (en) Video playback method and device, electronic equipment and storage medium
CN113840177B (en) Live interaction method and device, storage medium and electronic equipment
US20230195403A1 (en) Information processing method and electronic device
CN111107283A (en) Information display method, electronic equipment and storage medium
CN112118414B (en) Video session method, electronic device, and computer storage medium
US11838338B2 (en) Method and device for conference control and conference participation, server, terminal, and storage medium
CN115424156A (en) Virtual video conference method and related device
CN114429484A (en) Image processing method and device, intelligent equipment and storage medium
CN113784058A (en) Image generation method and device, storage medium and electronic equipment
WO2020248682A1 (en) Display device and virtual scene generation method
CN115379250B (en) Video processing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant