CN111835531A - Session processing method, device, computer equipment and storage medium
- Publication number
- CN111835531A CN111835531A CN202010753249.5A CN202010753249A CN111835531A CN 111835531 A CN111835531 A CN 111835531A CN 202010753249 A CN202010753249 A CN 202010753249A CN 111835531 A CN111835531 A CN 111835531A
- Authority
- CN
- China
- Prior art keywords
- session
- user account
- joining
- terminal
- application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1822—Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0861—Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0876—Network architectures or network communication protocols for network security for authentication of entities based on the identity of the terminal or configuration, e.g. MAC address, hardware or software configuration or device fingerprint
Abstract
The application provides a session processing method and apparatus, a computer device, and a storage medium, and belongs to the field of Internet technologies. The method comprises the following steps: in response to an application joining operation of a first user account on a target session, performing face acquisition to obtain a first face image corresponding to the first user account; sending an application joining request to a server, the application joining request carrying the first face image, the first user account, and a session identifier of the target session; and in response to the server determining that the first face image of the first user account satisfies a joining condition of the target session, joining the first user account to the target session. With this technical solution, users who do not satisfy the joining condition can be automatically and effectively filtered without manual review, and the filtering accuracy is improved.
Description
Technical Field
The present application relates to the field of internet technologies, and in particular, to a session processing method and apparatus, a computer device, and a storage medium.
Background
With the development of Internet of Things technology, users can converse with other users thousands of miles away through various kinds of intelligent hardware. As an effective way for multi-user communication, group sessions are increasingly popular with users.
At present, when a user applies to join a group session, a manager of the group session needs to review the application, and only users who satisfy the joining condition of the group session can join the group session.
This scheme has the following problems: the content submitted by a user when applying to join the group session may be false, and manual review is inefficient, so users who do not satisfy the joining condition cannot be effectively filtered.
Disclosure of Invention
The embodiments of the present application provide a session processing method and apparatus, a computer device, and a storage medium. A face image of a user account is collected before the user account joins a target session, and the user account is allowed to join the group session only when the face image satisfies the joining condition of the target session. In this way, users who do not satisfy the joining condition can be automatically and effectively filtered without manual review, and the filtering accuracy is improved. The technical solution is as follows:
in one aspect, a session processing method is provided and applied to a terminal, and the method includes:
responding to the application joining operation of the first user account on the target session, and performing face acquisition to obtain a first face image;
sending an application joining request to a server, wherein the application joining request carries the first face image, the first user account and the session identifier of the target session;
displaying a session interface of the target session in response to the server determining that the first face image of the first user account satisfies a join condition of the target session.
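For illustration only, the following Python sketch outlines one way the terminal-side flow above could be organized. All function and field names (capture_face_image, send_join_request, the "/session/join" endpoint, and the request fields) are assumptions introduced here and are not defined by the patent.

```python
# Hypothetical sketch of the terminal-side joining flow (all names are illustrative).

def capture_face_image() -> bytes:
    """Placeholder for calling the camera and returning one collected face image."""
    return b""  # a real client would return image bytes from the camera

def send_join_request(server, face_image: bytes, user_account: str, session_id: str):
    """Send the application joining request carrying the three required items."""
    return server.post("/session/join", {
        "face_image": face_image,      # first face image collected by the terminal
        "user_account": user_account,  # first user account logged in on the client
        "session_id": session_id,      # session identifier of the target session
    })

def on_apply_join(server, user_account: str, session_id: str, show_session_interface):
    face_image = capture_face_image()
    reply = send_join_request(server, face_image, user_account, session_id)
    if reply.get("approved"):          # server decided the joining condition is satisfied
        show_session_interface(session_id)
    return reply
```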
In another aspect, a session processing method is provided, which is applied to a server, and the method includes:
receiving an application joining request of a terminal, wherein the application joining request carries a first face image acquired by the terminal, a first user account of the terminal and a session identifier of the target session;
acquiring a joining condition of the target session according to the session identifier of the target session;
in response to the first face image satisfying the join condition, joining the first user account to the target session.
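Similarly, the server-side aspect above can be illustrated by the following minimal sketch; the storage layout and helper names (join_conditions, sessions, face_satisfies) are assumed for illustration and do not represent the claimed implementation.

```python
# Illustrative server-side handling of an application joining request.

join_conditions = {}   # session_id -> joining condition recorded when the session was created
sessions = {}          # session_id -> set of user accounts currently in the session

def face_satisfies(face_image: bytes, condition: dict) -> bool:
    """Placeholder for the face detection step (e.g. gender / face-score checks)."""
    return True  # assumption: a real system would call a face detection model here

def handle_join_request(request: dict) -> dict:
    face_image = request["face_image"]
    user_account = request["user_account"]
    session_id = request["session_id"]

    condition = join_conditions.get(session_id)              # look up the joining condition
    if condition is not None and face_satisfies(face_image, condition):
        sessions.setdefault(session_id, set()).add(user_account)  # join the target session
        return {"approved": True}
    return {"approved": False}                               # reject otherwise
```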
In an optional implementation manner, after the first user account is joined to the target session, the method includes:
sequencing audio and video windows of all user accounts in a session interface of the target session according to the face scores of all user accounts in the target session;
and displaying the session interface to the terminal.
In an optional implementation manner, after the first user account is joined to the target session, the method includes:
determining a second user account which is closest to the face score of the first user account according to the face score of each user account in the target session;
arranging audio and video windows of the first user account and the second user account adjacently in a session interface of the target session;
and displaying the session interface to the terminal.
In an optional implementation manner, after the first user account is joined to the target session, the method includes:
acquiring the current queuing sequence of the first user account, and displaying a queuing waiting interface comprising the queuing sequence to the terminal;
and in response to the queuing sequence reaching zero, displaying to the terminal a session interface comprising the audio and video window of the first user account and the audio and video windows of the other user accounts in the target session.
In an optional implementation manner, the target session is created based on a session creation interface provided by the application client, and the session creation interface is used for providing a setting function of a joining condition.
In an alternative implementation, the creating of the target session includes:
receiving a session creation request, wherein the session creation request carries a management account and a joining condition of the target session;
and creating the target session, and adding the management account into the target session.
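A minimal sketch of handling such a session creation request, under the same assumed storage layout as the earlier server-side sketch, might look as follows; the names are illustrative only.

```python
# Illustrative creation of a target session together with its joining condition.
import itertools

_session_ids = itertools.count(1)

def create_session(management_account: str, joining_condition: dict,
                   join_conditions: dict, sessions: dict) -> str:
    """Create the target session, record its joining condition, and add the creator."""
    session_id = f"session-{next(_session_ids)}"
    join_conditions[session_id] = joining_condition   # e.g. {"gender": "male", "min_face_score": 80}
    sessions[session_id] = {management_account}       # the management account joins immediately
    return session_id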
In another aspect, a session processing apparatus is provided, which is applied to a terminal, and the apparatus includes:
a face acquisition module, configured to, in response to an application joining operation of a first user account on a target session, perform face acquisition to obtain a first face image corresponding to the first user account;
a request sending module, configured to send an application join request to a server, where the application join request carries the first facial image, the first user account, and a session identifier of the target session;
and the session processing module is used for responding to the server to determine that the first face image of the first user account meets the joining condition of the target session, and joining the first user account into the target session.
In an optional implementation manner, the application joining operation is initiated by the first user account in a group session, and the target session is an audio/video session initiated in the group session; or
the application joining operation is initiated by the first user account in a session display interface, the target session is an audio and video session, and the session display interface is used for displaying at least one ongoing audio and video session.
In an optional implementation, the apparatus further includes: and the display module is used for displaying the audio and video windows of the user accounts from high to low according to the face scores of the user accounts in the target session.
In an optional implementation, the apparatus further includes: and the display module is used for displaying an audio and video window of a second user account which is closest to the face score of the first user account at the adjacent position of the audio and video window of the first user account.
In an optional implementation, the apparatus further includes: a display module, configured to display the current queuing sequence of the first user account in a queuing waiting interface; and in response to the queuing sequence reaching zero, display the audio and video window of the first user account and the audio and video window of at least one other user account.
In an optional implementation manner, the face acquisition module is configured to perform face recognition based on an image in a view finder of the terminal in response to an application joining operation of a first user account to a target session; and responding to the recognized face, and performing face acquisition to obtain a first face image corresponding to the first user account.
In an optional implementation, the apparatus further includes:
and the prompt module is used for responding to the situation that the human face is not recognized and displaying prompt information, wherein the prompt information is used for indicating that the human face is not recognized currently.
In an optional implementation, the apparatus further includes: a display module, configured to display a session display interface in response to the server determining that the first face image of the first user account does not satisfy the join condition, where the session display interface is used to display at least one ongoing audio/video session.
In an optional implementation manner, the target session is created based on a session creation interface provided by the application client, and the session creation interface is used for providing a setting function of a joining condition.
In an alternative implementation, the creating of the target session includes:
responding to the creation group operation of the management account of the target session, and displaying a session creation interface provided by the application client;
and on the session creation page, receiving the setting operation of the management account on the joining condition, and sending a session creation request to the server, wherein the session creation request carries the joining condition.
In another aspect, a session processing apparatus is provided, which is applied to a server, and includes:
a request receiving module, configured to receive an application joining request from a terminal, wherein the application joining request carries a first face image acquired by the terminal, a first user account of the terminal, and a session identifier of the target session;
the joining condition acquisition module is used for acquiring the joining condition of the target session according to the session identifier of the target session;
and the session joining module is used for joining the first user account into the target session in response to the first face image meeting the joining condition.
In an optional implementation, the apparatus further includes: the display module is used for sequencing audio and video windows of all user accounts in a session interface of the target session according to the face scores of all user accounts in the target session; and displaying the session interface to the terminal.
In an optional implementation, the apparatus further includes: the display module is used for determining a second user account which is closest to the face score of the first user account according to the face score of each user account in the target session; arranging audio and video windows of the first user account and the second user account adjacently in a session interface of the target session; and displaying the session interface to the terminal.
In an optional implementation, the apparatus further includes: a display module, configured to acquire the current queuing sequence of the first user account and display a queuing waiting interface comprising the queuing sequence to the terminal; and in response to the queuing sequence reaching zero, display to the terminal a session interface comprising the audio and video window of the first user account and the audio and video windows of the other user accounts in the target session.
In an optional implementation manner, the target session is created based on a session creation interface provided by the application client, and the session creation interface is used for providing a setting function of a joining condition.
In an optional implementation manner, the creating of the target session includes:
receiving a session creation request, wherein the session creation request carries a management account and a joining condition of the target session;
and creating the target session, and adding the management account into the target session.
In another aspect, a computer device is provided, where the computer device is a terminal, and the terminal includes a processor and a memory, where the memory is used to store at least one program code, and the at least one program code is loaded and executed by the processor to implement the operations performed in the session processing method in the embodiments of the present application.
In another aspect, a computer device is provided, where the computer device is a server, and the server includes a processor and a memory, where the memory is used to store at least one piece of program code, and the at least one piece of program code is loaded and executed by the processor to implement the operations performed in the session processing method in the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to implement the operations performed in the session processing method in the embodiments of the present application.
In another aspect, a computer program product or a computer program is provided, the computer program product or the computer program comprising computer program code, the computer program code being stored in a computer readable storage medium. The processor of the computer device reads the computer program code from the computer-readable storage medium, and the processor executes the computer program code, so that the computer device performs the session processing method described above or the session processing method provided in the various alternative implementations.
The technical solutions provided in the embodiments of the present application have the following beneficial effects:
In the embodiments of the present application, the face image of a user account is collected before the user account joins the target session, and the user account is allowed to join the group session only when the face image satisfies the joining condition of the target session. In this way, users who do not satisfy the joining condition can be automatically and effectively filtered without manual review, and the filtering accuracy is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art can derive other drawings from these drawings without creative efforts.
Fig. 1 is a schematic diagram of an implementation environment of a session processing method according to an embodiment of the present application;
fig. 2 is a flowchart of a session processing method according to an embodiment of the present application;
fig. 3 is a flowchart of another session processing method provided according to an embodiment of the present application;
fig. 4 is a flowchart of another session processing method provided according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a session interface of a group session according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a session presentation interface provided in accordance with an embodiment of the present application;
FIG. 7 is a schematic diagram of a trigger display session creation interface provided in an embodiment of the present application;
fig. 8 is a schematic diagram of face scoring according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a gender detection system provided by an embodiment of the present application;
fig. 10 is a schematic logical structure diagram of an application client in a terminal according to an embodiment of the present application;
FIG. 11 is a flow chart of another session processing method provided according to an embodiment of the application;
fig. 12 is a flowchart of another session processing method provided in the embodiment of the present application;
fig. 13 is a block diagram of a session processing apparatus according to an embodiment of the present application;
fig. 14 is a block diagram of a session processing apparatus according to an embodiment of the present application;
fig. 15 is a block diagram of a terminal 1500 provided in an embodiment of the present application;
fig. 16 is a schematic structural diagram of a server provided according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application, as detailed in the appended claims.
Techniques that may be used with embodiments of the present application are described below.
A cloud conference is an efficient, convenient, and low-cost conference form based on cloud computing technology. A user only needs to perform simple, easy-to-use operations through an Internet interface to quickly and efficiently share voice, data files, and video with teams and clients all over the world, while complex technologies such as the transmission and processing of conference data are handled for the user by the cloud conference service provider.
At present, domestic cloud conferences mainly focus on service content provided in a Software as a Service (SaaS) mode, including service forms such as telephone, network, and video; a video conference based on cloud computing is called a cloud conference.
In the cloud conference era, data transmission, processing and storage are all processed by computer resources of video conference manufacturers, users do not need to purchase expensive hardware and install complicated software, and efficient teleconferencing can be performed only by opening a browser and logging in a corresponding interface.
The cloud conference system supports multi-server dynamic cluster deployment and provides multiple high-performance servers, which greatly improves conference stability, security, and availability. In recent years, video conferencing has been welcomed by many users because it greatly improves communication efficiency, continuously reduces communication costs, and upgrades internal management; it has been widely applied in fields such as government, the military, transportation, finance, operators, education, and enterprises. Undoubtedly, after video conferencing adopts cloud computing, it becomes even more attractive in terms of convenience, speed, and ease of use, which will certainly stimulate a new wave of video conference applications. A cloud conference can also be regarded as a kind of group session. The session processing method provided in the embodiments of the present application can be applied to a scenario of screening participants of a cloud conference: for an open cloud conference, that is, a cloud conference whose participants are not designated, users who satisfy the joining condition can be screened to join the cloud conference by using the session processing method provided in the embodiments of the present application.
Cloud social (Cloud Social) is a virtual social application mode integrating interactive applications of the Internet of Things, cloud computing, and the mobile Internet. It aims to establish a well-known resource-sharing relationship map and, on that basis, to carry out online social networking. Its main feature is that a large number of social resources are uniformly integrated and evaluated to form an effective resource pool that provides services to users on demand. The more users participate in sharing, the greater the utility value that can be created. The session processing method provided in the embodiments of the present application can be applied to a scenario of screening the two parties of a cloud social interaction: any party carrying out cloud social interaction can create a group session and set a joining condition for joining the group session by using the session processing method provided in the embodiments of the present application, so that users who satisfy the joining condition are screened to enter the group session for social interaction. Of course, the session processing method provided in the embodiments of the present application can also be applied to scenarios such as screening objects for one-to-one video chat, screening objects for multi-person video chat, and screening recruitment candidates.
An artificial intelligence cloud service is also commonly referred to as AI as a Service (AIaaS). This is a mainstream service mode of artificial intelligence platforms: the AIaaS platform splits several types of common AI (artificial intelligence) services and provides independent or packaged services in the cloud. This service mode is similar to an AI-themed mall: all developers can access one or more artificial intelligence services provided by the platform through an application programming interface (API), and some experienced developers can also use the AI framework and AI infrastructure provided by the platform to deploy, operate, and maintain their own dedicated cloud artificial intelligence services. In the embodiments of the present application, the server can determine the face score of a user account from the face image uploaded by the user account through such an artificial intelligence cloud service.
Next, an implementation environment of the session processing method provided in the embodiment of the present application is described, and fig. 1 is a schematic diagram of an implementation environment of the session processing method provided in the embodiment of the present application. The implementation environment comprises a first terminal 101, a second terminal 102 and a server 103.
The first terminal 101 and the server 103 can be directly or indirectly connected through wired or wireless communication, and the application is not limited herein. Optionally, the first terminal 101 is an intelligent terminal capable of running instant messaging software or social software, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart television, and the like, but is not limited thereto. The first terminal 101 can be installed and run with an application client that can be used to engage in online activities. Optionally, the application client is a client of a social application, a client of a shopping application, or a client of a recruitment application. Illustratively, the first terminal 101 is a terminal used by a first user, and a first user account of the first user is logged in an application client running in the first terminal 101, where the first user account is an account applying for joining a group session.
The second terminal 102 and the server 103 can be directly or indirectly connected through wired or wireless communication, and the application is not limited herein. Optionally, the second terminal 102 is an intelligent terminal capable of running instant messaging software or social applications, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and a smart television, but is not limited thereto. The second terminal 102 can be installed and run with an application client that can be used to create online activities. Optionally, the application client is a client of a social application, a client of an online education application, a client of a video or live application, a client of a shopping application, or a client of a recruitment application. Illustratively, the second terminal 102 is a terminal used by a second user, and a second user account of the second user is logged in an application client running in the second terminal 102, where the second user account is a management account for creating a group session.
It should be noted that, in the embodiment of the present application, the division of the first terminal and the second terminal is only for convenience of description, and as for any one of the first terminal and the second terminal, it can be used as both the first terminal and the second terminal, that is, an application client installed in the terminal can be used for creating a group session and applying for joining the group session.
The server 103 may be an independent physical server, a server cluster or a distributed system including a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The server 103 is configured to provide a background service for the application clients executed by the first terminal 101 and the second terminal 102.
Optionally, in the process of group session processing, the server 103 undertakes primary group session processing, and the first terminal 101 and the second terminal 102 undertake secondary group session processing; or, the server 103 undertakes the secondary group session processing work, and the first terminal 101 and the second terminal 102 undertake the primary group session processing work; alternatively, the server 103, the first terminal 101, or the second terminal 102 can individually undertake the group session processing work.
Optionally, the server 103 includes an access server, a face detection server, and a database. The access server is used to provide access services for the first terminal 101 and the second terminal 102. The face detection server is used to provide a face detection service. There can be one or more face detection servers. When there are multiple face detection servers, at least two of them are used to provide different services, and/or at least two of them are used to provide the same service, for example, in a load-balanced manner; this is not limited in the embodiments of the present application. The face detection server can be provided with an AI face detection model for determining a face score from a face image. The database is used to store data such as user accounts, session identifiers, face images, joining conditions, and group session records.
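Purely as an illustration of the kind of records such a database might hold, the following sketch defines assumed structures; the field names are not taken from the patent, and score_face is a placeholder for the AI face detection model mentioned above.

```python
# Assumed data structures for the server side (illustrative only).
from dataclasses import dataclass, field
from typing import Dict, Optional, Set

@dataclass
class JoinCondition:
    gender: Optional[str] = None          # e.g. "male", "female", or None for no restriction
    min_face_score: Optional[int] = None  # e.g. 80, or None for no restriction

@dataclass
class GroupSession:
    session_id: str
    management_account: str
    join_condition: JoinCondition
    members: Set[str] = field(default_factory=set)
    face_scores: Dict[str, int] = field(default_factory=dict)  # user account -> face score

def score_face(face_image: bytes) -> int:
    """Placeholder for the AI face detection model on the face detection server."""
    return 0  # a real deployment would return the model-predicted face score
```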
Optionally, the first terminal 101 and the second terminal 102 generally refer to two of the plurality of terminals, and this embodiment is only illustrated by the first terminal 101 and the second terminal 102. Those skilled in the art will appreciate that the number of the first terminals 101 can be greater. For example, the number of the first terminals 101 is several tens or several hundreds, or more, and the environment for implementing the session processing method includes other terminals. The number of terminals and the type of the device are not limited in the embodiments of the present application.
Optionally, the wireless or wired network described above uses standard communication technologies and/or protocols. The network is typically the Internet, but can be any network, including but not limited to a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a mobile, wired, or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using technologies and/or formats including Hypertext Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links can also be encrypted using conventional encryption technologies such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), virtual private network (VPN), and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication technologies can also be used in place of or in addition to the data communication technologies described above.
In an optional implementation manner, the session processing method provided in the embodiments of the present application can be applied to a social scenario. The following description takes as an example the case where the first terminal and the second terminal are terminals used by users of a friend-making platform.
A second user of the second terminal can request the server to create an audio and video session for social contact through the social client, and the second user screens users who join the audio and video session by setting joining conditions when the second user requests to create the audio and video session. And the server of the social platform can create an audio and video session on the social platform according to the session creation request sent by the second terminal, and add the second user account of the second terminal into the audio and video session. The server can send a session link for the established audio-video session to the social platform. The first user of the first terminal can check audio and video sessions created by other users on the social platform, and applies for joining the audio and video sessions by clicking session links of any audio and video sessions. If the first user clicks a session link of an audio and video session created by the second user, the first terminal sends an application joining request to the server, and the server determines whether the first face image meets the joining condition of the audio and video session according to the first face image, the first user account and the session identification of the audio and video session carried in the application joining request. If the joining condition is met, the server joins the first user account into the audio and video session; and if the joining condition is not met, the server returns a prompt of joining failure to the first terminal. In the social process, the creator of the audio and video conversation sets the joining condition, and then the server screens the users applying for joining the group conversation according to the set joining condition, so that a brand-new social mode is provided, social scenes are enriched, curiosity of the users can be met, social interestingness is improved, and the user activity of the social platform is improved. In addition, as the users joining the audio and video conversation are users meeting the joining conditions, the social efficiency and the success rate are improved.
In an optional implementation manner, the session processing method provided in the embodiments of the present application can be used in a recruitment scenario. The following description takes as an example the case where the first terminal is a terminal used by a job-seeking user and the second terminal is a terminal used by a recruiting user.
The second user of the second terminal, that is, the recruiting user, can request the server to create an audio/video session for recruitment on the recruitment platform through the recruitment client, and can recruit users of a specific gender and appearance, such as female models, male models, or female actors, by setting joining conditions such as gender and face score when requesting creation of the audio/video session. The server of the recruitment platform can create an audio/video session on the recruitment platform according to the session creation request sent by the second terminal and add the second user account of the second terminal to the audio/video session. The first user of the first terminal, that is, the job-seeking user, can view the recruitment information published by other users on the recruitment platform, where the recruitment information includes a session link of an audio/video session, and the first user can apply to join an audio/video session by clicking the session link of any audio/video session. If the first user clicks the session link of the audio/video session created by the second user, the first terminal sends an application joining request to the server, and the server determines whether the first face image satisfies the joining condition of the audio/video session according to the face image, the first user account, and the session identifier of the audio/video session carried in the application joining request. If the joining condition is satisfied, the server joins the first user account to the audio/video session; if the joining condition is not satisfied, the server returns a joining-failure prompt to the first terminal. Recruitment similar to on-site recruitment is thus carried out online in the form of a group session: the creator of the audio/video session sets the joining condition, and the server then screens the users applying to join the group session according to the set joining condition, which reduces unnecessary communication, provides a brand-new recruitment mode, saves the recruiter's time, and improves recruitment efficiency and success rate.
Fig. 2 is a flowchart of a session processing method according to an embodiment of the present application. As shown in fig. 2, the embodiment of the present application is described by taking application of the method to a terminal as an example. The session processing method comprises the following steps:
201. and responding to the application joining operation of the first user account on the target session, and carrying out face acquisition by the terminal to obtain a first face image corresponding to the first user account.
In this embodiment of the application, optionally, the terminal is the first terminal 101 shown in fig. 1, and an application client of the first terminal logs in a first user account of a first user. The first user is a user applying to join the target session. The first terminal can acquire a face in real time by calling a camera to obtain the first face image.
202. The terminal sends an application joining request to the server, wherein the application joining request carries the first face image, the first user account and the session identification of the target session.
In an embodiment of the application, the application joining request can instruct the server to detect the first face image, so as to determine whether the first face image meets a joining condition of the target session.
203. And in response to the server determining that the first face image of the first user account meets the joining condition of the target session, the terminal joins the first user account in the target session.
In the embodiment of the application, if the first face image meets the joining condition, the first user is indicated to be able to join the target session, and the terminal is able to join the first user account into the target session and display a session interface of the target session.
In the embodiments of the present application, the face image of a user account is collected before the user account joins the target session, and the user account is allowed to join the group session only when the face image satisfies the joining condition of the target session. In this way, users who do not satisfy the joining condition can be automatically and effectively filtered without manual review, and the filtering accuracy is improved.
Fig. 3 is a flowchart of another session processing method provided in the embodiment of the present application, and as shown in fig. 3, the embodiment of the present application is described by taking an application to a server as an example. The conversation processing method comprises the following steps:
301. the server receives an application joining request of the terminal, wherein the application joining request carries the first face image acquired by the terminal, the first user account of the terminal and the session identification of the target session.
In the embodiments of the present application, after receiving the application joining request sent by the terminal, the server can parse the application joining request to obtain the first face image, the first user account, and the session identifier.
302. And the server acquires the joining condition of the target session according to the session identifier of the target session.
In the embodiment of the application, the server stores the joining conditions of each created group session, and the server can acquire the joining conditions of the target session according to the association relationship between the session identifier of the group session and the joining conditions.
303. In response to the first face image satisfying the join condition, the server joins the first user account to the target session.
In the embodiment of the application, the server can detect the first face image based on the AI face detection model, and determine whether the first face image meets the joining condition of the target session. If the joining condition is met, joining the first user account into the target session; and if the adding condition is not met, returning a prompt that the adding condition is not met.
Optionally, the server may display the session interface of the target session to the terminal, that is, the rendered session interface is sent to the terminal, and the terminal displays the session interface.
In the embodiment of the application, after an application joining request sent by a first terminal is received, joining conditions of a target session are obtained according to a session identifier carried in the application joining request, whether a first face image carried in the application joining request meets the joining conditions or not is determined, and then a first user account is added into the target session when the joining conditions are met, so that users who do not meet the joining conditions of a group session can be automatically and effectively filtered, and the filtering accuracy is improved.
Fig. 4 is a flowchart of another session processing method provided in the embodiment of the present application, and as shown in fig. 4, the application to a terminal is taken as an example in the embodiment of the present application for description. The method comprises the following steps:
401. and responding to the application joining operation of the first user account on the target session, and carrying out face acquisition by the terminal to obtain a first face image corresponding to the first user account.
In this embodiment, the terminal is the first terminal 101 shown in fig. 1, and an application client of the first terminal logs in a first user account of a first user. Optionally, the target session is a session for online friend making, online recruitment, online education, an online competition, or the like; the application scenario of the target session is not limited in the embodiments of the present application. Optionally, the target session is an audio session, a video session, or a session in another form; the form of the target session is not limited in the embodiments of the present application. Optionally, the application joining operation on the target session can be initiated by the first user account in a group session, where the first user account is a user account that has already joined the group session, or initiated by the first user account in a session display interface, where the session display interface is used to display at least one ongoing audio/video session. In response to detecting the application joining operation, the terminal can call audio and video acquisition devices such as a camera and a microphone to perform face acquisition and sound acquisition, so as to obtain a first face image corresponding to the first user account and sound information corresponding to the first user account.
For example, take the case where the target session is an audio/video session and the application joining operation on the target session is initiated by the first user account in a group session. Referring to fig. 5, fig. 5 is a schematic view of a session interface of a group session provided according to an embodiment of the present application. As shown in fig. 5, the session interface of the group session displays a session link of the target session ("AAA initiates a video call") and the chat records of the users in the group session; a user can apply to join the target session by clicking the session link, for example by clicking a join button.
For another example, take the case where the target session is an audio/video session and the application joining operation on the target session is initiated by the first user account in a session display interface. Referring to fig. 6, fig. 6 is a schematic diagram of a session display interface provided according to an embodiment of the present application. As shown in fig. 6, session links of at least one ongoing audio/video session are displayed on the session display interface; a user can apply to join an audio/video session by clicking any session link, and the terminal takes the audio/video session corresponding to the clicked session link as the target session.
In an alternative implementation, the target session is created based on a session creation interface provided by the application client, and the session creation interface is used for providing a setting function of a joining condition. The session creation interface may be displayed after the second user account of the second terminal 102 triggers the setting operation of the join condition in the group session, or displayed after the second user account triggers the setting operation of the join condition in the session display interface. By providing the session creation interface in the application client, the user can set the joining condition of the group session through the session creation interface, so as to screen the users who can join the group session. Optionally, the user can input the join condition in the session creation interface or select according to a preset join condition.
For example, take the case where the session creation interface is triggered and displayed by the second user account in a group session and the target session is an audio/video session. Referring to fig. 7, fig. 7 is a schematic diagram of triggering display of a session creation interface provided according to an embodiment of the present application. Part (a) of fig. 7 is a session interface of the group session. The user clicks a video call function in the session interface to trigger display of two options, "normal video chat" and "high face-score video chat", as shown in part (b) of fig. 7. In response to the user clicking the "high face-score video chat" option, the terminal displays a session creation interface, as shown in part (c) of fig. 7, which includes gender setting options ("both men and women may participate", "only women may participate", "only men may participate") and face-score setting options ("male face score greater than" and "female face score greater than"). The user can perform the settings by input or by selection. After the user clicks the "male face score greater than" option, the candidate face scores are displayed, as shown in part (d) of fig. 7. The result after the user finishes selecting is shown in part (e) of fig. 7. When the user clicks the "initiate video chat" button in fig. 7, a session creation request is sent to the server. The gender and the face score set by the second user account form the joining condition of the target session. Of course, if the user selects "only women may participate", the "male face score greater than" setting option is no longer displayed; similarly, if the user selects "only men may participate", the "female face score greater than" setting option is no longer displayed.
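To make the joining condition concrete, the following sketch shows one assumed encoding of the settings chosen on such a session creation interface and how a face image's attributes could be checked against it; the keys and labels are illustrative, not the interface's actual data format.

```python
# Illustrative encoding of the joining condition set on the session creation interface.
creator_choice = {
    "participants": "both",                    # "both", "women_only", or "men_only" (assumed labels)
    "male_face_score_greater_than": 80,        # threshold chosen from the candidate face scores
    "female_face_score_greater_than": None,    # not restricted
}

def face_image_allowed(gender: str, face_score: int, choice: dict) -> bool:
    """Return True if a face image with the given gender and score satisfies the choice."""
    if choice["participants"] == "women_only" and gender != "female":
        return False
    if choice["participants"] == "men_only" and gender != "male":
        return False
    threshold = choice.get(f"{gender}_face_score_greater_than")
    return threshold is None or face_score > threshold

print(face_image_allowed("male", 85, creator_choice))   # True
print(face_image_allowed("male", 75, creator_choice))   # False
```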
In an alternative implementation, the creating of the target session includes: and the second terminal responds to the creation group operation of the management account of the target session and displays a session creation interface provided by the application client. And the second terminal receives the setting operation of the management account on the joining condition on the session creation page. And the second terminal sends a session creation request to the server, wherein the session creation request carries the joining condition.
It should be noted that the session creation interface can also include other appearance-related options, such as hair style, eye size, face shape, and whether glasses are worn, as well as options unrelated to appearance, such as age and height. Optionally, the session creation interface can also be used to set a session name for the audio/video session, such as "80-point face-score chat room", "beautiful girls welcome to chat with handsome boys", or "high face-score handsome boys are waiting to chat with you", which is not limited in the embodiments of the present application.
It should be noted that, when the target session is created based on a group session, an account with creation authority can create the target session. Optionally, the account with creation authority is the account that created the group session, an account having management authority over the group session, or an account to which the creation authority has been assigned. When the target session is created based on the session display interface, the creation permission may be restricted, or may not be restricted; this is not limited in the embodiments of the present application.
In an optional implementation manner, the terminal can perform face acquisition based on a viewfinder. Correspondingly, the step of performing face acquisition to obtain the first face image in response to the application joining operation of the first user account on the target session is as follows: in response to the application joining operation of the first user account on the target session, the terminal performs face recognition based on the image in the viewfinder of the terminal, and in response to a face being recognized, performs face acquisition to obtain the first face image. Optionally, in response to no face being recognized, prompt information is displayed, where the prompt information is used to indicate that no face is currently recognized.
It should be noted that the purpose of face recognition differs across application scenarios of the target session. When the application scenario of the target session is one in which joining is limited to specific users, such as online education, an online test, or an online competition, face recognition can be used to recognize whether the acquired face is the face of the terminal user, that is, the user authorized to participate in the online education, online test, or online competition, so as to prevent impersonation. When the application scenario of the target session is one in which joining is not limited to specific users, such as online friend making or online recruitment, face recognition can be used to recognize whether the acquired image contains a face.
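As an illustration of the viewfinder-based pre-check described above, the sketch below shows one way a client could gate image capture on a face being recognized first; capture_frame and detect_face are assumed placeholders, not APIs defined by the patent.

```python
# Illustrative client-side gating: only capture once a face is recognized in the viewfinder.

def capture_frame() -> bytes:
    """Placeholder for reading the current viewfinder frame from the camera."""
    return b""

def detect_face(frame: bytes) -> bool:
    """Placeholder for a local face detector run on the viewfinder frame."""
    return False

def acquire_first_face_image(max_attempts: int = 30):
    for _ in range(max_attempts):
        frame = capture_frame()
        if detect_face(frame):
            return frame                      # face recognized: use this frame as the first face image
    print("No face is currently recognized")  # prompt information when recognition fails
    return None
```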
402. The terminal sends an application joining request to the server, wherein the application joining request carries the first face image, the first user account and the session identification of the target session.
In the embodiment of the application, the terminal can generate the application joining request according to the acquired first face image, the currently logged first user account and the session identifier of the target session indicated by the application joining operation.
403. And in response to the server determining that the first face image of the first user account meets the joining condition of the target session, the terminal joins the first user account in the target session.
In the embodiment of the application, after the terminal joins the first user account into the target session, the terminal can display prompt information for prompting the successful joining and can also display a session interface of the target session, wherein the session interface is used for displaying at least one audio and video window, and one audio and video window corresponds to one user account. Optionally, the audio/video window of the at least one user account can be displayed in different arrangements in the session interface. Certainly, the terminal can also directly display at least one audio/video window without displaying the session interface, which is not limited in the embodiment of the present application.
In an optional implementation manner, after the terminal joins the first user account in the target session, the terminal can display the audio and video windows of the user accounts from high to low in the session interface of the target session according to the face scores of the user accounts in the target session. The face score of each user account is determined by the server according to the face image of that user account.
In an optional implementation manner, after the terminal joins the first user account in the target session, the terminal can display the audio and video window of a second user account whose face score is closest to that of the first user account at the position adjacent to the audio and video window of the first user account in the session interface of the target session. The other user accounts are sorted according to the joining order by default.
For example, the target session includes 5 user accounts, and the joining order is user account A, user account B, user account C, user account D, and user account E. The face scores of user account A and user account C are closest, and the face scores of user account B and user account D are closest. In the session interface displayed on the terminal of user account A, the order of the audio/video windows of the user accounts is A-C-B-D-E; in the session interface displayed on the terminal of user account B, the order of the audio/video windows is A-B-D-C-E; in the session interface displayed on the terminal of user account C, the order of the audio/video windows is A-C-B-D-E; and in the session interface displayed on the terminal of user account D, the order of the audio/video windows is A-B-D-C-E.
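The ordering in this example can be reproduced by the following sketch, which assumes (as inferred from the example, not stated explicitly) that the viewer and its closest-scoring account are placed adjacently, the earlier-joining of the two keeps its slot, and all other accounts keep the joining order; the face scores used are assumed values consistent with the example.

```python
# Illustrative computation of the audio/video window order seen by a given viewer.

def window_order(join_order, face_scores, viewer):
    """Place the viewer and its closest-scoring account adjacently.

    Whichever of the two joined earlier keeps its position; the other is moved
    directly after it. All remaining accounts keep the joining order.
    """
    closest = min(
        (a for a in join_order if a != viewer),
        key=lambda a: abs(face_scores[a] - face_scores[viewer]),
    )
    first, second = sorted([viewer, closest], key=join_order.index)
    order = [a for a in join_order if a != second]
    order.insert(order.index(first) + 1, second)
    return order

join_order = ["A", "B", "C", "D", "E"]
face_scores = {"A": 90, "B": 70, "C": 88, "D": 72, "E": 50}  # assumed scores matching the example
print(window_order(join_order, face_scores, "A"))  # ['A', 'C', 'B', 'D', 'E']
print(window_order(join_order, face_scores, "B"))  # ['A', 'B', 'D', 'C', 'E']
```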
It should be noted that the terminal may also be configured to display only the audio/video window of the first user account and the audio/video window of the second user account closest to the face score of the first user account, hide the audio/video windows of other user accounts, and display the audio/video window of the speaking user account only when the other user accounts speak.
In an optional implementation manner, when the target session limits the number of participants, the first user account needs to queue to enter the target session. After the terminal joins the first user account in the target session, the terminal can display the current queuing sequence of the first user account in a queuing waiting interface of the application client, and in response to the queuing sequence reaching zero, the terminal displays the audio and video window of the first user account and the audio and video window of at least one other user account in the session interface of the target session.
For example, the target session is an audio/video session used for recruitment that admits only one user account at a time: the next user account can enter only after the previous one exits. After the first user account meets the joining condition, it is placed in the queue; the terminal displays the current queuing position and updates it in real time, and when it is the first user account's turn (the queuing position is zero), the first user account joins the target session. Of course, the target session can also admit multiple user accounts at a time, which is not described again here.
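As a rough illustration of the queued admission described above, the sketch below models a session with a fixed capacity and a waiting queue, plus a terminal-side loop that polls its position and joins once the position reaches zero. The class, method, and client-call names are hypothetical stand-ins, not interfaces disclosed in the patent.

```python
import collections
import time

class SessionQueue:
    """Server-side sketch: a session that admits `capacity` accounts at a time."""
    def __init__(self, capacity=1):
        self.capacity = capacity
        self.active = set()
        self.waiting = collections.deque()

    def enqueue(self, account):
        self.waiting.append(account)

    def position(self, account):
        """Queuing position of a waiting account; 0 means it may enter now."""
        free = self.capacity - len(self.active)
        ahead = list(self.waiting).index(account)
        return max(0, ahead + 1 - free)

    def admit_next(self):
        while self.waiting and len(self.active) < self.capacity:
            self.active.add(self.waiting.popleft())

    def leave(self, account):
        self.active.discard(account)
        self.admit_next()            # a slot is free: admit the next waiting account

def wait_and_join(client, account, session_id, poll_interval=1.0):
    """Terminal-side sketch: show the queue position until it reaches zero, then join."""
    while True:
        pos = client.get_queue_position(session_id, account)   # hypothetical client call
        client.show_waiting_ui(pos)                            # hypothetical UI hook
        if pos == 0:
            return client.join(session_id, account)
        time.sleep(poll_interval)
```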
404. In response to the server determining that the first face image of the first user account does not meet the joining condition of the target session, display a session display interface for displaying at least one ongoing group session.
In the embodiment of the application, if the first face image does not meet the joining condition of the target session, a session display interface is displayed so that the first user account can select other ongoing group sessions. Optionally, if the application join operation of the target session is initiated by the first user account in the group session, the terminal may display the session interface of the group session instead of displaying the session display interface when the server determines that the first face image of the first user account does not satisfy the join condition of the target session. The embodiment of the present application does not limit this.
Before displaying the session interface of the target session, the terminal may further display a detection result of the first face image of the first user account returned by the server, where the detection result is used to indicate whether the first face image meets the join condition of the target session.
For example, when the joining condition of the target session is male, if the first face image is a female face image, the terminal displays the first face image together with a prompt that the gender does not match, and the application fails; if the first face image is a male face image, the terminal displays the first face image together with a prompt that the gender matches, the application passes, and the terminal then displays the session interface of the target session.
For another example, referring to fig. 8, fig. 8 is a schematic diagram of a face score according to an embodiment of the present application. When the joining condition of the target session is a face score greater than 80 points, if the face score of the first face image is 75 points, the terminal displays the first face image together with the score of 75 points, and the application fails; if the face score of the first face image is 85 points, the terminal displays the first face image together with the score of 85 points, the application passes, and the terminal then displays the session interface of the target session.
It should be noted that, when the joining condition of the target session includes gender, the terminal may perform gender detection on the first face image and send the application joining request to the server only after the first face image passes gender detection; if the first face image fails gender detection, the terminal displays a failure prompt and does not send the application joining request to the server.
For example, referring to fig. 9, fig. 9 is a schematic diagram of gender detection provided according to an embodiment of the present application. As shown in fig. 9, gender detection is performed on the first face image, and if the detection fails, a failure prompt is displayed.
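The client-side pre-check described in the two paragraphs above can be sketched as follows. This is a minimal sketch only: `detect_gender`, `show_prompt`, and `client.send_join_request` are injected placeholders standing in for a third-party face SDK, the prompt UI, and the network layer; none of these names come from the patent.

```python
def apply_to_join(client, face_image, account, session_id,
                  required_gender=None, detect_gender=None, show_prompt=print):
    """Client-side pre-check: send the application joining request only if gender matches."""
    if required_gender is not None and detect_gender is not None:
        if detect_gender(face_image) != required_gender:    # injected face-SDK call
            show_prompt("Gender does not match the session's joining condition")
            return None                                      # the request is never sent
    # gender matches (or no gender condition): send the application joining request
    return client.send_join_request(face_image, account, session_id)
```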
To make the flow of the session processing method described in steps 401 to 404 above clearer, the following description is given from another perspective. Referring to fig. 10, fig. 10 is a schematic diagram of the logical structure of the application client in the terminal according to an embodiment of the present application. As shown in fig. 10, the application client includes three major parts: a network layer, a data layer, and a presentation layer.
The network layer provides communication between the application client and the server; it can generate the application joining request, send the application joining request to the server, process the joining-condition setting request, perform face recognition, and receive pushes from the server. The network layer passes data received by the application client to the data layer, and the data layer updates the data. Communication between the application client and the server uses UDP (User Datagram Protocol). The network layer also prompts a connection failure when no network connection is available.
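As a rough sketch of how the network layer might package and send the application joining request over UDP, consider the snippet below. The JSON field names, server address handling, timeout, and the decision to wait for a single datagram reply are all assumptions for illustration; the patent does not specify a message format.

```python
import json
import socket

def send_join_request(server_addr, account, session_id, face_feature_points):
    """Build and send an application joining request over UDP; return the server's reply."""
    request = {
        "type": "apply_join",
        "account": account,
        "session_id": session_id,
        "face_features": face_feature_points,   # feature point data, not the raw image
    }
    payload = json.dumps(request).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, server_addr)
        sock.settimeout(5.0)
        reply, _ = sock.recvfrom(65535)          # push/result returned by the server
        return json.loads(reply.decode("utf-8"))
    except socket.timeout:
        # corresponds to the network layer's "connection failed" prompt
        return {"type": "error", "reason": "connection failed"}
    finally:
        sock.close()
```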
The data layer stores data related to the application client, such as basic group data, group chat data, and face feature point data. The basic group data includes group member information such as account numbers and nicknames. The group chat data includes video chat data, text chat data, chat times, chatting-user information, and the like. The face feature point data is extracted by the face recognition module from the first face image captured by the terminal camera; that is, the application joining request sent by the terminal carries the face feature point data rather than the complete first face image, which reduces the amount of data transmitted. Optionally, the data held by the data layer can be stored in an in-memory cache (a small, high-speed memory between the CPU and the main memory, DRAM (Dynamic Random Access Memory)) and a local database; when the required data is not in the in-memory cache, the terminal loads it from the database and caches it in the in-memory cache, which speeds up subsequent data access. After the data layer receives data that the network layer has obtained from the server, it updates both the in-memory cache and the local database.
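A minimal sketch of this cache-then-database read path, and of the update performed when the server pushes fresh data, is shown below; `local_db` is an assumption standing in for whatever local storage the client actually uses (for example a SQLite wrapper exposing `get`/`put`).

```python
class DataLayer:
    """Sketch of the data layer: in-memory cache in front of a local database."""
    def __init__(self, local_db):
        self.cache = {}            # in-memory cache
        self.db = local_db         # assumed local store with get(key) / put(key, value)

    def load(self, key):
        if key in self.cache:      # fast path: cache hit
            return self.cache[key]
        value = self.db.get(key)   # slow path: local database
        if value is not None:
            self.cache[key] = value
        return value

    def update_from_server(self, key, value):
        # called by the network layer when the server returns or pushes new data
        self.cache[key] = value
        self.db.put(key, value)
```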
The presentation layer displays the interfaces of the application client, such as the session creation interface for setting joining conditions, the camera preview interface (that is, the viewfinder interface) for face acquisition, the session interface of an audio/video session, and the session interface of a group session. The session creation interface includes a gender selection box, a text input box, a face-score input box, and the like; it can be displayed using standard system controls in the application client and responds to events triggered by the user. The camera preview interface displays the captured images in real time. Optionally, the images captured in the camera preview interface can be passed to a face recognition unit for face recognition, with a feature point acquisition unit collecting the face feature points, and the interface can also display the result of gender recognition or the face score. Optionally, the face recognition unit can be implemented with a third-party SDK (Software Development Kit). The session interface of the audio/video session includes audio/video windows, which display a user's real-time preview picture, voice messages, and so on. The session interface of the group session includes the group name, a group text message list, a group voice message list, a message input box, and the like; it can be displayed using standard system controls in the application client and responds to events triggered by the user. The presentation layer is also responsible for responding to user interactions, listening for click and drag events, and dispatching them to the corresponding handler functions, with capability support provided by standard system controls.
Correspondingly, the implementation flow of the session processing method provided in this embodiment of the application is as follows: in response to the application joining operation of the first user account on the target session, the presentation layer performs face acquisition through the camera preview interface to obtain a first face image and extracts face feature point data. The data layer stores the face feature point data of the first face image. The network layer generates an application joining request and sends it to the server, where the request carries the face feature point data obtained from the data layer, the first user account, and the session identifier of the target session. The network layer receives the server's push, and the presentation layer displays the session interface of the target session.
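The three-layer flow just described could be glued together roughly as follows; every object and method name here (`ui`, `data_layer`, `net`, `extract_feature_points`, and the reply fields) is illustrative only and not part of the disclosed client.

```python
def on_apply_join(ui, data_layer, net, extract_feature_points, account, session_id):
    """Illustrative glue across the presentation, data, and network layers."""
    image = ui.capture_face()                         # presentation layer: viewfinder capture
    features = extract_feature_points(image)          # face recognition unit (e.g. third-party SDK)
    data_layer.cache["face:" + account] = features    # data layer keeps the feature point data
    reply = net.send_join_request(account, session_id, features)   # network layer request
    if reply.get("type") == "joined":
        ui.show_session_interface(reply)              # server push rendered by the presentation layer
    else:
        ui.show_prompt(reply.get("reason", "join refused"))
```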
In this embodiment of the application, the face image of a user account is collected before the user account joins the target session, and the user account is allowed to join the group session only when its face image meets the joining condition of the target session. Users who do not meet the joining condition can therefore be filtered out automatically and effectively, without manual review, which improves the filtering accuracy.
Fig. 11 is a flowchart of another session processing method provided in an embodiment of the present application. As shown in fig. 11, this embodiment is described by taking application to a server as an example. The method includes the following steps:
1101. The server receives a session creation request, where the session creation request carries a management account and a joining condition of a target session.
In this embodiment of the application, the management account is the second user account of the second terminal 102, and the management account sets the joining condition through a session creation interface provided by an application client of the second terminal. The server receives and stores the management account and the joining condition. That is, the target session is created based on a session creation interface provided by the application client of the second terminal, and the session creation interface is used for providing a join condition setting function. And the second terminal responds to the creation group operation of the management account of the target session and displays a session creation interface provided by the application client. And the second terminal sends a session creation request to the server based on the joining condition set by the management account on the session creation page, wherein the session creation request carries the joining condition.
1102. The server creates a target session and adds the management account to the target session.
In this embodiment of the application, after the target session is created, the server can add the management account into the target session; at this time, only the management account is in the target session. The server presents the session interface of the target session to the second terminal of the management account, for example by sending the rendered session interface to the second terminal, and the second terminal then displays the session interface of the target session.
It should be noted that the session creation request sent by the second terminal can also carry a second face image of the second user account. The server creates the target session when it determines that the second face image meets the joining condition; when the server determines that the second face image does not meet the joining condition, it returns a creation-refused prompt to the second terminal. This ensures that all user accounts in the target session meet the joining condition, and prevents a malicious user from cheating other users by setting a high joining condition that the malicious user does not meet.
1103. The server receives an application joining request of the terminal, wherein the application joining request carries the first face image acquired by the terminal, the first user account of the terminal and the session identification of the target session.
In this embodiment of the application, the terminal is the first terminal 101 shown in fig. 1, and the application client of the first terminal is logged in with the first user account of the first user. After receiving the application joining request sent by the first terminal, the server can parse the application joining request to obtain the first face image, the first user account, and the session identifier of the target session.
1104. And the server acquires the joining condition of the target session according to the session identifier of the target session.
In the embodiment of the application, the server stores the joining conditions of each created group session, and the server can acquire the joining conditions of the target session according to the association relationship between the session identifier of the group session and the joining conditions.
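A minimal sketch of this association between a session identifier and its joining condition is shown below; a plain dictionary stands in for the server's actual storage, and the condition keys are illustrative assumptions.

```python
join_conditions = {}   # session identifier -> joining condition of that group session

def on_session_created(session_id, condition):
    join_conditions[session_id] = condition

def get_join_condition(session_id):
    return join_conditions.get(session_id)

# Illustrative values only; the condition keys are not taken from the patent.
on_session_created("session-42", {"gender": "male", "min_face_score": 80})
print(get_join_condition("session-42"))
```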
1105. In response to the first face image satisfying the join condition, the server joins the first user account to the target session.
In this embodiment of the application, if the first face image meets the join condition, the server may join the first user account into the target session, and establish a communication connection between the first user account and the second user account.
In an optional implementation, when the joining condition is a face score, the server may perform face detection on the first face image to obtain the face score of the first face image, and in response to the face score of the first face image being not less than the face score specified by the joining condition, the server determines that the first face image satisfies the joining condition.
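This check amounts to a single threshold comparison, sketched below; `score_face` is an injected placeholder for whatever face-detection service the server uses (for example an AI face detection service), not a real API named in the patent.

```python
def satisfies_score_condition(face_image, join_condition, score_face):
    """True when the detected face score is not lower than the required minimum.

    `score_face` is the injected face-detection call; `min_face_score` is an
    assumed key holding the score specified by the joining condition.
    """
    required = join_condition["min_face_score"]   # e.g. 80, as in the earlier example
    return score_face(face_image) >= required
```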
It should be noted that, optionally, the first terminal may be configured to perform face feature extraction on the first face image, in which case the application joining request carries the face feature data and the server accordingly determines whether the face feature data meets the joining condition. Optionally, the first terminal may instead obtain the joining condition of the target session from the server through the application joining request, and the first terminal itself determines whether the first face image meets the joining condition.
In an optional implementation, when the joining condition is an authorized face, the server may perform face recognition on the first face image and determine, according to the recognition result, whether the face in the first face image is an authorized face. In response to the face in the first face image being an authorized face, the server determines that the first face image satisfies the joining condition. Of course, when the application joining request carries face feature data, the server can also determine directly from the face feature data whether the face is an authorized face.
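One common way to realize such an authorized-face check is to compare feature vectors against a stored whitelist, as sketched below; the use of Euclidean distance and the 0.6 threshold are assumptions for illustration, not values taken from the patent.

```python
import math

def euclidean(a, b):
    """Distance between two face feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_authorized(face_features, authorized_features, threshold=0.6):
    """True if the features are close enough to any stored authorized face."""
    return any(euclidean(face_features, ref) < threshold
               for ref in authorized_features)
```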
Optionally, after the server joins the first user account in the target session, a session interface of the target session can be presented to the terminal. The server can adjust the position of an audio/video window of each user account in the session interface according to the face score of each user account in the target session, then render the session interface, and send the rendered session interface to the terminal.
In an optional implementation manner, the server can sort the audio/video windows of the user accounts in the session interface of the target session according to the face scores of the user accounts in the target session. The session interface is then presented to the terminal.
In an optional implementation manner, the server may determine, according to the face scores of the user accounts in the target session, a second user account closest to the face score of the first user account, and arrange audio and video windows of the first user account and the second user account adjacently in a session interface of the target session. Then, the session interface is presented to the terminal.
In an optional implementation, the server may obtain the current queuing position of the first user account and present to the terminal a queue-waiting interface that includes the queuing position. In response to the queuing position reaching zero, the server presents to the terminal a session interface that includes the audio/video window of the first user account and the audio/video windows of the other user accounts in the target session.
To make the flow of the session processing method described in steps 1101 to 1105 above clearer, please refer to fig. 12; fig. 12 is a flowchart of another session processing method provided in an embodiment of the present application. Taking the target session being a high face-score video chat session as an example, as shown in fig. 12, user A is the creator of the high face-score video chat session, and users B and C are participants. User A sends a creation request to the server through a terminal, creates the high face-score video chat session, and sets gender and face score as the joining conditions. User B and user C each apply through their terminals to join the high face-score video chat session, and their application joining requests carry the face feature point data of user B and user C respectively. Based on an AI face detection service, the server determines the gender and face score from the face feature point data uploaded by user B and by user C, and pushes the corresponding detection result to each of them. The detection result indicates whether user B or user C can join the high face-score video chat session.
In this embodiment of the application, after the application joining request sent by the first terminal is received, the joining condition of the target session is obtained according to the session identifier carried in the application joining request, and it is determined whether the first face image carried in the application joining request meets the joining condition; the first user account is then added to the target session when the joining condition is met. Users who do not meet the joining condition of a group session can therefore be filtered out automatically and effectively, which improves the filtering accuracy.
Fig. 13 is a block diagram of a session processing apparatus according to an embodiment of the present application. The apparatus is applied to a terminal and is configured to perform the steps performed by the terminal in the session processing method above. Referring to fig. 13, the apparatus includes: a face acquisition module 1301, a request sending module 1302, and a session processing module 1303.
The face acquisition module 1301 is configured to perform face acquisition in response to an application joining operation of a first user account on a target session, so as to obtain a first face image corresponding to the first user account;
a request sending module 1302, configured to send an application join request to a server, where the application join request carries the first facial image, the first user account, and the session identifier of the target session;
and the session processing module 1303, configured to join the first user account in the target session in response to the server determining that the first face image of the first user account meets the joining condition of the target session.
In an optional implementation manner, the application joining operation is initiated by the first user account in a group session, and the target session is an audio/video session initiated in the group session; or,
the application joining operation is initiated by the first user account in a session display interface, the target session is an audio and video session, and the session display interface is used for displaying at least one ongoing audio and video session.
In an optional implementation, the apparatus further includes: a display module 1304, configured to display the audio/video windows of the user accounts in descending order of the face scores of the user accounts in the target session.
In an optional implementation, the apparatus further includes: the display module 1304 is configured to display, in an adjacent position of the audio/video window of the first user account, an audio/video window of a second user account having a face score closest to that of the first user account.
In an optional implementation, the apparatus further includes: a display module 1304, configured to display, in a queue-waiting interface, the current queuing position of the first user account; and, in response to the queuing position reaching zero, display the audio/video window of the first user account and the audio/video window of at least one other user account.
In an optional implementation manner, the face acquisition module 1301 is configured to perform face recognition based on an image in a view finder of the terminal in response to an application joining operation of a first user account to a target session; and responding to the recognized face, and performing face acquisition to obtain a first face image corresponding to the first user account.
In an optional implementation, the apparatus further includes:
a prompt module 1305, configured to, in response to no face being recognized, display prompt information, where the prompt information indicates that no face is currently recognized.
In an optional implementation, the apparatus further comprises: a display module 1304, configured to, in response to the server determining that the first face image of the first user account does not satisfy the join condition, display a session display interface, where the session display interface is used to display at least one ongoing audio/video session.
In an alternative implementation, the target session is created based on a session creation interface provided by the application client, and the session creation interface is used for providing a setting function of a joining condition.
In an alternative implementation, the creating of the target session includes:
responding to the creation group operation of the management account of the target session, and displaying a session creation interface provided by the application client;
and receiving the setting operation of the management account on the joining condition on the session creating page, and sending a session creating request to the server, wherein the session creating request carries the joining condition.
In this embodiment of the application, the face image of a user account is collected before the user account joins the target session, and the user account is allowed to join the group session only when its face image meets the joining condition of the target session. Users who do not meet the joining condition can therefore be filtered out automatically and effectively, without manual review, which improves the filtering accuracy.
Fig. 14 is a block diagram of a session processing apparatus according to an embodiment of the present application. The apparatus is applied to a server and is configured to perform the steps performed by the server in the session processing method above. Referring to fig. 14, the apparatus includes: a request receiving module 1401, a joining condition obtaining module 1402, and a session joining module 1403.
A request receiving module 1401, configured to receive an application joining request of a terminal, where the application joining request carries a first face image acquired by the terminal, a first user account of the terminal, and a session identifier of the target session;
a join condition obtaining module 1402, configured to obtain a join condition of the target session according to the session identifier of the target session;
a session joining module 1403, configured to join the first user account in the target session in response to the first face image satisfying the joining condition.
In an optional implementation, the apparatus further comprises: a presentation module 1404, configured to sort, according to the face score of each user account in the target session, the audio/video windows of the user accounts in the session interface of the target session; and present the session interface to the terminal.
In an optional implementation, the apparatus further comprises: a presentation module 1404, configured to determine, according to the face score of each user account in the target session, a second user account that is closest to the face score of the first user account; in a session interface of the target session, arranging audio and video windows of the first user account and the second user account adjacently; and displaying the session interface to the terminal.
In an optional implementation, the apparatus further comprises: a presentation module 1404, configured to obtain the current queuing position of the first user account and present to the terminal a queue-waiting interface that includes the queuing position; and, in response to the queuing position reaching zero, present to the terminal a session interface that includes the audio/video window of the first user account and the audio/video windows of the other user accounts in the target session.
In an alternative implementation, the target session is created based on a session creation interface provided by the application client, and the session creation interface is used for providing a join condition setting function.
In an alternative implementation, the creating of the target session includes:
receiving a session establishing request, wherein the session establishing request carries a management account and a joining condition of the target session;
and creating the target session, and adding the management account into the target session.
In this embodiment of the application, after the application joining request sent by the first terminal is received, the joining condition of the target session is obtained according to the session identifier carried in the application joining request, and it is determined whether the first face image carried in the application joining request meets the joining condition; the first user account is then added to the target session when the joining condition is met. Users who do not meet the joining condition of a group session can therefore be filtered out automatically and effectively, which improves the filtering accuracy.
It should be noted that, when the session processing apparatus provided in the foregoing embodiments processes a group session, the division into the functional modules described above is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the session processing apparatus and the session processing method provided in the foregoing embodiments belong to the same concept; for details of the specific implementation process, refer to the method embodiments, which are not repeated here.
In this embodiment of the present application, the computer device can be configured as a terminal or a server. When the computer device is configured as a terminal, the terminal can be used as the execution subject to implement the technical solution provided in the embodiments of the present application; when the computer device is configured as a server, the server can be used as the execution subject to implement the technical solution provided in the embodiments of the present application; alternatively, the technical solution provided in the present application can be implemented through interaction between the terminal and the server, which is not limited in this embodiment of the present application.
When the computer device is configured as a terminal, fig. 15 is a block diagram of a terminal 1500 provided according to an embodiment of the present application. Optionally, the terminal 1500 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1500 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, terminal 1500 includes: a processor 1501 and memory 1502.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1502 is used to store at least one program code for execution by the processor 1501 to implement the session processing methods provided by the method embodiments herein.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1504, a display 1505, a camera assembly 1506, an audio circuit 1507, a positioning assembly 1508, and a power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 1504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to capture touch signals on or over the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1505 may be one, provided on the front panel of terminal 1500; in other embodiments, display 1505 may be at least two, each disposed on a different surface of terminal 1500 or in a folded design; in other embodiments, display 1505 may be a flexible display disposed on a curved surface or a folded surface of terminal 1500. Even further, the display 1505 may be configured in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1505 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-emitting diode), and other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1501 for processing or inputting the electric signals to the radio frequency circuit 1504 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
The positioning component 1508 is used to locate the current geographic position of the terminal 1500 to implement navigation or LBS (Location Based Service). The positioning component 1508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the display screen 1505 to display the user interface in a landscape view or a portrait view based on the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 1512 can detect the body direction and the rotation angle of the terminal 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 cooperate to collect the 3D motion of the user on the terminal 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side frame of terminal 1500 and/or underneath display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, the holding signal of the user to the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at a lower layer of the display screen 1505, the processor 1501 controls the operability control on the UI interface in accordance with the pressure operation of the user on the display screen 1505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1514 is configured to capture a fingerprint of the user, and the processor 1501 identifies the user based on the fingerprint captured by the fingerprint sensor 1514, or the fingerprint sensor 1514 identifies the user based on the captured fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical key or vendor Logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect ambient light intensity. In one embodiment, processor 1501 may control the brightness of display screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the display screen 1505 is adjusted down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also known as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front surface of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually decreases, the processor 1501 controls the display 1505 to switch from the screen-on state to the screen-off state; when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually increases, the processor 1501 controls the display 1505 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 15 does not constitute a limitation of terminal 1500, which may include more or fewer components than shown, combine some components, or adopt a different arrangement of components.
When the computer device is configured as a server, fig. 16 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1600 may vary considerably depending on configuration or performance, and may include one or more processors (CPUs) 1601 and one or more memories 1602, where at least one program code is stored in the memories 1602 and is loaded and executed by the processors 1601 to implement the session processing method provided by each of the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and the server may also include other components for implementing device functions, which are not described here again.
The embodiment of the present application further provides a computer-readable storage medium, which is applied to a terminal, and at least one program code is stored in the computer-readable storage medium, and the at least one program code is loaded and executed by a processor to implement the operations performed by the terminal in the session processing method of the foregoing embodiment.
The embodiment of the present application further provides a computer-readable storage medium, which is applied to a server, and at least one program code is stored in the computer-readable storage medium, and is loaded and executed by a processor to implement the operations performed by the server in the session processing method of the foregoing embodiment.
Embodiments of the present application also provide a computer program product or a computer program comprising computer program code stored in a computer readable storage medium. The processor of the computer device reads the computer program code from the computer-readable storage medium, and the processor executes the computer program code, so that the computer device performs the session processing method provided in the above-described various alternative implementations.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium; the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (15)
1. A session processing method is applied to a terminal, and the method comprises the following steps:
responding to an application joining operation of a first user account on a target session, and performing face acquisition to obtain a first face image corresponding to the first user account;
sending an application joining request to a server, wherein the application joining request carries the first face image, the first user account and the session identifier of the target session;
in response to the server determining that the first facial image of the first user account satisfies a join condition of the target session, joining the first user account to the target session.
2. The method according to claim 1, wherein the application joining operation is initiated by the first user account in a group session, and the target session is an audio-video session initiated in the group session; or,
the application joining operation is initiated by the first user account in a session display interface, the target session is an audio and video session, and the session display interface is used for displaying at least one ongoing audio and video session.
3. The method of claim 1, wherein after the joining of the first user account to the target session, the method comprises:
and displaying the audio and video windows of the user accounts from high to low according to the face scores of the user accounts in the target session.
4. The method of claim 1, wherein after the joining of the first user account to the target session, the method comprises:
and displaying an audio and video window of a second user account which is closest to the face score of the first user account at the adjacent position of the audio and video window of the first user account.
5. The method of claim 1, wherein after the joining of the first user account to the target session, the method comprises:
displaying the current queuing sequence of the first user account in a queuing and waiting interface;
and in response to the queuing sequence being zero, displaying the audio and video window of the first user account and the audio and video window of at least one other user account.
6. The method according to claim 1, wherein the performing face acquisition in response to an application joining operation of a first user account for a target session to obtain a first face image corresponding to the first user account comprises:
responding to the application joining operation of the first user account on the target session, and performing face recognition based on the image in the view finder of the terminal;
and responding to the recognized face, and performing face acquisition to obtain a first face image corresponding to the first user account.
7. The method according to claim 6, wherein after the face recognition based on the image in the view finder of the terminal, the method comprises:
and responding to that the human face is not recognized, and displaying prompt information, wherein the prompt information is used for indicating that the human face is not currently recognized.
8. The method of claim 1, further comprising:
and in response to the server determining that the first face image of the first user account does not satisfy the joining condition, displaying a session display interface, wherein the session display interface is used for displaying at least one ongoing audio and video session.
9. The method of claim 1, wherein the target session is created based on a session creation interface provided by an application client, the session creation interface providing a setup function for join conditions.
10. The method of claim 9, wherein the creation of the target session comprises:
responding to the creation group operation of the management account of the target session, and displaying a session creation interface provided by the application client;
receiving the setting operation of the management account on the joining condition on the session creation page;
and sending a session creation request to the server, wherein the session creation request carries the joining condition.
11. A session processing method is applied to a server, and the method comprises the following steps:
receiving an application joining request of a terminal, wherein the application joining request carries a first face image acquired by the terminal, a first user account of the terminal and a session identifier of the target session;
acquiring a joining condition of the target session according to the session identifier of the target session;
in response to the first face image satisfying the join condition, joining the first user account to the target session.
12. A session processing apparatus, applied to a terminal, the apparatus comprising:
the system comprises a face acquisition module, a target session registration module and a face recognition module, wherein the face acquisition module is used for responding to an application joining operation of a first user account on a target session, and acquiring a face to obtain a first face image corresponding to the first user account;
a request sending module, configured to send an application join request to a server, where the application join request carries the first facial image, the first user account, and a session identifier of the target session;
and the session processing module is used for responding to the server to determine that the first face image of the first user account meets the joining condition of the target session, and joining the first user account into the target session.
13. A session processing apparatus, applied to a server, the apparatus comprising:
the system comprises a request receiving module, a request sending module and a target session sending module, wherein the request receiving module is used for receiving an application joining request of a terminal, and the application joining request carries a first face image acquired by the terminal, a first user account of the terminal and a session identifier of the target session;
the joining condition acquisition module is used for acquiring the joining condition of the target session according to the session identifier of the target session;
and the session joining module is used for joining the first user account into the target session in response to the first face image meeting the joining condition.
14. A computer device comprising a processor and a memory, the memory storing at least one piece of program code, the at least one piece of program code being loaded by the processor and performing the session handling method of any one of claims 1 to 10 or the session handling method of claim 11.
15. A storage medium for storing at least one piece of program code for performing the session handling method of any one of claims 1 to 10 or for performing the session handling method of claim 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010753249.5A CN111835531B (en) | 2020-07-30 | 2020-07-30 | Session processing method, device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111835531A (en) | 2020-10-27 |
CN111835531B (en) | 2023-08-25 |
Family
ID=72920614
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010753249.5A Active CN111835531B (en) | 2020-07-30 | 2020-07-30 | Session processing method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111835531B (en) |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090094367A1 (en) * | 2006-06-28 | 2009-04-09 | Huawei Technologies Co., Ltd. | Method, system and device for establishing group session |
CN102630004A (en) * | 2012-04-12 | 2012-08-08 | 华为技术有限公司 | Authentication method for video conference and related device |
US20150213304A1 (en) * | 2014-01-10 | 2015-07-30 | Securus Technologies, Inc. | Verifying Presence of a Person During an Electronic Visitation |
US9282130B1 (en) * | 2014-09-29 | 2016-03-08 | Edifire LLC | Dynamic media negotiation in secure media-based conferencing |
US20180268200A1 (en) * | 2017-03-20 | 2018-09-20 | Motorola Mobility Llc | Face recognition in an enterprise video conference |
CN107292986A (en) * | 2017-07-11 | 2017-10-24 | 北京眼神科技有限公司 | A kind of conference service method, server and host terminal |
CN107770478A (en) * | 2017-10-27 | 2018-03-06 | 广东欧珀移动通信有限公司 | video call method and related product |
US20200201969A1 (en) * | 2018-12-19 | 2020-06-25 | LINE Plus Corporation | Method and system for managing image based on interworking face image and messenger account |
CN109766156A (en) * | 2018-12-24 | 2019-05-17 | 维沃移动通信有限公司 | A kind of conversation establishing method and terminal device |
CN110233743A (en) * | 2019-05-31 | 2019-09-13 | 维沃移动通信有限公司 | A kind of communication group creating method, terminal device and computer readable storage medium |
CN110381373A (en) * | 2019-06-14 | 2019-10-25 | 平安科技(深圳)有限公司 | Method for processing video frequency, device, computer equipment and storage medium |
CN110427227A (en) * | 2019-06-28 | 2019-11-08 | 广东虚拟现实科技有限公司 | Generation method, device, electronic equipment and the storage medium of virtual scene |
CN110489979A (en) * | 2019-07-10 | 2019-11-22 | 平安科技(深圳)有限公司 | Conferencing information methods of exhibiting, device, computer equipment and storage medium |
CN111191205A (en) * | 2019-12-17 | 2020-05-22 | 中移(杭州)信息技术有限公司 | Method for managing teleconference, server, and computer-readable storage medium |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022105426A1 (en) * | 2020-11-17 | 2022-05-27 | Oppo广东移动通信有限公司 | Device joining method and apparatus, and electronic device and computer-readable medium |
CN114520795A (en) * | 2020-11-19 | 2022-05-20 | 腾讯科技(深圳)有限公司 | Group creation method, group creation device, computer equipment and storage medium |
CN114520795B (en) * | 2020-11-19 | 2024-02-09 | 腾讯科技(深圳)有限公司 | Group creation method, group creation device, computer device and storage medium |
CN112584187A (en) * | 2020-11-30 | 2021-03-30 | 北京达佳互联信息技术有限公司 | Session creation method, device, server and storage medium |
CN112584187B (en) * | 2020-11-30 | 2023-03-21 | 北京达佳互联信息技术有限公司 | Session creation method, device, server and storage medium |
CN113139101A (en) * | 2021-05-17 | 2021-07-20 | 清华大学 | Data processing method and device, computer equipment and storage medium |
CN113411539A (en) * | 2021-06-21 | 2021-09-17 | 维沃移动通信(杭州)有限公司 | Multi-user chat initiating method and device |
CN113411539B (en) * | 2021-06-21 | 2022-06-28 | 维沃移动通信(杭州)有限公司 | Multi-user chat initiation method and device |
CN115914162A (en) * | 2021-09-30 | 2023-04-04 | 上海掌门科技有限公司 | Method, apparatus, medium and program product for providing group status |
WO2023185650A1 (en) * | 2022-03-28 | 2023-10-05 | 华为技术有限公司 | Communication method, apparatus and system |
TWI810104B (en) * | 2022-11-01 | 2023-07-21 | 南開科技大學 | Interactive digital photo frame system with communication function and method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN111835531B (en) | 2023-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111835531B (en) | Session processing method, device, computer equipment and storage medium | |
CN112672176B (en) | Interaction method, device, terminal, server and medium based on virtual resources | |
CN112291583A (en) | Live streaming co-hosting (mic-linking) method and device, server, terminal and storage medium |
CN111918086B (en) | Video connection method, device, terminal, server and readable storage medium | |
CN108833262B (en) | Session processing method, device, terminal and storage medium | |
CN111343346B (en) | Incoming call pickup method and device based on man-machine conversation, storage medium and equipment | |
CN110932963A (en) | Multimedia resource sharing method, system, device, terminal, server and medium | |
CN112492339A (en) | Live broadcast method, device, server, terminal and storage medium | |
CN113518265B (en) | Live broadcast data processing method and device, computer equipment and medium | |
CN112118477A (en) | Virtual gift display method, device, equipment and storage medium | |
CN110102063B (en) | Identification binding method, device, terminal, server and storage medium | |
CN113596499B (en) | Live broadcast data processing method and device, computer equipment and medium | |
CN108579075B (en) | Operation request response method, device, storage medium and system | |
CN112163406A (en) | Interactive message display method and device, computer equipment and storage medium | |
CN111031391A (en) | Video dubbing method, device, server, terminal and storage medium | |
CN112583806A (en) | Resource sharing method, device, terminal, server and storage medium | |
CN114116053A (en) | Resource display method and device, computer equipment and medium | |
CN113204671A (en) | Resource display method, device, terminal, server, medium and product | |
CN112468884A (en) | Dynamic resource display method, device, terminal, server and storage medium | |
CN113518198B (en) | Session interface display method, conference interface display method, device and electronic equipment | |
CN112423011B (en) | Message reply method, device, equipment and storage medium | |
CN112261482B (en) | Interactive video playing method, device and equipment and readable storage medium | |
CN113064981A (en) | Group avatar generation method, device, equipment and storage medium |
CN112163862A (en) | Target function processing method, device, terminal and storage medium | |
CN114826799A (en) | Information acquisition method, device, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40030643; Country of ref document: HK |
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||