CN111414892B - Information sending method in live broadcast

Info

Publication number
CN111414892B
CN111414892B (application number CN202010272136.3A)
Authority
CN
China
Prior art keywords
user
preset
terminal device
face
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010272136.3A
Other languages
Chinese (zh)
Other versions
CN111414892A (en)
Inventor
罗剑嵘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shengpay E Payment Service Co ltd
Original Assignee
Shanghai Shengpay E Payment Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shengpay E Payment Service Co ltd
Priority to CN202010272136.3A
Publication of CN111414892A
Application granted
Publication of CN111414892B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations

Abstract

The embodiment of the disclosure discloses an information sending method in live broadcast. The method is applied to a server through which at least two terminal devices communicate in a live broadcast mode, the at least two terminal devices comprising a first terminal device and a second terminal device. One embodiment of the method comprises the following steps: acquiring a face image of a user captured by the second terminal device, and extracting facial key points from the acquired face image; determining a relative position of the face of the user and a screen of the second terminal device based on the extracted key points; and, in response to determining that the relative position of the face of the user and the screen of the second terminal device meets a preset condition, sending prompt information to the user, wherein the prompt information is used for prompting the user to adjust the relative position of the face and the second terminal device. This embodiment can accurately detect whether the user becomes distracted while watching the live broadcast, and remind the user when distraction is detected.

Description

Information sending method in live broadcast
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to an information sending method in live broadcasting.
Background
With the continuous development of computer application technology and network technology, the live broadcast application range is wider and wider. As an example, live broadcast may be used in online video education.
Online education (e-Learning), also known as distance education or online learning, refers to an educational mode in which a teacher uploads teaching videos or teaching resources to a server, for example in a live broadcast manner, and learners then watch them on demand or in real time through their terminals. This mode of education not only breaks through the limitations of time and space and improves learning efficiency, but also bridges the unequal distribution of educational resources caused by geography and similar factors, so that educational resources are shared and the barrier to learning is lowered. Because of these advantages, online education is being accepted by more and more teachers and students. However, compared with face-to-face offline teaching, a teacher giving a live online lesson has difficulty keeping track of each student's current state of concentration, and therefore has difficulty noticing when a student becomes distracted.
Disclosure of Invention
The embodiment of the disclosure provides an information sending method in live broadcast.
In a first aspect, an embodiment of the present disclosure provides an information sending method in live broadcast, which is applied to a server, where at least two terminal devices communicate in a live broadcast manner through the server, where the at least two terminal devices include a first terminal device and a second terminal device, and the method includes: acquiring a face image of a user acquired by second terminal equipment, and extracting face key points from the acquired face image; determining a relative position of the face of the user and a screen of the second terminal device based on the extracted key points; and sending prompt information to the user in response to determining that the relative position of the face of the user and the screen of the second terminal device meets a preset condition, wherein the prompt information is used for prompting the user to adjust the relative position of the face and the second terminal device.
In a second aspect, an embodiment of the present disclosure provides an information sending apparatus in live broadcast, applied to a server, where at least two terminal devices communicate in a live broadcast manner through the server, where the at least two terminal devices include a first terminal device and a second terminal device, and the apparatus includes: the extraction unit is configured to acquire a face image of a user acquired by the second terminal equipment and extract facial key points from the acquired face image; a determining unit configured to determine a relative position of the face of the user and a screen of the second terminal device based on the extracted key points; and the first sending unit is configured to send prompt information to the user in response to determining that the relative position of the face of the user and the screen of the second terminal device meets a preset condition, wherein the prompt information is used for prompting the user to adjust the relative position of the face and the second terminal device.
In a third aspect, embodiments of the present disclosure provide a server comprising: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements the method as described in the first aspect.
According to the information sending method in live broadcast provided by the embodiment of the disclosure, a face image of the user captured by the second terminal device can be acquired and facial key points extracted from it; the relative position of the face of the user and the screen of the second terminal device can then be determined based on the extracted key points; finally, in response to determining that this relative position meets a preset condition, prompt information can be sent to the user, prompting the user to adjust the relative position of the face and the second terminal device. Through the relative position of the face of the user and the screen of the second terminal device, whether the user becomes distracted while watching the live broadcast can be accurately judged, and prompt information is sent to call the user back to the lesson when distraction is determined.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram in which some embodiments of the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method of information transmission in live broadcast according to the present disclosure;
FIG. 3 is a flowchart of yet another embodiment of a method of information transmission in live broadcast according to the present disclosure;
FIG. 4 is a flowchart of one implementation of sending prompt information to a user in a method of sending information in live broadcast according to the present embodiment;
FIG. 5 is a schematic structural diagram of an embodiment of an information sending apparatus in live broadcast according to the present disclosure;
FIG. 6 is a schematic diagram of a server suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which an in-live information transmission method or an in-live information transmission apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a first terminal device 101, a network 102, a server 103, and second terminal devices 104, 105, 106. The network 102 serves as a medium for providing communication links between the first terminal device 101 and the server 103, and between the server 103 and the second terminal devices 104, 105, 106. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
In general, a user may interact with the server 103 through the network 102 using the first terminal device 101 or the second terminal devices 104, 105, 106 to receive or send messages and the like. For example, a teacher may interact with the server 103 using the first terminal device 101, and learners may interact with the server 103 using the second terminal devices 104, 105, 106. The teacher conducts a live teaching broadcast through the first terminal device 101, the live broadcast is uploaded to the server 103, and each student can watch the teacher's live teaching broadcast through the second terminal devices 104, 105, 106. Various communication client applications, such as live broadcast applications, social platform software and online teaching clients, can be installed on the first terminal device 101 and the second terminal devices 104, 105, 106.
The first terminal device 101 and the second terminal devices 104, 105, 106 may be hardware or software. When the first and second terminal devices 101, 104, 105, 106 are hardware, they may be a variety of electronic devices having display screens and supporting live and live video viewing, including but not limited to smartphones, tablets, electronic book readers, laptop and desktop computers, and the like. When the first terminal apparatus 101 and the second terminal apparatuses 104, 105, 106 are software, they can be installed in the above-listed electronic apparatuses. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
The server 103 may be a server providing various services, such as a background server providing support for live content of the second terminal device. The background server may analyze and process the data such as the face image of the user acquired by the second terminal devices 104, 105, 106, and feed back the processing result (for example, the prompt information) to the second terminal device.
It should be noted that the information sending method in live broadcast provided by the embodiments of the present disclosure may be performed by the server 103. Accordingly, the information sending apparatus in live broadcast may be provided in the server 103. The present invention is not particularly limited herein.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
It should be understood that the number of first terminal devices, second terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of first terminal devices, second terminal devices, networks and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method of information transmission in live broadcast according to the present disclosure is shown. The information sending method in live broadcasting comprises the following steps:
Step 201, acquiring a face image of the user captured by the second terminal device, and extracting facial key points from the acquired face image.
In general, in live broadcasting, a live broadcast user (e.g., a teacher in online video education) may broadcast through the first terminal device; a live stream is generated after processing such as encoding and compression, and is transmitted to a server (e.g., the server shown in FIG. 1). The server can distribute and relay the live stream to the second terminal devices where live-viewing users (e.g., online learners) are located, and a live-viewing user can watch the live video on the second terminal device. The first terminal device of the live broadcast user may or may not display the viewing status of the live-viewing users during the broadcast; this is not limited here.
In this embodiment, the execution body of the information sending method in live broadcast (for example, the server shown in FIG. 1) may receive, from the second terminal device with which the user watches the live broadcast, the face image of the user captured by the second terminal device, through a wired connection or a wireless connection. It can be understood that when the user watches the live video using the second terminal device, the second terminal device can capture an image of the user's face, thereby obtaining the face image of the user. The execution body may then extract the facial key points from the obtained face image by various means. It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra wideband) connections, and other wireless connection means now known or later developed.
Here, the execution body may input the acquired face image into a pre-trained facial key point extraction model to obtain the facial key points in the face image, where the facial key point extraction model can be used to extract facial key points from a face image. The facial key points may include points that outline the contours of the face, eyes, eyebrows, lips, and nose. Of course, the execution body may also extract the facial key points from the face image in other manners, which is not limited here.
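As a purely illustrative sketch (the embodiment does not prescribe a particular model), the key point extraction could be implemented with the open-source dlib library and its separately distributed 68-point landmark model; the model file path below is an assumption:

    # Illustrative sketch only: facial key point extraction with the open-source
    # dlib library. The 68-point landmark model file is distributed separately
    # by dlib; its local path here is an assumption.
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def extract_facial_key_points(image):
        """Return (x, y) key points for the first detected face, or None."""
        faces = detector(image, 1)  # upsample once to catch smaller faces
        if not faces:
            return None
        shape = predictor(image, faces[0])
        return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]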
In some optional implementations of this embodiment, after acquiring the face image of the user captured by the second terminal device, the execution body may match the acquired face image against a preset face image library. Here, the face image library may contain a large number of face images of specific users, where the specific users are users having live video viewing rights. As an example, if the live video is an online education video for a class, the specific users in the face image library may be the learners of that class. If it is determined that the face image library includes the acquired face image, it can be determined that the user has the right to watch the live video, and the execution body may extract the facial key points from the acquired face image. With the scheme disclosed in this implementation, the distraction judgment is performed only for users with live video viewing rights, and no judgment is needed for users without such rights, which reduces the computational load on the execution body.
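A hedged sketch of this permission check follows, assuming the open-source face_recognition library and a precomputed library of encodings for the authorised users (both assumptions, not requirements of the embodiment):

    # Illustrative sketch only: matching the captured face against a preset
    # face image library, assuming the open-source face_recognition library.
    # "library_encodings" is assumed to be precomputed from the face images
    # of users with live video viewing rights (e.g., the learners of a class).
    import face_recognition

    def user_has_viewing_rights(image, library_encodings, tolerance=0.6):
        encodings = face_recognition.face_encodings(image)
        if not encodings:
            return False  # no face found in the captured image
        matches = face_recognition.compare_faces(
            library_encodings, encodings[0], tolerance=tolerance)
        return any(matches)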
In some optional implementations of this embodiment, if it is determined that the acquired face image is not included in the face image library, it may be determined that the user does not have the right to watch the live video, and the execution body may send warning information to the user. The warning information may be used to inform the user that their permissions are insufficient. In live education, this implementation can prevent someone from attending a class in another person's place.
Step 202, determining a relative position of the face of the user and the screen of the second terminal device based on the extracted key points.
In this embodiment, based on the key points extracted in step 201, the execution subject (e.g., the server shown in fig. 1) may determine the relative position of the face and the screen of the second terminal device when the user views the live video on the second terminal device in various manners. For example, the executing body may determine the relative position of the face of the user and the screen of the second terminal device by using the relative positions of the key points and the screen of the second terminal device, or the executing body may determine the face of the user by using the extracted key points and then directly determine the relative position of the face of the user and the screen of the second terminal device.
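As one possible (non-limiting) realisation of this step, the extracted key points can be fed into a standard head-pose estimation: six landmarks are matched against a generic 3D face model with OpenCV's solvePnP. The millimetre coordinates and landmark indices below follow the common 68-point convention and are assumptions for illustration:

    # Illustrative sketch only: estimating the orientation of the user's face
    # relative to the camera (and hence the screen) from six facial key points,
    # using OpenCV's solvePnP and a generic 3D face model. The model coordinates
    # and 68-point landmark indices are conventional assumptions.
    import cv2
    import numpy as np

    MODEL_POINTS = np.array([
        (0.0, 0.0, 0.0),           # nose tip           (landmark 30)
        (0.0, -330.0, -65.0),      # chin               (landmark 8)
        (-225.0, 170.0, -135.0),   # left eye corner    (landmark 36)
        (225.0, 170.0, -135.0),    # right eye corner   (landmark 45)
        (-150.0, -150.0, -125.0),  # left mouth corner  (landmark 48)
        (150.0, -150.0, -125.0),   # right mouth corner (landmark 54)
    ])

    def estimate_head_pose(key_points, frame_width, frame_height):
        """Return the head rotation vector relative to the camera, or None."""
        image_points = np.array(
            [key_points[i] for i in (30, 8, 36, 45, 48, 54)], dtype=np.float64)
        focal = frame_width  # crude focal-length approximation in pixels
        camera_matrix = np.array([[focal, 0, frame_width / 2],
                                  [0, focal, frame_height / 2],
                                  [0, 0, 1]], dtype=np.float64)
        ok, rotation_vec, _ = cv2.solvePnP(
            MODEL_POINTS, image_points, camera_matrix, None)
        return rotation_vec if ok else None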
Step 203, in response to determining that the relative position of the face of the user and the screen of the second terminal device meets the preset condition, sending prompt information to the user.
In this embodiment, based on the relative position of the face of the user and the screen of the second terminal device determined in step 202, the execution body may determine whether this relative position satisfies a preset condition, the preset condition being the basis for judging whether the user is distracted. If the relative position of the face of the user and the screen of the second terminal device meets the preset condition, it can be determined that the user is highly likely to be distracted while watching the live broadcast, and prompt information can be sent to the user in various ways. The prompt information can be used to remind the user that they appear distracted and that the relative position of the face and the second terminal device needs to be adjusted in order to continue watching the live video. It will be appreciated that if the relative position of the face of the user and the screen of the second terminal device does not meet the preset condition, it may be determined that the user is focusing on the live video.
It is to be understood that the above-described preset condition may be any of various conditions for judging the user's distraction. For example, the preset condition may be that the face of the user deviates from the screen of the second terminal device, or that the face of the user is at a specific angle to the screen of the second terminal device. The preset condition can be set by those skilled in the art according to actual requirements and is not limited here.
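For illustration, a minimal sketch of one such preset condition follows; it treats the face as deviating from the screen when the face centre drifts outside a central band of the camera frame, or when no face is detected at all. The 0.30 band is an assumed threshold, not a value prescribed by the embodiment:

    # Illustrative sketch only: one possible preset condition. The face is
    # treated as deviating from the screen when its centre leaves a central
    # band of the frame, or when no face was detected at all. The band of
    # 0.30 is an assumed threshold.
    def meets_preset_condition(key_points, frame_width, frame_height, band=0.30):
        if not key_points:
            return True  # no face in front of the screen at all
        centre_x = sum(p[0] for p in key_points) / len(key_points)
        centre_y = sum(p[1] for p in key_points) / len(key_points)
        off_x = abs(centre_x - frame_width / 2) / frame_width   # in [0, 0.5]
        off_y = abs(centre_y - frame_height / 2) / frame_height
        return off_x > band or off_y > band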
In some optional implementations of this embodiment, after determining that the relative position between the face of the user and the screen of the second terminal device meets the preset condition, the execution body may send the prompt information to the second terminal device used to watch the live video. This implementation enables the user to receive the prompt information directly while watching the live video on the second terminal device, which is more direct and convenient.
In some optional implementations of this embodiment, after determining that the relative position of the face of the user and the screen of the second terminal device meets the preset condition, the execution body may send the prompt information to a third terminal device. The third terminal device may be a terminal device related to the user; for example, it may be the mobile phone of the user watching the live video, or the mobile phone of a parent of that user, and so on. This implementation provides multiple channels for receiving the prompt information, diversifying the ways in which the user can be reached.
According to the method provided by the embodiment of the present application, a face image of the user captured by the second terminal device can be acquired and facial key points extracted from it; the relative position of the face of the user and the screen of the second terminal device can then be determined based on the extracted key points; finally, in response to determining that this relative position meets a preset condition, prompt information can be sent to the user, prompting the user to adjust the relative position of the face and the second terminal device.
With further reference to fig. 3, a flow 300 of yet another embodiment of a method of information transmission in live broadcast is shown. The flow 300 of the information sending method in live broadcast includes the following steps:
Step 301, acquiring a face image of the user captured by the second terminal device, and extracting facial key points from the acquired face image.
Step 302, determining a relative position of the face of the user and the screen of the second terminal device based on the extracted key points.
In this embodiment, the contents of steps 301 to 302 are similar to those of steps 201 to 202 in the above embodiment, and will not be described again.
Step 303, in response to determining that the relative position of the face of the user and the screen of the second terminal device meets the preset condition, acquiring at least one face image acquired by the second terminal device for the user again.
In this embodiment, based on the relative position of the face of the user and the screen of the second terminal device determined in step 302, if it is determined that this relative position meets the preset condition, the execution body may determine that the user watching the live video is likely to be distracted. In order to improve the accuracy of this judgment, the execution body may send an instruction for capturing face images of the user to the second terminal device, so that the second terminal device captures face images of the user again, yielding at least one face image. The re-acquired face images may be used to further determine whether the user is distracted.
Step 304, for a face image in at least one face image, extracting key points from the face image, and determining a relative position of a face of a user in the face image and a screen of the second terminal device based on the extracted key points.
In this embodiment, based on the at least one face image obtained in step 303, the execution body may determine the relative position between the face of the user in each face image and the screen of the second terminal device. Specifically, for any face image among the at least one face image, the execution body may extract key points from that face image and then process the extracted key points in various ways, thereby determining the relative position between the face of the user in that face image and the screen of the second terminal device.
Step 305, in response to determining that the relative positions of the face of the user and the screen of the second terminal device in each face image meet the preset condition, sending prompt information to the user.
In this embodiment, based on the relative positions of the face of the user and the screen of the second terminal device in the face images determined in step 304, the executing body may determine whether the relative positions of the face of the user and the screen of the second terminal device in the face images satisfy the preset condition. If the relative positions of the face of the user and the screen of the second terminal device in each face image meet the preset condition, the execution subject can send prompt information to the user.
It is to be understood that the above-described preset condition may be any of various conditions for judging the user's distraction. For example, the preset condition may be that, in at least a preset number of the face images, the face of the user deviates from the screen of the second terminal device, or that, in at least a preset number of the face images, the face of the user is at a specific angle to the second terminal device. The preset condition can be set by those skilled in the art according to actual requirements and is not limited here.
In some optional implementations of this embodiment, the at least one face image re-acquired by the second terminal device may be face images re-acquired within a preset period of time. The execution body can thus determine whether the user is distracted using the facial key points of the user over the preset period. It will be appreciated that a lapse of attention tends not to last only an instant but a sustained period of time; therefore, using the facial key points of the user over a period of time to determine whether the user is distracted gives greater accuracy.
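Building on the earlier sketches, the sustained check described above might look as follows (again purely illustrative):

    # Illustrative sketch only: the user is treated as distracted only when
    # every face image re-acquired within the preset period meets the preset
    # condition. Reuses the sketch helpers defined above.
    def distracted_over_period(face_images, frame_width, frame_height):
        if not face_images:
            return False
        return all(
            meets_preset_condition(extract_facial_key_points(image),
                                   frame_width, frame_height)
            for image in face_images)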
In some optional implementations of this embodiment, after determining that the relative position of the face of the user and the screen of the second terminal device in each face image meets the preset condition, the execution body may determine that the user is likely to be distracted. In this case, the execution body may send preset information to the second terminal device, where the preset information may be used to prompt the user to click a preset control; alternatively, the preset information may itself include the control to be clicked. The execution body can then receive feedback for the preset control from the second terminal device. It may be understood that the feedback for the preset control may be feedback generated by the user performing the click operation on the second terminal device, or feedback generated when the preset control receives no operation within a preset duration. After receiving the feedback, the execution body can analyse it to determine whether the user clicked the preset control within the preset duration. If the user did not click the preset control within the preset duration, the execution body can determine that the user is distracted, and prompt information can be sent to the user at this point.
In some optional implementations of this embodiment, the "sending prompt information to user" in step 305 may be implemented specifically by the following steps:
step 401, sending a preset question to a second terminal device.
In this implementation manner, after determining that the relative position between the face of the user and the screen of the second terminal device in each face image meets the preset condition, the execution body may determine that the user is likely to be distracted, and may send a preset problem to the second terminal device where the user is located. Here, the preset problem may be used to further determine whether the user is distracted, and the preset problem may be related to the live video content.
In some alternative implementations, the preset problem may be a problem sent in advance by the first terminal device. It can be understood that, in a live online teaching scenario, the preset problem may be a problem that the teacher uploads to the server through the first terminal device where the teacher is located. Of course, the preset problem may also be a built-in problem of the live client system, which is not limited here.
Step 402, receiving feedback of the second terminal device for the preset problem.
In this implementation manner, after the second terminal device where the user is located receives the preset problem sent by the execution body, the second terminal device may generate feedback on the preset problem in various manners and send the generated feedback to the execution body, and the execution body may receive the feedback of the second terminal device for the preset problem. It should be noted that the feedback sent by the second terminal device may be answer information input by the user at the second terminal device for the preset problem, or it may be timeout alarm information generated when the second terminal device receives no input from the user within a preset time, which is not limited here.
Step 403, determining whether the second terminal device receives feedback information for the preset problem input by the user within a preset duration based on the received feedback.
In this implementation manner, after receiving feedback on the preset problem from the second terminal device, the execution body may analyse the received information to determine whether the second terminal device received feedback information for the preset problem, input by the user, within the preset duration. If the second terminal device did not receive such feedback information within the preset duration, it may be determined that the user is distracted, at which point the execution body may continue to execute step 404. If the second terminal device did receive feedback information for the preset problem input by the user within the preset duration, it can be determined that the user is not distracted.
In some optional implementations, if the executing body receives feedback information for the preset problem input by the user within a preset duration, the received feedback information may be sent to the first terminal device. The method provided by the implementation manner can enable the first terminal device to acquire the answer information of the user to the preset questions, and the implementation manner can be suitable for on-site questioning in live broadcasting. As an example, in the context of online education, a teacher may obtain an answer to a preset question from a user through a first terminal device, and thus may check whether the user of a second terminal device listens to a class carefully.
Step 404, in response to determining that feedback information for the preset problem input by the user is not received within the preset duration, sending prompt information to the user.
In this implementation manner, the execution body may send prompt information to the user when it is determined that feedback information for the preset problem input by the user has not been received within the preset duration. The prompt information can be used to remind the user that they appear distracted and that the relative position of the face and the second terminal device needs to be adjusted in order to continue watching the live video.
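A hedged sketch of this question-and-feedback flow (steps 401 to 404) is given below; send_question_to_terminal, wait_for_answer, forward_to_first_terminal and send_prompt are hypothetical placeholders for whatever messaging channel the live broadcast server actually uses:

    # Illustrative sketch only of steps 401-404. All four helper coroutines
    # are hypothetical placeholders for the server's real messaging channel.
    import asyncio

    async def confirm_distraction_with_problem(second_terminal, first_terminal,
                                               problem, timeout_seconds=60):
        await send_question_to_terminal(second_terminal, problem)    # step 401
        try:
            answer = await asyncio.wait_for(                # steps 402 and 403
                wait_for_answer(second_terminal), timeout=timeout_seconds)
        except asyncio.TimeoutError:
            await send_prompt(second_terminal)                       # step 404
            return True   # no feedback within the preset duration: distracted
        await forward_to_first_terminal(first_terminal, answer)  # optional relay
        return False      # an answer arrived: the user is attentive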
According to the scheme provided by this implementation of the embodiment, when the relative position of the face of the user in the face images and the screen of the second terminal device meeting the preset condition suggests that the user may be distracted, the execution body can additionally send a preset problem to the second terminal device where the user is located and further determine, from the feedback on the preset problem, whether the user is distracted, thereby improving the accuracy of the distraction judgment.
As can be seen from FIG. 3, in the flow 300 of the information sending method in live broadcast of this embodiment, after determining from the relative position between the face of the user and the screen of the second terminal device that the user may be distracted, the second terminal device can be used to capture at least one face image again, facial key points can be extracted from the re-acquired face images, and whether the user is distracted can be further determined from the extracted facial key points, thereby improving the accuracy of the distraction judgment.
With further reference to fig. 5, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of an information sending apparatus in live broadcast, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 5, the information transmitting apparatus 500 in live broadcast of the present embodiment may be applied to a server through which at least two terminal devices communicate in live broadcast, wherein the at least two terminal devices include a first terminal device and a second terminal device. The information transmitting apparatus 500 in live broadcast may include: an extraction unit 501, a determination unit 502, a first transmission unit 503. The extracting unit 501 is configured to acquire a face image of a user acquired by the second terminal device, and extract facial key points from the acquired face image; a determining unit 502 configured to determine a relative position of the face of the user and the screen of the second terminal device based on the extracted key points; and a first sending unit 503 configured to send, in response to determining that the relative position of the face of the user and the screen of the second terminal device meets a preset condition, a prompt message to the user, where the prompt message is used to prompt the user to adjust the relative position of the face and the second terminal device.
In some optional implementations of the present embodiment, the extraction unit 501 is further configured to: acquiring face images of a user acquired by second terminal equipment, and matching the acquired face images in a preset face image library; in response to determining that the acquired face image is included in the face image library, facial key points are extracted from the acquired face image.
In some optional implementations of this embodiment, the apparatus 500 further includes: and the second sending unit is configured to send warning information to the user in response to the fact that the acquired face images are not included in the face image library, wherein the warning information is used for prompting that the authority of the user is insufficient.
In some optional implementations of the present embodiment, the first sending unit 503 is further configured to: responding to the fact that the relative position of the face of the user and the screen of the second terminal device meets the preset condition, and acquiring at least one face image acquired by the second terminal device for the user again; for a face image in at least one face image, extracting key points from the face image, and determining the relative position of the face of the user in the face image and the screen of the second terminal device based on the extracted key points; and sending prompt information to the user in response to determining that the relative positions of the face of the user and the screen of the second terminal device in each face image meet preset conditions.
In some optional implementations of the present embodiment, the first transmitting unit 503 includes: the first sending module is configured to send a preset problem to the second terminal equipment; the first receiving module is configured to receive feedback of the second terminal equipment aiming at a preset problem; the determining module is configured to determine whether feedback information aiming at a preset problem, which is input by a user, is received by the second terminal device within a preset duration based on the received feedback; and the second sending module is configured to send prompt information to the user in response to determining that feedback information for the preset problem input by the user is not received within the preset time.
In some optional implementations of this embodiment, the first sending unit 503 further includes: the second receiving module is configured to receive the preset problem sent by the first terminal equipment.
In some optional implementations of this embodiment, the first sending unit 503 further includes: and the third sending module is configured to send the feedback information to the first terminal equipment in response to determining that the feedback information for the preset problem input by the user is received within the preset time.
In some optional implementations of the present embodiment, the first sending unit 503 is further configured to: sending preset information to the second terminal equipment, wherein the preset information comprises voice information and is used for prompting a user to click a preset control; receiving feedback of the second terminal equipment aiming at a preset control; based on the received feedback, determining whether the user clicks a preset control within a preset duration; and sending prompt information to the user in response to determining that the user does not click on the preset control within the preset time.
In some optional implementations of the present embodiment, the first sending unit 503 is further configured to: and sending prompt information to the third terminal equipment.
In some optional implementations of the present embodiment, the first sending unit 503 is further configured to: and sending prompt information to the second terminal equipment.
In some optional implementations of this embodiment, the at least one face image is a face image that is acquired by the second terminal device again in a preset period of time.
The elements recited in apparatus 500 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations and features described above with respect to the method are equally applicable to the apparatus 500 and the units contained therein, and are not described in detail herein.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., server in fig. 1) 600 suitable for use in implementing embodiments of the present disclosure is shown. The server illustrated in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure in any way.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 6 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 601. It should be noted that, the computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a face image of a user acquired by second terminal equipment, and extracting face key points from the acquired face image; determining a relative position of the face of the user and a screen of the second terminal device based on the extracted key points; and sending prompt information to the user in response to determining that the relative position of the face of the user and the screen of the second terminal device meets a preset condition, wherein the prompt information is used for prompting the user to adjust the relative position of the face and the second terminal device.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes an extraction unit, a determination unit, a first transmission unit. The names of these units do not constitute a limitation on the unit itself in some cases, and for example, the extraction unit may also be described as "a unit that acquires a face image of a user acquired by the second terminal device, and extracts facial key points from the acquired face image".
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. An information sending method in live broadcast is applied to a server, at least two terminal devices communicate in a live broadcast mode through the server, wherein the at least two terminal devices comprise a first terminal device and a second terminal device, and the method is characterized by comprising the following steps:
acquiring a face image of a user acquired by the second terminal equipment, and extracting face key points from the acquired face image;
determining a relative position of the face of the user and a screen of the second terminal device based on the extracted key points;
in response to determining that the relative position of the face of the user and the screen of the second terminal device meets a preset condition, sending prompt information to the user, wherein the prompt information is used for prompting the user to adjust the relative position of the face and the second terminal device;
wherein the sending, in response to determining that the relative position of the face of the user and the screen of the second terminal device meets a preset condition, a prompt message to the user includes:
acquiring at least one face image acquired by the second terminal equipment again for the user in response to determining that the relative position of the face of the user and the screen of the second terminal equipment meets a preset condition;
extracting key points from the face image aiming at the face image in the at least one face image, and determining the relative position of the face of the user in the face image and the screen of the second terminal device based on the extracted key points;
and sending prompt information to the user in response to determining that the relative positions of the face of the user and the screen of the second terminal device in each face image meet preset conditions.
2. The method according to claim 1, wherein the acquiring the face image of the user acquired by the second terminal device, and extracting the face key points from the acquired face image, includes:
acquiring face images of a user acquired by the second terminal equipment, and matching the acquired face images in a preset face image library;
and in response to determining that the acquired face image is included in the face image library, extracting facial key points from the acquired face image.
3. The method according to claim 2, wherein the method further comprises:
and sending warning information to the user in response to determining that the acquired face image is not included in the face image library, wherein the warning information is used for prompting that the authority of the user is insufficient.
4. The method of claim 1, wherein the sending the prompt message to the user comprises:
sending a preset problem to the second terminal equipment;
receiving feedback of the second terminal equipment aiming at the preset problem;
based on the received feedback, determining whether the second terminal device receives feedback information, which is input by the user and aims at the preset problem, within a preset duration;
And sending prompt information to the user in response to determining that feedback information, which is input by the user and aims at the preset problem, is not received within a preset time period.
5. The method of claim 4, wherein prior to sending the preset questions to the second terminal device, the method further comprises:
and receiving a preset problem sent by the first terminal equipment.
6. The method of claim 5, wherein the method further comprises:
and responding to the feedback information which is received in a preset time and is input by the user and aims at the preset problem, and sending the feedback information to the first terminal equipment.
7. The method of claim 1, wherein the sending the prompt message to the user comprises:
sending preset information to the second terminal equipment, wherein the preset information comprises voice information and is used for prompting a user to click a preset control;
receiving feedback of the second terminal equipment aiming at the preset control;
based on the received feedback, determining whether the user clicks the preset control within a preset time period;
and sending prompt information to the user in response to determining that the user does not click the preset control within the preset time.
8. The method of claim 1, wherein the sending the prompt message to the user comprises:
and sending prompt information to the third terminal equipment.
9. The method of claim 1, wherein the sending the prompt message to the user comprises:
and sending prompt information to the second terminal equipment.
10. The method of claim 1, wherein the at least one face image is a face image that is re-acquired by the second terminal device within a preset period of time.
11. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, causes the one or more processors to implement the method of any of claims 1-10.
12. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-10.
CN202010272136.3A 2020-04-09 2020-04-09 Information sending method in live broadcast Active CN111414892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010272136.3A CN111414892B (en) 2020-04-09 2020-04-09 Information sending method in live broadcast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010272136.3A CN111414892B (en) 2020-04-09 2020-04-09 Information sending method in live broadcast

Publications (2)

Publication Number Publication Date
CN111414892A CN111414892A (en) 2020-07-14
CN111414892B (en) 2023-05-12

Family

ID=71493511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010272136.3A Active CN111414892B (en) 2020-04-09 2020-04-09 Information sending method in live broadcast

Country Status (1)

Country Link
CN (1) CN111414892B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470131A (en) * 2018-03-27 2018-08-31 百度在线网络技术(北京)有限公司 Method and apparatus for generating prompt message
CN108615159A (en) * 2018-05-03 2018-10-02 百度在线网络技术(北京)有限公司 Access control method and device based on blinkpunkt detection
CN108734084A (en) * 2018-03-21 2018-11-02 百度在线网络技术(北京)有限公司 Face registration method and apparatus
CN109471605A (en) * 2018-11-14 2019-03-15 维沃移动通信有限公司 A kind of information processing method and terminal device
CN110334697A (en) * 2018-08-11 2019-10-15 昆山美卓智能科技有限公司 Intelligent table, monitoring system server and monitoring method with condition monitoring function
WO2019227920A1 (en) * 2018-05-31 2019-12-05 上海掌门科技有限公司 Method and device for pushing information and presenting information

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6515787B2 (en) * 2015-11-02 2019-05-22 富士通株式会社 Virtual desktop program, virtual desktop processing method, and virtual desktop system
KR101961241B1 (en) * 2017-09-07 2019-03-22 라인 가부시키가이샤 Method and system for providing game based on video call and object recognition
US10635893B2 (en) * 2017-10-31 2020-04-28 Baidu Usa Llc Identity authentication method, terminal device, and computer-readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734084A (en) * 2018-03-21 2018-11-02 百度在线网络技术(北京)有限公司 Face registration method and apparatus
CN108470131A (en) * 2018-03-27 2018-08-31 百度在线网络技术(北京)有限公司 Method and apparatus for generating prompt message
CN108615159A (en) * 2018-05-03 2018-10-02 百度在线网络技术(北京)有限公司 Access control method and device based on blinkpunkt detection
WO2019227920A1 (en) * 2018-05-31 2019-12-05 上海掌门科技有限公司 Method and device for pushing information and presenting information
CN110334697A (en) * 2018-08-11 2019-10-15 昆山美卓智能科技有限公司 Intelligent table, monitoring system server and monitoring method with condition monitoring function
CN109471605A (en) * 2018-11-14 2019-03-15 维沃移动通信有限公司 A kind of information processing method and terminal device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
彭辉; 铁菊红. Design of a multimedia live-broadcast classroom software system. Computer Engineering and Design, 2007, (09), full text. *
郭志涛; 韩海净; 孔江浩; 杨革宇; 曹小青. Design of a multifunctional video surveillance system based on an Android mobile terminal. Modern Electronics Technique, 2018, (16), full text. *

Also Published As

Publication number Publication date
CN111414892A (en) 2020-07-14

Similar Documents

Publication Publication Date Title
US9686133B2 (en) Establishment of connection channels between complementary agents
CN110021052B (en) Method and apparatus for generating fundus image generation model
WO2022089192A1 (en) Interaction processing method and apparatus, electronic device, and storage medium
CN110059623B (en) Method and apparatus for generating information
CN113141518B (en) Control method and control device for video frame images in live classroom
WO2024061119A1 (en) Display method and apparatus for session page, and device, readable storage medium and product
CN111862705A (en) Method, device, medium and electronic equipment for prompting live broadcast teaching target
CN110855626B (en) Electronic whiteboard packet loss processing method, system, medium and electronic equipment
CN114363686B (en) Method, device, equipment and medium for publishing multimedia content
CN110008926B (en) Method and device for identifying age
CN112307323B (en) Information pushing method and device
CN111797822B (en) Text object evaluation method and device and electronic equipment
CN113283383A (en) Live broadcast behavior recognition method, device, equipment and readable medium
CN111414892B (en) Information sending method in live broadcast
CN113038197A (en) Grouping method, device, medium and electronic equipment for live broadcast teaching classroom
CN110060477B (en) Method and device for pushing information
CN110751370A (en) Method, device, medium and electronic equipment for managing online experience lessons
CN111787226B (en) Remote teaching method, device, electronic equipment and medium
CN113099254B (en) Online teaching method, system, equipment and storage medium for regional variable resolution
CN112863277B (en) Interaction method, device, medium and electronic equipment for live broadcast teaching
WO2023079370A1 (en) System and method for enhancing quality of a teaching-learning experience
KR20130109510A (en) Telephone learning service providing method by plural mentors managed by automatic generated schedule
CN110851097A (en) Handwriting data consistency control method, device, medium and electronic equipment
CN114038255B (en) Answering system and method
CN112308744A (en) Learning management system, method and device for processing data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant