CN114374876A - Information interaction method, display terminal and storage medium

Publication number
CN114374876A
CN114374876A
Authority
CN
China
Prior art keywords
user
task
time
display
task information
Prior art date
Legal status
Pending
Application number
CN202210026854.1A
Other languages
Chinese (zh)
Inventor
王申博
赵静
王墨涵
辛孟怡
黄昊
穆东磊
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202210026854.1A
Publication of CN114374876A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441: Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415: Acquiring end-user identification using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433: Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334: Recording operations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information interaction method, a display terminal and a storage medium. The method comprises: in response to a preset operation performed by a user for triggering a target task, performing identity verification on the user; if the user passes the identity verification, displaying task information corresponding to the target task and triggering a camera to start video recording; detecting, based on the recorded video pictures, whether the verified user leaves the front of the screen while the task information is displayed, and if so, recording the display time of the task information at each departure and each return; and stopping the video recording after the task information display ends, and determining whether the user has completed the target task based on the display duration of the task information and the display times at each departure and return. In this way, the user completes viewing of the task information through interaction with the display terminal, a more authentic task completion result is obtained, and the functions of the display terminal are effectively enriched.

Description

Information interaction method, display terminal and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an information interaction method, a display terminal, and a storage medium.
Background
Display terminals are now widely used in people's work and daily life, for example for advertising, video learning, and business handling. Taking a bank's intelligent financial terminal as an example, business data can be displayed through the terminal so that users can learn about related financial services on their own, saving manpower and material resources. However, existing display terminals still cannot meet users' growing needs, so further enriching their functions has become an important development direction for display terminals.
Disclosure of Invention
In view of the above problems, the present invention has been made to provide an information interaction method, a display terminal, and a storage medium that overcome or at least partially solve the above problems.
In a first aspect, an embodiment of the present specification provides an information interaction method, which is applied to a display terminal, and the method includes:
responding to the preset operation for triggering the target task executed by the user, and performing identity authentication on the user;
if the verification is passed, displaying task information corresponding to the target task, and triggering a camera to start video recording, wherein the camera is used for shooting a face image of a user in front of a screen;
detecting whether the user leaves the screen in the task information display process or not based on the recorded video picture, and if so, respectively recording the display time of the task information when the user leaves and returns each time;
and stopping the video recording after the display of the task information is finished, and determining whether the user finishes the target task or not based on the display duration of the task information and the display time of the task information when the user leaves and returns each time.
Further, after stopping recording the video, the method further includes:
and generating a certification file for the user to execute the target task based on the recorded video, and storing the certification file into a target folder.
Further, the determining whether the user completes the target task based on the display duration of the task information and the display time of the task information when the user leaves and returns each time includes:
obtaining the online time of the user based on the display time of the task information when the user leaves and returns each time, wherein the online time is used for representing the accumulated time for the user to watch the task information;
obtaining the online rate of the user by comparing the online time with the display time;
and if the online rate is greater than a first preset threshold value, judging that the user completes the target task.
Further, the determining whether the user completes the target task based on the display duration of the task information and the display time of the task information when the user leaves and returns each time includes:
obtaining the off-line time of the user based on the display time of the task information when the user leaves and returns each time, wherein the off-line time is used for representing the accumulated time during which the user is away from the front of the screen in the task information display process;
comparing the offline duration with the display duration to obtain the offline rate of the user;
and if the off-line rate is smaller than a second preset threshold value, judging that the user completes the target task.
Further, after determining whether the user completes the target task, the method further comprises:
displaying a target task completion result of the user;
and if the user does not finish the target task, sending prompt information to related personnel.
Further, if there are a plurality of users, the authenticating the user includes:
sequentially carrying out identity authentication on each user;
the detecting whether the user leaves the screen in the task information display process based on the recorded video pictures comprises the following steps: and respectively detecting whether each user passing identity authentication leaves the screen in the task information display process based on the recorded video picture so as to respectively determine whether each user completes the target task.
Further, the authenticating the user includes:
reading the identity card information of the user through an identity card reader, wherein the identity card information comprises a reference face image;
and acquiring the face image of the user through the camera, matching the face image acquired by the camera with the reference face image, and obtaining and displaying an identity verification result based on the matching result.
Further, the detecting whether the user leaves the screen in the task information display process based on the recorded video pictures includes:
and in the task information display process, judging whether a face image matched with the reference face image of the user exists in the video image, and if not, judging that the user leaves the front of the screen.
Further, the information interaction method further includes:
and responding to the preset operation for triggering the video communication executed by the user, starting the camera and the voice module, and carrying out the video communication with one or more display terminals in the system.
In a second aspect, an embodiment of the present specification provides a display terminal, including a display screen, a camera, and a controller, where the display screen and the camera are both connected to the controller, where:
the controller is configured to: responding to a preset operation for triggering a target task executed by a user, carrying out identity verification on the user, if the verification is passed, controlling the display screen to display task information corresponding to the target task, triggering the camera to start video recording, detecting whether the user leaves the screen in the task information display process or not based on a recorded video picture, and if so, respectively recording the display time of the task information when the user leaves and returns each time; and after the display of the task information is finished, controlling the camera to stop video recording, and determining whether the user finishes the target task or not based on the display duration of the task information and the display time of the task information when the user leaves and returns each time.
In a third aspect, the present specification provides a computer-readable storage medium, which stores computer instructions, and when the computer instructions are executed on a computer, the computer is caused to execute the steps of the information interaction method according to the first aspect.
The technical scheme provided in the embodiment of the specification at least has the following technical effects or advantages:
according to the information interaction method, the display terminal and the storage medium provided by the embodiment of the specification, the user is authenticated by responding to the preset operation for triggering the target task executed by the user, after the user passes the authentication, the task information corresponding to the target task is displayed for the user, the camera is triggered to start video recording while the task information starts to be displayed, then whether the user passing the authentication leaves the screen or not in the task information display process is detected based on the recorded video picture, and if yes, the display time of the task information when the user leaves and returns each time is respectively recorded; and stopping video recording after the display of the task information is finished, and determining whether the user finishes the target task or not based on the display duration of the task information and the display time of the task information when the user leaves and returns each time. Therefore, the user can finish the watching of the task information through the interaction with the display terminal, so that the target task is finished, the display terminal can monitor the task finishing condition of the user, the task finishing quality is improved, a more real task finishing result is obtained, and the functions of the display terminal are effectively enriched.
The above description is only an overview of the technical solutions provided by the embodiments of this specification. In order that the technical means of these embodiments may be understood more clearly and implemented according to the content of the specification, and in order that the above and other objects, features, and advantages of the embodiments may become more apparent, specific embodiments are described in detail below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic structural diagram of an exemplary display terminal provided in a first aspect of an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an exemplary authentication interface in an embodiment of the present description;
fig. 3 is a flowchart of an information interaction method provided in the second aspect of the embodiments of the present specification.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In a first aspect, an embodiment of the present specification provides a structural schematic diagram of a display terminal. As shown in fig. 1, the display terminal includes: a display 101, a camera 102, and a controller (not shown). The display screen 101 and the camera 102 are both connected to the controller. The display screen 101 may be a touch display screen, such as a ten-point capacitive touch screen, for facilitating interaction with a user. The camera 102 may be disposed on a bezel of the display screen 101, such as may be disposed in a middle position of an upper bezel of the display screen 101 with the field of view facing the front of the screen. Therefore, when the user is positioned in front of the screen of the display terminal, the face image of the user can be shot.
In this embodiment, the display terminal may provide the user with a function of executing a target task. The user can complete the target task by interacting with the display terminal. Specifically, the target task may be a course learning task, a video viewing task, or another task that requires the user to view specified information. For example, the course learning task may require a trainee participating in course training to study a video course on the display terminal; the video viewing task may require a user who is about to sign a specific agreement to watch a specified declaration video on the display terminal before signing, so as to show that the user is aware of the matters related to the agreement.
Specifically, the controller is used for responding to preset operation of a user for triggering a target task, authenticating the user, controlling the display screen 101 to display task information corresponding to the target task if the user passes the authentication, triggering the camera 102 to start video recording, detecting whether the user leaves the screen in the task information display process based on a recorded video picture, and recording the display time of the task information when the user leaves and returns each time if the user leaves and returns each time; after the display of the task information is finished, the camera 102 is controlled to stop video recording, and whether the user finishes the target task is determined based on the display duration of the task information and the display time of the task information when the user leaves and returns each time. Therefore, the user can finish the watching of the task information through the interaction with the display terminal, so that the target task is finished, the display terminal can monitor the task finishing condition of the user, the task finishing quality is improved, a more real task finishing result is obtained, and the functions of the display terminal are effectively enriched.
For example, the preset operation for triggering the target task may be: clicking a task execution button displayed in the screen of the display screen 101; inputting a voice password corresponding to the target task; or triggering a sliding gesture and the like corresponding to the target task, which can be configured according to the needs of the actual application scene. For example, in an application scenario, a user may click a task execution button displayed on a screen to start authentication, and after the authentication passes, click an open button to trigger the display screen 101 to display task information corresponding to a target task. Taking a course learning task as an example, the task information may be a learning video.
In an alternative embodiment, the display terminal may provide the user with two task execution modes: a single-person mode and a multi-person mode. The multi-person mode follows a target-task execution flow similar to that of the single-person mode; the difference is that identity verification is performed on each of the multiple users, and the display time at which each user leaves and returns to the front of the screen during the task information presentation is recorded separately, so as to determine separately whether each user has completed the target task. In this way, multiple users can execute the target task on the same terminal at the same time, which improves task execution efficiency.
For example, the controller may determine the execution mode of the target task in response to a mode selection instruction triggered by the user; if the execution mode is the multi-person mode, identity verification is performed on each user in turn; if the verification passes, the display screen 101 is controlled to display the task information corresponding to the target task and the camera 102 is triggered to start video recording; then, based on the recorded video pictures, it is detected separately whether each verified user leaves the front of the screen during the task information presentation, and the display time of the task information at each departure and return is recorded separately for each user; after the task information presentation ends, the video recording is stopped, and whether each user has completed the target task is determined based on the display duration of the task information and the display times at each user's departures and returns.
For example, after a user clicks the task execution button displayed on the screen, a mode selection interface is entered, on which a single-person mode button and a multi-person mode button are displayed. The user can click one of the buttons, according to the number of people actually executing the task, to trigger the mode selection instruction, and identity verification then begins.
Alternatively, the mode selection need not be performed manually by the user: when multiple users execute the target task on the same display terminal at the same time, they can perform identity verification in turn, and the terminal enters the multi-person mode when the number of verified users is greater than or equal to 2.
In specific implementation, the authentication may be performed in various ways, for example, the user may input a user name and a password to perform the authentication, or the biometric feature acquisition module may be integrated in the display terminal to perform the authentication in a biometric feature recognition manner, for example, the biometric feature may be a fingerprint feature or a face feature.
In an alternative embodiment, as shown in fig. 1, the display terminal may further include an identification card reader 103, and the identification card reader 103 is connected to the controller. That is to say, the identity card reader 103 is integrated in the display terminal, and the specific position can be set according to actual needs, for example, the identity card reader 103 can be integrated at the lower frame of the display screen 101, so that the user can directly swipe the identity card in the card reading area corresponding to the identity card reader 103 on the display terminal, thereby implementing identity verification, facilitating the user operation, and further enriching the functions of the display terminal. At this time, the controller is further configured to control the identity card reader 103 to read the identity card information of the user, where the identity card information includes a reference face image, and control the camera 102 to acquire a real-time face image of the user, match the acquired real-time face image with the reference face image, and obtain and display an authentication result on the display screen 101 based on the matching result.
Still taking the course learning task as an example, a training trainee can enter an identity recognition interface by clicking the course learning button displayed on the screen; an identity recognition button and an exit button are displayed on this interface. Clicking the identity recognition button opens the identity verification interface shown in fig. 2. After the student swipes the identity card in the card reading area corresponding to the identity card reader, the student stands in front of the screen and the camera 102 captures the student's face image on site; the reference face image (identity card picture) read from the identity card and the face image (on-site picture) captured by the camera 102 are displayed on the identity verification interface. In the background, the face image captured by the camera 102 is matched against the reference face image read from the identity card through AI face detection and face recognition technology to obtain a face feature comparison result, and the comparison result is displayed on the screen. For example, if the matching succeeds, "comparison result: pass" is displayed; if the matching fails, "comparison result: fail" is displayed.
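Purely as an illustration of such a comparison, the following minimal sketch matches the ID-card reference image against one frame captured by the terminal camera using the open-source face_recognition library; the function name, camera index and tolerance value are assumptions for the example and do not reflect the actual implementation of this embodiment.

import cv2
import face_recognition

def verify_identity(reference_image_path, camera_index=0, tolerance=0.5):
    """Return True if the live camera face matches the ID-card reference face."""
    reference = face_recognition.load_image_file(reference_image_path)
    reference_encodings = face_recognition.face_encodings(reference)
    if not reference_encodings:
        return False  # no face found in the ID-card picture

    capture = cv2.VideoCapture(camera_index)
    grabbed, frame = capture.read()
    capture.release()
    if not grabbed:
        return False  # camera frame could not be captured

    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR, the library expects RGB
    live_encodings = face_recognition.face_encodings(rgb_frame)
    if not live_encodings:
        return False  # nobody detected in front of the screen

    result = face_recognition.compare_faces([reference_encodings[0]],
                                            live_encodings[0],
                                            tolerance=tolerance)
    return bool(result[0])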
After the comparison passes, the student can click the start-course button to enter the corresponding course interface and play the course learning video; at the same time, the built-in camera 102 is turned on and starts recording a video of the student learning in front of the display terminal screen. It should be noted that, in the above-mentioned multi-person mode, the students can swipe their cards in turn for face recognition, and the start-course button is clicked only after all participating students have passed identity verification.
Considering that, in the process of displaying the task information, if the user leaves the front of the screen, the task information displayed during the period of absence is not viewed by the user, that period should not be counted toward the user's task completion. Therefore, in this embodiment, whether the user who passed identity verification leaves the front of the screen is detected based on the video pictures recorded in real time, and the specific detection process may include: in the process of displaying the task information, judging whether a face image matching the reference face image of the user exists in the video picture, and if not, judging that the user has left the front of the screen. It should be noted that, when identity verification is performed by the above-mentioned identity card recognition method, the reference face image can be obtained and stored by reading the identity card information. When identity verification is performed in a manner other than swiping the identity card, the reference face image can be collected in advance and stored locally or in the cloud before the user starts to execute the target task.
In the video recording process, the face images in the recorded pictures are recognized in real time and matched against the reference face images of the users who passed identity verification. If the face image of one or more users is missing from the recorded picture, that user has left the front of the display terminal screen, so the current display time TLi of the task information is recorded, where i denotes the i-th departure; if the user leaves several times during the task information presentation, i ranges from 1 to N, where N is the total number of times the user leaves midway. Recording and detection continue, and when the missing user's face image is recognized again in the recorded picture, indicating that the user has returned to the front of the screen, the current display time TFi of the task information is recorded, and so on, until the task information presentation ends and the video recording stops. For example, if the initial presentation time of the task information is taken as time zero and the presentation duration is T, the presentation ends at time T, and the display times recorded when the user leaves and returns during the presentation all lie between 0 and T.
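A minimal sketch of this bookkeeping is given below, assuming a per-frame face-matching predicate is already available; the data structure and function names are illustrative assumptions rather than part of the disclosure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PresenceLog:
    leave_times: List[float] = field(default_factory=list)   # TL1 .. TLN
    return_times: List[float] = field(default_factory=list)  # TF1 .. TFN
    present: bool = True  # the user starts in front of the screen

def update_presence(log, user_in_frame, play_time):
    """Record the current presentation time whenever the user's presence flips."""
    if log.present and not user_in_frame:
        log.leave_times.append(play_time)   # the user just left: record TLi
        log.present = False
    elif not log.present and user_in_frame:
        log.return_times.append(play_time)  # the user came back: record TFi
        log.present = True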
Still taking the course learning task as an example, assuming that the total duration of the course learning video is 1 hour, the course learning video starts to be played from zero time. In the single-person mode, when the student leaves and returns 3 times halfway, the play time TL1 when the student leaves for the first time, the play time TF1 when the student returns for the first time, the play time TL2 when the student leaves for the second time, the play time TF2 when the student returns for the second time, the play time TL3 when the student leaves for the third time, and the play time TF3 when the student returns for the third time are recorded.
In the multi-person mode, it is necessary to separately identify whether each trainee who passed identity verification leaves the screen, and to separately record the playing time at each trainee's departures and returns. For example, suppose 5 students pass identity verification and watch the course learning video together. If no student is missing from the recorded pictures in detections 1 through j-1, and student A and student B are detected to be missing in the j-th detection, the current playing time of the course learning video is recorded as the display time of the 1st departure of student A and of student B, denoted TL_A1 and TL_B1 respectively. If, from detection j+1 to detection j+m, none of the other three students is detected to have left and students A and B are not detected to have returned, and student A is detected to have returned at detection j+m+1, the current playing time of the course learning video is recorded as the display time TF_A1 of student A's 1st return. If student B is then detected to have returned at detection j+m+3, the current playing time of the course learning video is recorded as the display time TF_B1 of student B's 1st return. From this point on, until the task information presentation ends, no student is detected to have left.
Then, in the above multi-person mode, student A is detected to have left once, with display time TL_A1 at departure and TF_A1 at return; student B is detected to have left once, with display time TL_B1 at departure and TF_B1 at return; the remaining three students C, D, and E never left.
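The same bookkeeping extends to the multi-person mode by keeping one presence log per verified user; the sketch below reuses update_presence from the earlier example, and face_matches stands in for the per-user face comparison (both names are assumptions).

def update_all_users(logs, reference_encodings, frame, play_time):
    """Update every verified user's presence log for one sampled frame."""
    for user_id, reference_encoding in reference_encodings.items():
        in_frame = face_matches(frame, reference_encoding)  # hypothetical per-user matcher
        update_presence(logs[user_id], in_frame, play_time)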
In addition, in order to reduce the amount of calculation, in an optional implementation manner, in the video recording process, the detection process may be performed once every preset time interval or every preset video frame interval, and the display time when the user leaves and returns to the screen is recorded in time, so as to obtain a more accurate target task completion result. For example, the preset time period may be 2 seconds, 5 seconds, or the like; the preset video frame number may be 5 frames or 10 frames, and may be specifically configured according to the needs of the actual application scene.
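As a rough sketch of this sampling strategy (again reusing update_presence from the earlier example; is_user_in_frame and the 5-frame interval are illustrative assumptions):

import cv2

def scan_recorded_video(video_path, log, is_user_in_frame, frame_interval=5):
    """Run the presence check only on every frame_interval-th frame of the recording."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if the container reports 0
    frame_index = 0
    while True:
        grabbed, frame = capture.read()
        if not grabbed:
            break
        if frame_index % frame_interval == 0:
            play_time = frame_index / fps  # seconds into the presentation
            update_presence(log, is_user_in_frame(frame), play_time)
        frame_index += 1
    capture.release()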
In the task information display process, the recorded display time of the user when the user leaves and returns each time can be used as a criterion for judging the completion condition of the target task by the user, and is used for determining whether the user completes the target task. It should be noted that, in the multi-user mode, it is necessary to separately determine whether to complete the target task for each user who passes the identity verification.
Specifically, a reference parameter for measuring the completion of the target task by the user may be obtained according to the display duration of the task information and the display time of the task information when the user leaves and returns each time, and then the reference parameter is compared with a reference threshold set correspondingly to determine whether the user completes the target task. The display duration of the task information is preset in the process of configuring the target task. Of course, if the user is not detected to leave in the task information display process, the user is indicated to be online in the whole process, and the user is judged to complete the target task.
For example, the reference parameter may be the actual online rate of the user during the task information presentation. The online rate refers to the ratio of the accumulated time during which the user actually viewed the task information displayed on the screen to the display duration. In this case, the online duration of the user can be obtained based on the display times of the task information at each of the user's departures and returns, where the online duration represents the accumulated time during which the user viewed the task information; the online rate of the user is obtained as the ratio of the online duration to the display duration. If the online rate is greater than a first preset threshold, the user is judged to have completed the target task; if the online rate is less than or equal to the first preset threshold, the user is judged not to have completed the target task.
The first preset threshold may be configured according to the actual requirement for completion of the target task, and may be set to 50% or 80%, for example. For example, if the initial presentation time of the task information is time zero, the presentation ends at time T, and the display times of the task information when the user leaves and returns are recorded as TL1, TL2, …, TLN and TF1, TF2, …, TFN respectively, then the online duration H1 can be expressed by the formula: H1 = TL1 + (TL2 - TF1) + … + (T - TFN).
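For illustration only, the online duration and online rate can be computed directly from the recorded departure and return times, as in the following sketch, which assumes (as in the formula above) that every departure has a matching return.

def online_rate(leave_times, return_times, total_duration):
    """Online rate: share of the presentation the user actually watched."""
    watched = 0.0
    previous_return = 0.0                         # viewing starts at time zero
    for tl, tf in zip(leave_times, return_times):
        watched += tl - previous_return           # segment watched before this departure
        previous_return = tf
    watched += total_duration - previous_return   # final segment up to time T
    return watched / total_duration

# Example: T = 3600 s, one departure at 600 s and one return at 900 s
# -> online duration H1 = 3300 s, online rate about 0.92 (above an 80% threshold).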
For another example, the reference parameter may be the offline rate of the user during the task information presentation. The offline rate refers to the ratio of the accumulated time during which the user is away from the front of the screen during the task information presentation to the display duration. In this case, the offline duration of the user can be obtained based on the display times of the task information at each of the user's departures and returns, where the offline duration represents the accumulated time during which the user is away from the front of the screen during the task information presentation; the offline rate of the user is obtained as the ratio of the offline duration to the display duration. If the offline rate is less than a second preset threshold, the user is judged to have completed the target task; if the offline rate is greater than or equal to the second preset threshold, the user is judged not to have completed the target task.
The second preset threshold may be configured according to the actual requirement for completion of the target task, and may be set to 50% or 20%, for example. Similarly, the offline duration H2 can be expressed by the formula: H2 = (TF1 - TL1) + (TF2 - TL2) + … + (TFN - TLN).
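A companion sketch for the offline duration is given below; note that H1 + H2 = T, so the offline rate is simply one minus the online rate.

def offline_rate(leave_times, return_times, total_duration):
    """Offline rate: share of the presentation during which the user was away."""
    away = sum(tf - tl for tl, tf in zip(leave_times, return_times))
    return away / total_duration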
Still taking the course learning task as an example, if the user is judged to complete the target task, the course examination is passed, and if the target task is not completed, the course examination is not passed.
Further, after determining whether the user has completed the target task, in order to let the user know the task completion status in time, the target task completion result of the user may be displayed on the display screen 101. For example, the displayed content may include: the user ID, the task ID, and the completion result, such as pass or fail. Alternatively, on this basis, for users who did not complete the target task, more detailed information such as the display time at each departure and return and the online rate or offline rate may also be displayed, which can be set according to actual needs and is not limited here.
For a user judged not to have completed the target task, the display terminal can also send prompt information to relevant personnel to urge the user to complete the task again. For example, the prompt information may include: the user ID, the task ID, the completion result, and the like, which can be set according to actual needs and is not limited here. Still taking the course learning task as an example, the prompt information can be sent to the manager of the relevant course, so that the manager can urge the students judged not to have completed the task, i.e., those who failed the assessment, to retake the course.
For example, a short message sending module (not shown in the figure), such as a SIM (Subscriber Identity Module) card or another suitable functional module, may also be integrated in the display terminal, so that the prompt information can be sent to the relevant personnel by short message.
In addition, in an optional implementation manner, after the task information presentation ends, the controller may further generate a certification file of the user's execution of the target task based on the recorded video, and store the certification file in a target folder. For example, the certification file may be in MP4 format, and the file name may include the user ID, the task ID, the completion date, and the like. The target folder is preset, and the generated certification file can be stored in it by configuring the storage path of the certification file in advance. The stored certificate can serve as evidence that the user completed the target task, so that relevant personnel can spot-check it at any time.
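A minimal sketch of such archiving is given below; the directory path and the exact file-name pattern are assumptions made for the example.

import shutil
from datetime import date
from pathlib import Path

def archive_certificate(recorded_video, user_id, task_id,
                        target_folder="/data/task_certificates"):
    """Name the recording after the user, task and date, then move it to the target folder."""
    Path(target_folder).mkdir(parents=True, exist_ok=True)
    file_name = "{}_{}_{}.mp4".format(user_id, task_id, date.today().strftime("%Y%m%d"))
    destination = Path(target_folder) / file_name
    shutil.move(recorded_video, str(destination))  # move the MP4 into the archive
    return destination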
Of course, in other embodiments of this specification, in order to reduce the occupation of the storage space, the recorded video may also be deleted directly, or the recorded video may be deleted after the storage duration of the target folder reaches a specified duration.
It should be noted that the trigger condition for generating the certification file and determining whether the user completes the target task may be set according to the requirement of the actual application scenario. For example, the process of generating the certification document and determining whether the user completes the target task may be triggered when it is monitored that the task information display is finished. For another example, after the presentation of the task information is finished, a course ending interface may be popped up, and when it is detected that the user clicks an ending button in the course ending interface, the above processes of generating the certification document and determining whether the user completes the target task may be triggered.
Further, the display terminal can also provide a video communication function for the user, so that users executing the target task on different display terminals can communicate with each other conveniently. In this case, as shown in fig. 1, the display terminal further includes a voice module 104 and a communication module (not shown). The voice module 104 and the communication module are both connected to the controller, and the communication module is used to establish communication connections with other display terminals in the system. For example, the voice module 104 may include an array microphone, which may be disposed on two sides of the lower frame of the display screen 101 to collect sound signals; the communication module may include a suitable communication function module such as a WiFi module or a 2G/3G/4G/5G module. The system is a task management system, and the other display terminals in the system are the display terminals, other than the one at which the current user is located, that have the task management system installed and are online.
On this basis, the controller responds to the preset operation performed by the user for triggering video communication by starting the camera 102 and the voice module 104 and carrying out video communication with one or more display terminals in the system. For example, when display terminals are deployed at multiple branch sites, users executing the target task at different sites can carry out video communication directly through the display terminals without other communication equipment, which facilitates user operation and further enriches the functions of the display terminal. For example, the preset operation for triggering video communication may be: the user clicking a video communication button displayed on the screen; inputting a voice password corresponding to video communication; or triggering a sliding gesture corresponding to video communication, which can be configured according to the needs of the actual application scenario.
Furthermore, in order to facilitate user operation, the display terminal may further include an infrared sensing module (not shown in the figure) connected to the controller. The controller can thereby detect, through the infrared sensing module, whether a user enters a preset area in front of the screen; if so, the screen is woken up and the task start interface is displayed, so as to detect whether the user performs, on the task start interface, the preset operation for triggering the target task or the preset operation for triggering video communication.
For example, a user may enter a task operation interface from an entry provided on a task start interface, the task operation interface is provided with the task execution button and the video communication button, and the user may trigger a target task by clicking the task execution button corresponding to the target task, or may trigger video communication with other users by clicking the video communication button. Of course, other operation buttons may also be provided on the task operation interface, which is not limited in this embodiment.
In a second aspect, an embodiment of the present specification provides an information interaction method, which is applied to a display terminal. It should be noted that the functional modules related to the method, such as the camera, the id card reader, the voice module, and the like, may be integrated in the display terminal or may be arranged independently from the display terminal, which is not limited in this embodiment. As shown in fig. 3, the information interaction method may include:
step S301, responding to the preset operation of the user for triggering the target task, and performing identity authentication on the user;
step S302, if the verification is passed, displaying task information corresponding to the target task, and triggering a camera to start video recording, wherein the camera is used for shooting a face image of a user in front of a screen;
step S303, detecting whether the user leaves the screen in the task information display process based on the recorded video picture, if so, respectively recording the display time of the task information when the user leaves and returns each time;
step S304, after the display of the task information is finished, stopping video recording, and determining whether the user completes the target task or not based on the display duration of the task information and the display time of the task information when the user leaves and returns each time.
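Purely as an illustration of how steps S301 to S304 fit together, the sketch below reuses the helper functions sketched in the first aspect (verify_identity, PresenceLog, update_presence, online_rate); play_and_record and is_user_in_frame are hypothetical stand-ins for the playback and face-matching steps, and all names are assumptions rather than the actual implementation.

def run_target_task(task_video, reference_image, total_duration, threshold=0.8):
    """Top-level flow of steps S301-S304 (illustrative only)."""
    if not verify_identity(reference_image):                      # S301: authenticate the user
        return False
    log = PresenceLog()                                           # S302: presentation and recording start
    for frame, play_time in play_and_record(task_video):          # hypothetical playback/recording loop
        update_presence(log, is_user_in_frame(frame), play_time)  # S303: track departures and returns
    return online_rate(log.leave_times, log.return_times,         # S304: completion decision
                       total_duration) > threshold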
It should be noted that, for the specific implementation process of the step S301 to the step S304, reference may be made to the relevant description in the first aspect, and details are not repeated here.
In an optional embodiment, after stopping video recording, the information interaction method may further include: and generating a certification file for the user to execute the target task based on the recorded video, and storing the certification file in a target folder. The specific implementation process may refer to the related description in the first aspect, and is not described herein again.
In an optional implementation manner, the determining whether the user completes the target task based on the display duration of the task information and the display time of the task information each time the user leaves and returns may include: the method comprises the steps that on the basis of the display time of task information when a user leaves and returns each time, the online time of the user is obtained, and the online time is used for representing the accumulated time for the user to watch the task information; comparing the on-line time with the display time to obtain the on-line rate of the user; and if the online rate is greater than a first preset threshold value, judging that the user completes the target task. The specific implementation process may refer to the related description in the first aspect, and is not described herein again.
In an optional implementation manner, the determining whether the user completes the target task based on the display duration of the task information and the display time of the task information each time the user leaves and returns may include: obtaining the offline duration of the user based on the display time of the task information when the user leaves and returns each time, where the offline duration represents the accumulated time during which the user is away from the front of the screen in the task information display process; obtaining the offline rate of the user from the ratio of the offline duration to the display duration; and if the offline rate is smaller than a second preset threshold, judging that the user completes the target task. The specific implementation process may refer to the related description in the first aspect, and is not described herein again.
In an optional implementation manner, after determining whether the user completes the target task, the information interaction method may further include: displaying a target task completion result of the user; and if the user does not finish the target task, sending prompt information to related personnel. The specific implementation process may refer to the related description in the first aspect, and is not described herein again.
In an optional implementation manner, if there are multiple users, the process of authenticating the user may include: and sequentially carrying out identity verification on each user. Accordingly, the process of detecting whether the user leaves the screen in the task information presentation process based on the recorded video pictures may include: and respectively detecting whether each user passing the identity authentication leaves the screen in the process of displaying the task information based on the recorded video pictures so as to respectively determine whether each user completes the target task. The specific implementation process may refer to the related description in the first aspect, and is not described herein again.
In an optional implementation manner, the process of authenticating the user may include: reading the identity card information of a user through an identity card reader, wherein the identity card information comprises a reference face image; the face image of the user is collected through the camera, the face image collected by the camera is matched with the reference face image, and the identity verification result is obtained and displayed based on the matching result. The specific implementation process may refer to the related description in the first aspect, and is not described herein again.
In an optional embodiment, the process of detecting whether the user leaves the screen based on the recorded video picture may include: and in the process of displaying the task information, judging whether a face image matched with the reference face image of the user exists in the video image, and if not, judging that the user leaves the front of the screen. The specific implementation process may refer to the related description in the first aspect, and is not described herein again.
In an optional implementation manner, the information interaction method may further include: and responding to the preset operation of the user for triggering the video communication, starting the camera and the voice module, and carrying out the video communication with one or more display terminals in the system. The specific implementation process may refer to the related description in the first aspect, and is not described herein again.
With the information interaction method provided by the embodiments of this specification, the user completes viewing of the task information through interaction with the display terminal, thereby completing the target task; by detecting whether the user leaves the screen and recording the display time at each departure and return, the user's task completion can be monitored, which improves task completion quality, yields a more authentic task completion result, and effectively enriches the functions of the display terminal.
In a third aspect, based on the same inventive concept, an embodiment of the present specification further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer instruction, and when the computer instruction runs on a computer, the computer executes each process of the information interaction method embodiment provided in the second aspect, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element. The term "plurality" means two or more.
While preferred embodiments of the present specification have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all changes and modifications that fall within the scope of the specification.

Claims (11)

1. An information interaction method is applied to a display terminal, and the method comprises the following steps:
responding to the preset operation for triggering the target task executed by the user, and performing identity authentication on the user;
if the verification is passed, displaying task information corresponding to the target task, and triggering a camera to start video recording, wherein the camera is used for shooting a face image of a user in front of a screen;
detecting whether the user leaves the screen in the task information display process or not based on the recorded video picture, and if so, respectively recording the display time of the task information when the user leaves and returns each time;
and stopping the video recording after the display of the task information is finished, and determining whether the user finishes the target task or not based on the display duration of the task information and the display time of the task information when the user leaves and returns each time.
2. The method of claim 1, wherein after stopping the video recording, further comprising:
and generating a certification file for the user to execute the target task based on the recorded video, and storing the certification file into a target folder.
3. The method of claim 1, wherein determining whether the user completes the target task based on a presentation duration of the task information and a presentation time of the task information each time the user leaves and returns comprises:
obtaining the online time of the user based on the display time of the task information when the user leaves and returns each time, wherein the online time is used for representing the accumulated time for the user to watch the task information;
obtaining the online rate of the user by comparing the online time with the display time;
and if the online rate is greater than a first preset threshold value, judging that the user completes the target task.
4. The method of claim 1, wherein determining whether the user completes the target task based on a presentation duration of the task information and a presentation time of the task information each time the user leaves and returns comprises:
obtaining the off-line time of the user based on the display time of the task information when the user leaves and returns each time, wherein the off-line time is used for representing the accumulated time during which the user is away from the front of the screen in the task information display process;
comparing the offline duration with the display duration to obtain the offline rate of the user;
and if the off-line rate is smaller than a second preset threshold value, judging that the user completes the target task.
5. The method of claim 1, after determining whether the user completed the target task, further comprising:
displaying a target task completion result of the user;
and if the user does not finish the target task, sending prompt information to related personnel.
6. The method of claim 1, wherein if there are a plurality of users, said authenticating the user comprises:
sequentially carrying out identity authentication on each user;
the detecting whether the user leaves the screen in the task information display process based on the recorded video pictures comprises the following steps: and respectively detecting whether each user passing identity authentication leaves the screen in the task information display process based on the recorded video picture so as to respectively determine whether each user completes the target task.
7. The method of claim 1, wherein performing identity verification on the user comprises:
reading identity card information of the user through an identity card reader, wherein the identity card information comprises a reference face image; and
capturing a face image of the user through the camera, matching the face image captured by the camera against the reference face image, and obtaining and displaying an identity verification result based on the matching result.
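A sketch of the verification step in claim 7. The patent does not name any face-matching library; the open-source `face_recognition` package is used here purely as a stand-in, and `card_reader.read()` / `camera.capture()` are hypothetical interfaces returning RGB image arrays.

```python
import face_recognition  # stand-in library; not specified by the patent

def verify_against_id_card(card_reader, camera):
    """Hypothetical sketch of claim 7: match the live camera image against the
    reference face image read from the user's identity card."""
    id_info = card_reader.read()                               # includes the reference face image
    ref_encs = face_recognition.face_encodings(id_info["face_image"])
    live_encs = face_recognition.face_encodings(camera.capture())
    if not ref_encs or not live_encs:
        return False                                           # no face found in one of the images
    matched = face_recognition.compare_faces([ref_encs[0]], live_encs[0])[0]
    return bool(matched)                                       # result is then shown on the display
```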
8. The method of claim 7, wherein detecting, based on the recorded video frames, whether the user leaves the front of the screen during the display of the task information comprises:
determining, during the display of the task information, whether a face image matching the user's reference face image is present in the video frames, and if not, determining that the user has left the front of the screen.
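A matching sketch for claim 8, under the same stand-in library assumption: the user counts as present if any face in the current video frame matches the reference encoding derived from the identity card image.

```python
import face_recognition  # same stand-in assumption as the claim-7 sketch

def user_in_frame(ref_encoding, frame):
    """Hypothetical sketch of claim 8: absence of any matching face in the
    frame is interpreted as the user having left the front of the screen.
    Here the per-user handle from the earlier sketches is assumed to be the
    128-d reference face encoding."""
    for enc in face_recognition.face_encodings(frame):
        if face_recognition.compare_faces([ref_encoding], enc)[0]:
            return True
    return False
```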
9. The method of claim 1, further comprising:
in response to a preset operation performed by the user to trigger video communication, starting the camera and a voice module, and conducting video communication with one or more other display terminals in the system.
10. A display terminal, comprising a display screen, a camera, and a controller, wherein the display screen and the camera are both connected to the controller, and wherein:
the controller is configured to: in response to a preset operation performed by a user to trigger a target task, perform identity verification on the user; if the verification passes, control the display screen to display task information corresponding to the target task and trigger the camera to start video recording; detect, based on the recorded video frames, whether the user leaves the front of the screen during the display of the task information, and if so, record the display time of the task information at each departure and each return of the user; and after the display of the task information ends, control the camera to stop video recording and determine whether the user has completed the target task based on the display duration of the task information and the display times of the task information at each departure and return of the user.
11. A computer-readable storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the steps of the information interaction method of any one of claims 1-9.
CN202210026854.1A 2022-01-11 2022-01-11 Information interaction method, display terminal and storage medium Pending CN114374876A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210026854.1A CN114374876A (en) 2022-01-11 2022-01-11 Information interaction method, display terminal and storage medium

Publications (1)

Publication Number Publication Date
CN114374876A true CN114374876A (en) 2022-04-19

Family

ID=81143500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210026854.1A Pending CN114374876A (en) 2022-01-11 2022-01-11 Information interaction method, display terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114374876A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105007520A (en) * 2015-07-13 2015-10-28 华勤通讯技术有限公司 Television program recording method, recording equipment and recording system
CN107566898A (en) * 2017-09-18 2018-01-09 广东小天才科技有限公司 Video playing control method and device and terminal equipment
CN110333774A (en) * 2019-03-20 2019-10-15 中国科学院自动化研究所 A kind of remote user's attention appraisal procedure and system based on multi-modal interaction
CN112752153A (en) * 2020-04-30 2021-05-04 腾讯科技(深圳)有限公司 Video playing processing method, intelligent device and storage medium
KR102308313B1 (en) * 2021-03-17 2021-10-06 주식회사 여심서울 Method and system for providing video contents by recognizing biometric information

Similar Documents

Publication Publication Date Title
CN111079113A (en) Teaching system with artificial intelligent control and use method thereof
US20180308107A1 (en) Living-body detection based anti-cheating online research method, device and system
CN109635772A (en) Dictation content correcting method and electronic equipment
CN111611865B (en) Examination cheating behavior identification method, electronic equipment and storage medium
CN106302330A (en) Auth method, device and system
CN106210836A (en) Interactive learning method and device in video playing process and terminal equipment
CN116051115A (en) Face-brushing payment prompting method, device and equipment
CA2782071A1 (en) Liveness detection
CN104835266A (en) Business handling method and system of VTM
RU2673010C1 (en) Method for monitoring behavior of user during their interaction with content and system for its implementation
CN105844247A (en) Bi-camera cabinet machine and face recognition and second-generation ID card identification system
CN112633189A (en) Method and device for preventing examination cheating, electronic equipment and computer readable medium
CN105184267A (en) Face-identification-based secondary-deformation auxiliary authorization method
CN109240786A (en) Theme changing method and electronic equipment
CN112087603A (en) Intelligent examination room supervision method
CN110476180A (en) User terminal for providing the method for the reward type advertising service based on text reading and for carrying out this method
CN112399239A (en) Video playing method and device
CN106504001A (en) Method of payment and device in a kind of VR environment
CN112055257B (en) Video classroom interaction method, device, equipment and storage medium
CN110400119A (en) Interview method, apparatus, computer equipment and storage medium based on artificial intelligence
CN114363547A (en) Double-recording device and double-recording interaction control method
CN107390864B (en) Network investigation method based on eyeball trajectory tracking, electronic equipment and storage medium
CN111275874B (en) Information display method, device and equipment based on face detection and storage medium
Agulla et al. Multimodal biometrics-based student attendance measurement in learning management systems
CN106131052B (en) A kind of multi-source information identity identifying method towards actual mechanical process recruitment evaluation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination