WO2022206731A1 - Reminder method and apparatus for remote teaching, and head-mounted display device - Google Patents

Reminder method and apparatus for remote teaching, and head-mounted display device

Info

Publication number
WO2022206731A1
WO2022206731A1 (application PCT/CN2022/083584)
Authority
WO
WIPO (PCT)
Prior art keywords
image
video
eyeball
gaze position
area
Prior art date
Application number
PCT/CN2022/083584
Other languages
English (en)
French (fr)
Inventor
于东壮
Original Assignee
青岛小鸟看看科技有限公司
Priority date
Filing date
Publication date
Application filed by 青岛小鸟看看科技有限公司
Publication of WO2022206731A1 publication Critical patent/WO2022206731A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/19 - Sensors therefor
    • G06V 40/193 - Preprocessing; Feature extraction
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/20 - Education
    • G06Q 50/205 - Education administration or guidance
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 - Electrically-operated educational appliances
    • G09B 5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/065 - Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the embodiments of the present disclosure relate to the technical field of remote teaching, and more particularly, the embodiments of the present disclosure relate to a reminder method, an apparatus, and a head-mounted display device for remote teaching.
  • Virtual Reality (VR for short) technology is widely used in remote teaching, remote conference and other fields.
  • Compared with traditional devices, virtual reality equipment has advantages in clarity and immersion, so the effect of using virtual reality equipment for remote teaching is better.
  • However, when a user wears a head-mounted display device, others cannot observe whether the user is paying attention to the class, and cannot remind the user in time when the user is distracted, which affects the effect of remote teaching.
  • The purpose of the embodiments of the present disclosure is to provide a reminder method for remote teaching, which can solve the problem that the user cannot be reminded in time when distracted while using a virtual reality device for remote teaching, which affects the effect of remote teaching.
  • a reminder method for remote teaching which is applied to a head-mounted display device, where the head-mounted display device displays a first video, and the method includes:
  • the key region is an image region whose area ratio exceeds a preset ratio threshold in the image
  • a reminder message is sent to the user.
  • The reminding includes: when the gaze position of the eyeball in the image is not within the motion area of the image, and the gaze position of the eyeball in the image is not within the key area of the image, a reminder message is sent to the user.
  • the method further includes:
  • the first video is a video of the first type
  • setting the first detection duration as a first value
  • setting the second detection duration as a second value
  • the first detection duration is set to a third value
  • the second detection duration is set to a fourth value
  • the third value is greater than the first value, and the fourth value is less than the second value.
  • the method further includes:
  • the first video is a video of the second type.
  • the method further includes:
  • a reminder message is sent to the user.
  • After the reminder message is issued to the user, the method further includes:
  • a reminder device for remote teaching which is applied to a head-mounted display device, where the head-mounted display device displays a first video, and the device includes:
  • a determination module configured to determine, according to the first video, a motion area and a key area of an image in the first video, where the key area is an image area whose area ratio exceeds a preset ratio threshold in the image;
  • a first acquisition module used for acquiring the gaze position of the eyeball in the image
  • a reminder module configured to issue reminder information to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image.
  • a head-mounted display device displays a first video
  • the head-mounted display device includes:
  • the processor is configured to execute the reminder method for distance teaching according to the first aspect of the present disclosure according to the control of the executable computer program.
  • a computer-readable storage medium stores a computer program that can be read and executed by a computer, and the computer program is used to cause the computer to execute the reminder method for remote teaching according to the first aspect of the present disclosure.
  • In the embodiments of the present disclosure, the gaze position of the eyeball in the image is obtained, and the motion area and key area of each frame of image in the first video are determined; then, according to the positional relationship between the gaze position of the eyeball in the image and the motion area and key area of each frame of image, it can be determined whether the user's line of sight moves with the motion area and the key area of the image, so that it can be observed whether the user's attention is concentrated. When the gaze position of the eyeball in the image is in neither area, a reminder message is sent to the user in time, thereby improving the teaching effect of remote teaching.
  • FIG. 1 is a schematic diagram of a hardware configuration of a head-mounted display device that can be used to implement an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a reminder method for remote teaching according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a positional relationship between an eyeball and a head-mounted display device according to an embodiment of the disclosure
  • FIG. 4 is a schematic diagram of a hardware structure of a reminder device for remote teaching according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a hardware structure of a head-mounted display device according to an embodiment of the disclosure.
  • FIG. 1 is a block diagram of a hardware configuration of a head-mounted display device 1000 provided by an embodiment of the present application.
  • the head-mounted display device 1000 can be applied to remote teaching, remote conferences, and the like.
  • As shown in FIG. 1, the head-mounted display device 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, an audio device 1700, an inertial measurement unit 1800, a camera 1900, and the like.
  • the processor 1100 may be, for example, a central processing unit CPU, a microprocessor MCU, or the like.
  • the memory 1200 includes, for example, a ROM (Read Only Memory), a RAM (Random Access Memory), a nonvolatile memory such as a hard disk, and the like.
  • the interface device 1300 includes, for example, a serial bus interface (including a USB interface), a parallel bus interface, a high-definition multimedia interface HDMI interface, and the like.
  • the communication device 1400 is capable of, for example, wired or wireless communication.
  • the display device 1500 is, for example, a liquid crystal display, an LED display, a touch display, or the like.
  • the input device 1600 includes, for example, a touch screen, a keyboard, a somatosensory input, and the like.
  • the audio device 1700 may be used to input/output voice information.
  • the inertial measurement unit 1800 may be used to measure the pose change of the head mounted display device 1000 .
  • the camera 1900 can be used to acquire image information.
  • the head-mounted display device 1000 may be, for example, a VR (Virtual Reality, Virtual Reality) device, an AR (Augmented Reality, Augmented Reality) device, an MR (Mixed Reality, Mixed Reality) device, etc., which are not limited in this embodiment of the present disclosure.
  • the memory 1200 of the head-mounted display device 1000 is used to store a computer program, and the computer program is used to control the processor 1100 to operate to implement the reminder method for distance teaching according to any embodiment.
  • a skilled person can design computer programs according to the solutions disclosed in this specification. How the computer program controls the processor 1100 to operate is well known in the art, so it will not be described in detail here.
  • The head-mounted display device 1000 of the embodiment of the present disclosure may involve only some of the above devices, for example only the processor 1100 and the memory 1200, and may also include other devices.
  • the electronic device shown in FIG. 1 is illustrative only and is in no way intended to limit the present disclosure, its application, or uses.
  • FIG. 2 shows a reminder method for remote teaching according to one embodiment, and the reminder method can be implemented by, for example, the head-mounted display device 1000 shown in FIG. 1 .
  • the head-mounted display device 1000 of this embodiment displays the first video.
  • the reminder method for remote teaching may include the following steps S2100-S2300.
  • Step S2100 Determine, according to the first video, a motion area and a key area of an image in the first video, where the key area is an image area whose area ratio exceeds a preset ratio threshold in the image.
  • the first video may be a video presented by a head mounted display device.
  • the first video may be, for example, an instructional video.
  • the first video may be a complete video.
  • the first video may be a video clipped from the complete video.
  • the first video may be a video of a certain duration.
  • For example, a video with a duration of 1 h may be divided into six segments, and each 10 min segment is used as a first video.
  • selecting the first video of a certain duration for detection can improve the detection accuracy, so as to accurately understand the user's learning effect.
  • The motion area may be the part of each frame of the first video in which the scene changes between frames.
  • the area in which the teaching content is displayed in the video is the motion area.
  • the first video includes multiple frames of images. Determining the motion region of the image in the first video according to the first video may be determining the motion region of each frame of images according to multiple frames of the first video. During specific implementation, the first video is decoded to obtain multiple frames of images, and the motion area of each frame of images is detected based on the ViBe algorithm. It should be noted that the motion region of each frame of image may also be detected according to other algorithms in the prior art, which is not limited in this embodiment of the present disclosure.
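The disclosure names the ViBe algorithm for this step; ViBe itself is not reproduced here. As a hedged illustration of detecting a per-frame motion region, the sketch below uses simple frame differencing between consecutive grayscale frames; the difference threshold and the synthetic frames are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def motion_region(prev_frame, frame, diff_threshold=25):
    """Boolean mask of pixels that changed between two consecutive
    grayscale frames (a simple stand-in for ViBe)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > diff_threshold

def motion_bounding_box(mask):
    """Bounding box (top, left, bottom, right) of the moving pixels,
    or None if nothing moved."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1

# Two synthetic 8x8 grayscale frames: a bright 2x2 block moves right.
prev = np.zeros((8, 8), dtype=np.uint8)
prev[2:4, 1:3] = 200
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:4, 4:6] = 200

print(motion_bounding_box(motion_region(prev, curr)))  # (2, 1, 4, 6)
```

The reported box covers the union of the block's old and new positions, since both sets of pixels differ between the frames.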
  • the key area may be an image area whose area ratio exceeds a preset ratio threshold in the image of the first video.
  • the key area may be an image area with the largest area in the image of the first video.
  • the blackboard area in the video is the focus area.
  • the area where the product in the video is located is the key area.
  • the first video includes multiple frames of images. Determining the key region of the image in the first video according to the first video may be determining the key region of each frame of images according to multiple frames of the first video.
  • During specific implementation, the first video is decoded to obtain multiple frames of images, edge detection is performed on each frame based on the Canny edge detection algorithm to obtain multiple image areas, and the number of pixels in each image area is then counted; according to the number of pixels in each image area, the area value corresponding to each image area is obtained, and the area with the largest area is regarded as the key area of the image.
  • the key area of each frame of image may also be detected according to other algorithms in the prior art, which is not limited in this embodiment of the present disclosure.
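The key-area rule above (an image region whose area ratio exceeds a preset ratio threshold, preferring the largest) can be sketched as follows, assuming the edge-detection and region-segmentation steps have already produced a per-pixel label map; the label map, the threshold value, and the "blackboard" interpretation are illustrative assumptions.

```python
import numpy as np

def key_region(labels, ratio_threshold=0.3):
    """Given a per-pixel region label map, return the label of the region
    whose area ratio exceeds the threshold, preferring the largest one;
    None if no region qualifies."""
    total = labels.size
    ids, counts = np.unique(labels, return_counts=True)
    order = np.argsort(counts)[::-1]        # largest region first
    for i in order:
        if counts[i] / total > ratio_threshold:
            return int(ids[i])
    return None

# Synthetic 6x6 label map: region 1 (say, the blackboard) covers 20 of 36 pixels.
labels = np.zeros((6, 6), dtype=int)
labels[1:5, 1:6] = 1                         # 4x5 = 20 pixels
print(key_region(labels))                    # 1
```

Counting pixels per label and comparing against the total is exactly the "area value from pixel count" computation described above.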
  • motion regions of each frame of images in the first video may be the same or different.
  • the key regions of each frame of images in the first video may be the same or different.
  • Step S2200 acquiring the gaze position of the eyeball in the image.
  • the gaze position of the eyeball in the image may include the gaze position of the user's left eye in the image and the gaze position of the user's right eye in the image.
  • the gaze position of the eyeball in the image can be determined according to the eyeball tracking data.
  • Eye tracking data may be obtained from a head mounted display device. More specifically, the eye-tracking data may be collected by an eye-tracking module provided on the head-mounted display device.
  • the gaze position of the eyeball in the image is a two-dimensional coordinate within the image.
  • The eye-tracking data may be a three-dimensional vector with a single eye (left eye or right eye) as the origin, and the three-dimensional vector reflects the direction of the user's line of sight. The intersection of the user's line of sight with the screen of the head-mounted display device is calculated to obtain the gaze position of the eyeball in the image.
  • FIG. 3 is a schematic diagram of the positional relationship between the eyeball and the head-mounted display device.
  • the screen width of the head-mounted display device is w
  • the screen height is h
  • the distance between the user's eyeball and the screen is d
  • the user's interpupillary distance is L.
  • the screen coordinate system XOY is established.
  • the coordinate R1 of the left eye is (h/2, w/2-L/2, d)
  • The tracking vector of the left eye, that is, the line of sight of the left eye, is the vector RA passing through R1.
  • The intersection of the vector RA passing through R1 with the screen plane XOY is calculated, and the intersection point is the gaze position of the left eye in the image.
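The geometry above (left eye at (h/2, w/2 - L/2, d), screen lying in the XOY plane) reduces to a ray-plane intersection. The sketch below carries it out; the screen dimensions, interpupillary distance, and gaze vector used in the example are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def gaze_on_screen(eye, direction):
    """Intersect a gaze ray (origin `eye`, direction `direction`, both 3D)
    with the screen plane z = 0; return the (x, y) hit point, or None if
    the ray is parallel to or points away from the screen."""
    eye = np.asarray(eye, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if direction[2] == 0:
        return None
    t = -eye[2] / direction[2]
    if t <= 0:
        return None                       # looking away from the screen
    hit = eye + t * direction
    return float(hit[0]), float(hit[1])

# Hypothetical numbers: screen h = 0.06 m, w = 0.10 m, interpupillary
# distance L = 0.063 m, eye-screen distance d = 0.04 m.
h, w, L, d = 0.06, 0.10, 0.063, 0.04
left_eye = (h / 2, w / 2 - L / 2, d)
# Left eye looking straight at the screen hits it near (0.03, 0.0185).
print(gaze_on_screen(left_eye, (0.0, 0.0, -1.0)))
```

Solving eye_z + t * dir_z = 0 for t and substituting back into the ray is the same intersection computation described for vector RA and plane XOY.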
  • Step S2300 in the case that the gaze position of the eyeball in the image is not within the motion area of the image, and the gaze position of the eyeball in the image is not within the key area of the image, send a reminder message to the user.
  • the gaze position of the eyeball in the image is not within the motion area of the image, which may be that the gaze position of the eyeball in a certain frame image in the first video is not within the motion area of the corresponding frame image.
  • During specific implementation, the gaze position of the eyeball in the current frame image is compared with the position of the motion area of the current frame image to determine whether the gaze position of the eyeball in the current frame image is within the motion area of the current frame image.
  • the gaze position of the eyeball in the image is not within the key area of the image, which may be that the gaze position of the eyeball in a certain frame image in the first video is not within the key area of the corresponding frame image.
  • During specific implementation, the gaze position of the eyeball in the current frame image is compared with the position of the key area of the current frame image to determine whether the gaze position of the eyeball in the current frame image is within the key area of the current frame image.
  • That the gaze position of the eyeball in the image is neither within the motion area nor within the key area of the image may mean that, during playback of the first video, the gaze position of the eyeball in a certain frame image is in neither the motion area nor the key area of the corresponding frame image.
  • In this case, the user's line of sight neither moves with the motion area of the image nor falls within the key area of the image; that is, the user's line of sight coincides with neither the motion area nor the key area of the image. At this time, it is considered that the user is not attentive, and a reminder message is sent to the user.
  • The reminder information may include at least one of voice prompt information, pop-up text prompt information, vibration prompt information, and image prompt information. For example, short beeps are played through the headset; a reminder message such as "pay attention to listening" is displayed on the screen of the head-mounted display device; the handle of the head-mounted display device is controlled to vibrate; or the hue of the image displayed on the screen of the head-mounted display device is changed momentarily. In this embodiment, when the gaze position of the eyeball in the image is within neither the motion area nor the key area of the image, reminding the user through such interactive means can improve the reminder effect.
  • In this way, the gaze position of the eyeball in the image is obtained, and the motion area and key area of each frame of image in the first video are determined; then, according to the positional relationship between the gaze position of the eyeball in the image and the motion area and key area of each frame of image, it can be determined whether the user's line of sight moves with the motion area and the key area of the image, so that it can be observed whether the user's attention is concentrated. When the gaze position of the eyeball in the image is in neither area, a reminder message is sent to the user in time, thereby improving the teaching effect of remote teaching.
  • The step of sending the reminder information to the user may further include steps S3100-S3300.
  • Step S3100 during the process of playing the first video, determine whether the gaze position of the eyeball in the image is always located within the motion area of the image within the first detection duration.
  • Through the first detection duration, it can be determined whether the user's line of sight follows the motion area of the image over a period of time. It should be noted that the first detection duration can be set by those skilled in the art according to the actual situation.
  • A flag bit may be used to record whether the gaze position of the eyeball in the image is within the motion area of the image. That is to say, timing starts from the playback start time of the first video, and when the playback duration of the first video reaches the first detection duration, the flag bit is updated according to the positional relationship between the gaze position of the eyeball in the image and the motion area of the image during this period. Based on this, whether the gaze position of the eyeball in the image was always within the motion area of the image within the first detection duration can be determined by reading the value of the flag bit.
  • Within the first detection duration, if the gaze position of the eyeball in the image is always within the motion area of the image, the flag bit is set to "0"; if the gaze position of the eyeball in the image leaves the motion area of the image, the flag bit is set to "1".
  • The playback start time of the first video may also be a time selected by the user. For example, if the user starts watching the first video from 1 min 20 s, that time is the playback start time.
  • Step S3200 during the process of playing the first video, determine whether the gaze position of the eyeball in the image is always located in the key area of the image within the second detection duration.
  • Through the second detection duration, it can be determined whether the user's line of sight stays within the key area of the image over a period of time; that is, it can be determined whether the user's gaze is abnormal over that period, for example whether the eyes are closed.
  • the second detection duration can be set by those skilled in the art according to the actual situation.
  • Likewise, the playback start time of the first video may be a time selected by the user; for example, if the user starts watching the first video from 1 min 20 s, that time is the playback start time.
  • Step S3300 If, within the first detection duration, the gaze position of the eyeball in the image is not within the motion area of the image, and, within the second detection duration, the gaze position of the eyeball in the image is not within the key area of the image, a reminder message is sent to the user.
  • the total duration of the first video is 5min
  • the first detection duration is 30s
  • the second detection duration is 60s.
  • the timing starts from the playback start time of the first video.
  • When the playback duration of the first video first reaches 30 s, if the gaze position of the eyeball in the image was always within the motion area of the image during those 30 s, the flag bit is set to "0"; timing then restarts, and when the playback duration reaches 30 s again, if the gaze position of the eyeball in the image left the motion area of the image during those 30 s, the flag bit is updated to "1".
  • This detection process is repeated until the playback of the first video of 5 minutes ends or the user pauses the playback of the first video.
  • Similarly, timing starts from the playback start time of the first video: when the playback duration first reaches 60 s, if the gaze position of the eyeball in the image was always within the key area of the image during those 60 s, the flag bit is set to "0"; timing then restarts, and when the playback duration reaches 60 s again, if the gaze position of the eyeball in the image left the key area of the image during those 60 s, the flag bit is updated to "1". This detection process is repeated until the 5 min first video finishes playing or the user pauses playback.
  • In this way, it can be detected both whether the gaze position of the eyeball in the image leaves the motion area of the image within the first detection duration and whether the gaze position of the eyeball in the image leaves the key area of the image within the second detection duration.
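The flag-bit bookkeeping of steps S3100-S3300 can be sketched with per-frame booleans (True meaning the gaze was inside the corresponding area for that frame); the window lengths and frame sequences below are illustrative assumptions.

```python
def window_flags(in_area, window):
    """Split a per-frame boolean sequence into consecutive windows of
    `window` frames; per window, flag 0 if the gaze stayed inside the
    area throughout, flag 1 otherwise."""
    flags = []
    for start in range(0, len(in_area), window):
        chunk = in_area[start:start + window]
        flags.append(0 if all(chunk) else 1)
    return flags

def should_remind(in_motion, in_key, motion_window, key_window):
    """Remind when some motion-area window and some key-area window
    both fail (flag 1), mirroring steps S3100-S3300."""
    return 1 in window_flags(in_motion, motion_window) and \
           1 in window_flags(in_key, key_window)

# 12 frames, motion window 4, key window 6: the gaze leaves the motion
# area in frames 4-5 and the key area in frame 10.
in_motion = [True] * 4 + [False, False] + [True] * 6
in_key = [True] * 10 + [False, True]
print(should_remind(in_motion, in_key, 4, 6))  # True
```

Requiring a failure in both kinds of window matches the conjunction in step S3300: neither following the motion area nor dwelling in the key area.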
  • the first detection duration and the second detection duration may be determined according to the video type of the first video.
  • the first video may be divided into two categories, the first type of video may be a video with more scene changes, and the second type of video may be a video with less scene changes.
  • For the first type of video, since the video scene changes more, that is, the motion area of the image changes more, a shorter first detection duration can be set, and a longer second detection duration can be set.
  • For the second type of video, since the video scene changes less, that is, the motion area of the image changes less, a longer first detection duration can be set, and a shorter second detection duration can be set.
  • determining the first detection duration and the second detection duration according to the video type of the first video may further include steps S4100-S4200.
  • Step S4100 in the case that the first video is a video of the first type, set the first detection duration as a first value, and set the second detection duration as a second value.
  • Step S4200 in the case that the first video is a second type of video, set the first detection duration as a third value, and set the second detection duration as a fourth value.
  • the third value is greater than the first value, and the fourth value is less than the second value.
  • For example, for the first type of video, the first detection duration is set to 30 s and the second detection duration is set to 60 s; for the second type of video, the first detection duration is set to 60 s and the second detection duration is set to 40 s.
  • Since the first detection duration and the second detection duration are determined according to the video type, the method can be applied to different types of videos, and detecting whether the user's attention is focused according to the determined first detection duration and second detection duration can further improve the detection accuracy.
  • the video can be divided, and the first detection duration and the second detection duration can be determined according to the video type of each video, which can improve the detection accuracy and further improve the teaching effect of distance teaching.
  • the reminding method for remote teaching may further include: steps S5100-S5400.
  • Step S5100 Acquire the number of pixels included in the motion region of each frame of image in the first video.
  • Step S5200 Determine the number of frames whose motion-region pixel count is greater than the first threshold.
  • the first threshold can reflect the size of the area occupied by the motion area in each frame of image, and the larger the area occupied by the motion area in the image, the more changes in the frame image compared to the previous frame image. It should be noted that, the first threshold can be set by those skilled in the art according to the actual situation.
  • Step S5300 when the number of frames is greater than a second threshold, determine that the first video is a video of the first type.
  • Step S5400 when the number of frames is less than or equal to a second threshold, determine that the first video is a second type of video.
  • the video type of the first video can be determined.
  • If the number of frames whose motion-area pixel count is greater than the first threshold is greater than the second threshold, it means that many frames in the first video change greatly compared with the previous frame, that is, the scene of the first video changes a lot.
  • If that number of frames is less than or equal to the second threshold, it means that only a few frames in the first video change greatly compared with the previous frame, that is, the scene of the first video changes little.
  • the second threshold can be set by those skilled in the art according to the actual situation.
  • During specific implementation, the first threshold is set to f1, and a counter is incremented by 1 for each frame whose motion-area pixel count is greater than f1. If the count exceeds n/3, where n is the number of frames in the first video, it is determined that the current video is a video with many moving areas, that is, the first type of video; otherwise, it is a video with many static areas, that is, the second type of video.
  • In this way, the video type of the first video is determined according to the number of pixels included in the motion area of each frame of image in the first video, and the first detection duration and the second detection duration are then determined based on the video type, which can improve the detection accuracy so as to remind the user accurately and in a timely manner.
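The f1 / n/3 classification rule and the per-type durations can be sketched together; the per-frame pixel counts below are illustrative, and the concrete durations are the 30 s/60 s and 60 s/40 s pairs from the example above.

```python
def classify_video(motion_pixel_counts, f1):
    """Classify a video as type 1 (many scene changes) or type 2, per the
    rule above: count frames whose motion-area pixel count exceeds f1,
    and compare that count against n/3 (n = total frames)."""
    n = len(motion_pixel_counts)
    busy_frames = sum(1 for c in motion_pixel_counts if c > f1)
    return 1 if busy_frames > n / 3 else 2

def detection_durations(video_type):
    """Example (first, second) detection durations in seconds per video
    type, mirroring steps S4100-S4200: type 1 gets the shorter first
    duration and the longer second duration."""
    return (30, 60) if video_type == 1 else (60, 40)

# Hypothetical per-frame motion pixel counts: 3 of 6 frames exceed f1=100,
# and 3 > 6/3, so the video is classified as type 1.
counts = [500, 80, 700, 650, 90, 40]
vtype = classify_video(counts, f1=100)
print(vtype, detection_durations(vtype))  # 1 (30, 60)
```

Here the second threshold of steps S5300-S5400 is taken to be n/3, the concrete value given in the worked example above.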
  • the reminder method for remote teaching may further include: steps S6100-S6200.
  • Step S6100 acquiring the gaze position of the eyeball in the head-mounted display device.
  • Step S6200 if the gaze position of the eyeball in the head-mounted display device is not within the display area of the head-mounted display device within the first preset time period, a reminder message is sent to the user.
  • the first preset duration can be set by those skilled in the art according to the actual situation.
  • the first preset duration is 1 min.
  • After the user wears the head-mounted display device, if the gaze position of the user's eyeballs is not within the display area of the head-mounted display device within the first preset duration, that is, the user's gaze is not on the screen, a reminder message is sent to the user to indicate that the video is about to start playing and to prompt the user to adjust the wearing state of the head-mounted display device in time.
  • the reminding method for remote teaching may further include: acquiring the user's gaze state; if the user's gaze state is in an abnormal state within a first preset time period, sending a reminder message to the user.
  • the abnormal state may be, for example, a closed-eye state.
  • That the user's gaze state is abnormal within the first preset duration may mean that the user's eyes are closed within the first preset duration; in this case, a reminder message is sent to the user to indicate that the video is about to start playing and to remind the user to pay attention to the display area of the head-mounted display device.
  • a video segment at the corresponding moment may be recorded, so as to facilitate the user to review the corresponding video later.
  • the reminding method for remote teaching may further include: steps S7100-S7200.
  • Step S7100: if the gaze position of the eyeball in the image is not within the motion area of the image, acquiring the corresponding first playback moment.
  • For example, when the first video has played to 10 min 25 s and the gaze position of the eyeball in the image is detected outside the motion area of the image, that moment is recorded; that is, the first playback moment is 10 min 25 s.
  • Step S7200: acquiring and storing a second video according to the first playback moment and the second preset duration, where the second video is a partial video segment of the first video.
  • the second preset duration can be set by those skilled in the art as required.
  • the second preset duration is 1 min.
  • The second preset duration may be the longer of the first detection duration and the second detection duration.
  • the second video may include a video segment with a second preset duration before the first playback moment and a video segment with a second preset duration after the first playback moment.
  • For example, if the first playback moment is 10 min 25 s and the second preset duration is 1 min, the second video is the segment from 9 min 25 s to 11 min 25 s.
  • the second video may be named with the first playing time as an index, and the second video may be stored.
  • the head-mounted display device presents a video review interface, which displays multiple video segments, each video segment is named after the first playback moment, and the user can search for the video segment to be rewatched according to the playback moment.
  • the corresponding first playback moment is recorded, and the corresponding video segment is intercepted and stored according to the first playback moment, which can facilitate the user to view the corresponding video segment. It does not require the user to re-watch the entire video, which can improve teaching efficiency and facilitate the use of users.
  • the user can search for the video segment that needs to be re-watched through the first playback moment, which is faster and can further improve the teaching efficiency.
  • the reminding method for remote teaching may further include: steps S8100-S8200.
  • Step S8100: if the gaze position of the eyeball in the image is not within the key area of the image, acquiring the corresponding second playback moment.
  • For example, when the first video has played to 12 min 30 s and the gaze position of the eyeball in the image is detected outside the key area of the image, that moment is recorded; that is, the second playback moment is 12 min 30 s.
  • Step S8200: acquiring and storing a third video according to the second playback moment and the third preset duration, where the third video is a partial video segment of the first video.
  • the third preset duration can be set by those skilled in the art as required.
  • the third preset duration is 1 min.
  • The third preset duration may be the longer of the first detection duration and the second detection duration.
  • The third video may include a video segment of the third preset duration before the second playback moment and a video segment of the third preset duration after the second playback moment.
  • For example, if the second playback moment is 12 min 30 s and the third preset duration is 1 min, the third video is the segment from 11 min 30 s to 13 min 30 s.
  • the third video may be named with the second playing time as an index, and the third video may be stored.
  • the head-mounted display device presents a video review interface, which displays multiple video segments, each video segment is named after the second playback time, and the user can search for the video segment to be rewatched according to the playback time.
  • the corresponding second playback time is recorded, and the corresponding video segment is intercepted and stored according to the second playback time, which can facilitate the user to view the corresponding video segment. It does not require the user to re-watch the entire video, which can improve teaching efficiency and facilitate the use of users.
  • the user can search for the video segment that needs to be re-watched through the second playback moment, which is faster and can further improve the teaching efficiency.
  • the reminder device for remote teaching 400 may include a determination module 410 , a first acquisition module 420 and a reminder module 430 .
  • The determining module 410 is configured to determine, according to the first video, a motion area and a key area of an image in the first video, where the key area is an image area whose proportion of the image exceeds a preset proportion threshold.
  • The first acquiring module 420 is configured to acquire the gaze position of the eyeball in the image.
  • The reminding module 430 is configured to send a reminder message to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image.
  • The reminding module 430 may include: a first determining unit, configured to determine, during playback of the first video, whether the gaze position of the eyeball in the image stays within the motion area of the image throughout the first detection duration; a second determining unit, configured to determine, during playback of the first video, whether the gaze position of the eyeball in the image stays within the key area of the image throughout the second detection duration; and a reminding unit, configured to send a reminder message to the user if, within the first detection duration, the gaze position of the eyeball in the image falls outside the motion area of the image and, within the second detection duration, the gaze position of the eyeball in the image falls outside the key area of the image.
  • The reminding apparatus 400 for remote teaching may further include: a first setting module, configured to set, when the first video is a video of the first type, the first detection duration to a first value and the second detection duration to a second value; and a second setting module, configured to set, when the first video is a video of the second type, the first detection duration to a third value and the second detection duration to a fourth value; where the third value is greater than the first value, and the fourth value is less than the second value.
  • The reminding apparatus 400 for remote teaching may further include: a second acquiring module, configured to acquire the number of pixels included in the motion area of each frame of the first video; a frame-number determining module, configured to determine the number of frames whose pixel count is greater than a first threshold; a first-type determining module, configured to determine that the first video is a video of the first type when the number of frames is greater than a second threshold; and a second-type determining module, configured to determine that the first video is a video of the second type when the number of frames is less than or equal to the second threshold.
  • The reminding apparatus 400 for remote teaching may further include a third acquiring module, configured to acquire the gaze position of the eyeball in the head-mounted display device; the reminding module is further configured to send a reminder message to the user if the gaze position of the eyeball in the head-mounted display device is not within the display area of the head-mounted display device within the first preset time period.
  • The reminding apparatus 400 for remote teaching may further include: a first playback moment determining module, configured to acquire a corresponding first playback moment if the gaze position of the eyeball in the image is not within the motion area of the image; and a first video acquiring module, configured to acquire and store a second video according to the first playback moment and the second preset duration, where the second video is a partial video segment of the first video.
  • The reminding apparatus 400 for remote teaching may further include: a second playback moment determining module, configured to acquire a corresponding second playback moment if the gaze position of the eyeball in the image is not within the key area of the image; and a second video acquiring module, configured to acquire and store a third video according to the second playback moment and the third preset duration, where the third video is a partial video segment of the first video.
  • According to this embodiment, the gaze position of the eyeball in the image is acquired and the motion area and key area of each frame of the first video are determined. From the positional relationship between the gaze position of the eyeball in the image and the motion area and key area of each frame of the first video, it can be determined whether the user's line of sight follows the motion area and key area of the image, and thus whether the user's attention is focused. When the gaze position of the eyeball in the image is in neither the motion area nor the key area of the image, a reminder message is sent so that the user is reminded in time, thereby improving the teaching effect of remote teaching.
  • FIG. 5 is a schematic diagram of a hardware structure of a head-mounted display device according to an embodiment.
  • the head mounted display device 500 includes a memory 520 and a processor 510 .
  • Memory 520 is used to store executable instructions.
  • The processor 510 is configured to execute, under the control of the executable computer program, the reminder method for remote teaching according to the method embodiments of the present application.
  • the head-mounted display device 500 may be a VR device, an AR device, an MR device, etc., which is not limited in this embodiment of the present application.
  • The head-mounted display device 500 may be the head-mounted display device 1000 shown in FIG. 1, or may be a device with another hardware structure, which is not limited here.
  • the head-mounted display device 500 may include the above reminder device 400 for remote teaching.
  • each module of the above reminder device 400 for remote teaching may be implemented by the processor 510 running computer instructions stored in the memory 520 .
  • A computer-readable storage medium stores a computer program that can be read and executed by a computer; when read and executed by the computer, the computer program performs the reminder method for remote teaching according to any of the above method embodiments of the present application.
  • the present application may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present application.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for carrying out the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" language or similar programming languages.
  • The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • Electronic circuitry, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized using state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions to implement various aspects of the present application.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment to cause a series of operational steps to be performed thereon so as to produce a computer-implemented process, such that the instructions executing on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions that comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented by dedicated hardware-based systems that perform the specified functions or acts, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.


Abstract

The embodiments of the present disclosure disclose a reminder method and apparatus for remote teaching, and a head-mounted display device, including: determining, according to a first video, a motion area and a key area of an image in the first video, where the key area is an image area whose proportion of the image exceeds a preset proportion threshold; acquiring the gaze position of the eyeball in the image; and sending a reminder message to the user when the gaze position of the eyeball in the image is neither within the motion area of the image nor within the key area of the image.

Description

Reminder method and apparatus for remote teaching, and head-mounted display device
This application claims priority to Chinese patent application No. 202110362546.1, entitled "Reminder method and apparatus for remote teaching, and head-mounted display device" and filed on April 2, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present disclosure relate to the technical field of remote teaching and, more particularly, to a reminder method and apparatus for remote teaching and a head-mounted display device.
Background Art
At present, virtual reality (VR) technology is widely used in fields such as remote teaching and remote conferencing. Compared with existing remote teaching systems, virtual reality devices have advantages in clarity and immersion, and remote teaching with virtual reality devices is more effective. However, while a user attends remote teaching through a virtual reality device, other people cannot observe how the user is following the class and cannot remind the user in time when the user's attention wanders, which affects the effect of remote teaching.
Therefore, it is necessary to propose a reminder method for remote teaching.
Technical Problem
The purpose of the embodiments of the present disclosure is to provide a reminder method for remote teaching that can solve the problem that, while a user attends remote teaching through a virtual reality device, the user cannot be reminded in time when their attention wanders, which affects the effect of remote teaching.
Technical Solution
According to a first aspect of the embodiments of the present disclosure, a reminder method for remote teaching is provided, applied to a head-mounted display device on which a first video is displayed, the method including:
determining, according to the first video, a motion area and a key area of an image in the first video, where the key area is an image area whose proportion of the image exceeds a preset proportion threshold;
acquiring the gaze position of the eyeball in the image; and
sending a reminder message to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image.
Optionally, sending a reminder message to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image includes:
during playback of the first video, determining whether the gaze position of the eyeball in the image stays within the motion area of the image throughout a first detection duration;
during playback of the first video, determining whether the gaze position of the eyeball in the image stays within the key area of the image throughout a second detection duration; and
sending a reminder message to the user if, within the first detection duration, the gaze position of the eyeball in the image falls outside the motion area of the image and, within the second detection duration, the gaze position of the eyeball in the image falls outside the key area of the image.
Optionally, the method further includes:
when the first video is a video of the first type, setting the first detection duration to a first value and the second detection duration to a second value;
when the first video is a video of the second type, setting the first detection duration to a third value and the second detection duration to a fourth value;
where the third value is greater than the first value, and the fourth value is less than the second value.
Optionally, the method further includes:
acquiring the number of pixels included in the motion area of each frame of the first video;
determining the number of frames whose pixel count is greater than a first threshold;
determining that the first video is a video of the first type when the number of frames is greater than a second threshold; and
determining that the first video is a video of the second type when the number of frames is less than or equal to the second threshold.
Optionally, the method further includes:
acquiring the gaze position of the eyeball within the head-mounted display device; and
sending a reminder message to the user if the gaze position of the eyeball within the head-mounted display device is not within the display area of the head-mounted display device for a first preset duration.
Optionally, after sending a reminder message to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image, the method further includes:
if the gaze position of the eyeball in the image falls outside the motion area of the image, acquiring a corresponding first playback moment; and
acquiring and storing a second video according to the first playback moment and a second preset duration, where the second video is a partial video segment of the first video.
Optionally, after sending a reminder message to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image, the method further includes:
if the gaze position of the eyeball in the image falls outside the key area of the image, acquiring a corresponding second playback moment; and
acquiring and storing a third video according to the second playback moment and a third preset duration, where the third video is a partial video segment of the first video.
According to a second aspect of the embodiments of the present disclosure, a reminder apparatus for remote teaching is provided, applied to a head-mounted display device on which a first video is displayed, the apparatus including:
a determining module, configured to determine, according to the first video, a motion area and a key area of an image in the first video, where the key area is an image area whose proportion of the image exceeds a preset proportion threshold;
a first acquiring module, configured to acquire the gaze position of the eyeball in the image; and
a reminding module, configured to send a reminder message to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image.
According to a third aspect of the embodiments of the present disclosure, a head-mounted display device is provided, on which a first video is displayed, the head-mounted display device including:
a memory, configured to store executable instructions; and
a processor, configured to execute, under the control of the executable computer program, the reminder method for remote teaching according to the first aspect of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program that can be read and executed by a computer; when read and run by the computer, the computer program performs the reminder method for remote teaching according to the first aspect of the present disclosure.
Beneficial Effects
According to this embodiment, the gaze position of the eyeball in the image is acquired and the motion area and key area of each frame of the first video are determined. From the positional relationship between the gaze position of the eyeball in the image and the motion area and key area of each frame of the first video, it can be determined whether the user's line of sight follows the motion area and key area of the image, and thus whether the user's attention is focused. When the gaze position of the eyeball in the image is not within the motion area of the image and not within the key area of the image, a reminder message is sent so that the user is reminded in time, thereby improving the teaching effect of remote teaching.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. It should be understood that the following drawings show only certain embodiments of the present invention and therefore should not be regarded as limiting the scope. For those of ordinary skill in the art, other relevant drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of the hardware configuration of a head-mounted display device that can be used to implement embodiments of the present disclosure;
FIG. 2 is a schematic flowchart of a reminder method for remote teaching according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the positional relationship between an eyeball and a head-mounted display device according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the hardware structure of a reminder apparatus for remote teaching according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the hardware structure of a head-mounted display device according to an embodiment of the present disclosure.
Embodiments of the Invention
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application.
The following description of at least one exemplary embodiment is in fact merely illustrative and in no way limits the present application or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
In all examples shown and discussed here, any specific value should be interpreted as merely exemplary rather than limiting. Therefore, other examples of the exemplary embodiments may have different values.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
<Hardware Configuration>
FIG. 1 shows a block diagram of the hardware configuration of a head-mounted display device 1000 provided by an embodiment of the present application.
In the embodiments of the present disclosure, the head-mounted display device 1000 can be applied to remote teaching, remote conferencing, and the like.
In one embodiment, as shown in FIG. 1, the head-mounted display device 1000 may include a processor 1100, a memory 1200, an interface apparatus 1300, a communication apparatus 1400, a display apparatus 1500, an input apparatus 1600, an audio apparatus 1700, an inertial measurement unit 1800, a camera 1900, and the like.
The processor 1100 may be, for example, a central processing unit (CPU) or a microcontroller (MCU). The memory 1200 includes, for example, ROM (read-only memory), RAM (random access memory), and non-volatile memory such as a hard disk. The interface apparatus 1300 includes, for example, serial bus interfaces (including USB interfaces), parallel bus interfaces, and high-definition multimedia interface (HDMI) interfaces. The communication apparatus 1400 can, for example, perform wired or wireless communication. The display apparatus 1500 is, for example, a liquid crystal display, an LED display, or a touch display. The input apparatus 1600 includes, for example, a touch screen, a keyboard, and somatosensory input. The audio apparatus 1700 can be used to input/output voice information. The inertial measurement unit 1800 can be used to measure changes in the pose of the head-mounted display device 1000. The camera 1900 can be used to acquire image information.
The head-mounted display device 1000 may be, for example, a VR (virtual reality) device, an AR (augmented reality) device, or an MR (mixed reality) device, which is not limited in the embodiments of the present disclosure.
In this embodiment, the memory 1200 of the head-mounted display device 1000 is used to store a computer program that controls the processor 1100 to operate so as to implement the reminder method for remote teaching according to any embodiment. Technical personnel can design the computer program according to the solutions disclosed in this specification. How the computer program controls the processor 1100 to operate is well known in the art and is therefore not described in detail here.
Those skilled in the art should understand that although FIG. 1 shows multiple apparatuses of the head-mounted display device 1000, the head-mounted display device 1000 of the embodiments of the present disclosure may involve only some of them; for example, it may involve only the processor 1100 and the memory 1200. The head-mounted display device 1000 of the embodiments of the present disclosure may also include other apparatuses. The electronic device shown in FIG. 1 is merely explanatory and is in no way intended to limit the present disclosure, its application, or its use.
<Method Embodiments>
FIG. 2 shows a reminder method for remote teaching according to an embodiment; the reminder method may be implemented, for example, by the head-mounted display device 1000 shown in FIG. 1.
The head-mounted display device 1000 of this embodiment displays a first video. As shown in FIG. 2, the reminder method for remote teaching may include the following steps S2100 to S2300.
Step S2100: determining, according to the first video, a motion area and a key area of an image in the first video, where the key area is an image area whose proportion of the image exceeds a preset proportion threshold.
The first video may be a video presented by the head-mounted display device, for example a teaching video. In one example, the first video may be a complete video. In another example, the first video may be a segment cut from a complete video. In yet another example, the first video may be a video of a certain duration; in practical applications, a one-hour video may, for instance, be divided into six segments, with each 10-minute segment serving as a first video. In this embodiment, selecting a first video of a certain duration for detection can improve the accuracy of detection, so that the user's learning effect can be understood accurately.
The motion area may be the part of each frame of the first video in which the scene changes. For example, for a teaching video, the region of the video that displays the teaching content is the motion area.
In this embodiment, the first video includes multiple frames of images. Determining the motion area of an image in the first video according to the first video may mean determining the motion area of each frame according to the multiple frames of the first video. In a specific implementation, the first video is decoded to obtain multiple frames of images, and the motion area of each frame is detected based on the ViBe algorithm. It should be noted that the motion area of each frame may also be detected with other algorithms in the prior art, which is not limited in the embodiments of the present disclosure.
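As a rough illustration of the motion-area step described above, the sketch below uses simple frame differencing as a stand-in for ViBe (the real ViBe algorithm maintains a per-pixel set of background samples; the frames, sizes, and threshold here are illustrative assumptions, not values from the patent):

```python
import numpy as np

def motion_area(prev_frame: np.ndarray, cur_frame: np.ndarray,
                diff_thresh: int = 25) -> np.ndarray:
    """Return a boolean mask of pixels whose intensity changed noticeably
    between two consecutive grayscale frames (a crude stand-in for ViBe)."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > diff_thresh

# Illustrative frames: a 4x4 "scene" in which one 2x2 block changes,
# e.g. handwriting appearing on a board between two frames.
prev = np.zeros((4, 4), dtype=np.uint8)
cur = prev.copy()
cur[1:3, 1:3] = 200

mask = motion_area(prev, cur)
print(int(mask.sum()))  # number of pixels in the motion area -> 4
```

A production implementation would run this (or ViBe or another background-subtraction method from an off-the-shelf vision library) over every decoded frame and keep the resulting mask as that frame's motion area.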
The key area may be an image area of a frame of the first video whose proportion of the image exceeds a preset proportion threshold. Optionally, the key area may be the image area with the largest area in a frame of the first video. For example, for a teaching video, the blackboard region in the video is the key area. For another example, for a product demonstration video, the region where the product is located is the key area.
In this embodiment, the first video includes multiple frames of images. Determining the key area of an image in the first video according to the first video may mean determining the key area of each frame according to the multiple frames of the first video. In a specific implementation, the first video is decoded to obtain multiple frames of images, and edge detection is performed on each frame based on the Canny edge algorithm to obtain multiple image regions; then the number of pixels in each image region is acquired, the area of each image region is obtained from its pixel count, and the region with the largest area is taken as the key area of the image. It should be noted that the key area of each frame may also be detected with other algorithms in the prior art, which is not limited in the embodiments of the present disclosure.
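The "largest region by pixel count" selection described above can be sketched as follows; the segmentation label map is assumed to come from an earlier edge-detection/region-splitting step, and the 0.4 proportion threshold is an illustrative assumption:

```python
import numpy as np

def largest_region(labels: np.ndarray):
    """Given an integer map in which each value identifies a candidate region
    (e.g. produced by edge-based segmentation), return the label of the region
    with the most pixels and that region's proportion of the whole image."""
    values, counts = np.unique(labels, return_counts=True)
    i = int(np.argmax(counts))
    return int(values[i]), counts[i] / labels.size

# Illustrative 4x6 segmentation: region 1 is a "blackboard" covering
# two thirds of the image, region 0 is everything else.
seg = np.zeros((4, 6), dtype=int)
seg[:, :4] = 1

label, proportion = largest_region(seg)
RATIO_THRESH = 0.4  # assumed preset proportion threshold
print(label, proportion, proportion > RATIO_THRESH)
```

If the largest region's proportion exceeds the preset threshold, that region is taken as the key area of the frame.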
It should be noted that the motion areas of the frames of the first video may be the same or different, and the key areas of the frames of the first video may likewise be the same or different.
Step S2200: acquiring the gaze position of the eyeball in the image.
The gaze position of the eyeball in the image may include the gaze position of the user's left eye in the image and the gaze position of the user's right eye in the image.
In this embodiment, the gaze position of the eyeball in the image may be determined from eye-tracking data. The eye-tracking data may be acquired from the head-mounted display device; more specifically, it may be collected by an eye-tracking module provided on the head-mounted display device. The gaze position of the eyeball in the image is a two-dimensional coordinate within the image. The eye-tracking data may be a three-dimensional vector whose origin is a single eye (the left eye or the right eye); this vector reflects the gaze direction of the user's line of sight. The intersection of the user's line of sight with the screen of the head-mounted display device is computed, which yields the gaze position of the eyeball in the image.
For example, see FIG. 3, which is a schematic diagram of the positional relationship between an eyeball and a head-mounted display device. As shown in FIG. 3, the screen width of the head-mounted display device is w, the screen height is h, the distance between the user's eyeball and the screen is d, and the user's interpupillary distance is L. A screen coordinate system XOY is established with the upper-left corner of the screen of the head-mounted display device as the origin O. Taking the computation of the gaze position of the user's left eye in the image as an example: when the user wears the head-mounted display device correctly, the midpoint of the line connecting the user's two eyes faces the center of the screen. On this basis, the coordinate R1 of the left eye is (h/2, w/2 - L/2, d), and the tracking vector of the left eye, that is, the line of sight of the left eye, is the vector RA passing through R1. The intersection of the vector RA through R1 with the screen plane XOY is computed; this intersection is the gaze position of the left-eye eyeball in the image.
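The intersection computation described above is an ordinary ray-plane intersection. A minimal sketch, taking the screen as the plane z = 0 with the eye at depth d to match the coordinate layout above (the numeric values of h, w, d, and L are illustrative assumptions, not values from the patent):

```python
import numpy as np

def gaze_point_on_screen(eye: np.ndarray, direction: np.ndarray):
    """Intersect the gaze ray eye + t*direction with the screen plane z = 0.
    Returns the (x, y) screen coordinate, or None if the ray is parallel to
    the screen or points away from it."""
    if direction[2] == 0:
        return None
    t = -eye[2] / direction[2]
    if t <= 0:
        return None
    p = eye + t * direction
    return float(p[0]), float(p[1])

# Illustrative numbers: screen h=2.0, w=3.0, eye-screen distance d=0.05,
# interpupillary distance L=0.06 (units are arbitrary here).
h, w, d, L = 2.0, 3.0, 0.05, 0.06
left_eye = np.array([h / 2, w / 2 - L / 2, d])   # R1 as defined above
gaze_dir = np.array([0.0, 0.0, -1.0])            # looking straight at the screen

print(gaze_point_on_screen(left_eye, gaze_dir))  # -> (1.0, 1.47)
```

In practice the direction vector comes from the eye-tracking module per frame, and the resulting 2D point is then mapped into image coordinates.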
Step S2300: sending a reminder message to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image.
The gaze position of the eyeball in the image not being within the motion area of the image may mean that the gaze position of the eyeball in a certain frame of the first video is not within the motion area of the corresponding frame. In a specific implementation, after the gaze position of the eyeball in the current frame is acquired, it is compared with the location of the motion area of the current frame to determine whether the gaze position of the eyeball in the current frame is within the motion area of the current frame.
The gaze position of the eyeball in the image not being within the key area of the image may mean that the gaze position of the eyeball in a certain frame of the first video is not within the key area of the corresponding frame. In a specific implementation, after the gaze position of the eyeball in the current frame is acquired, it is compared with the location of the key area of the current frame to determine whether the gaze position of the eyeball in the current frame is within the key area of the current frame.
The case in which the gaze position of the eyeball in the image is not within the motion area of the image and not within the key area of the image may be that, during playback of the first video, the gaze position of the eyeball in a certain frame is outside the motion area of the corresponding frame and outside the key area of the corresponding frame. In this case, the user's line of sight is neither following the motion area of the image nor resting in the key area of the image; that is, the user's line of sight coincides with neither the motion area nor the key area. The user's attention is then considered unfocused, and a reminder message is sent to the user.
The reminder information may include at least one of voice prompt information, pop-up text prompt information, vibration prompt information, and image prompt information. For example, the head-mounted display device may emit a short beep; for another example, a "pay attention" reminder may be displayed on the screen of the head-mounted display device; for another example, the handle of the head-mounted display device may be made to vibrate; for yet another example, the color tone of the image displayed on the screen may be momentarily changed. In this embodiment, reminding the user interactively when the gaze position of the eyeball in the image is in neither the motion area nor the key area of the image can improve the effect of the reminder.
According to this embodiment, the gaze position of the eyeball in the image is acquired and the motion area and key area of each frame of the first video are determined. From the positional relationship between the gaze position of the eyeball in the image and the motion area and key area of each frame of the first video, it can be determined whether the user's line of sight follows the motion area and key area of the image, and thus whether the user's attention is focused. When the gaze position of the eyeball in the image is not within the motion area of the image and not within the key area of the image, a reminder message is sent so that the user is reminded in time, thereby improving the teaching effect of remote teaching.
In one embodiment, the step of sending a reminder message to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image may further include steps S3100 to S3300.
Step S3100: during playback of the first video, determining whether the gaze position of the eyeball in the image stays within the motion area of the image throughout a first detection duration.
In this embodiment, by setting a first detection duration, it can be determined whether the user's line of sight follows the motion area of the image over a period of time. It should be noted that the first detection duration can be set by those skilled in the art according to the actual situation.
In practical applications, during playback of the first video, whether the gaze position of the eyeball in the image is within the motion area of the image is detected in real time. That is, timing starts from the playback start moment of the first video, and each time the playback duration of the first video reaches the first detection duration, a flag bit is updated according to the positional relationship during that period between the gaze position of the eyeball in the image and the motion area of the image. On this basis, whether the gaze position of the eyeball stayed within the motion area of the image throughout the first detection duration can be determined from the value of the flag bit. Specifically, if the gaze position of the eyeball in the image stays within the motion area of the image throughout the first detection duration, the flag bit is set to "0"; if, within the first detection duration, the gaze position of the eyeball in the image falls outside the motion area of the image, the flag bit is set to "1". It should be noted that the playback start moment of the first video may also be a moment chosen by the user; for example, if the user starts watching from 1 min 20 s of the first video, that moment is the playback start moment.
Step S3200: during playback of the first video, determining whether the gaze position of the eyeball in the image stays within the key area of the image throughout a second detection duration.
In this step, by setting a second detection duration, it can be determined whether the user's line of sight stays within the key area of the image over a period of time; that is, it can be determined whether a gaze abnormality, such as closed eyes, occurs during that period. It should be noted that the second detection duration can be set by those skilled in the art according to the actual situation.
In practical applications, during playback of the first video, whether the gaze position of the eyeball in the image is within the key area of the image is detected in real time. That is, timing starts from the playback start moment of the first video, and each time the playback duration of the first video reaches the second detection duration, a flag bit is updated according to the positional relationship during that period between the gaze position of the eyeball in the image and the key area of the image. On this basis, whether the gaze position of the eyeball stayed within the key area of the image throughout the second detection duration can be determined from the value of the flag bit. Specifically, if the gaze position of the eyeball in the image stays within the key area of the image throughout the second detection duration, the flag bit is set to "0"; if, within the second detection duration, the gaze position of the eyeball in the image falls outside the key area of the image, the flag bit is set to "1". It should be noted that the playback start moment of the first video may also be a moment chosen by the user; for example, if the user starts watching from 1 min 20 s of the first video, that moment is the playback start moment.
Step S3300: sending a reminder message to the user if, within the first detection duration, the gaze position of the eyeball in the image falls outside the motion area of the image and, within the second detection duration, the gaze position of the eyeball in the image falls outside the key area of the image.
For example, suppose the total duration of the first video is 5 min, the first detection duration is 30 s, and the second detection duration is 60 s. Timing starts from the playback start moment of the first video. When the playback duration first reaches 30 s, the flag bit is set to "0" if the gaze position of the eyeball in the image stayed within the motion area of the image throughout those 30 s; timing then restarts, and when the playback duration again reaches 30 s, the flag bit is updated to "1" if, during those 30 s, the gaze position of the eyeball in the image fell outside the motion area of the image. This detection process is repeated until the 5-min first video finishes playing or the user pauses it. At the same time, timing also starts from the playback start moment of the first video. When the playback duration first reaches 60 s, the flag bit is set to "0" if the gaze position of the eyeball in the image stayed within the key area of the image throughout those 60 s; timing then restarts, and when the playback duration again reaches 60 s, the flag bit is updated to "1" if, during those 60 s, the gaze position of the eyeball in the image fell outside the key area of the image. This detection process is repeated until the 5-min first video finishes playing or the user pauses it. Afterwards, the value of the flag bit corresponding to the first detection duration and the value of the flag bit corresponding to the second detection duration are read; if both values are "1", then within the first detection duration the gaze position of the eyeball in the image fell outside the motion area of the image, and within the second detection duration the gaze position of the eyeball in the image fell outside the key area of the image.
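The flag-bit bookkeeping in the example above can be sketched as follows (one gaze sample per second is an assumed sampling rate, and the sample sequences are made up for illustration):

```python
def window_flags(samples, window, in_region):
    """Split per-second gaze samples into fixed-length windows and set a flag
    of 1 for each window in which the gaze ever left the region, else 0."""
    flags = []
    for start in range(0, len(samples), window):
        win = samples[start:start + window]
        flags.append(0 if all(in_region(s) for s in win) else 1)
    return flags

# Illustrative data: one sample per second; True = gaze inside the region.
motion_hits = [True] * 25 + [False] * 5 + [True] * 30   # 60 s of playback
key_hits = [True] * 40 + [False] * 20

motion_flags = window_flags(motion_hits, 30, lambda s: s)  # first detection duration 30 s
key_flags = window_flags(key_hits, 60, lambda s: s)        # second detection duration 60 s

# Remind only if both checks failed in their respective windows.
remind = motion_flags[0] == 1 and key_flags[0] == 1
print(motion_flags, key_flags, remind)
```

The AND of the two flags mirrors the condition of step S3300: a reminder is sent only when the gaze left the motion area within the first detection duration and left the key area within the second detection duration.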
In this embodiment, a reminder message is sent to the user only when, within the first detection duration, the gaze position of the eyeball in the image falls outside the motion area of the image and, within the second detection duration, the gaze position of the eyeball in the image falls outside the key area of the image. This can improve the accuracy of detection, so that the user is reminded while the interference that frequent reminders would cause is avoided, further improving the teaching effect of remote teaching.
In one embodiment, the first detection duration and the second detection duration may be determined according to the video type of the first video.
First videos can be divided into two types: videos of the first type may be videos with many scene changes, and videos of the second type may be videos with few scene changes. For a video of the first type, since the scene changes frequently, that is, the motion area of the image changes frequently, a shorter first detection duration and a longer second detection duration can be set. For a video of the second type, since the scene changes infrequently, that is, the motion area of the image changes infrequently, a longer first detection duration and a shorter second detection duration can be set.
Specifically, determining the first detection duration and the second detection duration according to the video type of the first video may further include steps S4100 to S4200.
Step S4100: when the first video is a video of the first type, setting the first detection duration to a first value and the second detection duration to a second value.
Step S4200: when the first video is a video of the second type, setting the first detection duration to a third value and the second detection duration to a fourth value.
Here, the third value is greater than the first value, and the fourth value is less than the second value.
For example, for a video of the first type, the first detection duration is set to 30 s and the second detection duration to 60 s; for a video of the second type, the first detection duration is set to 60 s and the second detection duration to 40 s.
In this embodiment, for different videos, the first detection duration and the second detection duration can be determined according to the video type, so the method is applicable to different types of videos, and detecting whether the user's attention is focused with the durations so determined can further improve the accuracy of detection. In addition, a single video can be divided into segments, with the first detection duration and the second detection duration determined according to the video type of each segment; this improves the precision of detection and thus further improves the teaching effect of remote teaching.
In one embodiment, before the first detection duration and the second detection duration are determined according to the video type of the first video, the reminder method for remote teaching may further include steps S5100 to S5400.
Step S5100: acquiring the number of pixels included in the motion area of each frame of the first video.
Step S5200: determining the number of frames whose pixel count is greater than a first threshold.
The first threshold reflects the size of the motion area in each frame: the larger the motion area of a frame, the more that frame has changed relative to the previous frame. It should be noted that the first threshold can be set by those skilled in the art according to the actual situation.
Step S5300: determining that the first video is a video of the first type when the number of frames is greater than a second threshold.
Step S5400: determining that the first video is a video of the second type when the number of frames is less than or equal to the second threshold.
By setting a second threshold, the video type of the first video can be determined. When the number of frames whose motion-area pixel count exceeds the first threshold is greater than the second threshold, many frames of the first video have changed substantially relative to their previous frames, that is, the scene of the first video changes frequently. When that number of frames is less than or equal to the second threshold, only a few frames have changed substantially relative to their previous frames, that is, the scene of the first video changes infrequently. It should be noted that the second threshold can be set by those skilled in the art according to the actual situation.
For example, suppose the first video includes n frames and the first threshold is set to f1. For each frame, if the number of pixels included in its motion area is greater than f1, a counter is incremented by 1. If the count exceeds 1/3 * n, the current video is judged to be a video with many motion areas, that is, a video of the first type; otherwise, it is a video with many static areas, that is, a video of the second type.
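The counting rule in this example maps directly to a few lines of code (the per-frame motion-area pixel counts and the threshold f1 are illustrative assumptions; the 1/3 ratio is the one used above):

```python
def classify_video(motion_pixel_counts, f1, ratio=1 / 3):
    """Classify a video as 'first' type (many scene changes) if more than
    ratio * n frames have a motion area larger than f1 pixels, else 'second'."""
    n = len(motion_pixel_counts)
    busy_frames = sum(1 for c in motion_pixel_counts if c > f1)
    return "first" if busy_frames > ratio * n else "second"

# Illustrative per-frame motion-area sizes for a 9-frame clip (assumed values).
counts = [500, 20, 480, 510, 30, 15, 495, 10, 25]
print(classify_video(counts, f1=100))  # 4 of 9 frames busy -> 'first'
```

The returned type then selects the first and second detection durations as in steps S4100 and S4200.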
In this embodiment, the video type of the first video is determined according to the number of pixels included in the motion area of each frame of the first video, and the first detection duration and the second detection duration are then determined based on that video type, which can improve the accuracy of detection so that the user is reminded accurately and in a timely manner.
In one embodiment, the reminder method for remote teaching may further include steps S6100 to S6200.
Step S6100: acquiring the gaze position of the eyeball within the head-mounted display device.
Step S6200: sending a reminder message to the user if the gaze position of the eyeball within the head-mounted display device is not within the display area of the head-mounted display device for a first preset duration.
The first preset duration can be set by those skilled in the art according to the actual situation; for example, the first preset duration is 1 min.
In this embodiment, after the user puts on the head-mounted display device, if the gaze position of the user's eyeballs is not within the display area of the head-mounted display device for the first preset duration, that is, the user's gaze is not on the screen, a reminder message is sent to the user to indicate that the video is about to start playing and to prompt the user to adjust the wearing state of the head-mounted display device in time.
In one embodiment, the reminder method for remote teaching may further include: acquiring the user's gaze state; and, if the user's gaze state is abnormal for a first preset duration, sending a reminder message to the user.
The abnormal state may be, for example, a closed-eye state. The user's gaze state being abnormal for the first preset duration may mean that the user keeps their eyes closed during the first preset duration; a reminder message is then sent to notify the user that the video is about to start playing and to draw the user's attention to the display area of the head-mounted display device.
In this embodiment, when the gaze position of the eyeball is not within the motion area or the key area of the image, a video segment at the corresponding moment may be recorded, so that the user can review the corresponding video later.
In one embodiment, after a reminder message is sent to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image, the reminder method for remote teaching may further include steps S7100 to S7200.
Step S7100: if the gaze position of the eyeball in the image falls outside the motion area of the image, acquiring a corresponding first playback moment.
For example, when the first video has played to 10 min 25 s and the gaze position of the eyeball in the image is detected outside the motion area of the image, that moment is recorded; that is, the first playback moment is 10 min 25 s.
Step S7200: acquiring and storing a second video according to the first playback moment and a second preset duration, where the second video is a partial video segment of the first video.
The second preset duration can be set by those skilled in the art as required; for example, the second preset duration is 1 min. For another example, the second preset duration may be the longer of the first detection duration and the second detection duration.
For example, the second video may include the video segment of the second preset duration before the first playback moment and the video segment of the second preset duration after the first playback moment. For instance, if the first playback moment is 10 min 25 s and the second preset duration is 1 min, the second video is the segment from 9 min 25 s to 11 min 25 s.
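The segment boundaries in the example above can be computed with a small helper; clamping to the bounds of the first video is an added assumption for moments near its start or end:

```python
def review_segment(moment_s: int, preset_s: int, video_len_s: int):
    """Return the [start, end] range (in seconds) of the review clip: a
    preset-duration window on each side of the recorded playback moment,
    clamped to the bounds of the first video."""
    start = max(0, moment_s - preset_s)
    end = min(video_len_s, moment_s + preset_s)
    return start, end

# Example from the text: first playback moment 10 min 25 s, preset duration 1 min.
start, end = review_segment(10 * 60 + 25, 60, video_len_s=60 * 60)
print(start, end)  # 565..685 s, i.e. 9 min 25 s to 11 min 25 s
```

The same helper applies to the third video of step S8200, with the second playback moment and the third preset duration substituted in.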
More specifically, the second video may be named with the first playback moment as an index and then stored. In practical applications, the head-mounted display device presents a video review interface that displays multiple video segments, each named after its first playback moment, and the user can search by playback moment for the segment to re-watch.
In this embodiment, when the gaze position of the eyeball in the image falls outside the motion area of the image, the corresponding first playback moment is recorded, and the corresponding video segment is cut out and stored according to the first playback moment. This makes it convenient for the user to view the corresponding segment without re-watching the entire video, which improves teaching efficiency and is convenient for the user. In addition, the user can search by the first playback moment for the segment to re-watch, which is faster and can further improve teaching efficiency.
In one embodiment, after a reminder message is sent to the user when the gaze position of the eyeball in the image is not within the motion area of the image and the gaze position of the eyeball in the image is not within the key area of the image, the reminder method for remote teaching may further include steps S8100 to S8200.
Step S8100: if the gaze position of the eyeball in the image falls outside the key area of the image, acquiring a corresponding second playback moment.
For example, when the first video has played to 12 min 30 s and the gaze position of the eyeball in the image is detected outside the key area of the image, that moment is recorded; that is, the second playback moment is 12 min 30 s.
Step S8200: acquiring and storing a third video according to the second playback moment and a third preset duration, where the third video is a partial video segment of the first video.
The third preset duration can be set by those skilled in the art as required; for example, the third preset duration is 1 min. For another example, the third preset duration may be the longer of the first detection duration and the second detection duration.
For example, the third video may include the video segment of the third preset duration before the second playback moment and the video segment of the third preset duration after the second playback moment. For instance, if the second playback moment is 12 min 30 s and the third preset duration is 1 min, the third video is the segment from 11 min 30 s to 13 min 30 s.
More specifically, the third video may be named with the second playback moment as an index and then stored. In practical applications, the head-mounted display device presents a video review interface that displays multiple video segments, each named after its second playback moment, and the user can search by playback moment for the segment to re-watch.
In this embodiment, when the gaze position of the eyeball in the image falls outside the key area of the image, the corresponding second playback moment is recorded, and the corresponding video segment is cut out and stored according to the second playback moment. This makes it convenient for the user to view the corresponding segment without re-watching the entire video, which improves teaching efficiency and is convenient for the user. In addition, the user can search by the second playback moment for the segment to re-watch, which is faster and can further improve teaching efficiency.
<装置实施例>
本实施例提供一种用于远程教学的提醒装置,该装置应用于头戴显示设备,所述头戴显示设备显示有第一视频,如图4所示,该用于远程教学的提醒装置400可以包括确定模块410、第一获取模块420及提醒模块430。
该确定模块410用于根据所述第一视频,确定所述第一视频中的图像的运动区域和重点区域,其中,所述重点区域为所述图像中面积占比超过预设的占比阈值的图像区域。
该第一获取模块420用于获取眼球在所述图像中的注视位置。以及,
该提醒模块430用于在眼球在所述图像中的注视位置不在所述图像的运动区域内,且眼球在所述图像中的注视位置不在所述图像的重点区域内的情况下,向用户发出提醒信息。
在一个实施例中,该提醒模块430可以包括:第一确定单元,用于在播放所述第一视频的过程中,确定在第一检测时长内眼球在所述图像中的注视位置是否始终位于所述图像的运动区域内;第二确定单元,用于在播放所述第一视频的过程中,确定在第二检测时长内眼球在所述图像中的注视位置是否始终位于所述图像的重点区域内;提醒单元, 用于在所述第一检测时长内出现眼球在所述图像中的注视位置不在所述图像的运动区域内,且在所述第二检测时长内出现眼球在所述图像中的注视位置不在所述图像的重点区域内的情况下,向用户发出提醒信息。
在一个实施例中,该用于远程教学的提醒装置400还可以包括:第一设置模块,用于在所述第一视频为第一类视频的情况下,设置所述第一检测时长为第一值,设置所述第二检测时长为第二值;第二设置模块,用于在所述第一视频为第二类视频的情况下,设置所述第一检测时长为第三值,设置所述第二检测时长为第四值;其中,所述第三值大于所述第一值,所述第四值小于所述第二值。
在一个实施例中,该用于远程教学的提醒装置400还可以包括:第二获取模块,用于获取所述第一视频中每一帧图像的运动区域所包含的像素点的个数;帧数确定模块,用于确定所述个数大于第一阈值的图像的帧数;第一类型确定模块,用于在所述帧数大于第二阈值的情况下,确定所述第一视频为第一类视频;第二类型确定模块,用于在所述帧数小于或等于第二阈值的情况下,确定所述第一视频为第二类视频。
在一个实施例中,该用于远程教学的提醒装置400还可以包括:第三获取模块,用于获取眼球在所述头戴显示设备内的注视位置;提醒模块,还用于如果所述眼球在所述头戴显示设备内的注视位置在第一预设时长内不在所述头戴显示设备的显示区域内,向用户发出提醒信息。
In one embodiment, the reminder apparatus 400 for remote teaching may further include: a first playback-time determining module configured to acquire the corresponding first playback time if the eyeball's gaze position in the image falls outside the motion region of the image; and a first video acquiring module configured to acquire and store a second video according to the first playback time and a second preset duration, the second video being a partial segment of the first video.
In one embodiment, the reminder apparatus 400 for remote teaching may further include: a second playback-time determining module configured to acquire the corresponding second playback time if the eyeball's gaze position in the image falls outside the key region of the image; and a second video acquiring module configured to acquire and store a third video according to the second playback time and a third preset duration, the third video being a partial segment of the first video.
According to this embodiment, the eyeball's gaze position in the image is acquired, and the motion region and key region of each frame of the first video are determined. The positional relationship between the gaze position and those regions then indicates whether the user's line of sight follows the moving and key areas of the image, and hence whether the user's attention is focused. When the gaze position in the image is outside both the motion region and the key region, a reminder is issued to the user in time, thereby improving the effectiveness of remote teaching.
<Device Embodiment>
Fig. 5 is a schematic diagram of the hardware structure of a head-mounted display device according to an embodiment. As shown in Fig. 5, the head-mounted display device 500 includes a memory 520 and a processor 510. The memory 520 is configured to store executable instructions; the processor 510 is configured to execute, under the control of the executable computer program, the reminder method for remote teaching according to the method embodiments of this application.
The head-mounted display device 500 may be a VR device, an AR device, an MR device, or the like; the embodiments of this application do not limit this.
The head-mounted display device 500 may be the head-mounted display device 1000 shown in Fig. 2, or a device with another hardware structure; no limitation is made here.
In further embodiments, the head-mounted display device 500 may include the reminder apparatus 400 for remote teaching described above.
In one embodiment, the modules of the reminder apparatus 400 for remote teaching may be implemented by the processor 510 executing computer instructions stored in the memory 520.
<Medium Embodiment>
This embodiment further provides a computer-readable storage medium storing a computer program readable and executable by a computer; when read and executed by the computer, the computer program performs the reminder method for remote teaching according to any of the above method embodiments of this application.
The embodiments in this specification are described progressively; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. It should nevertheless be clear to those skilled in the art that the embodiments above may be used individually or in combination as needed. The apparatus embodiments, corresponding as they do to the method embodiments, are described relatively briefly; for relevant details, refer to the corresponding parts of the method embodiments. The system embodiments described above are merely illustrative; modules described as separate components may or may not be physically separate.
The present application may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement aspects of the present application.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction-execution device. It may be, for example, but is not limited to, an electric storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to respective computing/processing devices, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical-fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions in order to implement aspects of the present application.
Aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data-processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data-processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data-processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having instructions stored therein comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data-processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions executed on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to multiple embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two successive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The embodiments of the present application have been described above. The foregoing description is illustrative, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present application is defined by the appended claims.

Claims (10)

  1. A reminder method for remote teaching, applied to a head-mounted display device displaying a first video, the method comprising:
    determining, from the first video, a motion region and a key region of an image in the first video, wherein the key region is an image area whose area ratio in the image exceeds a preset ratio threshold;
    acquiring an eyeball's gaze position in the image;
    issuing a reminder to a user when the eyeball's gaze position in the image is outside the motion region of the image and the eyeball's gaze position in the image is outside the key region of the image.
  2. The method according to claim 1, wherein issuing a reminder to the user when the eyeball's gaze position in the image is outside the motion region of the image and the eyeball's gaze position in the image is outside the key region of the image comprises:
    while the first video is playing, determining whether the eyeball's gaze position in the image remains within the motion region of the image throughout a first detection duration;
    while the first video is playing, determining whether the eyeball's gaze position in the image remains within the key region of the image throughout a second detection duration;
    issuing a reminder to the user when, within the first detection duration, the eyeball's gaze position in the image falls outside the motion region of the image and, within the second detection duration, the eyeball's gaze position in the image falls outside the key region of the image.
  3. The method according to claim 2, further comprising:
    when the first video is a first-type video, setting the first detection duration to a first value and the second detection duration to a second value;
    when the first video is a second-type video, setting the first detection duration to a third value and the second detection duration to a fourth value;
    wherein the third value is greater than the first value, and the fourth value is smaller than the second value.
  4. The method according to claim 1, further comprising:
    acquiring the number of pixels contained in the motion region of each frame of the first video;
    determining the number of frames in which that pixel count exceeds a first threshold;
    determining that the first video is a first-type video when the frame count exceeds a second threshold;
    determining that the first video is a second-type video when the frame count is less than or equal to the second threshold.
  5. The method according to claim 1, further comprising:
    acquiring the eyeball's gaze position within the head-mounted display device;
    issuing a reminder to the user if the eyeball's gaze position within the head-mounted display device stays outside the display area of the head-mounted display device for a first preset duration.
  6. The method according to claim 1, wherein, after issuing a reminder to the user when the eyeball's gaze position in the image is outside the motion region of the image and the eyeball's gaze position in the image is outside the key region of the image, the method further comprises:
    acquiring a corresponding first playback time if the eyeball's gaze position in the image falls outside the motion region of the image;
    acquiring and storing a second video according to the first playback time and a second preset duration, the second video being a partial segment of the first video.
  7. The method according to claim 1, wherein, after issuing a reminder to the user when the eyeball's gaze position in the image is outside the motion region of the image and the eyeball's gaze position in the image is outside the key region of the image, the method further comprises:
    acquiring a corresponding second playback time if the eyeball's gaze position in the image falls outside the key region of the image;
    acquiring and storing a third video according to the second playback time and a third preset duration, the third video being a partial segment of the first video.
  8. A reminder apparatus for remote teaching, applied to a head-mounted display device displaying a first video, the apparatus comprising:
    a determining module configured to determine, from the first video, a motion region and a key region of an image in the first video, wherein the key region is an image area whose area ratio in the image exceeds a preset ratio threshold;
    a first acquiring module configured to acquire an eyeball's gaze position in the image;
    a reminding module configured to issue a reminder to a user when the eyeball's gaze position in the image is outside the motion region of the image and the eyeball's gaze position in the image is outside the key region of the image.
  9. [Rectified under Rule 91, 09.08.2022]
    A head-mounted display device displaying a first video, the head-mounted display device comprising:
    a memory configured to store executable instructions;
    a processor configured to execute, under the control of the executable computer program, the reminder method for remote teaching according to any one of claims 1-7.
  10. A computer-readable storage medium storing a computer program readable and executable by a computer, the computer program, when read and executed by the computer, performing the reminder method for remote teaching according to any one of claims 1-7.
PCT/CN2022/083584 2021-04-02 2022-03-29 Reminder method and apparatus for remote teaching, and head-mounted display device WO2022206731A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110362546.1 2021-04-02
CN202110362546.1A CN113255431B (zh) 2021-04-02 2021-04-02 Reminder method and apparatus for remote teaching, and head-mounted display device

Publications (1)

Publication Number Publication Date
WO2022206731A1 true WO2022206731A1 (zh) 2022-10-06

Family

ID=77220218

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/083584 WO2022206731A1 (zh) 2021-04-02 2022-03-29 用于远程教学的提醒方法、装置及头戴显示设备

Country Status (2)

Country Link
CN (1) CN113255431B (zh)
WO (1) WO2022206731A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255431B (zh) * 2021-04-02 2023-04-07 青岛小鸟看看科技有限公司 Reminder method and apparatus for remote teaching, and head-mounted display device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106339082A (zh) * 2016-08-16 2017-01-18 惠州Tcl移动通信有限公司 一种基于教学用头戴式设备的提示方法及系统
US20200272230A1 (en) * 2017-12-29 2020-08-27 Beijing 7Invensun Technology Co., Ltd. Method and device for determining gaze point based on eye movement analysis device
CN113255431A (zh) * 2021-04-02 2021-08-13 青岛小鸟看看科技有限公司 Reminder method and apparatus for remote teaching, and head-mounted display device



Also Published As

Publication number Publication date
CN113255431B (zh) 2023-04-07
CN113255431A (zh) 2021-08-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22778913

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.02.2024)