CN112399239A - Video playing method and device - Google Patents


Info

Publication number
CN112399239A
Authority
CN
China
Prior art keywords
video
target
target video
user
watching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011174167.1A
Other languages
Chinese (zh)
Other versions
CN112399239B (en)
Inventor
翟希山
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011174167.1A
Publication of CN112399239A
Application granted
Publication of CN112399239B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213: Monitoring of end-user related data
    • H04N21/44218: Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/485: End-user interface for client configuration
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81: Monomedia components thereof
    • H04N21/8126: Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133: Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video playing method and device, belonging to the field of communication technology. It addresses the problem that, during playback, a video continues to play while the user is not watching it, so that important plot points are missed. The method includes: while a target video is playing, determining whether the gaze focus of the watching user is focused in the video playing interface of the target video; and if the gaze focus is not focused in the video playing interface within a predetermined duration, performing a target operation.

Description

Video playing method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video playing method and device.
Background
With the development of electronic technology, video playing applications play an important role in electronic devices. As electronic devices grow more intelligent, users pay increasing attention to the viewing experience while a video playing application plays a video. At present, a user can pause a playing video through the pause button of the electronic device.
In the related art, when a user needs to leave briefly or take a short rest while watching a video, the video continues to play unwatched unless the user clicks the pause button to pause it. As a result, important plot points are missed and the user's viewing experience suffers.
Disclosure of Invention
The embodiments of the present application aim to provide a video playing method and device that solve the problem of a video continuing to play, and important plot points being missed, while the user is not watching it.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a video playing method. The method includes: while a target video is playing, determining whether the gaze focus of a watching user of the target video is focused in the video playing interface of the target video; and if the gaze focus is not focused in the video playing interface within a predetermined duration, performing a target operation. The target operation is used to record target scenario information of the target video played after a first time, where the first time is the starting time at which the gaze focus leaves the video playing interface.
In a second aspect, an embodiment of the present application provides a video playing apparatus that includes a determining module and an execution module. The determining module is configured to determine, while a target video is playing, whether the gaze focus of a watching user of the target video is focused in the video playing interface of the target video. The execution module is configured to perform a target operation if the determining module determines that the gaze focus is not focused in the video playing interface within a predetermined duration. The target operation includes at least one of: starting screen recording at a first time, or acquiring key scenario information of the target video played after the first time; the target operation is used to record target scenario information of the target video played after the first time, where the first time is the starting time at which the gaze focus leaves the video playing interface.
In a third aspect, an embodiment of the present application provides an electronic device that includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor. When executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, while a target video is playing, the electronic device detects whether the gaze focus of the watching user is focused in the video playing interface of the target video, and thereby determines whether the user has stopped watching because of a brief departure or a short break. When the gaze focus is detected not to be focused in the video playing interface within a predetermined duration, a target operation is performed to record the target scenario information of the target video played after the first time (that is, the starting time at which the user's gaze focus left the video playing interface). In this way, the electronic device automatically records the scenario information that the user misses while briefly away, which improves the user's viewing experience.
Drawings
Fig. 1 is a flowchart of a method for playing a video according to an embodiment of the present application;
fig. 2 is one of schematic diagrams of an interface applied by a video playing method according to an embodiment of the present application;
fig. 3 is a second schematic diagram of an interface applied by a video playing method according to an embodiment of the present application;
fig. 4 is a third schematic view of an interface applied by a video playing method according to an embodiment of the present application;
fig. 5 is a fourth schematic view of an interface applied by a video playing method according to an embodiment of the present application;
fig. 6 is a fifth schematic view of an interface applied by a video playing method according to an embodiment of the present application;
fig. 7 is a sixth schematic view of an interface applied by a video playing method according to an embodiment of the present application;
fig. 8 is a seventh schematic view of an interface applied by a video playing method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a video playback device according to an embodiment of the present application;
fig. 10 is a second schematic structural diagram of a video playing apparatus according to an embodiment of the present application;
fig. 11 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 12 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one class, and their number is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the preceding and following objects are in an "or" relationship.
The following describes in detail a video playing method provided by the embodiment of the present application through a specific embodiment and an application scenario thereof with reference to the accompanying drawings.
The video playing method provided by the embodiments of the present application can be applied to scenes in which users watch videos, such as single-person viewing scenes and multi-person viewing scenes.
For a single-person viewing scene:
While electronic device A plays video 1, if watching user A temporarily stops watching, for example because of a lapse in attention, video 1 continues to play unwatched and user A misses important plot points in video 1. In the embodiment of the present application, during the playing of video 1, electronic device A can determine whether user A is currently watching by detecting whether user A's gaze focus is focused in the video playing interface of video 1. When the device detects that user A's gaze focus is not focused in the video playing interface, it determines that user A is not currently watching video 1 and starts recording the scenario information of the subsequent playback of video 1, so that user A does not miss important plot points and user A's viewing experience is improved.
For a multi-person viewing scene:
That is, in a scene where multiple electronic devices play the same video synchronously: suppose electronic devices A, B, and C are in the same local area network as a central server, and the central server controls them to play video 1 synchronously. If user A (the user watching video 1 on electronic device A) temporarily leaves the viewing area of video 1 to deal with a private matter, video 1 continues to play during the time user A is away, because playback is controlled by the central server, and user A misses important plot points. In this embodiment of the present application, some or all of the electronic devices controlled by the central server (for example, at least one of devices A, B, and C) can determine whether their watching user has temporarily left by detecting whether that user's gaze focus is focused in the device's video playing interface of video 1. When a device detects that no gaze focus is on the video playing interface, it determines that the watching user has temporarily left and starts recording the scenario information of the subsequent playback of video 1, so that the watching user does not miss important plot points and the watching user's viewing experience is improved.
It should be noted that, when the video playing method provided by the embodiment of the present application is applied to a single-person viewing scene, the video playing mode in which the electronic device plays the target video may be referred to as a single-person playing mode. When the method is applied to a multi-person viewing scene, at least two electronic devices play the target video synchronously, and the video playing mode in which they do so may be referred to as a multi-person playing mode.
The embodiment of the present application provides a video playing method, which may be applied to an electronic device, in other words, the video playing method may be executed by software or hardware installed in the electronic device. As shown in fig. 1, a video playing method provided in an embodiment of the present application may include the following steps 101 and 102:
step 101: in the case of playing a target video, determining whether a line-of-sight focus of a watching user of the target video is focused in a video playing interface of the target video.
In the embodiment of the present application, while the target video is playing, the video playing apparatus may detect, in real time or periodically (for example, once every 3 minutes), whether the gaze focus of the watching user of the target video is focused in the video playing interface of the target video. That is, the video playing apparatus may trigger the gaze-focus detection process continuously in real time, or trigger it periodically on a fixed period.
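The periodic trigger described above can be sketched minimally as follows. The `detect_focus` hook is hypothetical (not from the patent); a real implementation would run the camera-based detection there and sleep one period between checks.

```python
import itertools

def periodic_gaze_checks(detect_focus, period_s=180.0, max_checks=None):
    """Yield (elapsed_seconds, focused) pairs on a fixed period,
    mirroring the 'once every 3 minutes' example in the text.

    detect_focus: hypothetical hook returning True when the gaze focus
    is inside the video playing interface. A real build would invoke
    the camera pipeline here and sleep `period_s` between checks."""
    for i in itertools.count():
        if max_checks is not None and i >= max_checks:
            return
        yield i * period_s, detect_focus()
```

Real-time triggering corresponds to calling `detect_focus` on every camera frame instead of on a fixed period.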
In this embodiment of the present application, the video playing apparatus may detect whether the user gaze focus is focused on the video playing interface of the target video by acquiring the face image information, and may also detect whether the user gaze focus is focused on the video playing interface of the target video by acquiring the iris information.
In this embodiment of the application, when the target video is played or before the target video is played, the video playing apparatus actively or passively starts the smart play mode. In this mode, the video playing apparatus may detect whether the user's gaze is focused in the video playing interface of the target video, and record important scenario information of the target video played after the user leaves the viewing angle range of the video playing interface when the user's gaze focus is not focused in the video playing interface of the target video within a predetermined time period (i.e., the user leaves or does not watch the target video for a short time).
For example, before the smart play mode is turned on, the video playing apparatus may display a target identifier on the video playing interface of the video playing application (for example, in both the single-person mode and the multi-person mode, the video playing apparatus displays a "hollow five-pointed star pattern" on the video playing interface of the target video) to indicate that the video playing application supports the smart play mode, as shown in fig. 2. It should be noted that, when the video playing application in the electronic device supports the smart play mode, the application supports the function of detecting whether the user's gaze is focused in the video playing interface of the target video.
For example, a user may manually turn on a smart play mode in an electronic device. In one example, a user may manually initiate a smart play mode via input to a target control displayed in a display screen of the electronic device. For example, a prompt window including "whether to start the smart play mode" is displayed on the video play interface, and the user confirms that the smart play mode is started through the prompt window.
Further, after the smart play mode is turned on, the video playing apparatus may update the target identifier displayed on the video playing interface (for example, in the single-person mode, the video playing apparatus updates the one "hollow five-pointed star pattern" on the video playing interface to a "solid five-pointed star pattern", as shown in fig. 3; in the multi-person mode, it updates the two "hollow five-pointed star patterns" to two "solid five-pointed star patterns", as shown in fig. 6) to indicate that the smart play mode is turned on.
For example, the video playing apparatus may actively start the smart play mode when a predetermined condition is satisfied, the predetermined condition being that at least one saved smart play mode setting exists, or that the target video starts playing.
In one example, if the smart play mode is enabled during the playing of the target video, the video playing apparatus will automatically save the current smart play mode setting after the target video is played.
In one example, after restoring a saved smart play mode setting, the video playing apparatus may update the target identifier displayed on the video playing interface (for example, in the single-person mode, the video playing apparatus updates the one "hollow five-pointed star pattern" displayed on the video playing interface of the target video to an "orange solid five-pointed star pattern", as shown in fig. 5; in the multi-person mode, it updates the two "hollow five-pointed star patterns" to two "orange solid five-pointed star patterns", as shown in fig. 7) to indicate that the saved smart play mode setting has been restored.
In one example, when the video playing apparatus restores a saved smart play mode setting, it may display a prompt window "whether to close the smart play mode" on the video playing interface to ask the user whether to close the smart play mode.
Illustratively, after the smart play mode is started, if it is detected that the user's gaze focus is not focused in the video playing interface of the target video within a predetermined duration (that is, the user has left or is taking a short break), the pause button and the above-mentioned target identifier on the video playing interface are highlighted. For example, in the single-person mode, the video playing apparatus sets the color of the pause button displayed on the target video playing interface to red and sets the color of the one "hollow five-pointed star pattern" on the interface to red, as shown in fig. 4; in the multi-person mode, it sets the color of the pause button to red and sets the colors of the two "hollow five-pointed star patterns" to red, to prompt the user to resume watching the video.
Step 102: and if the sight focus of the watching user is not focused in the video playing interface within the preset time, executing the target operation.
In this embodiment, the target operation is used to record target scenario information of the target video played after a first time.
In this embodiment of the application, the first time is a start time when the gaze focus of the viewing user leaves the video playing interface, that is, the first time is a playing time of the target video when the gaze focus of the viewing user leaves the video playing interface of the target video. For example, assuming that the total playing time of the target video is 40 minutes, when the watching user's gaze focus is away from the video playing interface of the target video, if the target video is played to the 15 th minute, the first time is the 15 th minute.
Optionally, in this embodiment of the present application, determining that the user's gaze focus is not focused in the video playing interface within the predetermined duration includes at least one of the following: detecting that the gaze focus of the watching user stays outside the video playing interface of the target video throughout the predetermined duration, or detecting that no gaze focus of the user exists within the predetermined duration. It can be understood that if the gaze focus is detected not to be focused in the video playing interface within the predetermined duration, the watching user has either left the video watching range (that is, the viewing-angle range of the video playing interface) or is not watching the video (that is, the line of sight is not focused on the video playing interface).
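The "not focused within a predetermined duration" decision could be sketched as below, treating both "gaze outside the interface" and "no gaze detected at all" as not focused. The timestamped-sample representation is an assumption for illustration, not part of the patent.

```python
def not_focused_for(samples, predetermined_s):
    """Return True if the gaze focus has been continuously absent from
    the playing interface for at least `predetermined_s` seconds.

    samples: time-ordered list of (timestamp_s, focused) pairs, where
    focused is True (inside interface), False (outside), or None
    (no gaze focus detected, e.g. the user left the viewing range)."""
    away_start = None
    for t, focused in samples:
        if focused is True:
            away_start = None            # gaze returned: reset the timer
        elif away_start is None:
            away_start = t               # start of a not-focused stretch
        elif t - away_start >= predetermined_s:
            return True                  # away long enough: trigger
    return False
```

A playback loop would call this on the rolling sample window and perform the target operation (screen recording, key-scenario capture, or pause) when it first returns True.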
Optionally, in an embodiment of the present application, the target operation includes at least one of the following: starting screen recording at the first time, acquiring key scenario information of the target video played after the first time, or pausing playback of the target video.
For example, when the target operation includes starting screen recording at the first time, the target scenario information may be the video recorded by the video playing apparatus, or scenario information extracted from that recorded video; when the target operation includes acquiring key scenario information of the target video played after the first time, the target scenario information is information related to that key scenario information. Further, the key scenario information includes at least one of: pictures, text, audio, subtitles, video, and the like.
For example, the video playing apparatus may store the recorded video and the key scenario information locally on the electronic device or on a network server, which is not limited in this embodiment of the application.
For example, after the viewing user finishes viewing the target scenario information, the target scenario information may be deleted or destroyed.
Illustratively, the key scenario information is scenario information that the video playing apparatus extracts or identifies from the target video played after the first time, through key frame extraction or video semantic recognition.
In an example, taking key frame extraction as an example, the video playing apparatus may extract key frames containing a specific face image from the video frames of the target video after the first time, so as to obtain the key scenario information. Specifically, the video playing apparatus first identifies video frames containing a face image in the target video, then extracts from those the key frames containing the specific face image, compresses and encodes each extracted key frame to obtain a corresponding key scenario picture, and finally adds the key scenario picture to a key scenario picture set (for example, a directory storing pictures), which constitutes the key scenario information. It can be understood that, as the target video plays, the key scenario pictures in the set may accumulate as the number of processed video frames grows.
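The key-frame path described above can be sketched as follows. The face detector and compression step are stubbed with hypothetical helpers (a real implementation might use OpenCV face detection and JPEG encoding); the pre-detected face lists stand in for the recognition stage.

```python
def compress_frame(frame_id):
    """Placeholder for the compression coding of a raw key frame
    (hypothetical; stands in for e.g. JPEG encoding)."""
    return f"jpeg:{frame_id}"

def extract_key_scenario_pictures(frames, target_faces):
    """Build the 'key scenario picture set' described in the text.

    frames: list of (frame_id, faces_in_frame) pairs, where
    faces_in_frame is the (assumed pre-computed) list of face labels
    recognized in that frame.
    target_faces: set of specific face labels to keep."""
    key_pictures = []
    for frame_id, faces in frames:
        if not faces:                      # step 1: keep only frames with a face
            continue
        if target_faces & set(faces):      # step 2: keep frames with a *specific* face
            key_pictures.append(compress_frame(frame_id))  # step 3: compress-encode
    return key_pictures
```

Appending each result to a persistent directory would mirror the growing picture set described above.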
Further, the specific face image includes at least one of the following: a face image selected by the watching user in advance (for example, the watching user selects at least one face image from a plurality of actor face images), or a face image identified in the target video as appearing with high frequency.
In another example, taking video semantic recognition as an example, the video playing apparatus may perform video semantic analysis on the video frames of the target video after the first time: it identifies the visual objects (for example, the characters in the video) and the scene information (for example, the characters' expressions and actions) in a video frame, generates the video semantic information of the frame based on the visual objects and the scene information, and finally converts the video semantic information into scenario text describing the plot (that is, the above-mentioned key scenario information).
Furthermore, the video playing apparatus may perform semantic analysis on one video frame at a time, obtaining one piece of video semantic information per frame, or on multiple video frames at a time, obtaining one piece of semantic information for the whole batch. For example, the video playing apparatus acquires 20 video frames of the target video at a time and performs semantic analysis on them to obtain one piece of video semantic information covering those 20 frames.
Further, during the playing of the target video, the video playing apparatus may acquire video frames in real time or periodically and perform semantic analysis to obtain multiple pieces of video semantic information, adding the scenario text corresponding to each piece of video semantic information to a video semantic set (for example, a document) in a predetermined order, such as the order in which the scenario text was obtained.
For example, suppose the visual objects that the video playing apparatus extracts from 20 video frames of the target video starting at the first time are "Xiao Li", "Xiao Wen", and "teacher", the object features of "Xiao Li" and "Xiao Wen" are "speaking" and "stooping", and the object features of "teacher" are "waving" and "smiling". The video playing apparatus semantically combines the object features (for example, "Xiao Li + speaking + stooping", "Xiao Wen + speaking + stooping", "teacher + smiling + waving") and determines that the corresponding video scene information is a greeting. Finally, it derives the video semantic information of the 20 frames from the visual objects and the scene information and converts it into scenario text: "Xiao Li and Xiao Wen see the teacher and greet him, and the teacher waves back."
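As a toy illustration of this semantic-combination step, the sketch below maps each person's observed object features to a scene label through an assumed feature-to-scene table, then renders a one-line scenario text. The table, names, and phrasing are illustrative assumptions, not the patent's actual recognizer.

```python
def frames_to_scenario_text(visual_objects):
    """visual_objects: dict mapping each recognized person to the list
    of object features observed for them in one batch of video frames.

    Returns a short scenario text, one clause per person."""
    # Assumed feature->scene mapping, for illustration only.
    scene_table = {
        frozenset({"speaking", "stooping"}): "greets",
        frozenset({"smiling", "waving"}): "calls back",
    }
    parts = []
    for person, feats in visual_objects.items():
        action = scene_table.get(frozenset(feats), "appears")
        parts.append(f"{person} {action}")
    return "; ".join(parts)
```

Appending each returned line to a text document, in acquisition order, mirrors the video semantic set described above.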
In the video playing method provided by the embodiment of the present application, while the target video is playing, the apparatus detects whether the gaze focus of the watching user is focused in the video playing interface of the target video, and thereby determines whether the user has stopped watching because of a brief departure or a short break. When the gaze focus is detected not to be focused in the video playing interface within the predetermined duration, the target operation is performed to record the target scenario information of the target video played after the first time (that is, the starting time at which the user's gaze focus left the video playing interface). In this way, the electronic device automatically records the scenario information that the user misses while briefly away, improving the user's viewing experience.
Optionally, in this embodiment of the application, in the step 101, in the case of playing the target video, determining a focal point of a line of sight of a viewer of the target video may include the following steps 101a and 101b:
step 101 a: and under the condition of playing the target video, starting the front camera, and acquiring the face image information of the watching user of the target video.
Step 101 b: and determining whether the sight focus of the watching user is focused in the video playing interface of the target video or not according to the face image information of the watching user.
For example, the video playing device may collect the facial image information of the watching user in real time or periodically through the camera.
For example, the video playing device may identify eyes of the user from face image information of the watching user collected by the camera, extract eye information, and finally detect a position of a sight focus of the user according to the eye information of the user. Further, if the position of the sight line focus of the watching user is detected on the video playing interface, the sight line focus of the watching user is indicated to be focused in the video playing interface of the target video.
Illustratively, the position of the gaze focus of the viewing user in the video playing interface may be determined in any of several ways, for example by a pupil-corneal reflection technique; the embodiments of the present application are not particularly limited in this respect, and the following takes the pupil-corneal reflection technique as an example for explanation.
For example, the video playing device recognizes the face image of the viewing user acquired by the camera and determines the eye region in the face image. It then identifies, within the eye region, the coordinates of the pupil center point and the coordinates of the glint point (that is, the light spot formed on the user's eyeball by the screen at the acquisition time of the face image). Next, it determines the pupil-corneal reflection vector of the viewing user's eyeball from the pupil-center coordinates and the glint coordinates. Finally, it performs function fitting between the viewing user's gaze focus and the pupil-corneal reflection vector to obtain the coordinates of the gaze focus, thereby determining the position of the gaze focus of the viewing user.
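As a hedged sketch of the final fitting step, the pupil-glint vector can be mapped to screen coordinates with a polynomial fitted from calibration samples. The quadratic feature basis and the synthetic calibration data below are assumptions; a real system would collect calibration points by having the user fixate known screen locations.

```python
# Sketch of the function-fitting step in pupil-corneal reflection gaze
# estimation: map the pupil-glint vector to on-screen coordinates via a
# least-squares polynomial fit. Calibration data here are synthetic.
import numpy as np

def fit_gaze_mapping(vectors, screen_points):
    """Least-squares fit of gaze = features(v) @ coeffs, using the
    common quadratic basis [1, vx, vy, vx*vy, vx**2, vy**2]."""
    V = np.array([[1, vx, vy, vx*vy, vx**2, vy**2] for vx, vy in vectors])
    coeffs, *_ = np.linalg.lstsq(V, np.array(screen_points, float), rcond=None)
    return coeffs  # shape (6, 2): one column per screen axis

def estimate_gaze(coeffs, pupil_center, glint):
    """Map one pupil-corneal reflection vector to screen coordinates."""
    vx, vy = pupil_center[0] - glint[0], pupil_center[1] - glint[1]
    return np.array([1, vx, vy, vx*vy, vx**2, vy**2]) @ coeffs

# Synthetic calibration with a known linear ground truth
# (x = 100*vx + 960, y = 120*vy + 540 on a 1920x1080 screen).
cal_vectors = [(-2, -1), (-1, 2), (0, 0), (1, 1),
               (2, -1), (3, 2), (-3, 1), (1, -2)]
cal_points = [(100*vx + 960, 120*vy + 540) for vx, vy in cal_vectors]
coeffs = fit_gaze_mapping(cal_vectors, cal_points)

x, y = estimate_gaze(coeffs, pupil_center=(12, 8), glint=(11, 8))
# the vector (1, 0) should map to approximately (1060, 540)
```

Once the fitted gaze coordinates are available, deciding whether the focus lies inside the video playing interface reduces to a rectangle-containment test.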
Therefore, whether the watching user watches the target video or not can be conveniently and accurately judged.
Further optionally, in this embodiment of the application, the target viewing user of the target video is determined when the target video starts playing, and when users other than the target viewing user are viewing the target video, the electronic device is controlled to lock its screen, so that the target video is not seen by other users while the target viewing user is away.
Illustratively, the step 101a may include the following step 101a1:
step 101a1: And under the condition of playing the target video, starting the front camera, and acquiring the face image information of the target watching user of the target video.
The face image information of the target viewing user is the face image information of the first face acquired by the front camera during playback of the target video. That is, the target viewing user is the user who is viewing the target video when the target video starts playing.
For example, after the video playing device acquires the face image information of the target viewing user of the target video, the face image information may be stored in the face database.
Therefore, the video playing device can detect whether other users except the target watching user watch the target video or not in the video playing process.
Further optionally, in this embodiment, with reference to step 101a1, after it is determined in step 102 that the gaze focus has not been focused on the video playing interface within the predetermined time period, the video playing method provided in this embodiment further includes step B1:
step B1: and if other watching users except the target watching user watch the target video, controlling the first electronic equipment to lock the screen.
For example, the face image information of the target viewing user is usually stored in the face database as preset face image information. During playback of the target video, after acquiring the face image information of each captured face image (it should be noted that a face image may contain the faces of one or more viewing users), the video playing apparatus matches it against the face image information stored in the face database, and controls the first electronic device to lock the screen when a predetermined condition is met. The predetermined condition is that none of the face information contained in the acquired face image information matches the face image information in the face database.
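The predetermined lock condition can be sketched as a simple predicate over face matches. The embedding representation, the cosine matcher, and the threshold below are assumptions for illustration; the patent does not specify a particular face-matching algorithm.

```python
# Sketch of the predetermined screen-lock condition: lock only when
# faces are present in the frame and none of them matches any face
# stored in the face database. Embeddings and threshold are assumed.
import math

MATCH_THRESHOLD = 0.8  # assumed cosine-similarity cutoff

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def should_lock(frame_faces, face_database):
    """frame_faces: embeddings of all faces in the captured image."""
    if not frame_faces:
        return False  # nobody is watching: the target user merely left
    return all(
        cosine(face, stored) < MATCH_THRESHOLD
        for face in frame_faces
        for stored in face_database
    )

target_user = [0.9, 0.1, 0.4]   # stored at playback start
stranger = [0.1, 0.9, -0.3]
db = [target_user]
assert not should_lock([target_user], db)            # target still watching
assert should_lock([stranger], db)                   # only a stranger: lock
assert not should_lock([target_user, stranger], db)  # target present: no lock
```

The empty-frame case returning `False` matches the method's behavior: when the target user simply leaves, the device records scenario information rather than locking.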
Therefore, when the face image information collected by the video playing device meets the preset condition, the fact that the target watching user leaves the video watching area of the target video and other watching users watch the target video is indicated, and at the moment, in order to avoid privacy disclosure of the user, the electronic equipment can lock the screen.
Optionally, in this embodiment of the application, when at least two electronic devices play the target video synchronously, the video playing apparatus may detect the gaze focus of the watching user and record the target scenario information when the watching user leaves or takes a short break, so that the watching user does not miss important scenario information. Illustratively, synchronous playing of the target video by the at least two electronic devices (i.e., the multi-user mode) may be implemented in at least one of the following ways: the at least two electronic devices share a screen, or the at least two electronic devices access the same local area network and are all controlled by the same central server.
For example, in the step 101, in the case of playing the target video, determining the line-of-sight focus of the viewing user of the target video may include the following step A1 or step A2:
step A1: and under the condition that the screen of at least two electronic devices is shared and the target video is played in a screen sharing picture, detecting the sight focus of a watching user of the target video of the first electronic device.
Step A2: the method comprises the steps that when at least two electronic devices are connected into a local area network, and a central server in the local area network controls the at least two electronic devices to play a target video, a sight focus of a watching user of the target video of a first electronic device is detected.
The first electronic device is one of the at least two electronic devices.
Illustratively, in the case of screen sharing between at least two electronic devices, the sharing-source electronic device compresses and encodes the recorded screen images and audio into a video stream and transmits it to the sharing-sink electronic device, which receives the video stream and plays it on its screen.
Optionally, in this embodiment of the application, when it is detected that the watching user returns to the state of normally watching the target video again, the video playing apparatus stops recording the screen or stops recording the scenario information, so that the user continues to watch the target video.
For example, after the target operation is executed in step 102, the video playing method provided in the embodiment of the present application further includes step 102a1:
step 102a1: And if the sight focus is determined to be refocused in the video playing interface, stopping the target operation.
Illustratively, when it is detected that the watching user has returned to normally watching the target video, the video playing device continues to play the target video, or displays a prompt message on the video playing interface to remind the watching user to view the scenario information of the missed key scenario.
For example, after the target operation is executed in step 102, the video playing method provided in the embodiment of the present application further includes step 102a2:
step 102a2: And if the sight focus is determined to be refocused in the video playing interface, displaying prompt information. The prompt information is used for prompting the watching user to check the target plot information.
Illustratively, the prompt message includes any one of the following items: a prompt window and a control.
Illustratively, the viewing user triggers the video playing device to display the target scenario information of the target video through the input (e.g., touch input or other feasible input) of the prompt information.
Illustratively, the display mode of the target scenario information of the target video includes any one of the following items: and displaying on a target interface (such as a video playing interface of a target video or a new interface), and jumping to a display interface of a key scenario directory (target scenario information is at least one piece of key scenario information in the key scenario directory). For example, when a key scenario picture of the target video is acquired, the video playing apparatus automatically retrieves a storage directory of the key scenario picture set, and displays the key scenario picture in the directory.
Illustratively, when the video playing apparatus detects that the watching user returns to the normal watching state, the target scenario information (e.g. important scenario summary) recorded by the video playing apparatus is retrieved, and the target scenario information is displayed on the target interface, as shown in fig. 8. And when the electronic equipment detects that the watching user finishes checking the target scenario information (for example, when the display duration of the target scenario information exceeds the preset display duration, or detects the input operation of the user on the target scenario information), deleting the target scenario information.
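The prompt-then-delete flow described above can be sketched as follows. The class, method names, and the display-duration constant are illustrative assumptions, not the actual device implementation:

```python
# Sketch of the prompt/cleanup flow: when the user refocuses, show a
# prompt pointing at the recorded scenario information; once its
# display duration exceeds the preset limit (or the user has viewed
# it), delete it to protect privacy and reclaim storage.

MAX_DISPLAY_SECONDS = 10.0  # assumed preset display duration

class ScenarioStore:
    def __init__(self):
        self.info = None      # recorded target scenario information
        self.shown_at = None  # time the prompt/info was displayed

    def record(self, info):
        self.info = info

    def on_refocus(self, now):
        """Called when the gaze focus returns to the playing interface."""
        if self.info is not None:
            self.shown_at = now
            return f"Prompt: tap to view missed scenario ({self.info})"
        return None

    def tick(self, now):
        # delete once the display duration exceeds the preset limit
        if self.shown_at is not None and now - self.shown_at > MAX_DISPLAY_SECONDS:
            self.info, self.shown_at = None, None

store = ScenarioStore()
store.record("Xiao Li and Xiao Wen greeted the teacher")
msg = store.on_refocus(now=100.0)
store.tick(now=111.0)  # past the preset display duration: info deleted
```

Deleting on a user input event would follow the same path as the timeout branch in `tick`.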
Therefore, privacy disclosure of the user is avoided, and the storage space of the electronic device is saved.
It should be noted that, in the video playing method provided in the embodiment of the present application, the execution subject may be a video playing apparatus, or a control module in the video playing apparatus for executing the video playing method. In the embodiment of the present application, a video playing apparatus executing the video playing method is taken as an example to describe the video playing apparatus provided in the embodiment of the present application.
An embodiment of the present application provides a video playing apparatus, as shown in fig. 9, the video playing apparatus 600 includes a determining module 601 and an executing module 602, where the determining module 601 is configured to determine whether a line of sight focus of a viewer of a target video is focused on a video playing interface of the target video when the target video is played; the executing module 602 is configured to execute a target operation if the determining module 601 detects that the gaze focus is not focused in the video playing interface within a predetermined time period; wherein the target operation comprises at least one of: starting to record a screen at the first time, and acquiring key plot information of a target video played after the first time, wherein the target operation is used for recording the target plot information of the target video played after the first time; the first time is a starting time when the sight focus leaves the video playing interface.
Optionally, in this embodiment of the application, the determining module 601 is specifically configured to determine a line-of-sight focus of a user watching the target video of the first electronic device when the target video is played in a screen sharing picture shared by at least two electronic devices; the first electronic device is one of the at least two electronic devices.
Optionally, in an embodiment of the present application, the target operation includes at least one of: and starting to record the screen at the first time, and acquiring the key plot information of the target video played after the first time.
Optionally, in this embodiment of the application, as shown in fig. 10, the video playing apparatus 600 further includes: a display module 603; the display module 603 is configured to display a prompt message if the determining module 601 determines that the gaze focus is refocused in the video playing interface; the prompt information is used for prompting the watching user to check the target plot information.
Optionally, in this embodiment of the application, as shown in fig. 10, the video playing apparatus 600 further includes: an acquisition module 604; the executing module 602 is further configured to start a front-facing camera in a case that a target video is played, and the acquiring module 604 is configured to acquire face image information of a user watching the target video after the executing module 602 starts the front-facing camera; the determining module 601 is specifically configured to detect whether a sight focus of a watching user is focused on a video playing interface of the target video according to the face image information acquired by the acquiring module 604.
Optionally, in this embodiment of the application, the collecting module 604 is specifically configured to collect face image information of a target viewing user of the target video; the face image information of the target viewing user is as follows: the front camera collects the face image information of a first face when the target video is played; the executing module 602 is further configured to control the first electronic device to lock the screen if the determining module 601 determines that other watching users except the target watching user watch the target video.
In the video playing apparatus provided in the embodiment of the present application, while a target video is playing, the apparatus detects whether the gaze focus of the viewing user is focused on the video playing interface of the target video, thereby determining whether the user has temporarily stopped watching (for example, has briefly left or is taking a short break). When it is detected that the gaze focus has not been focused on the video playing interface within the predetermined time period, the target operation is performed to record the target scenario information of the target video played after the first time (i.e., the moment at which the user's gaze focus left the video playing interface). In this way, the electronic device can automatically record the scenario information missed while the user is briefly away, thereby improving the user's viewing efficiency.
It should be noted that, as shown in fig. 10, modules that are necessarily included in the video playback apparatus 600 are indicated by solid line boxes, such as the determination module 601; modules that may or may not be included in the video playback device 600 are illustrated with dashed boxes, such as the display module 603.
The video playing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The video playing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The video playing device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 8, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 11, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 901, a memory 902, and a program or an instruction stored in the memory 902 and executable on the processor 901, where the program or the instruction is executed by the processor 901 to implement each process of the video playing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 110 is configured to determine whether a gaze focus of a user viewing a target video is focused in a video playing interface of the target video when the target video is played; the processor 110 is further configured to execute a target operation if it is determined that the gaze focus is not focused in the video playing interface within a predetermined time period; the target operation is used for recording target plot information of the target video played after the first time; the first time is a starting time when the sight focus leaves the video playing interface.
Optionally, in this embodiment of the application, the processor 110 is specifically configured to determine a line-of-sight focus of a watching user of the target video of the first electronic device when the target video is played in a screen sharing picture and a screen is shared between at least two electronic devices; the first electronic device is one of the at least two electronic devices.
Optionally, in an embodiment of the present application, the target operation includes at least one of: and starting to record the screen at the first time, and acquiring the key plot information of the target video played after the first time.
Optionally, in this embodiment of the application, the display module 106 is configured to display a prompt message if the processor 110 determines that the gaze focus is refocused in the video playing interface; the prompt information is used for prompting the watching user to check the target plot information.
Optionally, in this embodiment of the application, the processor 110 is further configured to start a front-facing camera in a case that a target video is played, and the input unit 104 is configured to collect face image information of a user watching the target video after the processor 110 starts the front-facing camera; the processor 110 is specifically configured to determine whether a focal point of a line of sight of a viewer is focused on a video playing interface of the target video according to the facial image information acquired by the input unit 104.
Optionally, in this embodiment of the application, the input unit 104 is specifically configured to acquire face image information of a target viewing user of the target video; the face image information of the target viewing user is as follows: the front camera collects the face image information of a first face when the target video is played; the processor 110 is further configured to control the first electronic device to lock the screen if it is determined that other watching users except the target watching user watch the target video.
In the electronic device provided by the embodiment of the application, while the target video is playing, the device detects whether the gaze focus of the viewing user is focused on the video playing interface of the target video, thereby determining whether the user has temporarily stopped watching (for example, has briefly left or is taking a short break). When it is detected that the gaze focus has not been focused on the video playing interface within the predetermined time period, the target operation is performed to record the target scenario information of the target video played after the first time (i.e., the moment at which the user's gaze focus left the video playing interface). In this way, the electronic device can automatically record the scenario information missed while the user is briefly away, improving the user's viewing efficiency.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, it implements each process of the above video playing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video playing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A video playback method, the method comprising:
under the condition of playing a target video, determining whether a sight focus of a watching user of the target video is focused in a video playing interface of the target video;
if the sight focus is not focused in the video playing interface within a preset time length, executing target operation;
wherein the target operation comprises at least one of: starting to record a screen at a first time, and acquiring key plot information of the target video played after the first time, wherein the target operation is used for recording the target plot information of the target video played after the first time; the first time is a starting time when the sight line focus leaves the video playing interface.
2. The method of claim 1, wherein determining a gaze focus of a viewing user of a target video while the target video is being played comprises:
under the condition that screens of at least two electronic devices are shared and the target video is played in a screen sharing picture, acquiring a sight focus of a watching user of the target video of a first electronic device;
wherein the first electronic device is one of the at least two electronic devices.
3. The method of claim 1, wherein after the target operation is performed, the method further comprises:
if the sight focus is determined to be refocused in the video playing interface, displaying prompt information; the prompt information is used for prompting the watching user to view the target plot information.
4. The method according to any one of claims 1 to 3, wherein the determining a line-of-sight focus of a viewing user of a target video in the case of playing the target video comprises:
under the condition of playing a target video, starting a front camera, and acquiring face image information of a watching user of the target video;
and determining whether the sight focus of the watching user is focused in the video playing interface of the target video or not according to the face image information.
5. The method of claim 4, wherein the acquiring facial image information of a viewing user of the target video comprises:
collecting face image information of a target watching user of the target video; the face image information of the target watching user is as follows: the face image information of a first face is acquired by the front camera when the target video is played;
if the line-of-sight focus is not focused in the video playing interface within a predetermined time period, the method further comprises:
and if other watching users except the target watching user watch the target video, controlling the first electronic equipment to lock the screen.
6. A video playback apparatus, comprising: a determination module and an execution module, wherein:
the determining module is used for determining whether a sight focus of a watching user of the target video is focused in a video playing interface of the target video under the condition of playing the target video;
the execution module is configured to execute a target operation if the determination module determines that the gaze focus is not focused in the video playing interface within a predetermined time period;
wherein the target operation comprises at least one of: starting to record a screen at a first time, and acquiring key plot information of the target video played after the first time, wherein the target operation is used for recording the target plot information of the target video played after the first time; the first time is a starting time when the sight line focus leaves the video playing interface.
7. The apparatus of claim 6, further comprising: an acquisition module;
the acquisition module is specifically used for acquiring a sight focus of a watching user of the target video of the first electronic device under the condition that the screen is shared between at least two electronic devices and the target video is played in a screen sharing picture;
wherein the first electronic device is one of the at least two electronic devices.
8. The apparatus of claim 6, further comprising: a display module;
the display module is used for displaying prompt information if the determining module determines that the sight focus is refocused in the video playing interface; the prompt information is used for prompting the watching user to view the target plot information.
9. The apparatus of any one of claims 6 to 8, further comprising: an acquisition module;
the execution module is further configured to start a front camera while the target video is playing;
the acquisition module is configured to acquire face image information of the watching user of the target video after the execution module starts the front camera;
the determining module is configured to determine, according to the face image information acquired by the acquisition module, whether the gaze focus of the watching user is focused in the video playing interface of the target video.
10. The apparatus of claim 9,
the acquisition module is configured to acquire face image information of a target watching user of the target video, wherein the face image information of the target watching user is face image information of a first face captured by the front camera when the target video is played;
the execution module is further configured to control the first electronic device to lock its screen if the determining module determines that a watching user other than the target watching user is watching the target video.
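The behaviour described in claims 6 and 8 — start recording from the moment the gaze leaves the interface once the absence exceeds a predetermined period, then prompt the viewer on refocus — can be sketched as a small state machine. This is an illustrative Python sketch; the class name, the 3-second threshold, and the event strings are assumptions, not from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GazeMonitor:
    threshold: float = 3.0            # predetermined time period (assumed value)
    left_at: Optional[float] = None   # the "first time": moment the gaze left the interface
    recording: bool = False
    events: List[str] = field(default_factory=list)

    def update(self, t: float, focused: bool) -> None:
        """Feed one gaze sample: timestamp t and whether the gaze focus
        is inside the video playing interface."""
        if not focused:
            if self.left_at is None:
                self.left_at = t      # remember when the gaze left
            elif not self.recording and t - self.left_at >= self.threshold:
                # Target operation: record the plot played since left_at.
                self.recording = True
                self.events.append(f"record_from:{self.left_at}")
        else:
            if self.recording:
                # On refocus, prompt the viewer to review the recorded plot.
                self.events.append("prompt_review")
            self.left_at = None
            self.recording = False

monitor = GazeMonitor(threshold=3.0)
for t, focused in [(0, True), (1, False), (2, False), (4, False), (5, True)]:
    monitor.update(t, focused)
print(monitor.events)  # → ['record_from:1', 'prompt_review']
```

Note that recording is anchored at `left_at` (the moment the gaze left), not at the moment the threshold is crossed, matching the claim's definition of the first time.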
CN202011174167.1A 2020-10-28 2020-10-28 Video playing method and device Active CN112399239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011174167.1A CN112399239B (en) 2020-10-28 2020-10-28 Video playing method and device

Publications (2)

Publication Number Publication Date
CN112399239A true CN112399239A (en) 2021-02-23
CN112399239B CN112399239B (en) 2023-03-10

Family

ID=74598318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011174167.1A Active CN112399239B (en) 2020-10-28 2020-10-28 Video playing method and device

Country Status (1)

Country Link
CN (1) CN112399239B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106982391A (en) * 2016-01-19 2017-07-25 中国移动通信集团公司 A kind of information-pushing method, terminal, server and system
CN107704086A (en) * 2017-10-20 2018-02-16 维沃移动通信有限公司 A kind of mobile terminal operating method and mobile terminal
CN107911745A (en) * 2017-11-17 2018-04-13 武汉康慧然信息技术咨询有限公司 TV replay control method
CN109195015A (en) * 2018-08-21 2019-01-11 北京奇艺世纪科技有限公司 A kind of video playing control method and device
CN110460905A (en) * 2019-09-03 2019-11-15 腾讯科技(深圳)有限公司 The automatic continuous playing method of video, device and storage medium based on more equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342223A (en) * 2021-05-28 2021-09-03 维沃移动通信(杭州)有限公司 Unread message identifier display method and device and electronic equipment
CN113709552A (en) * 2021-08-31 2021-11-26 维沃移动通信有限公司 Video generation method and device and electronic equipment
CN113573151A (en) * 2021-09-23 2021-10-29 深圳佳力拓科技有限公司 Digital television playing method and device based on focusing degree value
CN113573151B (en) * 2021-09-23 2021-11-23 深圳佳力拓科技有限公司 Digital television playing method and device based on focusing degree value

Similar Documents

Publication Publication Date Title
CN109637518B (en) Virtual anchor implementation method and device
US11503377B2 (en) Method and electronic device for processing data
CN112399239B (en) Video playing method and device
CN109976506B (en) Awakening method of electronic equipment, storage medium and robot
CN104796781B (en) Video clip extracting method and device
US11809479B2 (en) Content push method and apparatus, and device
US20220147741A1 (en) Video cover determining method and device, and storage medium
US20150293359A1 (en) Method and apparatus for prompting based on smart glasses
CN113194255A (en) Shooting method and device and electronic equipment
CN112333382B (en) Shooting method and device and electronic equipment
CN111596760A (en) Operation control method and device, electronic equipment and readable storage medium
CN112532885B (en) Anti-shake method and device and electronic equipment
CN113556603B (en) Method and device for adjusting video playing effect and electronic equipment
CN108616775A (en) The method, apparatus of intelligence sectional drawing, storage medium and intelligent terminal when video playing
CN114245221A (en) Interaction method and device based on live broadcast room, electronic equipment and storage medium
CN112511743B (en) Video shooting method and device
CN114501144A (en) Image-based television control method, device, equipment and storage medium
CN114257824A (en) Live broadcast display method and device, storage medium and computer equipment
CN111698532B (en) Bullet screen information processing method and device
CN115499589B (en) Shooting method, shooting device, electronic equipment and medium
CN113891002B (en) Shooting method and device
CN109151599B (en) Video processing method and device
CN110636377A (en) Video processing method, device, storage medium, terminal and server
CN111770279B (en) Shooting method and electronic equipment
CN112887782A (en) Image output method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant