CN111093113A - Video content output method and electronic equipment - Google Patents

Video content output method and electronic equipment

Info

Publication number
CN111093113A
Authority
CN
China
Prior art keywords
user
video
electronic equipment
lcd screen
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910321830.7A
Other languages
Chinese (zh)
Inventor
张卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201910321830.7A
Publication of CN111093113A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/52 Network services specially adapted for the location of the user terminal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331 Caching operations, e.g. of an advertisement for later insertion during playback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4882 Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8126 Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133 Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Social Psychology (AREA)
  • Databases & Information Systems (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiments of the invention relate to the field of educational technology and disclose a video content output method and electronic equipment. The method comprises the following steps: when a video playing instruction input by a user is received, playing a video through an LCD screen; controlling a camera module of the electronic equipment to shoot so as to obtain a facial image of the user; when facial expression features in the facial image match preset query expression features, acquiring the content currently being played on the LCD screen; searching for a learning text matching the currently played content; and outputting the learning text through an ink screen. By implementing the embodiments of the invention, video content that the user has questions about can be explained proactively and in time, improving the user's learning efficiency. In addition, because the learning text is output through the ink screen, less textual information is output on the LCD screen, which reduces the impact of the electronic equipment on the user's eyesight.

Description

Video content output method and electronic equipment
Technical Field
The invention relates to the technical field of education, in particular to a video content output method and electronic equipment.
Background
With the rapid development of electronic devices such as smart phones and tablet computers, more and more users choose to use electronic devices for learning. At present, most electronic devices have a video playing function, and users like to play videos on them as a learning aid. In practice, however, users often have questions about the video content while watching, and the electronic device cannot proactively explain that content in time, which reduces the users' learning efficiency.
Disclosure of Invention
The embodiment of the invention discloses a video content output method and electronic equipment, which can proactively and promptly explain video content that a user has questions about, thereby improving the user's learning efficiency.
The first aspect of the embodiment of the present invention discloses a method for outputting video content, which is applied to an electronic device, wherein the electronic device is provided with a display screen, the display screen comprises an LCD screen and an ink screen, and the method comprises:
when a video playing instruction input by a user is received, playing a video through the LCD screen;
controlling a camera module of the electronic equipment to shoot so as to obtain a facial image of the user;
when the facial expression features in the facial image are matched with preset query expression features, acquiring the current playing content of the LCD screen;
searching for a learning text matched with the currently played content;
and outputting the learning text through the ink screen.
As an alternative implementation manner, in the first aspect of the embodiment of the present invention, the acquiring currently played content of the LCD screen when the facial expression feature in the facial image matches a preset query expression feature includes:
acquiring, from the facial image, the current number of wrinkles on the user's forehead and the current exposed area of the user's eyes, wherein the current exposed area is the area of the user's eyes not covered by the eyelids;
and when the current number of wrinkles is larger than a preset number and the current exposed area of the user's eyes is smaller than the normal exposed area of the user's eyes, acquiring the content currently being played on the LCD screen.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the outputting the learning text through the ink screen, the method further includes:
performing screenshot operation on the LCD screen to obtain a screenshot comprising the currently played content;
combining the screenshot and the learning text to obtain review data, and generating a number corresponding to the review data according to the screenshot operation;
storing the review data and the number corresponding to the review data to a local cache of the electronic equipment;
when the video playing is finished, sorting review data in the local cache according to the numbers to obtain review data combinations;
and sending the review data combination to the printing equipment bound with the electronic equipment so that the printing equipment prints the review data combination into paper review data.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the playing the video through the LCD screen, the method further includes:
acquiring target playing content of the video annotated by internet users and a key annotation corresponding to the target playing content;
and when the content playing on the LCD screen is the target playing content, outputting the key annotation through the ink screen to prompt the user to pay close attention to it.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the playing the video through the LCD screen, the method further includes:
judging whether the position of the electronic equipment is a bus station or not;
if so, sending a bus position request to a bus service platform so that the bus service platform feeds back to the electronic equipment the current position of the target bus the user is waiting for; the bus position request carries an identifier of that target bus;
calculating a target time length required by the target bus to travel from the current position to a bus station corresponding to the position where the electronic equipment is located;
judging whether the target time length is less than a preset time length or not;
and if the target time length is less than the preset time length, pausing the video played on the LCD screen and outputting prompt information through the ink screen to indicate that the target bus is about to arrive.
A second aspect of an embodiment of the present invention discloses an electronic device, which is provided with a display screen, wherein the display screen includes an LCD screen and an ink screen, and the electronic device includes:
the playing unit is used for playing the video through the LCD screen when receiving a video playing instruction input by a user;
the shooting unit is used for controlling a camera module of the electronic equipment to shoot so as to obtain a facial image of the user;
the first acquisition unit is used for acquiring the current playing content of the LCD screen when the facial expression features in the facial image are matched with preset query expression features;
the searching unit is used for searching the learning text matched with the current playing content;
and the output unit is used for outputting the learning text through the ink screen.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the first obtaining unit includes:
the first acquisition subunit is used for acquiring, from the facial image, the current number of wrinkles on the user's forehead and the current exposed area of the user's eyes, wherein the current exposed area is the area of the user's eyes not covered by the eyelids;
and the second acquisition subunit is used for acquiring the content currently being played on the LCD screen when the current number of wrinkles is larger than the preset number and the current exposed area of the user's eyes is smaller than the normal exposed area of the user's eyes.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the screenshot unit is used for performing screenshot operation on the LCD screen after the learning text is output by the output unit through the ink screen so as to obtain a screenshot comprising the current playing content;
the generating unit is used for combining the screenshot and the learning text to obtain review data and generating a number corresponding to the review data according to the screenshot operation;
the storage unit is used for storing the review data and the number corresponding to the review data to a local cache of the electronic equipment;
the sorting unit is used for sorting the review data in the local cache according to the serial number to obtain a review data combination when the video playing is finished;
and the first sending unit is used for sending the review data combination to the printing equipment bound with the electronic equipment so that the printing equipment prints the review data combination into paper review data.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the second acquisition unit is used for acquiring target playing content labeled by an internet user for the video and a key annotation corresponding to the target playing content after the playing unit plays the video through the LCD screen;
the output unit is further configured to output the key annotation through the ink screen, so as to prompt the user to pay close attention to it, when the content playing on the LCD screen is the target playing content.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the judging unit is used for judging whether the position of the electronic equipment is a bus station or not after the playing unit plays the video through the LCD screen;
the second sending unit is used for sending a bus position request to a bus service platform after the judging unit judges that the position of the electronic equipment is a bus station, so that the bus service platform feeds back to the electronic equipment the current position of the target bus the user is waiting for; the bus position request carries an identifier of that target bus;
the calculating unit is used for calculating the target time length required by the target bus to travel from the current position to the bus station corresponding to the position where the electronic equipment is located;
the judging unit is further used for judging whether the target duration is less than a preset duration;
and the output unit is also used for pausing the video played by the LCD screen and outputting prompt information through the ink screen to prompt that the target bus is about to arrive when the judgment unit judges that the target duration is less than the preset duration.
A third aspect of an embodiment of the present invention discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the video content output method disclosed by the first aspect of the embodiment of the invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the method for outputting video content disclosed in the first aspect of the embodiments of the present invention.
A fifth aspect of embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the present embodiment discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product is configured to, when running on a computer, cause the computer to perform part or all of the steps of any one of the methods in the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the electronic equipment is provided with a display screen comprising an LCD screen and an ink screen. When a video playing instruction input by a user is received, the electronic equipment plays a video through the LCD screen and controls the camera module to shoot so as to obtain a facial image of the user. When the facial expression features in the facial image match the preset query expression features, the electronic equipment acquires the content currently being played on the LCD screen, searches for a learning text matching that content, and outputs the learning text through the ink screen. In other words, by analyzing the user's facial expression, a match with the preset query expression features indicates that the user has a question about the content currently playing on the LCD screen; the electronic equipment then searches for a matching learning text and outputs it through the ink screen, so the video content the user has questions about can be explained proactively and in time, improving the user's learning efficiency. In addition, because the learning text is output through the ink screen, less textual information is output on the LCD screen, which reduces the impact of the electronic equipment on the user's eyesight.
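The flow summarized above can be illustrated with a short control loop. The sketch below is a minimal, hypothetical Python illustration of the disclosed sequence, not the patented implementation; the Screens container, the matches_query_expression stub and the sample corpus are stand-ins introduced here for illustration only.

```python
# Minimal sketch of the disclosed flow; hypothetical interfaces, not the patented implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Screens:
    lcd_playing: str = ""   # content currently shown on the LCD screen
    ink_text: str = ""      # learning text shown on the ink screen

def matches_query_expression(face_image: bytes) -> bool:
    """Stand-in for the expression matcher described in the disclosure."""
    return b"frown" in face_image  # illustrative condition only

def find_learning_text(playing_content: str, corpus: dict) -> Optional[str]:
    """Look up a learning text that matches the currently played content."""
    return corpus.get(playing_content)

def on_new_frame(face_image: bytes, screens: Screens, corpus: dict) -> None:
    # As in the disclosure: detect a query expression, read the current LCD content,
    # search for a matching learning text, and output it on the ink screen.
    if matches_query_expression(face_image):
        text = find_learning_text(screens.lcd_playing, corpus)
        if text is not None:
            screens.ink_text = text

if __name__ == "__main__":
    screens = Screens(lcd_playing="photosynthesis, part 1")
    corpus = {"photosynthesis, part 1": "Photosynthesis converts light energy into chemical energy."}
    on_new_frame(b"frown", screens, corpus)
    print(screens.ink_text)
```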
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required in the embodiments will be briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a method for outputting video content according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another video content output method according to the embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure;
FIG. 4 is a schematic structural diagram of another electronic device disclosed in the embodiments of the present invention;
FIG. 5 is a schematic structural diagram of another electronic device disclosed in the embodiments of the present invention;
fig. 6 is an exemplary diagram of an electronic device disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "first" and "second" and the like in the description and the claims of the present invention are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the present invention, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "center", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate an orientation or positional relationship based on the orientation or positional relationship shown in the drawings. These terms are used primarily to better describe the invention and its embodiments and are not intended to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meanings of these terms in the present invention can be understood by those skilled in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," and "connected" are to be construed broadly. For example, a connection may be a fixed connection, a removable connection, or a unitary construction; it may be a mechanical connection or an electrical connection; it may be direct, indirect through intervening media, or internal communication between two devices, elements or components. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
The embodiment of the invention discloses a video content output method and electronic equipment, which can actively explain video content in question of a user in time and improve the learning efficiency of the user. The following detailed description is made with reference to the accompanying drawings.
First, the electronic device according to the present invention is briefly introduced. Please refer to fig. 6, which is an exemplary diagram of an electronic device disclosed in an embodiment of the present invention. The electronic device comprises a device body 600, an ink screen 601, an LCD screen 602 and a protective shell 603. The LCD screen 602 is connected with the device body 600 through the protective shell 603 and can be used for playing videos; the device body 600 is provided with the ink screen 601, which can be used for outputting learning text related to the video (such as a text paraphrase of the video content) or key annotations.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a method for outputting video content according to an embodiment of the present invention. The electronic equipment is provided with a display screen, and the display screen comprises an LCD screen and an ink screen. As shown in fig. 1, the method may include the following steps.
101. When a video playing instruction input by a user is received, the electronic equipment plays a video through the LCD screen.
In the embodiment of the present invention, the electronic device may be a family education machine, a learning tablet, or the like, and the embodiment of the present invention is not limited thereto. The electronic device may support network technologies including, but not limited to: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), CDMA2000, IMT Single Carrier, Enhanced Data Rates for GSM Evolution (EDGE), Long-Term Evolution (LTE), LTE-Advanced, Time-Division LTE (TD-LTE), High Performance Radio Local Area Network (HiperLAN), High Performance Radio Wide Area Network (HiperWAN), Local Multipoint Distribution Service (LMDS), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, ZigBee, Universal Mobile Telecommunications System (UMTS), UMTS Time-Division Duplexing (UMTS-TDD), Evolved High Speed Packet Access (HSPA+), Time-Division Synchronous Code Division Multiple Access (TD-SCDMA), Evolution-Data Optimized (EV-DO), Digital Enhanced Cordless Telecommunications (DECT), and others.
In the embodiment of the invention, the display screen arranged on the electronic equipment may comprise an LCD screen and an ink screen. Because the LCD screen has strong interaction capability, the electronic equipment plays videos through the LCD screen; because the ink screen simulates the appearance of a printed page through electronic paper display technology and thus helps protect the user's eyesight, the electronic equipment outputs the learning text related to the currently played video content through the ink screen.
In the embodiment of the invention, it can be understood that watching a video for a long time can cause eye fatigue, and continuing to watch in that state not only reduces the user's learning efficiency but also harms the user's eyesight. Therefore, as an alternative implementation, the electronic device may perform the following operations:
controlling a camera module of the electronic equipment to shoot so as to obtain a user image;
judging whether the user is in an eye fatigue state or not according to the user image;
if yes, the LCD screen and the ink screen are closed, and sleep-aid music is played to help the user to enter the sleep state.
By implementing this optional implementation, whether the user is in an eye fatigue state is judged while the user watches the video; if so, the LCD screen and the ink screen are turned off and placed in a standby state, and sleep-aid music is played to help the user fall asleep. This reduces the power consumption of the electronic equipment and helps the user balance study and rest.
Specifically, as an optional implementation manner, the manner in which the electronic device determines whether the user is in the eye fatigue state according to the user image may be:
judging whether the expression characteristics of the user in the user image are matched with preset yawning expression characteristics or not; if yes, determining that the user is in an eye fatigue state; if not, determining that the user is not in the eyestrain state.
By implementing the optional implementation mode, a method for judging whether the user is in the eyestrain state is provided, and whether the user is in the eyestrain state can be judged according to whether the user yawns.
Specifically, as another optional implementation manner, the manner in which the electronic device determines whether the user is in the eye fatigue state according to the user image may be:
recognizing the head action characteristics of a user by combining multiple frames of continuous user images shot by a camera module of the electronic equipment;
judging whether the head action characteristics are matched with preset nodding characteristics or not;
if the head action characteristics are matched with the preset nodding characteristics, further judging whether the eyes of the user are in an eye closing state; if yes, determining that the user is in an eye fatigue state; if not, it is determined that the user is not in an eyestrain state.
And if the head action characteristic does not match with the preset nodding characteristic, determining that the user is not in the eyestrain state.
This alternative implementation provides another method for judging whether the user is in an eye fatigue state: a user in that state usually dozes off, so if the user image shows the user nodding with eyes closed, the user is dozing off at that moment and is therefore judged to be in an eye fatigue state.
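As a rough sketch of the two checks just described (a yawning expression, or a nodding head with closed eyes), the following hypothetical Python fragment assumes that an upstream face-analysis step already labels each captured frame; the Frame structure and its label names are assumptions, not part of the disclosure.

```python
# Sketch of the two fatigue checks described above: a yawning expression, or a nodding
# head motion with the eyes closed. Frame labels are assumed to come from an upstream
# face-analysis step; the label names are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    expression: str      # e.g. "yawn", "neutral"
    head_motion: str     # e.g. "nod", "still"
    eyes_closed: bool

def is_eye_fatigued(frames: List[Frame]) -> bool:
    latest = frames[-1]
    # Check 1: a yawning expression in the latest frame.
    if latest.expression == "yawn":
        return True
    # Check 2: a nodding head motion across recent frames while the eyes are closed.
    nodding = any(f.head_motion == "nod" for f in frames)
    return nodding and latest.eyes_closed

if __name__ == "__main__":
    history = [Frame("neutral", "nod", False), Frame("neutral", "nod", True)]
    print(is_eye_fatigued(history))  # True: nodding with eyes closed
```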
In the embodiment of the invention, the light output by the LCD screen may contain high-energy short-wave blue light, which can cause irreversible damage to the user's retina and optic nerve and thus harm the user's eyesight. If the user's eyes are too close to the LCD screen while watching the video it plays, this harm is aggravated.
Therefore, as an alternative implementation, the LCD screen may be provided with a distance sensing module, and during the process of playing the video through the LCD screen, the electronic device may further perform the following steps:
monitoring a target distance between eyes of a user and an LCD screen in real time through a distance sensing module;
judging whether the target distance is smaller than a preset distance or not;
and if the target distance is less than the preset distance, turning off the LCD screen and outputting eye-protection information on the ink screen to remind the user to protect their eyes.
By implementing this optional implementation, the distance between the user's eyes and the LCD screen is monitored; when the eyes are too close to the screen, the LCD screen is turned off in time and the user is reminded to protect their eyes, which reduces the harm caused by watching videos at close range.
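A minimal sketch of this distance check is given below, assuming the distance sensing module delivers a reading in centimetres; the 30 cm threshold is an illustrative value, as the disclosure only speaks of a preset distance.

```python
# Sketch of the distance check: a distance-sensor reading is compared against a preset
# threshold, and the LCD is switched off with an eye-protection message when too close.
# The 30 cm threshold is an illustrative assumption.
from typing import Tuple

PRESET_DISTANCE_CM = 30.0

def check_viewing_distance(distance_cm: float, lcd_on: bool) -> Tuple[bool, str]:
    """Return the new LCD state and the message to show on the ink screen."""
    if distance_cm < PRESET_DISTANCE_CM:
        return False, "Please keep a safe distance from the screen to protect your eyes."
    return lcd_on, ""

if __name__ == "__main__":
    lcd_on, ink_message = check_viewing_distance(22.5, lcd_on=True)
    print(lcd_on, ink_message)
```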
102. The electronic equipment controls a camera module of the electronic equipment to shoot so as to obtain a facial image of a user.
In the embodiment of the invention, the electronic equipment can be provided with the camera module, and correspondingly, the electronic equipment can shoot the user through the camera module so as to obtain the facial image of the user.
103. And when the facial expression features in the facial image are matched with the preset query expression features, the electronic equipment acquires the current playing content of the LCD screen.
In the embodiment of the present invention, it can be understood that if, while watching the video, a user cannot follow or does not understand the currently played content, a question will arise, and the user's facial expression will change accordingly; for example, a query expression feature will appear when the user has a question.
104. The electronic device searches for a learning text matching the currently playing content.
In the embodiment of the present invention, the learning text is text data related to the currently played content of the video, such as a word definition, and the embodiment of the present invention is not limited thereto.
In the embodiment of the invention, when the user shows the query expression feature, the electronic device can search for the matching learning text according to the currently played content of the video, such as a paraphrase of that content.
105. The electronic device outputs the learning text through the ink screen.
In the embodiment of the invention, the ink screen can simulate the effect of a book through an electronic paper display technology, so that the effect of protecting the eyesight of a user is realized, and therefore, the electronic equipment can output the learning text related to the current playing content of the video through the ink screen.
As an alternative implementation, after the electronic device outputs 105 the learning text through the ink screen, the following steps may be further performed:
pausing a video played on an LCD screen and marking a target node where the currently played content is located;
and when a continuation instruction input by the user is detected, continuing playing the video from the target node.
By implementing this optional implementation, when the electronic equipment outputs the learning text through the ink screen, video playback is paused and the target node of the currently played content is marked, so that the user has sufficient time to read the learning text; after finishing it, the user can continue watching the video from the target node, which improves the user experience.
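The pause-and-resume behaviour can be sketched with a hypothetical player model as follows; the Player class and its fields are assumptions made for illustration.

```python
# Sketch of pausing the video and marking the node of the currently played content so
# playback can later resume from it; the Player model is a hypothetical illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Player:
    position_s: float = 0.0
    paused: bool = False
    marked_node_s: Optional[float] = None

    def pause_and_mark(self) -> None:
        self.paused = True
        self.marked_node_s = self.position_s   # target node of the currently played content

    def resume_from_mark(self) -> None:
        if self.marked_node_s is not None:
            self.position_s = self.marked_node_s
        self.paused = False

if __name__ == "__main__":
    player = Player(position_s=125.0)
    player.pause_and_mark()       # learning text is shown on the ink screen at this point
    player.resume_from_mark()     # the user inputs a continuation instruction
    print(player.position_s, player.paused)
```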
It can be seen that, by implementing the method described in fig. 1, while a video is played through the LCD screen the electronic device analyzes the user's facial expression; when the facial expression features match the preset query expression features, indicating that the user has a question about the content currently playing on the LCD screen, the electronic device searches for a learning text matching that content and outputs it through the ink screen, so the video content the user has questions about can be explained proactively and in time, improving the user's learning efficiency. In addition, because the learning text is output through the ink screen, less textual information is output on the LCD screen, which reduces the impact of the electronic equipment on the user's eyesight.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating another video content output method according to an embodiment of the present invention. The electronic equipment is provided with a display screen, and the display screen comprises an LCD screen and an ink screen. As shown in fig. 2, the method may include the following steps.
201. When a video playing instruction input by a user is received, the electronic equipment plays a video through the LCD screen.
In the embodiment of the present invention, it can be understood that, when watching a video, a user often does not know which contents in the video are important contents, so that some important contents are easily missed. Therefore, as an optional implementation manner, when a video playing instruction input by a user is received in step 201, after the electronic device plays a video through the LCD screen, the following steps may also be performed:
acquiring target playing content of an internet user for video annotation and a key annotation corresponding to the target playing content;
when the playing content on the LCD screen is the target playing content, the key annotation is output through the ink screen to prompt the user to watch the key annotation.
In the embodiment of the present invention, the key annotation is a mark that reminds the user to pay particular attention to the corresponding target playing content; it may be, for example, the name of a person or of a major event appearing in that content.
By implementing this optional implementation, key annotations made by internet users for the video are obtained, and when playback reaches the content corresponding to a key annotation, the annotation is output through the ink screen to prompt the user to pay close attention, which prevents the user from missing the key content of the video.
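A minimal sketch of this behaviour is shown below; the annotation table and its contents are illustrative data, and show_on_ink_screen is a hypothetical output callback.

```python
# Sketch of surfacing key annotations: when the content being played matches an annotated
# target, the annotation is routed to the ink screen. The table below is illustrative data.
from typing import Callable, Optional

ANNOTATIONS = {
    "Battle of Hastings": "Key point: 1066, the Norman conquest of England.",
}

def annotation_for(playing_content: str) -> Optional[str]:
    return ANNOTATIONS.get(playing_content)

def on_content_change(playing_content: str, show_on_ink_screen: Callable[[str], None]) -> None:
    note = annotation_for(playing_content)
    if note is not None:
        show_on_ink_screen(note)

if __name__ == "__main__":
    on_content_change("Battle of Hastings", print)  # print stands in for the ink screen
```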
It should be noted that, after step 201 is executed, step 202 to step 206 may be executed first, or step 207 to step 210 may be executed first, and the embodiment of the present invention is not limited thereto.
202. The electronic equipment judges whether the position of the electronic equipment is a bus station or not; if yes, executing step 203-step 206; otherwise, the flow is ended.
It can be understood that people often watch videos to pass the time while waiting for a bus, but sometimes they become so absorbed in the video that they miss the bus and can only wait for the next one. Therefore, in the embodiment of the invention, the electronic device can calculate the target time length required for the bus the user is waiting for to travel to the bus stop corresponding to the position of the electronic device; when the target time length is less than the preset time length, the target bus is about to arrive, and the video played on the LCD screen is paused to remind the user.
As an optional implementation manner, the manner in which the electronic device determines whether the location of the electronic device is a bus stop may be:
shooting the surrounding environment of the electronic equipment through a camera module to obtain an environment image;
identifying an environmental feature in an environmental image;
judging whether the environmental characteristics are matched with the characteristics of the bus station;
if so, determining that the position of the electronic equipment is a bus station; if not, the electronic equipment is determined not to be at the bus station.
By implementing the optional implementation manner, a method for judging whether the position of the electronic device is the bus station is provided, and whether the position of the electronic device is the bus station can be judged by identifying whether the environmental characteristics of the electronic device are matched with the characteristics of the bus station.
As another optional implementation, a positioning module may be built in the electronic device, and the manner that the electronic device determines whether the position of the electronic device is a bus station may be:
acquiring a position coordinate of the electronic equipment through a positioning module;
judging whether the position coordinate is matched with the position coordinate of a certain bus station;
if so, determining that the position of the electronic equipment is a bus station; if not, the electronic equipment is determined not to be at the bus station.
In the embodiment of the present invention, the positioning module may include a Global Positioning System (GPS) module, a BeiDou satellite positioning module, and the like; the embodiment of the present invention is described using the GPS module as an example, which should not limit the present invention. The GPS module has high integration, high sensitivity and low power consumption, can track up to 20 satellites simultaneously, positions quickly and provides navigation updates at 1 Hz, so the electronic device can acquire its real-time position coordinates through the built-in GPS module.
By implementing the optional implementation manner, another method for judging whether the position of the electronic device is the bus station is provided, and whether the position of the electronic device is the bus station can be judged by judging whether the position coordinates of the electronic device are matched with the position coordinates of a certain bus station.
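The coordinate-matching check can be sketched as follows, assuming the device position and the bus-stop coordinates are latitude/longitude pairs and that positions within a small radius count as a match; the stop list and the 50 m radius are assumptions made for illustration.

```python
# Sketch of the coordinate-matching check: the device position is compared with known
# bus-stop coordinates, and positions within a small radius count as a match.
# The stop list and the 50 m radius are illustrative assumptions.
import math
from typing import Optional, Tuple

BUS_STOPS = {"Stop 12": (23.1291, 113.2644)}   # name -> (latitude, longitude)
MATCH_RADIUS_M = 50.0

def haversine_m(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def at_bus_stop(device_pos: Tuple[float, float]) -> Optional[str]:
    for name, stop_pos in BUS_STOPS.items():
        if haversine_m(device_pos, stop_pos) <= MATCH_RADIUS_M:
            return name
    return None

if __name__ == "__main__":
    print(at_bus_stop((23.12912, 113.26441)))  # matches "Stop 12"
```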
203. The electronic equipment sends a bus position request to the bus service platform, so that the bus service platform feeds back the current position of the target bus waiting by the user to the electronic equipment.
In the embodiment of the invention, the bus position request carries the identification of the target bus waiting for the user, so that the bus service platform can purposefully acquire the current position of the target bus according to the identification of the target bus.
204. The electronic equipment calculates the target time length required by the target bus to travel from the current position to the bus station corresponding to the position where the electronic equipment is located.
In the embodiment of the invention, because the driving route of the bus is fixed, the distance between the target bus and the bus stop corresponding to the position of the electronic device can be calculated from the current position of the target bus, and the target time length required for the target bus to travel from its current position to that bus stop can then be calculated from the normal driving speed of the bus.
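A minimal sketch of this estimate is given below; the normal driving speed and the preset threshold are illustrative values, since the disclosure does not fix them.

```python
# Sketch of the travel-time estimate: remaining route distance divided by a normal driving
# speed, then compared with the preset threshold. The speed and threshold are illustrative.
PRESET_DURATION_S = 180.0        # assumed "about to arrive" threshold
NORMAL_SPEED_M_PER_S = 8.3       # roughly 30 km/h, an assumption

def target_duration_s(remaining_route_m: float) -> float:
    return remaining_route_m / NORMAL_SPEED_M_PER_S

def should_pause_and_prompt(remaining_route_m: float) -> bool:
    return target_duration_s(remaining_route_m) < PRESET_DURATION_S

if __name__ == "__main__":
    print(should_pause_and_prompt(1200.0))  # about 145 s, so True: prompt the user
```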
205. The electronic equipment judges whether the target duration is less than a preset duration or not; if so, go to step 206; otherwise, the flow is ended.
206. The electronic equipment pauses the video played by the LCD screen and outputs prompt information through the ink screen so as to prompt that the target bus is about to arrive.
In the embodiment of the invention, by implementing steps 202 to 206, the target time length required for the bus the user is waiting for to travel to the bus stop corresponding to the position of the electronic device is calculated; when this time length is less than the preset time length, the bus is about to arrive, so the video played on the LCD screen is paused and the user is reminded, which prevents the user from missing the bus.
207. The electronic equipment controls a camera module of the electronic equipment to shoot so as to obtain a facial image of a user.
208. And when the facial expression features in the facial image are matched with the preset query expression features, the electronic equipment acquires the current playing content of the LCD screen.
In the embodiment of the present invention, a user who has a question often frowns, which increases the number of forehead wrinkles and narrows the eyes. Therefore, as an optional implementation, when the facial expression features in the facial image match the preset query expression features in step 208, the electronic device may acquire the currently played content on the LCD screen as follows:
acquiring the current wrinkle number of the forehead of the user and the current exposed area of the eyes of the user according to the facial image; wherein the current exposure area is the area of the user's eyes not covered by the eyelid;
and when the current wrinkle number is larger than the preset number and the current exposure area of the eyes of the user is smaller than the normal exposure area of the eyes of the user, acquiring the current playing content of the LCD screen.
By implementing this optional implementation, a method is provided for determining whether the user's facial expression features match the preset query expression features: whether the user is frowning can be judged from the number of forehead wrinkles and the exposed area of the eyes, and from that whether the user has a question, which improves the intelligence of the electronic equipment.
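The frown check can be sketched as follows, assuming an upstream face-analysis step already yields the forehead wrinkle count and the exposed eye area; the threshold values are placeholders, and in practice the normal exposed area would be calibrated per user.

```python
# Sketch of the frown check: the forehead wrinkle count and the exposed eye area, assumed
# to be produced by an upstream face-analysis step, are compared against the thresholds
# described above. The numeric values are placeholders.
from dataclasses import dataclass

@dataclass
class FaceFeatures:
    forehead_wrinkles: int
    eye_exposed_area_mm2: float

PRESET_WRINKLE_COUNT = 3               # assumed preset number
NORMAL_EYE_EXPOSED_AREA_MM2 = 120.0    # would be calibrated per user in practice

def is_query_expression(features: FaceFeatures) -> bool:
    return (features.forehead_wrinkles > PRESET_WRINKLE_COUNT
            and features.eye_exposed_area_mm2 < NORMAL_EYE_EXPOSED_AREA_MM2)

if __name__ == "__main__":
    print(is_query_expression(FaceFeatures(forehead_wrinkles=5, eye_exposed_area_mm2=90.0)))  # True
```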
209. The electronic device searches for a learning text matching the currently playing content.
210. The electronic device outputs the learning text through the ink screen.
211. And the electronic equipment performs screenshot operation on the LCD screen to obtain a screenshot comprising the currently played content.
212. The electronic equipment combines the screenshot and the learning text to obtain review data, and generates a number corresponding to the review data according to screenshot operation.
In the embodiment of the invention, the numbers mark the order of the review data: review data merged earlier is ranked before review data merged later.
213. The electronic equipment stores the review data and the number corresponding to the review data in a local cache of the electronic equipment.
214. When the video playing is finished, the electronic equipment sorts the review data in the local cache according to the serial numbers to obtain the review data combination.
In the embodiment of the invention, sorting the review data in the local cache by number arranges it in the order in which it was merged, which corresponds to the playing time of the video, so the review data in the resulting review data combination is ordered according to when the corresponding content was played.
215. The electronic equipment sends the review data combination to the printing equipment bound with the electronic equipment, so that the printing equipment prints the review data combination into paper review data.
In the embodiment of the present invention, by implementing steps 211 to 215, the screenshots of the played video content and the corresponding learning texts are stored and numbered; after the video finishes playing, they are arranged according to the numbers and sent to the printing device for printing, yielding paper review materials that can further assist the user in learning.
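A minimal sketch of assembling, numbering and sorting the review data is given below; the ReviewCache class is hypothetical, and sending to the bound printing device is represented by a simple stub function.

```python
# Sketch of assembling review data: each screenshot is paired with its learning text,
# numbered in the order the screenshots were taken, sorted into a combination when
# playback ends, and handed to a stub that stands in for the bound printing device.
from dataclasses import dataclass
from typing import List

@dataclass
class ReviewItem:
    number: int
    screenshot: bytes
    learning_text: str

class ReviewCache:
    def __init__(self) -> None:
        self._items: List[ReviewItem] = []

    def add(self, screenshot: bytes, learning_text: str) -> None:
        self._items.append(ReviewItem(len(self._items) + 1, screenshot, learning_text))

    def combined(self) -> List[ReviewItem]:
        # Sorting by number keeps items in the order the screenshots were taken.
        return sorted(self._items, key=lambda item: item.number)

def send_to_printer(items: List[ReviewItem]) -> None:
    for item in items:   # stand-in for sending to the bound printing device
        print(f"[{item.number}] {item.learning_text}")

if __name__ == "__main__":
    cache = ReviewCache()
    cache.add(b"...", "Photosynthesis converts light energy into chemical energy.")
    cache.add(b"...", "Chlorophyll absorbs mainly red and blue light.")
    send_to_printer(cache.combined())
```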
It can be seen that, compared with the method described in fig. 1, the method described in fig. 2 outputs key annotations through the ink screen to prompt the user to pay close attention, so the user does not miss the key content of the video. In addition, the time required for the bus the user is waiting for to travel to the bus stop corresponding to the position of the electronic equipment is calculated; when that time is less than the preset time length, the bus is about to arrive, so the video played on the LCD screen is paused and the user is reminded, preventing the user from missing the bus. Furthermore, whether the user is frowning can be judged from the number of forehead wrinkles and the exposed area of the eyes, and from that whether the user has a question, which improves the intelligence of the electronic equipment. Finally, the screenshots and learning texts are sent to the printing device to be printed as paper review materials, further assisting the user in learning.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 3, the electronic device may include:
the playing unit 301 is configured to play a video through the LCD screen when receiving a video playing instruction input by a user.
In the embodiment of the invention, it can be understood that the long-time watching of the video can cause eye fatigue of the user, and the continuous watching of the video at the moment not only reduces the learning efficiency of the user, but also influences the eyesight of the user. Therefore, as an alternative embodiment, the playing unit 301 may perform the following operations:
controlling a camera module of the electronic equipment to shoot so as to obtain a user image;
judging whether the user is in an eye fatigue state or not according to the user image;
if yes, the LCD screen and the ink screen are closed, and sleep-aid music is played to help the user to enter the sleep state.
By implementing this optional implementation, whether the user is in an eye fatigue state is judged while the user watches the video; if so, the LCD screen and the ink screen are turned off and placed in a standby state, and sleep-aid music is played to help the user fall asleep. This reduces the power consumption of the electronic equipment and helps the user balance study and rest.
Specifically, as an optional implementation manner, the manner of determining whether the user is in the eye fatigue state according to the user image by the playing unit 301 may be:
judging whether the expression characteristics of the user in the user image are matched with preset yawning expression characteristics or not; if yes, determining that the user is in an eye fatigue state; if not, determining that the user is not in the eyestrain state.
By implementing the optional implementation mode, a method for judging whether the user is in the eyestrain state is provided, and whether the user is in the eyestrain state can be judged according to whether the user yawns.
Specifically, as another optional implementation manner, the manner of determining whether the user is in the eye fatigue state according to the user image by the playing unit 301 may be:
recognizing the head action characteristics of a user by combining multiple frames of continuous user images shot by a camera module of the electronic equipment;
judging whether the head action characteristics are matched with preset nodding characteristics or not;
if the head action characteristics are matched with the preset nodding characteristics, further judging whether the eyes of the user are in an eye closing state; if yes, determining that the user is in an eye fatigue state; if not, it is determined that the user is not in an eyestrain state.
And if the head action characteristic does not match with the preset nodding characteristic, determining that the user is not in the eyestrain state.
This alternative implementation provides another method for judging whether the user is in an eye fatigue state: a user in that state usually dozes off, so if the user image shows the user nodding with eyes closed, the user is dozing off at that moment and is therefore judged to be in an eye fatigue state.
In the embodiment of the invention, the light output by the LCD screen may contain high-energy short-wave blue light, which can cause irreversible damage to the user's retina and optic nerve and thus harm the user's eyesight. If the user's eyes are too close to the LCD screen while watching the video it plays, this harm is aggravated.
Therefore, as an alternative embodiment, a distance sensing module may be disposed on the LCD screen, and during the process of playing the video through the LCD screen, the playing unit 301 may further perform the following steps:
monitoring a target distance between eyes of a user and an LCD screen in real time through a distance sensing module;
judging whether the target distance is smaller than a preset distance or not;
and if the target distance is less than the preset distance, turning off the LCD screen and outputting eye-protection information on the ink screen to remind the user to protect their eyes.
By implementing this optional implementation, the distance between the user's eyes and the LCD screen is monitored; when the eyes are too close to the screen, the LCD screen is turned off in time and the user is reminded to protect their eyes, which reduces the harm caused by watching videos at close range.
The shooting unit 302 is configured to control a camera module of the electronic device to shoot so as to obtain a facial image of the user.
In this embodiment of the present invention, the shooting unit 302 may be provided with a camera module, and accordingly, the shooting unit 302 may shoot the user through the camera module to obtain the facial image of the user.
A first obtaining unit 303, configured to obtain currently played content of the LCD screen when the facial expression feature in the facial image matches a preset query expression feature.
And a searching unit 304, configured to search for a learning text matching the currently played content.
An output unit 305 for outputting the learning text through the ink screen.
As an alternative embodiment, after the output unit 305 outputs the learning text through the ink screen, the following steps may be further performed:
pausing a video played on an LCD screen and marking a target node where the currently played content is located;
and when a continuation instruction input by the user is detected, continuing playing the video from the target node.
By implementing this optional implementation, when the output unit 305 outputs the learning text through the ink screen, the video is paused and the target node of the currently played content is marked, leaving the user sufficient time to read the learning text; after finishing reading, the user can continue watching the video from the target node, which improves the user experience.
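The pause-and-resume behaviour amounts to remembering the playback position as the target node when the learning text appears. The sketch below wraps this in a hypothetical `VideoPlayer` class; the patent does not prescribe any particular player API.

```python
class VideoPlayer:
    """Minimal stand-in for the LCD-screen video player."""

    def __init__(self) -> None:
        self.position_s = 0.0       # current playback position in seconds
        self.playing = False
        self.target_node_s = None   # marked node of the currently played content

    def pause_and_mark(self) -> None:
        """Pause playback and mark the current position as the target node."""
        self.playing = False
        self.target_node_s = self.position_s

    def resume_from_target(self) -> None:
        """Continue playing from the marked node once a continuation instruction arrives."""
        if self.target_node_s is not None:
            self.position_s = self.target_node_s
        self.playing = True

player = VideoPlayer()
player.position_s = 83.5        # content the user queried is playing here
player.pause_and_mark()         # learning text is shown on the ink screen
player.resume_from_target()     # user inputs a continuation instruction
print(player.position_s, player.playing)  # 83.5 True
```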
It can be seen that, with the electronic device described in fig. 3, in the process of playing a video through the LCD screen, by analyzing the facial expression of the user, when the facial expression feature matches with the preset query expression feature, it indicates that the user has a query for the currently played content on the LCD screen, and the electronic device searches for a learning text matching with the currently played content and then outputs the learning text through the ink screen, so that the electronic device can actively explain the video content in which the user has a query in time, and improve the learning efficiency of the user. In addition, the learning text is output through the ink screen, so that the output of the information of the character types on the LCD screen is reduced, and the influence of the electronic equipment on the eyesight of the user can be reduced.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 4 is further optimized from the electronic device shown in fig. 3. Compared to the electronic device shown in fig. 3, the electronic device shown in fig. 4 may further include:
and a screenshot unit 306, configured to perform a screenshot operation on the LCD screen after the output unit 305 outputs the learning text through the ink screen, so as to obtain a screenshot including the currently played content.
The generating unit 307 is configured to combine the screenshot and the learning text to obtain review data, and generate a number corresponding to the review data according to the screenshot operation.
The saving unit 308 is configured to save the review data and the number corresponding to the review data to a local cache of the electronic device.
The sorting unit 309 is configured to sort the review data in the local cache according to the number when the video playing is finished so as to obtain a review data combination.
The first sending unit 310 is configured to send the review data combination to the printing device bound to the electronic device, so that the printing device prints the review data combination into paper review data.
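Units 306-310 together form a simple capture-number-cache-sort-print pipeline. The Python sketch below only illustrates that data flow; `send_to_printer` and the byte-string screenshots are hypothetical placeholders for the real screenshot operation and the bound printing device.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewItem:
    number: int           # generated according to the order of screenshot operations
    screenshot: bytes     # screenshot containing the currently played content
    learning_text: str    # learning text matched to that content

@dataclass
class ReviewCache:
    items: list = field(default_factory=list)
    next_number: int = 1

    def add(self, screenshot: bytes, learning_text: str) -> None:
        """Combine a screenshot with its learning text and cache the numbered item."""
        self.items.append(ReviewItem(self.next_number, screenshot, learning_text))
        self.next_number += 1

    def combined_in_order(self) -> list:
        """Sort cached items by number when playback ends to form the review combination."""
        return sorted(self.items, key=lambda item: item.number)

def send_to_printer(items: list) -> None:
    """Hypothetical stand-in for sending the bundle to the bound printing device."""
    for item in items:
        print(f"printing review item #{item.number}: {item.learning_text}")

cache = ReviewCache()
cache.add(b"<png bytes>", "Notes on the first queried segment")
cache.add(b"<png bytes>", "Notes on the second queried segment")
send_to_printer(cache.combined_in_order())
```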
In this embodiment of the present invention, as an optional implementation manner, the first obtaining unit 303 includes:
a first acquiring subunit 3031, configured to acquire, according to the facial image, the current number of wrinkles of the forehead of the user and the current exposed area of the eyes of the user; wherein the current exposure area is the area of the user's eyes not covered by the eyelid;
the second obtaining subunit 3032 is configured to obtain the currently played content of the LCD screen when the current number of wrinkles is greater than the preset number and the current exposed area of the eyes of the user is smaller than the normal exposed area of the eyes of the user.
By implementing this optional implementation, a method for judging whether the facial expression features of the user match the preset query expression features is provided: whether the user is frowning can be judged from the number of forehead wrinkles and the exposed area of the eyes, and from that whether the user has a query, which improves the intelligence of the electronic equipment.
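Stripped of the vision front end, the frown test is two threshold comparisons. A sketch under the assumption that the wrinkle count and exposed eye area have already been extracted from the facial image (the extraction itself would need a face-analysis model and is not shown); the concrete numbers are illustrative only.

```python
def is_query_expression(wrinkle_count: int,
                        exposed_eye_area: float,
                        preset_wrinkle_count: int = 3,
                        normal_exposed_eye_area: float = 1.0) -> bool:
    """Frowning heuristic from the description: more forehead wrinkles than the preset
    number AND the eyes narrowed below their normal exposed area."""
    return (wrinkle_count > preset_wrinkle_count
            and exposed_eye_area < normal_exposed_eye_area)

# Example: 5 wrinkles and eyes narrowed to 60% of normal -> treated as a query expression.
print(is_query_expression(5, 0.6))  # True
```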
And the second obtaining unit 311 is configured to obtain, after the playing unit 301 plays the video through the LCD screen, a target playing content annotated by the internet user for the video and a key annotation corresponding to the target playing content.
The output unit 305 is further configured to output the key annotation through the ink screen to prompt the user to watch the key annotation when the playing content on the LCD screen is the target playing content.
In the embodiment of the present invention, the key annotation is a mark that reminds the user to pay particular attention to the target playing content corresponding to it, and may be, for example, the name of a person or of a major event appearing in the target playing content.
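Conceptually this is a lookup from the content currently on the LCD screen to crowd-sourced annotations. The sketch below keys a plain dictionary by a hypothetical content identifier and uses illustrative annotation text; how the target playing content is identified and how internet users' annotations are collected is left open by the description.

```python
# target playing content -> key annotation from internet users (illustrative data only)
key_annotations = {
    "chapter_3_experiment": "Key figure: the teacher demonstrating the experiment.",
    "chapter_5_summary": "Major event: the summary segment that reviews the whole lesson.",
}

def maybe_show_annotation(current_content_id: str, show_on_ink_screen) -> None:
    """If the content currently on the LCD is an annotated target, prompt via the ink screen."""
    note = key_annotations.get(current_content_id)
    if note is not None:
        show_on_ink_screen(f"Watch this part carefully: {note}")

maybe_show_annotation("chapter_5_summary", print)
```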
The determining unit 312 is configured to determine whether the electronic device is located at a bus station after the playing unit 301 plays the video through the LCD screen.
As an alternative implementation, the manner for determining whether the electronic device is located at the bus station by the determining unit 312 may be:
shooting the surrounding environment of the electronic equipment through a camera module to obtain an environment image;
identifying an environmental feature in an environmental image;
judging whether the environmental characteristics are matched with the characteristics of the bus station;
if so, determining that the position of the electronic equipment is a bus station; if not, the electronic equipment is determined not to be at the bus station.
By implementing the optional implementation manner, a method for judging whether the position of the electronic device is the bus station is provided, and whether the position of the electronic device is the bus station can be judged by identifying whether the environmental characteristics of the electronic device are matched with the characteristics of the bus station.
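Reduced to code, this variant is a feature-matching decision over the camera image. The sketch abstracts the recognition step behind a hypothetical `extract_environment_features` call and an assumed preset feature set; the actual vision model is not specified in the description.

```python
from types import SimpleNamespace

# Assumed preset characteristics of a bus station.
BUS_STATION_FEATURES = {"bus_shelter", "route_sign", "waiting_bench"}

def extract_environment_features(environment_image) -> set:
    """Hypothetical recognizer: returns feature tags found in the environment image."""
    return set(getattr(environment_image, "tags", []))

def at_bus_station(environment_image, min_matches: int = 2) -> bool:
    """Declare a bus station if enough preset features are matched in the image."""
    matches = extract_environment_features(environment_image) & BUS_STATION_FEATURES
    return len(matches) >= min_matches

image = SimpleNamespace(tags=["bus_shelter", "route_sign", "tree"])
print(at_bus_station(image))  # True: two of the preset features matched
```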
As another alternative, the determining unit 312 may have a positioning module built therein, and the manner for determining whether the electronic device is located at the bus station by the determining unit 312 may be:
acquiring a position coordinate of the electronic equipment through a positioning module;
judging whether the position coordinate is matched with the position coordinate of a certain bus station;
if so, determining that the position of the electronic equipment is a bus station; if not, the electronic equipment is determined not to be at the bus station.
In the embodiment of the present invention, the positioning module may include a Global Positioning System (GPS) module, a BeiDou satellite positioning module, and the like; the embodiment of the present invention takes the GPS module as an example, which should not limit the present invention. The GPS module has high sensitivity and low power consumption, can track up to 20 satellites at the same time, performs quick positioning, and supports 1 Hz navigation updates, so the determining unit 312 can obtain its real-time position coordinates through the built-in GPS module.
By implementing the optional implementation manner, another method for judging whether the position of the electronic device is the bus station is provided, and whether the position of the electronic device is the bus station can be judged by judging whether the position coordinates of the electronic device are matched with the position coordinates of a certain bus station.
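With a positioning module, the same decision becomes a nearest-station distance test. The sketch below uses the haversine formula and an illustrative station list; a real device would query an up-to-date station database, and the 50 m tolerance is an assumed value.

```python
import math

# Illustrative station coordinates (latitude, longitude).
BUS_STATIONS = [(23.0205, 113.7518), (23.0301, 113.7600)]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gps_at_bus_station(lat: float, lon: float, radius_m: float = 50.0) -> bool:
    """Match the GPS fix against known station coordinates within a tolerance radius."""
    return any(haversine_m(lat, lon, s_lat, s_lon) <= radius_m
               for s_lat, s_lon in BUS_STATIONS)

print(gps_at_bus_station(23.0206, 113.7519))  # True: roughly 15 m from the first station
```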
The second sending unit 313 is configured to send a bus position request to the bus service platform after the determining unit 312 determines that the electronic equipment is located at a bus stop, so that the bus service platform feeds back to the electronic equipment the current position of the target bus that the user is waiting for.
In the embodiment of the invention, the bus position request carries the identifier of the target bus that the user is waiting for, so that the bus service platform can obtain the current position of that specific target bus according to the identifier.
The calculating unit 314 is configured to calculate a target time length required for the target bus to travel from the current position to a bus stop corresponding to the position where the electronic device is located.
The determining unit 312 is further configured to determine whether the target duration is less than a preset duration.
The output unit 305 is further configured to pause the video played by the LCD screen and output a prompt message through the ink screen to prompt that the target bus is about to arrive when the determination unit 312 determines that the target duration is less than the preset duration.
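The arrival check combines the bus position fed back by the platform, an assumed travel speed, and the preset duration. A sketch with purely illustrative numbers (6 m/s average speed, 120 s threshold); the description does not fix either value.

```python
def travel_time_s(distance_to_stop_m: float, avg_speed_mps: float = 6.0) -> float:
    """Estimate the target time length for the bus to reach the user's stop (assumed speed)."""
    return distance_to_stop_m / avg_speed_mps

def handle_bus_position(distance_to_stop_m: float, preset_duration_s: float = 120.0) -> None:
    """Pause the LCD video and prompt on the ink screen when the bus is about to arrive."""
    if travel_time_s(distance_to_stop_m) < preset_duration_s:
        print("LCD video paused")
        print("[ink screen] Your bus is about to arrive.")

handle_bus_position(500.0)  # about 83 s away -> pause the video and prompt the user
```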
It can be seen that, compared with the electronic device described in fig. 3, the electronic device described in fig. 4 outputs the key annotation through the ink screen to prompt the user to watch it, which prevents the user from missing the highlight content of the video. In addition, the target time length required for the bus that the user is waiting for to travel to the bus stop corresponding to the position of the electronic equipment is calculated; when the target time length is shorter than the preset time length, the target bus is about to arrive, so the video played on the LCD screen is paused and the user is reminded, which prevents the user from missing the bus. In addition, whether the user frowns can be judged from the number of forehead wrinkles and the exposed area of the eyes, and from that whether the user has a query, which improves the intelligence of the electronic equipment. In addition, the screenshot and the learning text are sent to the printing device to be printed into paper review materials, which further assists the user in learning.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. As shown in fig. 5, the electronic device may include:
a memory 501 in which executable program code is stored;
a processor 502 coupled to a memory 501;
the processor 502 calls the executable program code stored in the memory 501 to execute any one of the video content output methods in fig. 1 to 2.
An embodiment of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program enables a computer to execute any one of the video content output methods in fig. 1 to 2.
An embodiment of the present invention discloses a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute any one of the video content output methods in fig. 1 to 2.
The embodiment of the present invention also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary and alternative embodiments, and that the acts and modules illustrated are not required in order to practice the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the above-described methods of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to a" means that B is associated with a from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information.
In various embodiments of the present invention, it is understood that the meaning of "a and/or B" means that a and B are each present alone or both a and B are included.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other memory, magnetic disk, magnetic tape, or any other medium which can be used to carry or store data and which can be read by a computer.
The foregoing describes in detail a video content output method and an electronic device disclosed in the embodiments of the present invention, and specific examples are applied herein to explain the principles and embodiments of the present invention, and the description of the foregoing embodiments is only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (12)

1. An output method of video content is applied to an electronic device, the electronic device is provided with a display screen, the display screen comprises an LCD screen and an ink screen, and the method comprises the following steps:
when a video playing instruction input by a user is received, playing a video through the LCD screen;
controlling a camera module of the electronic equipment to shoot so as to obtain a facial image of the user;
when the facial expression features in the facial image are matched with preset query expression features, acquiring the current playing content of the LCD screen;
searching for a learning text matched with the currently played content;
and outputting the learning text through the ink screen.
2. The method according to claim 1, wherein the obtaining the currently played content of the LCD screen when the facial expression features in the facial image match preset query expression features comprises:
acquiring the current wrinkle number of the forehead of the user and the current exposed area of the eyes of the user according to the facial image; the current exposure area is an area of the user's eye not covered by an eyelid;
and when the current wrinkle number is larger than the preset number and the current exposure area of the eyes of the user is smaller than the normal exposure area of the eyes of the user, acquiring the current playing content of the LCD screen.
3. The method of claim 2, wherein after the outputting of the learning text through the ink screen, the method further comprises:
performing screenshot operation on the LCD screen to obtain a screenshot comprising the currently played content;
combining the screenshot and the learning text to obtain review data, and generating a number corresponding to the review data according to the screenshot operation;
storing the review data and the number corresponding to the review data to a local cache of the electronic equipment;
when the video playing is finished, sorting review data in the local cache according to the numbers to obtain review data combinations;
and sending the review data combination to the printing equipment bound with the electronic equipment so that the printing equipment prints the review data combination into paper review data.
4. The method according to any one of claims 1 to 3, wherein after the playing of the video through the LCD screen, the method further comprises:
acquiring target playing content annotated by internet users for the video and a key annotation corresponding to the target playing content;
and when the playing content on the LCD screen is the target playing content, outputting the key annotation through the ink screen to prompt the user to watch the key annotation.
5. The method of any of claims 1 to 4, wherein after said playing video through said LCD screen, said method further comprises:
judging whether the position of the electronic equipment is a bus station or not;
if so, sending a bus position request to a bus service platform so that the bus service platform feeds back to the electronic equipment the current position of the target bus that the user is waiting for; the bus position request carries an identifier of the target bus that the user is waiting for;
calculating a target time length required by the target bus to travel from the current position to a bus station corresponding to the position where the electronic equipment is located;
judging whether the target time length is less than a preset time length or not;
and if the target time length is less than the preset time length, pausing the video played by the LCD screen and outputting prompt information through the ink screen to prompt that the target bus is about to arrive.
6. An electronic device, wherein the electronic device is provided with a display screen, the display screen comprises an LCD screen and an ink screen, and the electronic device comprises:
the playing unit is used for playing the video through the LCD screen when receiving a video playing instruction input by a user;
the shooting unit is used for controlling a camera module of the electronic equipment to shoot so as to obtain a facial image of the user;
the first acquisition unit is used for acquiring the current playing content of the LCD screen when the facial expression features in the facial image are matched with preset query expression features;
the searching unit is used for searching the learning text matched with the current playing content;
and the output unit is used for outputting the learning text through the ink screen.
7. The electronic device according to claim 6, wherein the first acquisition unit includes:
the first acquisition subunit is used for acquiring the current wrinkle number of the forehead of the user and the current exposed area of the eyes of the user according to the facial image; the current exposure area is an area of the user's eyes not covered by an eyelid;
and the second acquisition subunit is used for acquiring the current playing content of the LCD screen when the current wrinkle number is larger than the preset number and the current exposure area of the eyes of the user is smaller than the normal exposure area of the eyes of the user.
8. The electronic device of claim 7, further comprising:
the screenshot unit is used for performing screenshot operation on the LCD screen after the learning text is output by the output unit through the ink screen so as to obtain a screenshot comprising the current playing content;
the generating unit is used for combining the screenshot and the learning text to obtain review data and generating a number corresponding to the review data according to the screenshot operation;
the storage unit is used for storing the review data and the number corresponding to the review data to a local cache of the electronic equipment;
the sorting unit is used for sorting the review data in the local cache according to the serial number to obtain a review data combination when the video playing is finished;
and the first sending unit is used for sending the review data combination to the printing equipment bound with the electronic equipment so that the printing equipment prints the review data combination into paper review data.
9. The electronic device according to any one of claims 6 to 8, further comprising:
the second acquisition unit is used for acquiring target playing content labeled by an internet user for the video and a key annotation corresponding to the target playing content after the playing unit plays the video through the LCD screen;
the output unit is further configured to output the key annotation through the ink screen to prompt the user to watch the key annotation when the playing content on the LCD screen is the target playing content.
10. The electronic device of any of claims 6-9, further comprising:
the judging unit is used for judging whether the position of the electronic equipment is a bus station or not after the playing unit plays the video through the LCD screen;
the second sending unit is used for sending a bus position request to a bus service platform after the judging unit judges that the position of the electronic equipment is a bus station, so that the bus service platform feeds back to the electronic equipment the current position of the target bus that the user is waiting for; the bus position request carries an identifier of the target bus that the user is waiting for;
the calculating unit is used for calculating the target time length required by the target bus to travel from the current position to the bus station corresponding to the position where the electronic equipment is located;
the judging unit is further used for judging whether the target duration is less than a preset duration;
and the output unit is also used for pausing the video played by the LCD screen and outputting prompt information through the ink screen to prompt that the target bus is about to arrive when the judgment unit judges that the target duration is less than the preset duration.
11. An electronic device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory for executing an output method of video content as claimed in any one of claims 1 to 5.
12. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute a video content output method according to any one of claims 1 to 5.
CN201910321830.7A 2019-04-22 2019-04-22 Video content output method and electronic equipment Pending CN111093113A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910321830.7A CN111093113A (en) 2019-04-22 2019-04-22 Video content output method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910321830.7A CN111093113A (en) 2019-04-22 2019-04-22 Video content output method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111093113A true CN111093113A (en) 2020-05-01

Family

ID=70392931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910321830.7A Pending CN111093113A (en) 2019-04-22 2019-04-22 Video content output method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111093113A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975151A (en) * 2016-04-28 2016-09-28 合网络技术(北京)有限公司 Position-based event reminding method and apparatus
US20170103397A1 (en) * 2015-10-08 2017-04-13 Mitake Information Corporation Video identification method and computer program product thereof
CN106792215A (en) * 2016-12-12 2017-05-31 福建天晴数码有限公司 Education video order method and its system
CN107424100A (en) * 2017-07-21 2017-12-01 深圳市鹰硕技术有限公司 Information providing method and system
CN108924608A (en) * 2018-08-21 2018-11-30 广东小天才科技有限公司 Auxiliary method for video teaching and intelligent equipment
CN109034037A (en) * 2018-07-19 2018-12-18 江苏黄金屋教育发展股份有限公司 On-line study method based on artificial intelligence
CN109087225A (en) * 2018-08-30 2018-12-25 广东小天才科技有限公司 Learning control method based on family education equipment and family education equipment
CN109166365A (en) * 2018-09-21 2019-01-08 深圳市科迈爱康科技有限公司 The method and system of more mesh robot language teaching

Similar Documents

Publication Publication Date Title
CN105975560B (en) Question searching method and device of intelligent equipment
US10169923B2 (en) Wearable display system that displays a workout guide
US9966075B2 (en) Leveraging head mounted displays to enable person-to-person interactions
CN104461277B (en) Mobile terminal and its control method
US20140280296A1 (en) Providing help information based on emotion detection
US10180949B2 (en) Method and apparatus for information searching
CN111512370B (en) Voice tagging of video while recording
CN107885483B (en) Audio information verification method and device, storage medium and electronic equipment
CN106657650A (en) System expression recommendation method and device, and terminal
CN110245250A (en) Image processing method and relevant apparatus
CN112001312A (en) Document splicing method, device and storage medium
CN109783613B (en) Question searching method and system
CN111079494B (en) Learning content pushing method and electronic equipment
CN106504050A (en) A kind of information comparison device and method
CN110033769A (en) A kind of typing method of speech processing, terminal and computer readable storage medium
CN109460556A (en) A kind of interpretation method and device
CN109917988B (en) Selected content display method, device, terminal and computer readable storage medium
CN111079726B (en) Image processing method and electronic equipment
CN111639209A (en) Book content searching method, terminal device and storage medium
CN105975554A (en) Big data searching method and device based on mobile terminal
CN111079501B (en) Character recognition method and electronic equipment
CN111182387A (en) Learning interaction method and intelligent sound box
CN111026901A (en) Learning content searching method and learning equipment
CN117995184A (en) Man-machine interaction method, device and equipment under low attention and storage medium
CN111093113A (en) Video content output method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200501