CN110213663A - Audio and video playing method, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN110213663A
CN110213663A
Authority
CN
China
Prior art keywords
user
audio
attention score
screen
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910429523.0A
Other languages
Chinese (zh)
Inventor
齐燕 (Qi Yan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
Original Assignee
OneConnect Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Smart Technology Co Ltd filed Critical OneConnect Smart Technology Co Ltd
Priority to CN201910429523.0A
Publication of CN110213663A
Legal status: Pending

Classifications

    Under G06V40/00 (G PHYSICS; G06 COMPUTING; G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING — recognition of biometric, human-related or animal-related patterns in image or video data):
    • G06V40/161 Human faces: detection; localisation; normalisation
    • G06V40/171 Human faces, feature extraction: local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V40/172 Human faces: classification, e.g. identification
    • G06V40/19 Eye characteristics, e.g. of the iris: sensors therefor
    • G06V40/193 Eye characteristics: preprocessing; feature extraction
    • G06V40/197 Eye characteristics: matching; classification
    Under H04N21/00 (H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION — selective content distribution, e.g. interactive television or video on demand [VOD]):
    • H04N21/4415 Acquiring end-user identification using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • H04N21/4852 End-user interface for client configuration for modifying audio parameters, e.g. switching between mono and stereo
    • H04N21/4854 End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast

Abstract

The present invention relates to the field of communication technology and provides an audio/video playing method, an apparatus, a computer device, and a computer-readable storage medium. The method includes: while a terminal plays visual content and/or audio, determining a first attention score of a user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal; if the first attention score is less than a first threshold and/or the second attention score is less than a second threshold, determining, based on the first attention score and/or the second attention score, a playback adjustment mode for the visual content and/or the audio through a play-state adaptive model; and adjusting the screen brightness and/or audio playback volume of the terminal according to the playback adjustment mode. The screen-brightness display mode and the audio play mode are thereby adapted automatically: the terminal autonomously adjusts its screen brightness and playback volume, saving electric energy.

Description

Audio and video playing method, computer equipment and computer readable storage medium
Technical field
The invention belongs to the field of communication technology, and more particularly relates to an audio/video playing method, an apparatus, a computer device, and a computer-readable storage medium.
Background art
Playing audio and video on a terminal device is one of the most common application scenarios of the mobile-internet era.
In the prior art, a terminal device typically plays audio and video according to a fixed play mode, or, in response to a user's mode-switch request, according to the mode selected after switching. During playback the device does not adaptively adjust its screen-brightness display mode and audio play mode according to whether the user is currently concentrating on watching the screen of the terminal device and/or listening to the sound it emits. If the user's line of sight leaves the screen and/or the user stops attending to the audio for a certain period of time while the terminal device keeps its original screen-brightness display mode and audio play mode, electric energy is wasted.
Existing audio/video playing methods therefore suffer from the problem that, when the user's audiovisual attention shifts, the screen-brightness display mode and the audio play mode cannot be adaptively adjusted, leading to a waste of energy.
Summary of the invention
Embodiments of the invention provide an audio/video playing method, an apparatus, a computer device, and a computer-readable storage medium, to solve the problem that existing audio/video playing methods cannot adaptively adjust the screen-brightness display mode and the audio play mode when the user's audiovisual attention shifts, and thus waste energy.
A first aspect of the embodiments of the invention provides an audio/video playing method, comprising:
while a terminal plays visual content and/or audio, determining a first attention score of a user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal;
if the first attention score is less than a first threshold and/or the second attention score is less than a second threshold, determining, based on the first attention score and/or the second attention score, a playback adjustment mode for the visual content and/or the audio through a play-state adaptive model;
adjusting the screen brightness and/or audio playback volume of the terminal according to the playback adjustment mode.
A second aspect of the embodiments of the invention provides an audio/video playing apparatus, comprising:
a first determining module, configured to, while a terminal plays visual content and/or audio, determine a first attention score of a user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal;
a second determining module, configured to, if the first attention score is less than a first threshold and/or the second attention score is less than a second threshold, determine, based on the first attention score and/or the second attention score, a playback adjustment mode for the visual content and/or the audio through a play-state adaptive model;
an adjusting module, configured to adjust the screen brightness and/or audio playback volume of the terminal according to the playback adjustment mode.
A third aspect of the embodiments of the invention provides a computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor implementing the following steps when executing the computer program:
while a terminal plays visual content and/or audio, determining a first attention score of a user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal;
if the first attention score is less than a first threshold and/or the second attention score is less than a second threshold, determining, based on the first attention score and/or the second attention score, a playback adjustment mode for the visual content and/or the audio through a play-state adaptive model;
adjusting the screen brightness and/or audio playback volume of the terminal according to the playback adjustment mode.
A fourth aspect of the embodiments of the invention provides a computer-readable storage medium storing a computer program, the computer program implementing the following steps when executed by a processor:
while a terminal plays visual content and/or audio, determining a first attention score of a user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal;
if the first attention score is less than a first threshold and/or the second attention score is less than a second threshold, determining, based on the first attention score and/or the second attention score, a playback adjustment mode for the visual content and/or the audio through a play-state adaptive model;
adjusting the screen brightness and/or audio playback volume of the terminal according to the playback adjustment mode.
In the embodiments of the invention, while the terminal plays visual content and/or audio, a first attention score of the user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal are determined; if the first attention score is less than the first threshold and/or the second attention score is less than the second threshold, a playback adjustment mode for the visual content and/or the audio is determined through the play-state adaptive model based on the first attention score and/or the second attention score, and the screen brightness and/or audio playback volume of the terminal is then adjusted according to that playback adjustment mode. The screen-brightness display mode and the audio play mode are thus adapted automatically: the terminal can autonomously adjust its screen brightness and playback volume, saving electric energy.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments or the description of the prior art are briefly introduced below. Apparently, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application environment of the audio/video playing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the audio/video playing method provided by Embodiment 1 of the present invention;
Fig. 3 is another schematic flowchart of the audio/video playing method provided by an embodiment of the present invention;
Fig. 4 is another schematic flowchart of the audio/video playing method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of the audio/video playing apparatus provided by Embodiment 2 of the present invention;
Fig. 6 is a schematic diagram of the first determining module in the audio/video playing apparatus provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the first determining unit in the audio/video playing apparatus provided by an embodiment of the present invention;
Fig. 8 is another schematic diagram of the first determining module provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of the third determining unit in the audio/video playing apparatus provided by an embodiment of the present invention;
Fig. 10 is a schematic diagram of the computer device provided by Embodiment 3 of the present invention;
Fig. 11 is a schematic structural diagram of the terminal provided by Embodiment 4 of the present invention.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the invention. However, it will be apparent to those skilled in the art that the invention may also be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the invention.
The audio/video playing method provided by the embodiments of the invention can be applied in the application environment of Fig. 1, in which a client (the terminal in the following embodiments) communicates with a server over a network. The client can be, but is not limited to, a personal computer, a laptop, a smartphone, a tablet computer, or a portable wearable device. The server can be implemented as an independent server or as a server cluster composed of multiple servers.
The audio/video playing method provided by the embodiments of the invention is described below through specific embodiments.
Embodiment 1:
Fig. 2 shows a schematic flowchart of the audio/video playing method provided by Embodiment 1 of the present invention. As shown in Fig. 2, the audio/video playing method comprises the following steps 101 to 103, detailed as follows:
Step 101: while the terminal plays visual content and/or audio, determine a first attention score of the user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal.
Here, the first attention score is the user's visual attention score with respect to the screen, and the second attention score is the user's auditory attention score with respect to the audio.
Step 101 specifically includes steps 201 and 202, as shown in Fig. 3, detailed as follows:
Step 201: determine the user's visual attention score with respect to the screen according to an eye-movement recognition result of the user.
Step 202: determine the user's auditory attention score with respect to the audio according to the strength of the user's feedback signal to the audio.
For step 202, in one embodiment of the invention, a probe signal is emitted to the user through an earphone, the attenuation process of the probe signal is detected, and the user's auditory attention information is determined according to the attenuation process, as sketched below.
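The following is a minimal sketch of this hearing-attention estimate, assuming the probe attenuation has already been measured in decibels and a per-user in-ear baseline has been calibrated in advance; the linear 0-100 mapping is an illustrative assumption, not a formula given in the patent.

```python
def second_attention_score(observed_db: float, baseline_db: float) -> float:
    """Map the drift of the earphone probe-signal attenuation away from the
    calibrated in-ear baseline (both in dB) to a 0-100 auditory attention
    score. A large drift suggests the earphone is no longer seated, i.e.
    the user is no longer attending to the audio."""
    drift = abs(observed_db - baseline_db)
    return max(0.0, min(100.0, 100.0 - 10.0 * drift))  # placeholder slope of 10/dB
```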
In one embodiment of the invention, as shown in Fig. 4, step 201 specifically includes:
Step 301: obtain eye information of the user, the eye information including eye open/closed-state information and iris information.
Here, before the eye information of the current user is obtained, the iris information in the user's eye information is collected in advance and stored. Whether a behavior or operation starting the corresponding APP on the terminal occurs is detected; when such a behavior or operation is detected, the iris-recognition module in the terminal opens automatically and begins collecting the iris information in the eye information of the current user.
Step 302: determine the number of users in front of the screen according to the iris information.
For step 302, the number of users in front of the screen can be determined by counting the distinct irises in the iris information. Further, whether the collection succeeded is judged from the collected iris information; if it succeeded, the iris information is uploaded to a server that has established a data connection with the terminal, and it is checked whether the iris information matches a user's original (enrolled) iris information. If an enrolled iris matching the collected iris exists, the user corresponding to that iris is confirmed; if no matching enrolled iris exists, it is determined that the current user is unregistered or has not enrolled iris information, and a prompt is issued asking the current user to register or enroll iris information, so that iris recognition can be performed directly the next time the terminal is used. A sketch of this check follows.
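The following is a minimal sketch of this enrollment check. The patent does not specify the matcher; the normalized-Hamming-distance comparison and its 0.32 threshold are illustrative assumptions.

```python
def hamming_distance(a: bytes, b: bytes) -> float:
    """Fraction of differing bits between two equal-length iris codes."""
    differing = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return differing / (8 * len(a))

def identify_user(iris_code: bytes, enrolled: dict[str, bytes],
                  threshold: float = 0.32):
    """Return the enrolled user whose iris code is nearest within the
    threshold, or None -- the caller then prompts for registration."""
    distances = {uid: hamming_distance(iris_code, code)
                 for uid, code in enrolled.items() if len(code) == len(iris_code)}
    best = min(distances, key=distances.get, default=None)
    return best if best is not None and distances[best] < threshold else None
```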
Step 303: when the number of users is one, determine the current user's visual attention score with respect to the screen according to the eye open/closed-state information.
For step 303, while the user uses the terminal APP, the eye open/closed-state information of the current user is collected in real time, and the current user's degree of attention to the current screen content is judged from that information.
In one embodiment of the invention, the iris information in the eye information is used to confirm whether the terminal is being used by the same current user. The iris information prestored in the terminal or on the server may belong to multiple users. After the iris information is confirmed, historical data of the current user's visual attention to the displayed screen content can also be analysed to obtain a first analysis result, from which the probability that the current user becomes inattentive in each APP on the terminal is determined. Further, the same historical attention data can be analysed to obtain a second analysis result, from which the probability that the current user becomes inattentive in the same APP during different usage periods is determined. A probabilistic anticipation can therefore be made from the current user's historical data, determining in advance in which APPs and during which periods the current user is likely to be inattentive.
Step 304: when there are multiple users, determine each user's visual attention score with respect to the screen according to the respective eye open/closed-state information, rank the attention scores, and take the highest score as the user's visual attention score with respect to the screen.
For step 304, it should be understood that when multiple users watch the screen content of the terminal together and/or use the terminal to listen to audio together, the play mode of the terminal is not adjusted as long as the attention of at least one user has not declined. For example, suppose users A, B, and C watch a video on one terminal together. When users A and B are distracted but user C remains highly attentive, visual attention scores a, b, and c are determined for the three of them from the respective eye open/closed-state information, the scores are ranked, and the highest score c is taken as the visual attention score of users A, B, and C with respect to the screen, as in the sketch below.
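A one-line sketch of this multi-user rule, using hypothetical scores for the example's users:

```python
def screen_attention_score(scores_by_user: dict[str, float]) -> float:
    """Single viewer: that user's score. Multiple viewers: the maximum,
    so playback is only adjusted once *every* viewer has drifted away."""
    return max(scores_by_user.values())

# Users A and B distracted, user C fully attentive -> keep the play mode.
print(screen_attention_score({"A": 35.0, "B": 40.0, "C": 92.0}))  # 92.0
```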
In one embodiment of the invention, step 101 specifically includes:
Step 401: collect a scene image containing the user's face in front of the terminal screen.
Here, the scene image includes the user's face image and a background image.
Step 402: extract face information from the scene image.
Step 403: determine the user's degree of fatigue from the face information through a feature map of predetermined facial parts.
Step 404: determine, according to the degree of fatigue, the user's first attention score with respect to the screen of the terminal and second attention score with respect to the audio played by the terminal.
In one embodiment of the invention, before step 402, the method further includes:
Step 501: define the face-detection network architecture.
Here, step 501 specifically includes: defining the face-detection network architecture with deep convolutional neural networks, based on a cascaded structure of a region-proposal network, a region-regression network, and a keypoint-regression network. In the deep convolutional neural networks used, the region-proposal network takes 16×16×3 image data as input, is composed of a fully convolutional architecture, and outputs the confidence and rough corner positions of face-region proposal boxes; the region-regression network takes 32×32×3 image data as input, is composed of convolutional and fully connected layers, and outputs the face-region confidence and precise corner positions; the keypoint-regression network takes 64×64×3 image data as input, is composed of convolutional and fully connected layers, and outputs the face-region confidence, position, and facial keypoint positions.
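The following is a hedged PyTorch sketch of this three-stage cascade. The patent fixes only the input sizes and the output heads; every layer width, kernel size, and activation choice below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ProposalNet(nn.Module):
    """Region-proposal net: fully convolutional, 16x16x3 input;
    outputs face/background confidence and rough box corners."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 10, 3), nn.PReLU(), nn.MaxPool2d(2),  # 16 -> 14 -> 7
            nn.Conv2d(10, 16, 3), nn.PReLU(),                  # 7 -> 5
            nn.Conv2d(16, 32, 3), nn.PReLU(),                  # 5 -> 3
            nn.Conv2d(32, 32, 3), nn.PReLU(),                  # 3 -> 1
        )
        self.cls = nn.Conv2d(32, 2, 1)   # face / background confidence
        self.box = nn.Conv2d(32, 4, 1)   # rough box corner offsets

    def forward(self, x):                # x: (N, 3, 16, 16)
        f = self.body(x)
        return self.cls(f), self.box(f)

class RefineNet(nn.Module):
    """Region-regression net: 32x32x3 input, convolutional + fully
    connected; outputs confidence and precise box corners."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 28, 3), nn.PReLU(), nn.MaxPool2d(2),  # 32 -> 30 -> 15
            nn.Conv2d(28, 48, 3), nn.PReLU(), nn.MaxPool2d(2), # 15 -> 13 -> 6
            nn.Conv2d(48, 64, 2), nn.PReLU(), nn.Flatten(),    # 6 -> 5
            nn.Linear(64 * 5 * 5, 128), nn.PReLU(),
        )
        self.cls = nn.Linear(128, 2)
        self.box = nn.Linear(128, 4)

    def forward(self, x):                # x: (N, 3, 32, 32)
        f = self.body(x)
        return self.cls(f), self.box(f)

class LandmarkNet(nn.Module):
    """Keypoint-regression net: 64x64x3 input; outputs confidence, box,
    and the six facial keypoints (eyes, nose, mouth corners, jaw)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3), nn.PReLU(), nn.MaxPool2d(2),  # 64 -> 62 -> 31
            nn.Conv2d(32, 64, 3), nn.PReLU(), nn.MaxPool2d(2), # 31 -> 29 -> 14
            nn.Conv2d(64, 64, 3), nn.PReLU(), nn.MaxPool2d(2), # 14 -> 12 -> 6
            nn.Conv2d(64, 128, 2), nn.PReLU(), nn.Flatten(),   # 6 -> 5
            nn.Linear(128 * 5 * 5, 256), nn.PReLU(),
        )
        self.cls = nn.Linear(256, 2)
        self.box = nn.Linear(256, 4)
        self.pts = nn.Linear(256, 12)    # 6 keypoints x (x, y)

    def forward(self, x):                # x: (N, 3, 64, 64)
        f = self.body(x)
        return self.cls(f), self.box(f), self.pts(f)
```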
Step 502: collect and annotate training data.
Here, step 502 specifically includes: collecting historical scene images of the user using the terminal, extracting 8000 training samples as a discrete time series, and manually annotating them to generate sample labels. The label content includes: face class (0 = background, 1 = face), face-region position (x, y, w, h), and six facial keypoints (eyes, nose, mouth corners, jaw). Sample augmentation is performed using image color-domain and geometric spatial transformations.
Step 503: train the networks offline, stage by stage.
Here, step 503 specifically includes: training by stochastic gradient descent, with the learning rate, batch size, etc. as configurable parameters (for example, a batch size of 64). For the different branch training tasks, the classification loss function L_cls is set to cross-entropy and the regression loss function L_reg is set to the Euclidean distance of the corresponding regression points. Training is executed in the following three steps:
Step 601: train the region-proposal network. Training samples are generated from the labels, and the training loss function is set to Loss1 = α1·L_cls + β1·L_reg, where α1 and β1 are configurable parameters.
Step 602: train the region-regression network. Training samples are generated from the output of the region-proposal network on the original training-sample set, and the training loss function is set to Loss2 = α2·L_cls + β2·L_reg, where α2 and β2 are configurable parameters.
Step 603: train the keypoint-regression network. Training samples are generated from the output of the region-regression network on the original training-sample set, and the training loss function is set to Loss3 = α3·L_cls + β3·L_reg, where α3 and β3 are configurable parameters. A sketch of this per-stage loss follows.
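The following is a minimal sketch of the per-stage loss, assuming PyTorch; the default α/β weights are placeholders for the patent's configurable parameters.

```python
import torch
import torch.nn.functional as F

def stage_loss(cls_logits: torch.Tensor, cls_target: torch.Tensor,
               reg_pred: torch.Tensor, reg_target: torch.Tensor,
               alpha: float = 1.0, beta: float = 0.5) -> torch.Tensor:
    """Loss_i = alpha_i * L_cls + beta_i * L_reg for one cascade stage.
    cls_logits: (N, 2) face/background logits; cls_target: (N,) class ids;
    reg_pred / reg_target: (N, 4) boxes or (N, 12) keypoints."""
    l_cls = F.cross_entropy(cls_logits, cls_target)           # cross-entropy
    l_reg = torch.linalg.vector_norm(reg_pred - reg_target,
                                     dim=1).mean()            # Euclidean distance
    return alpha * l_cls + beta * l_reg
```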
Correspondingly, step 403 specifically includes:
Step 701: feed the scene image into the detection networks trained in step 503, and output the region-regression-network feature map and the facial keypoint positions.
Step 702: perform feature-similarity matching between the feature map and a preset user-face database, and obtain the identity information of the current user using a preset similarity threshold.
In another embodiment of the invention, step 403 specifically includes:
Step 801: perform eye-feature image-frame pooling on the current user according to the feature map of predetermined facial parts, recognize the eye open/closed state through a drowsiness-detection network, and perform regression analysis on the eye keypoints to obtain the eye-opening degree.
Step 802: perform mouth-feature image-frame pooling according to the feature map of predetermined facial parts, and recognize the mouth open/closed state through a yawn-behavior-recognition network to obtain the mouth open/closed state.
Step 803: determine the user's degree of fatigue according to the eye open/closed state, the eye-opening degree, and the mouth open/closed state.
For step 801, eye ROI pooling (feature image-frame pooling) is performed on the feature map of predetermined facial parts; the drowsiness-detection network then recognizes the eye open/closed state and regresses the eye keypoints, and the keypoint positions are converted by formula into an eye-opening degree λ. The eye open/closed state S1 and the opening degree λ at the current moment are output; a sketch of one such conversion follows.
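The patent does not reproduce the conversion formula; a common stand-in is the eye aspect ratio over six eye landmarks, sketched below under that assumption.

```python
import math

def eye_opening_degree(pts: list[tuple[float, float]]) -> float:
    """Eye-opening degree from six eye landmarks ordered around the eye:
    p0/p3 are the horizontal corners, (p1, p5) and (p2, p4) vertical pairs.
    This eye-aspect-ratio heuristic is an assumption standing in for the
    patent's unspecified formula."""
    d = math.dist
    p0, p1, p2, p3, p4, p5 = pts
    return (d(p1, p5) + d(p2, p4)) / (2.0 * d(p0, p3))
```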
Correspondingly, two drowsiness-warning reference indices are defined: the drowsiness-behavior confidence C1 and the half-minute average eye aperture. From the eye open/closed state S1 output by the drowsiness-detection network, the drowsiness-behavior confidence is updated as C1_{t+1} = max(0, C1_t + (S1 − 1)·K1 + S1·K2); from the eye-opening degree λ, the half-minute average eye aperture λ̄_0.5 is computed as the mean of λ over the most recent 30 s / ts frames, where ts is the camera sampling period and K1, K2 are configurable warning-threshold parameters. Based on the current user's fatigue baseline data, a corresponding drowsiness-confidence warning threshold T1 and a half-minute average eye-aperture threshold T1′ are set. A sketch of these indices follows.
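The following is a minimal sketch of the two warning indices; the K1, K2, and ts values are placeholders for the configurable parameters, and the windowed mean stands in for the half-minute average (the exact λ̄_0.5 formula is not reproduced on this page).

```python
from collections import deque

K1, K2 = 0.2, 0.5                       # configurable warning-threshold parameters
TS = 0.1                                # camera sampling period in seconds (assumed)
window = deque(maxlen=int(30.0 / TS))   # half a minute of aperture samples

def update_drowsiness(c1: float, s1: int, aperture: float) -> tuple[float, float]:
    """s1: eye state from the drowsiness network (1 = closed, 0 = open);
    aperture: eye-opening degree lambda for the current frame.
    Returns (C1_{t+1}, half-minute average eye aperture)."""
    c1_next = max(0.0, c1 + (s1 - 1) * K1 + s1 * K2)  # grows when closed, decays when open
    window.append(aperture)
    return c1_next, sum(window) / len(window)
```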
For step 802, mouth ROI pooling (feature image-frame pooling) is performed on the feature map of predetermined facial parts, and the yawn-behavior-recognition network then recognizes and outputs the mouth open/closed state S2. Correspondingly, a yawn-warning reference index, the yawn-behavior confidence C2, is defined. From the yawn state output by the yawn-detection network, the yawn-behavior confidence is updated as C2_{t+1} = max(0, C2_t + (S2 − 1)·K1′ + S2·K2′), where K1′ and K2′ are configurable threshold parameters.
For step 803, the user's degree of fatigue is P = F[max(0, C1_t + (S1 − 1)·K1 + S1·K2)] + G[max(0, C2_t + (S2 − 1)·K1′ + S2·K2′)], where F and G are positively correlated (monotonically increasing) functions, as in the sketch below.
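A short sketch of this combination; the linear forms chosen for F and G are placeholders, since the text only requires positively correlated functions.

```python
def fatigue_degree(c1: float, c2: float) -> float:
    """P = F[C1] + G[C2] with illustrative linear choices F(x) = 0.6x and
    G(x) = 0.4x; any monotonically increasing F, G would satisfy the text."""
    return 0.6 * c1 + 0.4 * c2
```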
In this embodiment of the invention, the degree of fatigue is judged jointly from drowsiness-behavior and yawn-behavior features. With fatigue behavior defined more completely, the user's degree of fatigue can be determined more accurately, and the degree of fatigue is converted into the current user's attention score with respect to the screen content currently displayed by the terminal or with respect to the audio, so that the current user's attention score for the currently displayed screen content is determined more accurately.
Different screen brightnesses are controlled according to the degree of fatigue. When the degree of fatigue reaches a certain value and the current user's attention score for the screen content currently displayed by the terminal falls below a preset threshold, the screen is turned off; when the user wakes, the screen is lit again.
Step 102: if the first attention score is less than the first threshold and/or the second attention score is less than the second threshold, determine, based on the first attention score and/or the second attention score, the playback adjustment mode of the visual content and/or the audio through the play-state adaptive model.
Step 103: adjust the screen brightness and/or audio playback volume of the terminal according to the playback adjustment mode.
For steps 102 and 103, suppose for example that the preset threshold for the attention score with respect to the screen content currently displayed by the terminal is 80, and that the preset threshold for the attention score with respect to the terminal's current audio is 70. When the current user's attention score for the currently displayed screen content falls below 80, the play state of the video is adjusted according to that attention score and the pre-trained play-state adaptive model; for example, if the displayed content is a video, the playback brightness is turned down, and as the current user's attention score for the displayed content or the audio keeps dropping, the brightness is turned down gradually. Likewise, if the audio is music and the current user's attention score for the terminal's current audio falls below 70, the music playback volume is turned down, and as that attention score keeps dropping, the volume is turned down gradually. A minimal sketch of this logic follows.
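The following sketch uses the example thresholds above, with a proportional dimming rule standing in for the trained play-state adaptive model, which the patent does not specify.

```python
FIRST_THRESHOLD = 80.0   # screen-attention threshold from the example
SECOND_THRESHOLD = 70.0  # audio-attention threshold from the example

def adjust_playback(screen_score: float, audio_score: float,
                    brightness: float, volume: float) -> tuple[float, float]:
    """Scale brightness/volume down in proportion to how far each attention
    score has fallen below its threshold (floored at 10% of the original)."""
    if screen_score < FIRST_THRESHOLD:
        brightness *= max(screen_score / FIRST_THRESHOLD, 0.1)
    if audio_score < SECOND_THRESHOLD:
        volume *= max(audio_score / SECOND_THRESHOLD, 0.1)
    return brightness, volume

# Attention at 60/55: both below threshold, so both outputs are reduced.
print(adjust_playback(60.0, 55.0, brightness=1.0, volume=1.0))
```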
In the embodiments of the invention, while the terminal plays visual content and/or audio, a first attention score of the user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal are determined; if the first attention score is less than the first threshold and/or the second attention score is less than the second threshold, a playback adjustment mode for the visual content and/or the audio is determined through the play-state adaptive model based on the first attention score and/or the second attention score, and the screen brightness and/or audio playback volume of the terminal is then adjusted according to that playback adjustment mode. The screen-brightness display mode and the audio play mode are thus adapted automatically: the terminal can autonomously adjust its screen brightness and playback volume, saving electric energy.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the invention.
Embodiment 2:
Fig. 5 shows a schematic diagram of the audio/video playing apparatus 50 provided by Embodiment 2 of the present invention. The audio/video playing apparatus 50 comprises: a first determining module 51, a second determining module 52, and an adjusting module 53. The specific functions of the modules are as follows:
The first determining module 51 is configured to, while the terminal plays visual content and/or audio, determine a first attention score of the user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal.
The second determining module 52 is configured to, if the first attention score is less than the first threshold and/or the second attention score is less than the second threshold, determine, based on the first attention score and/or the second attention score, the playback adjustment mode of the visual content and/or the audio through the play-state adaptive model.
The adjusting module 53 is configured to adjust the screen brightness and/or audio playback volume of the terminal according to the playback adjustment mode.
Optionally, as shown in Fig. 6, the first determining module 51 includes:
a first determining unit 311, configured to determine the user's visual attention score with respect to the screen according to the user's eye-movement recognition result;
a second determining unit 312, configured to determine the user's auditory attention score with respect to the audio according to the strength of the user's feedback signal to the audio.
Optionally, as shown in Fig. 7, the first determining unit 311 includes:
an obtaining sub-unit 3111, configured to obtain the eye information of the user, the eye information including eye open/closed-state information and iris information;
a first determining sub-unit 3112, configured to determine the number of users in front of the screen according to the iris information;
a second determining sub-unit 3113, configured to, when the number of users is one, determine the current user's visual attention score with respect to the screen according to the eye open/closed-state information;
a third determining sub-unit 3114, configured to, when there are multiple users, determine each user's visual attention score with respect to the screen according to the respective eye open/closed-state information, rank the attention scores, and take the highest score as the user's visual attention score with respect to the screen.
Optionally, as shown in Fig. 8, the first determining module 51 includes:
a collecting unit 313, configured to collect a scene image containing the user's face in front of the terminal screen;
an extracting unit 314, configured to extract face information from the scene image;
a third determining unit 315, configured to determine the user's degree of fatigue from the face information through a feature map of predetermined facial parts;
a fourth determining unit 316, configured to determine, according to the degree of fatigue, the user's first attention score with respect to the screen of the terminal and second attention score with respect to the audio played by the terminal.
Optionally, as shown in Fig. 9, the third determining unit 315 includes:
an eye-feature image-frame pooling sub-unit 3151, configured to perform eye-feature image-frame pooling on the current user according to the feature map of predetermined facial parts, recognize the eye open/closed state through the drowsiness-detection network, and perform regression analysis on the eye keypoints to obtain the eye-opening degree;
a mouth-feature image-frame pooling sub-unit 3152, configured to perform mouth-feature image-frame pooling according to the feature map of predetermined facial parts, and recognize the mouth open/closed state through the yawn-behavior-recognition network to obtain the mouth open/closed state;
a fourth determining sub-unit 3153, configured to determine the user's degree of fatigue according to the eye open/closed state, the eye-opening degree, and the mouth open/closed state.
For specific limitations of the audio/video playing apparatus, reference may be made to the limitations of the audio/video playing method above, which are not repeated here. Each module in the above audio/video playing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, the processor of the computer device in hardware form, or stored in software form in the memory of the computer device, so that the processor can call them and execute the operations corresponding to the modules.
Embodiment 3:
In this embodiment, a computer device is provided; the computer device may be a client, and its internal structure may be as shown in Fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores the data involved in the audio/video playing method. The network interface of the computer device communicates with external terminals over a network connection. The computer program, when executed by the processor, implements an audio/video playing method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the computer program, the steps of the audio/video playing method in the above embodiments are implemented, for example steps 101 to 103 shown in Fig. 2; alternatively, the functions of the modules/units of the audio/video playing apparatus in the above embodiments are implemented, for example the functions of modules 51 to 53 shown in Fig. 5. To avoid repetition, details are not repeated here.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the steps of the audio/video playing method in the above embodiments are implemented, for example steps 101 to 103 shown in Fig. 2; alternatively, the functions of the modules/units of the audio/video playing apparatus in the above embodiments are implemented, for example the functions of modules 51 to 53 shown in Fig. 5. To avoid repetition, details are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be completed by instructing the relevant hardware through a computer program; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Embodiment 4:
An embodiment of the invention provides a terminal, as shown in Fig. 11. For convenience of description, only the parts relevant to the embodiment of the invention are shown; for specific technical details not disclosed, please refer to the method part of the embodiments of the invention. The terminal may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, or a vehicle-mounted computer; a mobile phone is taken as an example below:
Fig. 11 shows a block diagram of a partial structure of a mobile phone related to the terminal provided by an embodiment of the invention. Referring to Fig. 11, the mobile phone includes components such as a radio frequency (RF) circuit 1110, a memory 1120, an input unit 1130, a display unit 1140, a sensor 1150, an audio circuit 1160, a wireless fidelity (WiFi) module 1170, a processor 1180, and a power supply 1190. Those skilled in the art will understand that the mobile phone structure shown in Fig. 11 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The components of the mobile phone are described below with reference to Fig. 11:
The RF circuit 1110 may be used for receiving and sending signals during information transmission/reception or calls; in particular, downlink information from a base station is received and handed to the processor 1180 for processing, and uplink data is sent to the base station. In general, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and so on. In addition, the RF circuit 1110 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, and Short Messaging Service (SMS).
The memory 1120 may be used to store software programs and modules. The processor 1180 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound-playback function, an image-playback function, etc.), and the data storage area may store data created according to use of the mobile phone (such as audio data, a phone book, etc.). In addition, the memory 1120 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The input unit 1130 may be used to receive input numeric or character information and to generate key-signal input related to user settings and function control of the mobile phone 1100. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. The touch panel 1131, also called a touch screen, can collect the user's touch operations on or near it (such as operations by the user with a finger, stylus, or any other suitable object or attachment on or near the touch panel 1131) and drive the corresponding connecting devices according to a preset program. Optionally, the touch panel 1131 may include a touch-detection device and a touch controller. The touch-detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch-detection device, converts it into contact coordinates, sends them to the processor 1180, and receives and executes the commands sent by the processor 1180. In addition, the touch panel 1131 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1131, the input unit 1130 may also include other input devices 1132, which may specifically include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse, and a joystick.
The display unit 1140 may be used to display information input by the user or provided to the user, and the various menus of the mobile phone. The display unit 1140 may include a display panel 1141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1131 may cover the display panel 1141; when the touch panel 1131 detects a touch operation on or near it, the operation is transmitted to the processor 1180 to determine the type of the touch event, and the processor 1180 then provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in Fig. 11 the touch panel 1131 and the display panel 1141 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1131 and the display panel 1141 may be integrated to implement both the input and output functions.
The mobile phone 1100 may also include at least one sensor 1150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient-light sensor and a proximity sensor: the ambient-light sensor can adjust the brightness of the display panel 1141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1141 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the phone's attitude (such as landscape/portrait switching, related games, magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may also be configured on the mobile phone, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described here.
The audio circuit 1160, a loudspeaker 1161, and a microphone 1162 may provide an audio interface between the user and the mobile phone. The audio circuit 1160 can transmit an electric signal, converted from received audio data, to the loudspeaker 1161, which converts it into a sound signal for output; on the other hand, the microphone 1162 converts collected sound signals into electric signals, which the audio circuit 1160 receives and converts into audio data; after the audio data is processed by the processor 1180, it is sent through the RF circuit 1110 to, for example, another mobile phone, or output to the memory 1120 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1170, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 11 shows the WiFi module 1170, it is understood that it is not an essential component of the mobile phone 1100 and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 1180 is the control center of the mobile phone. It connects all parts of the whole phone using various interfaces and lines, and executes the various functions and data processing of the phone by running or executing the software programs and/or modules stored in the memory 1120 and calling the data stored in the memory 1120, thereby monitoring the phone as a whole. Optionally, the processor 1180 may include one or more processing units; preferably, the processor 1180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, etc., and the modem processor mainly handles wireless communication. It is understood that the modem processor may alternatively not be integrated into the processor 1180.
The mobile phone 1100 further includes a power supply 1190 (such as a battery) that powers the components. Preferably, the power supply may be logically connected to the processor 1180 through a power-management system, thereby implementing functions such as charge management, discharge management, and power-consumption management through the power-management system.
Although not shown, the mobile phone 1100 may also include a camera, a Bluetooth module, and the like, which are not described here.
In the embodiments of the invention, the processor 1180 included in the terminal also has the following functions: while the terminal plays visual content and/or audio, determining a first attention score of the user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal; if the first attention score is less than the first threshold and/or the second attention score is less than the second threshold, determining, based on the first attention score and/or the second attention score, the playback adjustment mode of the visual content and/or the audio through the play-state adaptive model; and adjusting the screen brightness and/or audio playback volume of the terminal according to the playback adjustment mode.
In one embodiment, the processor 1180 also has the following functions: determining the user's visual attention score with respect to the screen according to the user's eye-movement recognition result; and determining the user's auditory attention score with respect to the audio according to the strength of the user's feedback signal to the audio.
In one embodiment, the processor 1180 also has the following functions: obtaining the eye information of the user, the eye information including eye open/closed-state information and iris information; determining the number of users in front of the screen according to the iris information; when the number of users is one, determining the current user's visual attention score with respect to the screen according to the eye open/closed-state information; and when there are multiple users, determining each user's visual attention score with respect to the screen according to the respective eye open/closed-state information, ranking the attention scores, and taking the highest score as the user's visual attention score with respect to the screen.
In one embodiment, the processor 1180 also has the following functions: collecting a scene image containing the user's face in front of the terminal screen; extracting face information from the scene image; determining the user's degree of fatigue from the face information through a feature map of predetermined facial parts; and determining, according to the degree of fatigue, the user's first attention score with respect to the screen of the terminal and second attention score with respect to the audio played by the terminal.
In one embodiment, the processor 1180 also has the following functions: performing eye-feature image-frame pooling on the current user according to the feature map of predetermined facial parts, recognizing the eye open/closed state through the drowsiness-detection network, and performing regression analysis on the eye keypoints to obtain the eye-opening degree; performing mouth-feature image-frame pooling according to the feature map of predetermined facial parts, and recognizing the mouth open/closed state through the yawn-behavior-recognition network to obtain the mouth open/closed state; and determining the user's degree of fatigue according to the eye open/closed state, the eye-opening degree, and the mouth open/closed state.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is used only as an example. In practical applications, the above functions can be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the invention, and shall all be included within the protection scope of the present invention.

Claims (10)

1. An audio/video playing method, characterized by comprising:
while a terminal plays visual content and/or audio, determining a first attention score of a user with respect to the screen of the terminal and a second attention score of the user with respect to the audio played by the terminal;
if the first attention score is less than a first threshold and/or the second attention score is less than a second threshold, determining, based on the first attention score and/or the second attention score, a playback adjustment mode for the visual content and/or the audio through a play-state adaptive model;
adjusting the screen brightness and/or audio playback volume of the terminal according to the playback adjustment mode.
2. The audio and video playing method according to claim 1, wherein the first attention score is the user's attention score for the screen in terms of eyesight, and the second attention score is the user's attention score for the audio in terms of hearing; and determining the first attention score of the user for the screen of the terminal and the second attention score of the user for the audio played by the terminal comprises:
determining the user's attention score for the screen in terms of eyesight according to an eye-movement recognition result of the user;
determining the user's attention score for the audio in terms of hearing according to the intensity of a feedback signal made by the user in response to the audio.
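One way to picture the two scoring steps of claim 2 is shown below, with a gaze-fraction heuristic for the eyesight score and a normalised feedback-signal intensity for the hearing score; both heuristics are assumptions, as the claim names only the inputs to each score.

```python
# Sketch of the two scoring steps; both heuristics are assumed, not claimed.
def eyesight_attention_score(gaze_on_screen_frames: int, total_frames: int) -> float:
    """Eye-movement recognition result -> fraction of frames with gaze on the screen."""
    return gaze_on_screen_frames / max(total_frames, 1)


def hearing_attention_score(feedback_intensity: float, max_intensity: float) -> float:
    """Intensity of the user's feedback signal to the audio, normalised to [0, 1]."""
    return min(1.0, feedback_intensity / max(max_intensity, 1e-9))
```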
3. The audio and video playing method according to claim 2, wherein determining the user's attention score for the screen in terms of eyesight according to the eye-movement recognition result of the user comprises:
obtaining eye information of the user, the eye information comprising eye open/closed state information and iris information;
determining the number of users in front of the screen according to the iris information;
when the number of users is one, determining the current user's attention score for the screen in terms of eyesight according to the eye open/closed state information;
when the number of users is more than one, determining each user's attention score for the screen in terms of eyesight according to the respective eye open/closed state information, ranking the multiple attention scores, and taking the highest score as the user's attention score for the screen in terms of eyesight.
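The single-user and multi-user branches of claim 3 can be sketched as follows; score_from_eye_state is a hypothetical helper that maps one user's eye open/closed state information to a score.

```python
# Sketch of claim 3's branching; score_from_eye_state is a hypothetical helper.
def eyesight_attention(iris_records, eye_states, score_from_eye_state) -> float:
    """iris_records: one record per distinct iris detected in front of the screen,
    so len(iris_records) is the number of users; eye_states: per-user eye
    open/closed state information, in the same order."""
    if not iris_records:
        return 0.0
    scores = [score_from_eye_state(state) for state in eye_states]
    if len(iris_records) == 1:
        return scores[0]                    # single user: use that user's score
    ranked = sorted(scores, reverse=True)   # multiple users: rank the scores
    return ranked[0]                        # and keep the highest one
```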
4. The audio and video playing method according to claim 1, wherein determining the first attention score of the user for the information displayed on the terminal screen and the second attention score of the user for the audio played by the terminal comprises:
capturing a scene image containing the user's face in front of the terminal screen;
extracting face information from the scene image;
determining the user's degree of fatigue from the face information by means of a predetermined-region facial feature map;
determining, according to the degree of fatigue, the user's first attention score for the screen of the terminal and the user's second attention score for the audio played by the terminal.
5. The audio and video playing method according to claim 4, wherein determining the user's degree of fatigue from the face information by means of the predetermined-region facial feature map comprises:
performing eye-feature image-frame pooling on the current user according to the predetermined-region facial feature map, recognizing the eye open/closed state through a drowsiness detection network, and performing regression analysis on eye key points to obtain the degree of eye openness;
performing mouth-feature image-frame pooling according to the predetermined-region facial feature map, and recognizing the mouth open/closed state through a yawn behavior recognition network;
determining the user's degree of fatigue according to the eye open/closed state, the degree of eye openness, and the mouth open/closed state.
6. An audio and video playing apparatus, characterized by comprising:
a first determining module, configured to determine, while a terminal is playing visual content and/or audio, a first attention score of a user for the screen of the terminal and a second attention score of the user for the audio played by the terminal;
a second determining module, configured to, if the first attention score is less than a first threshold and/or the second attention score is less than a second threshold, determine, based on the first attention score and/or the second attention score, a playback adjustment mode for the visual content and/or the audio through a playback-state adaptive model;
an adjusting module, configured to adjust the screen brightness and/or the audio playback volume of the terminal according to the playback adjustment mode.
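Read as software, the three modules of claim 6 can be wired together as in the sketch below; the class layout is an illustrative mapping of the claimed modules onto callables, not a disclosed implementation.

```python
# Sketch mapping claim 6's modules onto three callables; the layout is illustrative.
class AudioVideoPlayingApparatus:
    def __init__(self, first_determining, second_determining, adjusting):
        self.first_determining = first_determining    # frame -> two attention scores
        self.second_determining = second_determining  # scores -> playback adjustment mode
        self.adjusting = adjusting                    # mode -> brightness/volume change

    def on_frame(self, frame) -> None:
        scores = self.first_determining(frame)
        if scores is None:
            return                         # no user detected; nothing to adjust
        mode = self.second_determining(scores)
        if mode is not None:               # None: both scores above their thresholds
            self.adjusting(mode)
```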
7. The audio and video playing apparatus according to claim 6, wherein the first determining module comprises:
a first determining unit, configured to determine the user's attention score for the screen in terms of eyesight according to an eye-movement recognition result of the user;
a second determining unit, configured to determine the user's attention score for the audio in terms of hearing according to the intensity of a feedback signal made by the user in response to the audio.
8. The audio and video playing apparatus according to claim 7, wherein the first determining unit comprises:
an obtaining subunit, configured to obtain eye information of the user, the eye information comprising eye open/closed state information and iris information;
a first determining subunit, configured to determine the number of users in front of the screen according to the iris information;
a second determining subunit, configured to determine, when the number of users is one, the current user's attention score for the screen in terms of eyesight according to the eye open/closed state information;
a third determining subunit, configured to determine, when the number of users is more than one, each user's attention score for the screen in terms of eyesight according to the respective eye open/closed state information, rank the multiple attention scores, and take the highest score as the user's attention score for the screen in terms of eyesight.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN201910429523.0A 2019-05-22 2019-05-22 Audio and video playing method, computer equipment and computer readable storage medium Pending CN110213663A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910429523.0A CN110213663A (en) 2019-05-22 2019-05-22 Audio and video playing method, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910429523.0A CN110213663A (en) 2019-05-22 2019-05-22 Audio and video playing method, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110213663A 2019-09-06

Family

ID=67788102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910429523.0A Pending CN110213663A (en) 2019-05-22 2019-05-22 Audio and video playing method, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110213663A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110830619A (en) * 2019-10-28 2020-02-21 维沃移动通信有限公司 Display method and electronic equipment
CN111078183A (en) * 2019-12-16 2020-04-28 北京明略软件系统有限公司 Audio and video information control method and device, intelligent equipment and computer readable storage medium
CN111709906A (en) * 2020-04-13 2020-09-25 北京深睿博联科技有限责任公司 Medical image quality evaluation method and device
CN114035871A (en) * 2021-10-28 2022-02-11 深圳市优聚显示技术有限公司 Display method and system of 3D display screen based on artificial intelligence and computer equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150102996A1 (en) * 2013-10-10 2015-04-16 Samsung Electronics Co., Ltd. Display apparatus and power-saving processing method thereof
CN105100478A (en) * 2015-07-29 2015-11-25 努比亚技术有限公司 Apparatus and method for controlling audio output of mobile terminal
US20180270530A1 (en) * 2015-11-25 2018-09-20 Le Holdings (Beijing) Co., Ltd. Method and apparatus for automatically turning off video playback
CN106094256A (en) * 2016-06-01 2016-11-09 宇龙计算机通信科技(深圳)有限公司 Home equipment control method, home equipment control device and intelligent glasses
CN106781282A (en) * 2016-12-29 2017-05-31 天津中科智能识别产业技术研究院有限公司 A kind of intelligent travelling crane driver fatigue early warning system
CN106998499A (en) * 2017-04-28 2017-08-01 张青 It is capable of the intelligent TV set and its control system and control method of intelligent standby
CN106982402A (en) * 2017-05-19 2017-07-25 北京小米移动软件有限公司 Control method, device and earphone that earphone is played
CN107861708A (en) * 2017-12-21 2018-03-30 广东欧珀移动通信有限公司 Volume method to set up, device, terminal device and storage medium

Similar Documents

Publication Publication Date Title
CN110213663A (en) Audio and video playing method, computer equipment and computer readable storage medium
CN107582028B (en) Sleep monitoring method and device
CN110288978A (en) Speech recognition model training method and device
CN104143078A (en) Living-body face recognition method, device and equipment
CN108712603B (en) Image processing method and mobile terminal
CN108646907A (en) Backlight brightness adjustment method and related product
CN109063583A (en) Learning method based on reading operations, and electronic equipment
CN107734617A (en) Application closing method and apparatus, storage medium, and electronic equipment
CN108304758A (en) Facial features tracking method and device
CN105204642A (en) Adjustment method and device of virtual-reality interactive image
CN107608523B (en) Control method and device of mobile terminal, storage medium and mobile terminal
CN108345848A (en) User gaze direction recognition method and related product
CN111025922B (en) Target equipment control method and electronic equipment
CN105279499A (en) Age recognition method and device
CN110908513B (en) Data processing method and electronic equipment
CN107767839A (en) Brightness adjusting method and related product
CN111798811B (en) Screen backlight brightness adjusting method and device, storage medium and electronic equipment
CN106412312A (en) Method and system for automatically awakening camera shooting function of intelligent terminal, and intelligent terminal
CN104966086A (en) Living body identification method and apparatus
CN107529699A (en) Electronic device control method and device
CN104077563A (en) Human face recognition method and device
CN108108671A (en) Product description information acquisition method and device
CN113574525A (en) Media content recommendation method and equipment
CN110013260B (en) Emotion theme regulation and control method, equipment and computer-readable storage medium
CN111046742A (en) Eye behavior detection method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190906