CN108900908A - Video broadcasting method and device - Google Patents
Video broadcasting method and device
- Publication number
- CN108900908A CN108900908A CN201810725262.2A CN201810725262A CN108900908A CN 108900908 A CN108900908 A CN 108900908A CN 201810725262 A CN201810725262 A CN 201810725262A CN 108900908 A CN108900908 A CN 108900908A
- Authority
- CN
- China
- Prior art keywords
- user
- age
- target video
- video
- emotion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/441—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
- H04N21/4415—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4627—Rights management associated to the content
Abstract
The embodiments of the present application disclose a video playback method and apparatus. One specific embodiment of the method includes: in response to receiving a message that a user requests to watch a target video, acquiring a facial image of the user, wherein the target video corresponds to a permitted age range; inputting the facial image into a pre-trained age classifier to obtain the age of the user, wherein the age classifier is used to characterize the correspondence between facial images and ages; determining whether the age of the user falls within the permitted age range corresponding to the target video; and if so, playing the target video, acquiring physiological parameters of the user while watching the target video, and adjusting the playback effect according to the physiological parameters, including at least one of the following: adjusting the volume of the target video, adjusting the picture brightness of the target video, closing the target video, or switching the video content. This embodiment can adaptively adjust the playback effect according to the user's physiological parameters, realizing targeted video playback.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a video playback method and apparatus.
Background
A video rating system divides videos into several grades according to their content and specifies the audience group each grade is suitable for. Rating systems emerged primarily to serve adolescents' development and education, distinguishing grades of content and their suitability, and guiding viewing choices.

A video rating is a standard and a reference value: it specifies, according to the content of a video, the age brackets for which viewing is suitable. Movie ratings essentially serve the audience. A rating system is not 100% accurate; its original intention is to protect children and teenagers. In some regions, although a rating system exists, there are various problems such as unreasonable settings and inadequate supervision, so the protective effect is extremely limited and children can easily encounter unsuitable program content.
Summary of the invention
The embodiments of the present application propose a video playback method and apparatus.
In a first aspect, an embodiment of the present application provides a video playback method, including: in response to receiving a message that a user requests to watch a target video, acquiring a facial image of the user, wherein the target video corresponds to a permitted age range; inputting the facial image into a pre-trained age classifier to obtain the age of the user, wherein the age classifier is used to characterize the correspondence between facial images and ages; determining whether the age of the user falls within the permitted age range corresponding to the target video; and if so, playing the target video, acquiring physiological parameters of the user while watching the target video, and adjusting the playback effect according to the physiological parameters, wherein adjusting the playback effect includes at least one of the following: adjusting the volume of the target video, adjusting the picture brightness of the target video, closing the target video, or switching the video content.
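The flow of the first aspect can be sketched as follows. This is a minimal illustration under assumed names (`classify_age`, `handle_request`), not the claimed implementation; the real age classifier described below is a trained model.

```python
# Minimal sketch of the playback flow from the first aspect.
# `classify_age` is a stand-in for the pre-trained age classifier.

def classify_age(face_image):
    # Placeholder: a real classifier maps a facial image to an age.
    return face_image["age_estimate"]

def handle_request(face_image, permitted_range):
    """Return 'play' when the estimated age is in the permitted range."""
    age = classify_age(face_image)
    low, high = permitted_range
    return "play" if low <= age <= high else "deny"

# Example: a video permitted for ages 18 to 60.
print(handle_request({"age_estimate": 30}, (18, 60)))  # play
print(handle_request({"age_estimate": 14}, (18, 60)))  # deny
```

After a "play" decision, the method goes on to monitor physiological parameters and adjust playback, as described in the following embodiments.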
In some embodiments, the physiological parameters include heart rate and/or respiratory rate; and adjusting the playback effect according to the physiological parameters includes: inputting the facial image into a pre-trained emotion classifier to obtain the emotion type of the user, wherein the emotion classifier is used to characterize the correspondence between facial images and emotion types; determining the emotional intensity of the user according to the user's heart rate and/or respiratory rate; acquiring a rating information table corresponding to the target video, and determining, according to the rating information table, the emotional intensity threshold corresponding to the age and emotion type of the user, wherein the rating information table is used to characterize the correspondence among age, emotion type, and emotional intensity threshold; and in response to detecting that the emotional intensity of the user is greater than the determined emotional intensity threshold, reducing the volume and/or picture brightness of the target video.
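The threshold comparison above can be sketched as a simple rule. The halving factor is an arbitrary illustrative choice, not a value taken from the patent:

```python
# Sketch of the emotion-based adjustment: when measured emotional
# intensity exceeds the threshold looked up in the rating information
# table, reduce volume and picture brightness.

def adjust_playback(intensity, threshold, volume, brightness):
    """Return (volume, brightness) after the threshold check."""
    if intensity > threshold:
        # Halving is an illustrative reduction factor.
        return volume * 0.5, brightness * 0.5
    return volume, brightness

print(adjust_playback(8.0, 6.0, 80, 100))  # (40.0, 50.0)
print(adjust_playback(5.0, 6.0, 80, 100))  # (80, 100)
```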
In some embodiments, after reducing the volume and/or picture brightness of the target video, the method further includes: in response to determining within a predetermined time that the emotional intensity of the user is less than or equal to the determined emotional intensity threshold, restoring the initial volume and/or initial picture brightness of the target video.
In some embodiments, after reducing the volume and/or picture brightness of the target video, the method further includes: in response to determining within a predetermined time that the emotional intensity of the user is still greater than the determined emotional intensity threshold, turning off the sound and/or picture of the target video.
In some embodiments, the method further includes: outputting prompt information including the user's heart rate and/or respiratory rate.
In some embodiments, the rating information table corresponding to the target video is obtained as follows. For each user among at least one user who has watched the target video: acquire the user's facial image, maximum heart rate, and/or maximum respiratory rate; input the user's facial image into the emotion classifier to obtain the user's emotion type; input the user's facial image into the age classifier to obtain the user's age; and determine the user's maximum emotional intensity according to the user's maximum heart rate and/or maximum respiratory rate. For each emotion type and each age among the at least one user, determine the average of the maximum emotional intensities of the users of that age who belong to that emotion type as the emotional intensity threshold for users of that age and emotion type. Generate the rating information table corresponding to the target video according to the age, emotion type, and emotional intensity threshold of each of the at least one user.
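The table construction above groups observed viewers by (age, emotion type) and averages their maximum intensities. A minimal sketch, with hypothetical record and table shapes:

```python
from collections import defaultdict

def build_rating_table(records):
    """records: iterable of (age, emotion_type, max_intensity) tuples,
    one per observed viewer. Returns {(age, emotion_type): threshold},
    where the threshold is the mean of the group's maximum intensities."""
    groups = defaultdict(list)
    for age, emotion, intensity in records:
        groups[(age, emotion)].append(intensity)
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

table = build_rating_table([
    (14, "fear", 6.0),
    (14, "fear", 8.0),
    (30, "fear", 9.0),
])
print(table[(14, "fear")])  # 7.0
print(table[(30, "fear")])  # 9.0
```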
In some embodiments, the method further includes: in response to receiving a modification request including a target emotional intensity threshold, modifying the rating information table corresponding to at least one video according to the target emotional intensity threshold.
In some embodiments, the method further includes: determining, among at least one user who has watched the target video, a user whose age equals the minimum age of the permitted age range corresponding to the target video as a candidate user; recording the moment at which the candidate user's emotional intensity is detected to exceed the emotional intensity threshold corresponding to the candidate user's emotion type as the emotion start time; in response to detecting that the difference between the current time and the start time is greater than a predetermined time threshold, determining the candidate user as a target user; and in response to determining that the ratio of the number of target users to the number of candidate users is greater than a predetermined ratio, increasing the minimum age of the permitted age range.
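The final ratio check in this embodiment can be sketched as follows. The ratio limit and the one-year increment are illustrative assumptions; the patent leaves both values unspecified:

```python
def maybe_raise_minimum_age(num_target, num_candidate, min_age,
                            ratio_limit=0.5):
    """If the share of candidate users whose emotional intensity stayed
    above threshold beyond the time limit exceeds `ratio_limit` (an
    assumed value), raise the minimum permitted age."""
    if num_candidate and num_target / num_candidate > ratio_limit:
        return min_age + 1  # the increment is an illustrative choice
    return min_age

print(maybe_raise_minimum_age(6, 10, 18))  # 19
print(maybe_raise_minimum_age(2, 10, 18))  # 18
```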
In a second aspect, an embodiment of the present application provides a video playback apparatus, including: a first acquisition unit, configured to, in response to receiving a message that a user requests to watch a target video, acquire a facial image of the user, wherein the target video corresponds to a permitted age range; a first classification unit, configured to input the facial image into a pre-trained age classifier to obtain the age of the user, wherein the age classifier is used to characterize the correspondence between facial images and ages; a first determination unit, configured to determine whether the age of the user falls within the permitted age range corresponding to the target video; and a playback unit, configured to, if the age of the user falls within the permitted age range corresponding to the target video, play the target video, acquire physiological parameters of the user while watching the target video, and adjust the playback effect according to the physiological parameters, wherein adjusting the playback effect includes at least one of the following: adjusting the volume of the target video, adjusting the picture brightness of the target video, closing the target video, or switching the video content.
In some embodiments, the physiological parameters include heart rate and/or respiratory rate; and the apparatus further includes: a second classification unit, configured to input the facial image into a pre-trained emotion classifier to obtain the emotion type of the user, wherein the emotion classifier is used to characterize the correspondence between facial images and emotion types; a second acquisition unit, configured to determine the emotional intensity of the user according to the user's heart rate and/or respiratory rate; a second determination unit, configured to acquire the rating information table corresponding to the target video and determine, according to the rating information table, the emotional intensity threshold corresponding to the age and emotion type of the user, wherein the rating information table is used to characterize the correspondence among age, emotion type, and emotional intensity threshold; and a video adjustment unit, configured to, in response to detecting that the emotional intensity of the user is greater than the determined emotional intensity threshold, reduce the volume and/or picture brightness of the target video.
In some embodiments, the video adjustment unit is further configured to: after reducing the volume and/or picture brightness of the target video, in response to determining within a predetermined time that the emotional intensity of the user is less than or equal to the determined emotional intensity threshold, restore the initial volume and/or initial picture brightness of the target video.
In some embodiments, the video adjustment unit is further configured to: after reducing the volume and/or picture brightness of the target video, in response to determining within a predetermined time that the emotional intensity of the user is still greater than the determined emotional intensity threshold, turn off the sound and/or picture of the target video.
In some embodiments, the apparatus further includes an output unit, configured to output prompt information including the user's heart rate and/or respiratory rate.
In some embodiments, the apparatus further includes a generation unit, configured to: for each user among at least one user who has watched the target video, acquire the user's facial image, maximum heart rate, and/or maximum respiratory rate; input the user's facial image into the emotion classifier to obtain the user's emotion type; input the user's facial image into the age classifier to obtain the user's age; determine the user's maximum emotional intensity according to the user's maximum heart rate and/or maximum respiratory rate; for each emotion type and each age, determine the average of the maximum emotional intensities of the users of that age who belong to that emotion type as the emotional intensity threshold for users of that age and emotion type; and generate the rating information table corresponding to the target video according to the age, emotion type, and emotional intensity threshold of each of the at least one user.
In some embodiments, the apparatus further includes a modification unit, configured to: in response to receiving a modification request including a target emotional intensity threshold, modify the rating information table corresponding to at least one video according to the target emotional intensity threshold.
In some embodiments, the apparatus further includes an age-permission adjustment unit, configured to: determine, among at least one user who has watched the target video, a user whose age equals the minimum age of the permitted age range corresponding to the target video as a candidate user; record the moment at which the candidate user's emotional intensity is detected to exceed the emotional intensity threshold corresponding to the candidate user's emotion type as the emotion start time; in response to detecting that the difference between the current time and the start time is greater than a predetermined time threshold, determine the candidate user as a target user; and in response to determining that the ratio of the number of target users to the number of candidate users is greater than a predetermined ratio, increase the minimum age of the permitted age range.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement any of the methods of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements any of the methods of the first aspect.
The video playback method and apparatus provided by the embodiments of the present application recognize, through facial recognition, the age of the user requesting to watch a video; play the video only when the age of the user falls within the permitted age range of the video; and adjust the volume and/or picture brightness of the video according to the user's physiological parameters. These embodiments can adaptively adjust the playback effect according to the user's physiological parameters, realizing targeted video playback.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the video playback method according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the video playback method according to the present application;
Fig. 4 is a flowchart of another embodiment of the video playback method according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the video playback apparatus according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the video playback method or video playback apparatus of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101 and 102, a wearable smart device 103, a network 104, and a server 105. After the wearable smart device 103 collects the user's physiological parameters, it sends them to the terminal devices 101 and 102 over a wireless network. The network 104 serves as the medium providing communication links between the terminal devices 101, 102 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber optic cables.
A user may use the terminal devices 101 and 102 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101 and 102, such as video playback applications, web browser applications, shopping applications, search applications, instant messaging tools, mailbox clients, and social platform software.
The terminal devices 101 and 102 may be hardware or software. When they are hardware, they may be various electronic devices that have a display screen and support video playback, and that have a camera and support facial image recognition, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, television sets, and the like. When the terminal devices are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The wearable smart device 103 may be a smart bracelet that collects the user's physiological parameters. The physiological parameters may include, but are not limited to, information such as heart rate, respiratory rate, blood pressure, and body temperature.
The server 105 may be a server providing various services, for example a background video server providing support for the videos displayed on the terminal devices 101 and 102. The background video server may analyze and otherwise process the received facial images and physiological parameters of users, and feed the processing results (for example, the adjusted video) back to the terminal devices.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the video playback method provided by the embodiments of the present application may be executed by the terminal devices 101 and 102, or by the server 105. Correspondingly, the video playback apparatus may be provided in the terminal devices 101 and 102, or in the server 105. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, wearable smart devices, networks, and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a process 200 of one embodiment of the video playback method according to the present application is shown. The video playback method includes the following steps.
Step 201: in response to receiving a message that a user requests to watch a target video, acquire a facial image of the user.
In the present embodiment, the execution subject of the video playback method (for example, the server shown in Fig. 1) may receive, through a wired or wireless connection, a request to play the target video from a user of a terminal used for video playback; the server side then acquires the facial image of the user captured by the terminal. The execution subject of the video playback method may also be the terminal itself, which directly acquires the user's request to play the target video and the facial image. Here, the target video refers to the video the user has specified for playback. The target video corresponds to a permitted age range; for example, video A may be permitted for ages 18 and above. The permitted age range may include a minimum age and may also include a maximum age, for example a permitted age range of 18 to 60 years old.
Step 202: input the facial image into a pre-trained age classifier to obtain the age of the user.
In the present embodiment, the age classifier is used to characterize the correspondence between facial images and ages. The age classifier used in this embodiment may be a decision tree, logistic regression, naive Bayes, a neural network, or the like. The age classifier may use a multilayer convolutional neural network architecture, which includes an input layer, hidden layers, and an output layer, with the output values of each layer serving as the input values of the next. On the basis of a simple probabilistic model, the age classifier uses the maximum probability value to make a classification prediction on the data. The age classifier is trained in advance and may be accurate to a specific age, for example, 20 years old or 14 years old. Facial features may be extracted from a large number of facial image samples to train the classifier. The construction and implementation of the age classifier can roughly proceed through the following steps: 1. select samples (including positive samples and negative samples) and divide all samples into a training set and a test set; 2. run the classifier algorithm on the training samples to generate the classifier; 3. input the test samples into the classifier to generate prediction results; 4. calculate the necessary evaluation indices according to the prediction results and assess the performance of the classifier.
For example, a large number of facial images of 14-year-old children are collected as positive samples, and a large number of facial images of 18-year-old adults are collected as negative samples. The classifier algorithm is run on the positive and negative samples to generate a classifier. The positive and negative samples are then input into the classifier separately to generate prediction results, which are used to verify whether the classifier correctly recognizes 14-year-old children. The performance of the classifier is assessed according to the prediction results.
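The four-step train/test procedure above can be sketched with a deliberately minimal stand-in model. The patent does not fix a concrete algorithm, so the nearest-centroid classifier, the 2-D "facial feature" vectors, and the accuracy metric below are all illustrative assumptions, not the claimed implementation:

```python
import random

def train_centroid_classifier(samples):
    """Step 2: run a (toy) classifier algorithm on labelled training samples.
    `samples` is a list of (feature_vector, age_label) pairs standing in for
    facial features extracted from labelled face images."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Step 3: predict the label whose centroid is nearest."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=sq_dist)

def evaluate(centroids, test_samples):
    """Step 4: compute an evaluation indicator (here, plain accuracy)."""
    correct = sum(1 for f, y in test_samples if predict(centroids, f) == y)
    return correct / len(test_samples)

# Step 1: select positive (age 14) and negative (age 18) samples and split them.
random.seed(0)
positives = [([random.gauss(0, 0.3), random.gauss(0, 0.3)], 14) for _ in range(40)]
negatives = [([random.gauss(3, 0.3), random.gauss(3, 0.3)], 18) for _ in range(40)]
train = positives[:30] + negatives[:30]
test = positives[30:] + negatives[30:]

model = train_centroid_classifier(train)
accuracy = evaluate(model, test)
```

On this well-separated toy data the classifier scores near-perfect accuracy; a real age classifier would substitute a CNN over pixel data, but the select/train/predict/evaluate skeleton is the same.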
Step 203: determine whether the user's age is within the permitted age range corresponding to the target video.
In the present embodiment, the user's age determined in step 202 is compared with the permitted age range corresponding to the target video to determine whether the user has reached the age required to watch the target video. For example, suppose the age classifier determines that the user is 30 years old and the permitted age range of the target video is 18 to 60; the user's age then falls within the permitted age range corresponding to the target video. If there are multiple users, the facial image of each user is input into the age classifier separately to obtain each user's age, and it is then determined whether each user's age is within the permitted age range corresponding to the target video. Only when the ages of all users fall within the permitted age range corresponding to the target video is watching the target video allowed.
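The multi-viewer check of step 203 reduces to a small predicate; the function name and the optional-bound handling below are illustrative assumptions:

```python
def ages_permitted(user_ages, min_age=None, max_age=None):
    """Return True only when every detected viewer's age falls inside the
    permitted age range of the target video (step 203). Either bound may
    be absent, since a range can specify only a minimum or only a maximum."""
    for age in user_ages:
        if min_age is not None and age < min_age:
            return False
        if max_age is not None and age > max_age:
            return False
    return True

# A single 30-year-old viewer satisfies an 18-60 range; adding a
# 10-year-old viewer blocks playback for everyone.
solo_ok = ages_permitted([30], min_age=18, max_age=60)
group_ok = ages_permitted([30, 10], min_age=18, max_age=60)
```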
Step 204: if so, play the target video and obtain the user's physiological parameters while the user watches the target video.
In the present embodiment, when the user's age is within the permitted age range corresponding to the target video, the target video may be played for the user. If there are multiple users, the age of the youngest user must fall within the permitted age range corresponding to the target video. The user's physiological parameters may be obtained through devices such as a smart bracelet or a camera, and include but are not limited to iris information, heart rate, respiratory rate, electrocardiogram, skin conductance, body temperature, and blood pressure. Pupil changes in the iris information can reflect whether the user is frightened and can be obtained from the facial image. Information such as heart rate, respiratory rate, electrocardiogram, skin conductance, body temperature, and blood pressure can all reflect the user's mood. The user's respiratory rate may also be obtained through an electromagnetic sensor, which may be mounted on the terminal device.
Step 205: adjust the video display effect according to the physiological parameters.
In the present embodiment, the variation of the physiological parameters is monitored. When the variation amplitude of a physiological parameter exceeds the predetermined variation threshold for that parameter, the target video is considered to over-stimulate the user, and the video display effect therefore needs to be adjusted. Adjusting the video display effect includes at least one of the following: adjusting the volume of the target video, adjusting the picture brightness of the target video, closing the target video, and switching the video content. The volume of the target video may be turned down, the picture brightness may be dimmed, the picture and/or sound of the video may even be shut off directly, or playback may switch to other video content. For example, if the user's heart rate changes from 70 beats per minute to 120 beats per minute, the variation amplitude exceeds the predetermined variation threshold of 50% for heart rate, so the volume of the target video may be turned down, or the picture brightness reduced, or both. The predetermined variation threshold of a physiological parameter may be set with reference to the physiological parameters of a healthy human body. Different predetermined variation thresholds may also be set for people of different ages. For example, the predetermined variation thresholds of physiological parameters for the elderly may be set lower, to prevent drastic mood swings; extremes of joy and sorrow are both very harmful to health.
Optionally, the video display effect is adjusted with the reference values of the physiological parameters of a healthy person as a benchmark. For example, a normal heart rate is 60 to 100 beats per minute. If the user's heart rate is greater than 100 beats per minute or less than 60 beats per minute, the user's emotional fluctuation is considered excessive and the video display effect needs to be adjusted. Likewise, the video display effect needs to be adjusted when the user's blood pressure is not in the normal range.
Optionally, if, within a predetermined time after adjusting the video display effect, the variation of the user's physiological parameters still exceeds the predetermined variation threshold, or the user's physiological parameters remain outside the normal range, the target video is considered unsuitable for the user to watch, and the picture, sound, and so on of the program are shielded. This prevents the target video from over-stimulating the user and causing diseases such as heart disease or hypertension.
Continuing to refer to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the video playing method of the present embodiment. In the application scenario of Fig. 3, a 10-year-old user requests to watch the video "Snow White" on a mobile phone 301. The mobile phone 301 captures the user's facial image through the camera and performs age recognition, identifying the user's age as 10. The mobile phone obtains the permitted age range of the video "Snow White", which is 4 years old and above, so "Snow White" may be played for the user. During playback, the user's physiological parameter (heart rate) is obtained in real time through a smart bracelet 302. When the user watches the queen give the poisoned apple to Snow White, the heart rate rises from an average of 70 beats per minute to 120 beats per minute, as shown in wire frame 302, indicating that the user feels frightened at this moment. The volume of "Snow White" may then be turned down and the picture brightness reduced so that the user is less frightened, thereby lowering the heart rate.
The method provided by the above embodiment of the present application associates video playback with the user's physiological parameters, so that the video playback effect can be adaptively adjusted according to the user's physiological parameters, realizing more targeted video playback.
With further reference to Fig. 4, a process 400 of another embodiment of the video playing method is shown. The process 400 of the video playing method includes the following steps:
Step 401: in response to receiving a message in which the user requests to watch the target video, obtain the user's facial image.
Step 402: input the facial image into a pre-trained age classifier to obtain the user's age.
Step 403: determine whether the user's age is within the permitted age range corresponding to the target video.
Steps 401-403 are essentially identical to steps 201-203 and are therefore not described again.
Step 404: if so, play the target video and obtain the user's heart rate and/or respiratory rate while the user watches the target video.
In the present embodiment, when the user's age is within the permitted age range corresponding to the target video, the target video may be played for the user. If there are multiple users, the age of the youngest user must fall within the permitted age range corresponding to the target video. The user's heart rate and/or respiratory rate while watching the target video may be obtained through a smart bracelet, or the user's respiratory rate may be obtained through an electromagnetic sensor installed in the terminal.
Step 405: input the facial image into a pre-trained emotion classifier to obtain the user's emotion type.
In the present embodiment, the emotion type refers to a classification of moods according to human emotional cognition. Specifically, emotion types may include calm, happy, sad, surprised, angry, frightened, and so on. The emotion classifier is used to characterize the correspondence between facial images and emotion types. The emotion classifier used in this embodiment may be a decision tree, logistic regression, naive Bayes, a neural network, or the like. On the basis of a simple probabilistic model, the emotion classifier performs classification prediction on the data using the maximum probability value. The emotion classifier may use a multilayer convolutional neural network architecture, which comprises an input layer, hidden layers, and an output layer; the output values of each layer serve as the input values of the next layer. The emotion classifier is trained in advance. Facial features may be extracted from a large number of facial image samples to train the classifier. The construction and implementation of the emotion classifier generally proceed through the following steps: 1. Select samples (comprising positive samples and negative samples) and divide all samples into two parts: training samples and test samples. 2. Run the classifier algorithm on the training samples to generate a classifier. 3. Input the test samples into the classifier to generate prediction results. 4. Based on the prediction results, calculate the necessary evaluation indicators and assess the performance of the classifier.
For example, a large number of happy facial images are collected as positive samples, and a large number of angry facial images are collected as negative samples. The classifier algorithm is run on the positive and negative samples to generate a classifier. The positive and negative samples are then input into the classifier separately to generate prediction results, which are used to verify whether the classifier correctly recognizes happiness. The performance of the classifier is assessed according to the prediction results.
Step 406: determine the user's emotional intensity according to the user's heart rate and/or respiratory rate.
In the present embodiment, the emotional intensity can represent how fully a mood of a certain type expresses its corresponding emotion. For example, a higher heart rate and/or respiratory rate indicates a stronger mood. A weighted sum of heart rate and respiratory rate may be used to characterize the emotional intensity. For example, if the weight of the heart rate is set to wh and the weight of the respiratory rate is set to wr, then emotional intensity = heart rate * wh + respiratory rate * wr. The two weights may be the same or different. If either reading is missing, the value of the missing item is set to 0.
Optionally, the emotional intensity is normalized. Specifically, different users express moods in different ways: an extrovert may be more willing to show feelings, laughing out loud when happy and wailing when sad, while an introvert is more reserved and may only smile or quietly shed tears when expressing a mood of the same degree. Therefore, in order to describe each user's emotional changes more accurately, the emotional intensity may be normalized. Specifically, for moods of the same emotion type, the emotional intensity when the user's emotion is strongest may be taken as the maximum value MAX, and the emotional intensity when the user's emotion is calmest may be taken as the minimum value MIN. The emotional intensity X of any other emotional state can then be normalized based on the maximum and minimum values; for example, the normalized emotional intensity X' can be calculated according to the formula X' = (X - MIN)/(MAX - MIN). In this way, each user's normalized emotional intensity is guaranteed to lie between 0 and 1, ensuring that the emotional intensity characterizes different users identically. The emotional intensity may further be uniformly scaled up by a factor of 100 so that it lies in the range 0-100, which is convenient for calculation.
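The weighted sum of step 406 and the min-max normalization above fit in a few lines. The weight values wh and wr below are illustrative assumptions; the method itself leaves them open:

```python
def emotional_intensity(heart_rate=None, breathing_rate=None, wh=0.7, wr=0.3):
    """Weighted sum of heart rate and respiratory rate; a missing reading
    contributes 0, as described in step 406. The weights wh and wr are
    illustrative choices, not values fixed by the method."""
    hr = heart_rate if heart_rate is not None else 0
    br = breathing_rate if breathing_rate is not None else 0
    return hr * wh + br * wr

def normalize(x, min_val, max_val, scale=100):
    """Min-max normalize a raw intensity X to X' = (X - MIN)/(MAX - MIN),
    then scale to the 0-100 range mentioned above."""
    return (x - min_val) / (max_val - min_val) * scale

# MIN comes from the user's calmest reading, MAX from the strongest one;
# any other reading then maps into 0-100.
calm = emotional_intensity(60, 12)
peak = emotional_intensity(140, 30)
score = normalize(emotional_intensity(100, 20), calm, peak)
```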
Step 407: obtain the rating information table corresponding to the target video, and determine, according to the rating information table, the emotional intensity threshold corresponding to the user's age and the user's emotion type.
In the present embodiment, the rating information table is used to characterize the correspondence between age, emotion type, and emotional intensity threshold. During the rating review stage of the target video, a large number of users may be allowed to trial-watch the target video to generate the rating information table, as follows:
Emotion type | Age | Emotional intensity threshold |
Happy | 10 | 20 |
Happy | 30 | 15 |
Frightened | 10 | 30 |
Table 1
The rating information table may also include the program name of the target video, and may further include a program number to distinguish different programs. The ages included in the rating information table are the ages contained in the permitted age range.
In an optional implementation of the present embodiment, the rating information table corresponding to the target video is obtained as follows:
Step 4071: for each user in at least one user watching the target video, obtain the user's facial image, maximum heart rate, and/or maximum respiratory rate.
Step 4072: input the user's facial image into the emotion classifier to obtain the user's emotion type.
Step 4073: input the user's facial image into the age classifier to obtain the user's age.
Step 4074: determine the user's maximum emotional intensity according to the user's maximum heart rate and/or maximum respiratory rate.
Step 4075: for each emotion type at each user age among the at least one user, determine the average of the maximum emotional intensities of the users of that age who exhibit that emotion type as the emotional intensity threshold for users of that age and that emotion type. For example, when multiple 10-year-old users watch the target video and exhibit the same emotion type during viewing, their emotional intensities vary continuously; each user's maximum emotional intensity is taken, these maxima are averaged, and the resulting average serves as the emotional intensity threshold for users of that age and that emotion type.
Step 4076: generate the rating information table corresponding to the target video according to the age, emotion type, and emotional intensity threshold of each of the at least one user. The emotion types and emotional intensity thresholds of users in different age groups are combined into the rating information table corresponding to the target video.
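Steps 4071-4076 amount to grouping trial viewers by (emotion type, age) and averaging their maximum intensities. The flat record format below is an assumption made for illustration:

```python
from collections import defaultdict

def build_rating_table(trial_records):
    """Steps 4071-4076 in miniature: each record is one trial viewer's
    (emotion_type, age, max_emotional_intensity); the threshold for every
    (emotion_type, age) pair is the mean of that group's maxima."""
    groups = defaultdict(list)
    for emotion, age, max_intensity in trial_records:
        groups[(emotion, age)].append(max_intensity)
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

# Two happy 10-year-olds peaked at 18 and 22, so their threshold is 20,
# matching the first row of Table 1.
table = build_rating_table([
    ("happy", 10, 18), ("happy", 10, 22),
    ("happy", 30, 15),
    ("frightened", 10, 30),
])
```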
In an optional implementation of the present embodiment, the method further includes: in response to receiving a modification request including a target emotional intensity threshold, modifying the rating information table corresponding to at least one video according to the target emotional intensity threshold. That is, the rating information table corresponding to each video may be modified according to the user's own physical condition. For example, the emotional intensity threshold for a user with heart disease or hypertension may be set lower.
Step 408: in response to detecting that the user's emotional intensity is greater than the determined emotional intensity threshold, reduce the volume and/or picture brightness of the target video.
In the present embodiment, the emotional intensity threshold can be looked up in the rating information table corresponding to the target video according to the user's age and emotion type. When the user's emotional intensity is greater than the determined emotional intensity threshold, the target video is considered to over-stimulate the user; the volume of the target video may therefore be turned down, and the picture brightness may also be dimmed. For example, if a 10-year-old user feels fear while watching the target video with an emotional intensity of 20, and the emotional intensity threshold corresponding to age 10 and the frightened emotion type in the rating information table of the target video is 10, then the volume of the target video may be turned down, or the picture brightness reduced, or both.
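Step 408 is a table lookup followed by a comparison. The action labels and the behavior when no table entry exists are assumptions for illustration:

```python
def playback_action(rating_table, emotion, age, intensity):
    """Step 408 as a lookup: find the threshold for the user's
    (emotion type, age) pair in the rating information table and decide
    whether playback should be toned down."""
    threshold = rating_table.get((emotion, age))
    if threshold is None or intensity <= threshold:
        return "keep_playing"  # no entry, or intensity within bounds
    return "reduce_volume_and_brightness"

# A frightened 10-year-old at intensity 20 exceeds the threshold of 10
# from the example above, so playback is toned down.
rating_table = {("frightened", 10): 10}
action = playback_action(rating_table, "frightened", 10, 20)
```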
In an optional implementation of the present embodiment, after the volume and/or picture brightness of the target video is reduced, the method further includes: in response to determining, within a predetermined time, that the user's emotional intensity is less than or equal to the determined emotional intensity threshold, restoring the initial volume and/or initial picture brightness of the target video. That is, when the user's emotional intensity drops below the emotional intensity threshold, the initial volume and/or initial picture brightness of the target video can be restored, returning to the playback effect in force before the volume and/or picture brightness was reduced.
In an optional implementation of the present embodiment, after the volume and/or picture brightness of the target video is reduced, the method further includes: in response to determining, within a predetermined time, that the user's emotional intensity is still greater than the determined emotional intensity threshold, shutting off the volume and/or picture of the target video. If the user's emotional intensity still cannot drop to the emotional intensity threshold within the predetermined time after the volume and/or picture brightness is reduced, the user is no longer allowed to watch the target video.
In an optional implementation of the present embodiment, the method further includes: outputting prompt information including the user's heart rate and/or respiratory rate. The prompt may be output on the display screen or through the smart bracelet, and the user may also be alerted by vibration. A control strategy for video playback, such as giving a prompt or stopping playback, may also be stored in the rating information table.
In an optional implementation of the present embodiment, the method further includes: determining, among at least one user watching the target video, a user whose age equals the minimum age in the permitted age range corresponding to the target video as a candidate user; recording, as an emotion start time, the moment at which the candidate user's emotional intensity is detected to be greater than the emotional intensity threshold corresponding to the candidate user's emotion type; determining the candidate user as a target user in response to detecting that the difference between the current time and the start time is greater than a predetermined time threshold; and increasing the minimum age in the permitted age range in response to determining that the ratio of the number of target users to the number of candidate users is greater than a predetermined ratio. That is, when the emotional intensities of a certain proportion of the youngest users are detected not to decline for a long time, the minimum age in the permitted age range of the target video is considered inappropriate and needs to be adjusted upward. For example, "viewable by ages 14 and above" may be adjusted to "viewable by ages 15 and above". If the fed-back emotional intensities of a certain proportion of users aged 15 and above still fail to decline for a long time, the permitted age of the target video is still considered inappropriate, and the minimum age in the permitted age range continues to be adjusted upward. The process is repeated until the proportion of users whose fed-back emotional intensity fails to decline for a long time is less than the predetermined ratio.
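The repeated raising of the minimum age described above can be sketched as a loop over per-age statistics. The data shape (age mapped to target-user and candidate-user counts) and the ceiling parameter are assumptions made for illustration:

```python
def raise_min_age(min_age, sustained_by_age, ratio_limit, ceiling=18):
    """Repeatedly raise the minimum permitted age while the proportion of
    youngest viewers whose intensity stays elevated too long
    (target users / candidate users) exceeds the predetermined ratio.
    `sustained_by_age` maps age -> (target_count, candidate_count)."""
    while min_age < ceiling:
        targets, candidates = sustained_by_age.get(min_age, (0, 0))
        if candidates == 0 or targets / candidates <= ratio_limit:
            break
        min_age += 1
    return min_age

# 30% of 14-year-olds and 25% of 15-year-olds stay over-stimulated; with
# a 20% limit the minimum age settles at 16, where only 10% do.
stats = {14: (3, 10), 15: (5, 20), 16: (1, 10)}
new_min = raise_min_age(14, stats, ratio_limit=0.2)
```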
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the process 400 of the video playing method in the present embodiment highlights the step of adjusting the video display effect according to the emotional intensity. The scheme described in the present embodiment can thus adjust the video display effect more accurately according to the user's physiological parameters.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a video playing apparatus. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can be applied to various electronic devices.
As shown in Fig. 5, the video playing apparatus 500 of the present embodiment includes: a first acquisition unit 501, a first classification unit 502, a first determination unit 503, and a playback unit 504. The first acquisition unit 501 is configured to, in response to receiving a message in which the user requests to watch the target video, obtain the user's facial image, where the target video has a corresponding permitted age range. The first classification unit 502 is configured to input the facial image into a pre-trained age classifier to obtain the user's age, where the age classifier is used to characterize the correspondence between facial images and ages. The first determination unit 503 is configured to determine whether the user's age is within the permitted age range corresponding to the target video. The playback unit 504 is configured to, if the user's age is within the permitted age range corresponding to the target video, play the target video, obtain the user's physiological parameters while the user watches the target video, and adjust the video display effect according to the physiological parameters, where adjusting the video display effect includes at least one of the following: adjusting the volume of the target video, adjusting the picture brightness of the target video, closing the target video, and switching the video content.
In the present embodiment, the specific processing of the first acquisition unit 501, first classification unit 502, first determination unit 503, and playback unit 504 of the video playing apparatus 500 may refer to steps 201, 202, 203, and 204 in the embodiment corresponding to Fig. 2.
In an optional implementation of the present embodiment, the physiological parameters include heart rate and/or respiratory rate, and the apparatus 500 further includes: a second classification unit, configured to input the facial image into a pre-trained emotion classifier to obtain the user's emotion type, where the emotion classifier is used to characterize the correspondence between facial images and emotion types; a second acquisition unit, configured to determine the user's emotional intensity according to the user's heart rate and/or respiratory rate; a second determination unit, configured to obtain the rating information table corresponding to the target video and determine, according to the rating information table, the emotional intensity threshold corresponding to the user's age and the user's emotion type, where the rating information table is used to characterize the correspondence between age, emotion type, and emotional intensity threshold; and a video adjustment unit, configured to, in response to detecting that the user's emotional intensity is greater than the determined emotional intensity threshold, reduce the volume and/or picture brightness of the target video.
In an optional implementation of the present embodiment, the video adjustment unit is further configured to: after the volume and/or picture brightness of the target video is reduced, in response to determining, within a predetermined time, that the user's emotional intensity is less than or equal to the determined emotional intensity threshold, restore the initial volume and/or initial picture brightness of the target video.
In an optional implementation of the present embodiment, the video adjustment unit is further configured to: after the volume and/or picture brightness of the target video is reduced, in response to determining, within a predetermined time, that the user's emotional intensity is still greater than the determined emotional intensity threshold, shut off the volume and/or picture of the target video.
In an optional implementation of the present embodiment, the apparatus 500 further includes an output unit, configured to: output prompt information including the user's heart rate and/or respiratory rate.
In an optional implementation of the present embodiment, the apparatus 500 further includes a generation unit, configured to: for each user in at least one user watching the target video, obtain the user's facial image, maximum heart rate, and/or maximum respiratory rate; input the user's facial image into the emotion classifier to obtain the user's emotion type; input the user's facial image into the age classifier to obtain the user's age; and determine the user's maximum emotional intensity according to the user's maximum heart rate and/or maximum respiratory rate. For each emotion type at each user age among the at least one user, the average of the maximum emotional intensities of the users of that age who exhibit that emotion type is determined as the emotional intensity threshold for users of that age and that emotion type. The rating information table corresponding to the target video is generated according to the age, emotion type, and emotional intensity threshold of each of the at least one user.
In an optional implementation of the present embodiment, the apparatus 500 further includes a modification unit, configured to: in response to receiving a modification request including a target emotional intensity threshold, modify the rating information table corresponding to at least one video according to the target emotional intensity threshold.
In an optional implementation of the present embodiment, the apparatus 500 further includes an age permission adjustment unit, configured to: determine, among at least one user watching the target video, a user whose age equals the minimum age in the permitted age range corresponding to the target video as a candidate user; record, as an emotion start time, the moment at which the candidate user's emotional intensity is detected to be greater than the emotional intensity threshold corresponding to the candidate user's emotion type; determine the candidate user as a target user in response to detecting that the difference between the current time and the start time is greater than a predetermined time threshold; and increase the minimum age in the permitted age range in response to determining that the ratio of the number of target users to the number of candidate users is greater than a predetermined ratio.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 of an electronic device (the terminal device/server shown in Fig. 1) suitable for implementing the embodiments of the present application is shown. The electronic device shown in Fig. 6 is only an example and should not impose any restriction on the function and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In situations involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and possible operation of systems, methods, and computer program products according to the various embodiments of this application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that shown in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved. It should further be noted that each block of the block diagrams and/or flowcharts, and combinations of such blocks, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of this application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising a first acquisition unit, a first classification unit, a first determination unit, and a playback unit. The names of these units do not in themselves limit the units. For example, the first acquisition unit may also be described as "a unit that, in response to receiving a message in which a user requests to watch a target video, acquires a facial image of the user".
As another aspect, this application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs that, when executed by the apparatus, cause the apparatus to: in response to receiving a message in which a user requests to watch a target video, acquire a facial image of the user, wherein the target video corresponds to a permitted age range; input the facial image into a pre-trained age classifier to obtain the age of the user, wherein the age classifier characterizes the correspondence between facial images and ages; determine whether the age of the user falls within the permitted age range corresponding to the target video; and, if so, play the target video, acquire physiological parameters of the user while watching the target video, and adjust the video display effect according to the physiological parameters, wherein adjusting the video display effect comprises at least one of: adjusting the volume of the target video, adjusting the picture brightness of the target video, closing the target video, and switching the video content.
The above description is only a preferred embodiment of this application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in this application is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the inventive concept, it also covers other technical solutions formed by any combination of the above technical features or their equivalents, for example, solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) this application.
Claims (18)
1. A video playing method, comprising:
in response to receiving a message in which a user requests to watch a target video, acquiring a facial image of the user, wherein the target video corresponds to a permitted age range;
inputting the facial image into a pre-trained age classifier to obtain the age of the user, wherein the age classifier characterizes the correspondence between facial images and ages;
determining whether the age of the user falls within the permitted age range corresponding to the target video; and
if so, playing the target video, acquiring physiological parameters of the user while the user watches the target video, and adjusting a video display effect according to the physiological parameters, wherein adjusting the video display effect comprises at least one of: adjusting the volume of the target video, adjusting the picture brightness of the target video, closing the target video, and switching the video content.
2. The method according to claim 1, wherein the physiological parameters comprise heart rate and/or breathing rate, and adjusting the video display effect according to the physiological parameters comprises:
inputting the facial image into a pre-trained emotion classifier to obtain the emotion type of the user, wherein the emotion classifier characterizes the correspondence between facial images and emotion types;
determining the emotional intensity of the user according to the heart rate and/or breathing rate of the user;
acquiring a rating information table corresponding to the target video, and determining, according to the rating information table, the emotional intensity threshold corresponding to the age of the user and the emotion type of the user, wherein the rating information table characterizes the correspondence among ages, emotion types, and emotional intensity thresholds; and
in response to detecting that the emotional intensity of the user is greater than the determined emotional intensity threshold, reducing the volume and/or picture brightness of the target video.
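The threshold comparison in claim 2 can be sketched as follows. The intensity formula (elevation over resting rates), the resting-rate defaults, and the table entries are illustrative assumptions; the claim does not specify how emotional intensity is computed from heart or breathing rate.

```python
# Sketch of the claim-2 check: derive an emotional intensity from
# heart/breathing rate and compare it against the per-(age, emotion)
# threshold from the rating information table. Formula and data are
# illustrative only.

RATING_TABLE = {            # (age, emotion_type) -> intensity threshold
    (10, "fear"): 0.6,
    (10, "joy"): 0.9,
}

def emotional_intensity(heart_rate, breathing_rate,
                        rest_hr=70.0, rest_br=16.0):
    # Assumed formula: relative elevation over resting rates.
    return max(heart_rate / rest_hr, breathing_rate / rest_br) - 1.0

def adjust_playback(age, emotion, heart_rate, breathing_rate):
    threshold = RATING_TABLE[(age, emotion)]
    if emotional_intensity(heart_rate, breathing_rate) > threshold:
        return "lower volume and brightness"
    return "no change"

print(adjust_playback(10, "fear", 130, 18))  # intensity over threshold
print(adjust_playback(10, "joy", 120, 16))
```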
3. The method according to claim 2, wherein, after reducing the volume and/or picture brightness of the target video, the method further comprises:
in response to determining that the emotional intensity of the user falls to or below the determined emotional intensity threshold within a predetermined time, restoring the initial volume and/or initial picture brightness of the target video.
4. The method according to claim 2, wherein, after reducing the volume and/or picture brightness of the target video, the method further comprises:
in response to determining that the emotional intensity of the user remains greater than the determined emotional intensity threshold after a predetermined time, turning off the volume and/or picture of the target video.
5. The method according to claim 4, further comprising:
outputting prompt information that includes the heart rate and/or breathing rate of the user.
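Claims 3-5 together describe the follow-up decision after volume/brightness have been lowered: restore if the viewer calms down within the predetermined time, otherwise shut off and prompt. A minimal decision function, with illustrative return strings:

```python
# Sketch of claims 3-5: once the predetermined time has elapsed after
# lowering volume/brightness, either restore the initial settings or
# close the video and emit a heart/breathing prompt. Names are
# illustrative, not from the disclosure.

def followup_action(intensity_after_wait, threshold):
    """Decide the follow-up once the predetermined time has elapsed."""
    if intensity_after_wait <= threshold:
        return "restore initial volume and brightness"
    return "close video and output heart/breathing prompt"

print(followup_action(0.4, 0.6))  # viewer calmed down: restore
print(followup_action(0.8, 0.6))  # still over threshold: close
```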
6. The method according to claim 2, wherein the rating information table corresponding to the target video is obtained as follows:
for each user of at least one user who has watched the target video, acquiring the facial image, the maximum heart rate and/or maximum breathing rate of the user; inputting the facial image of the user into the emotion classifier to obtain the emotion type of the user; inputting the facial image of the user into the age classifier to obtain the age of the user; and determining the maximum emotional intensity of the user according to the maximum heart rate and/or maximum breathing rate of the user;
for each emotion type of at least one emotion type at each user age among the at least one user, determining the average of the maximum emotional intensities of the users of that age who belong to that emotion type as the emotional intensity threshold for users of that age and emotion type; and
generating the rating information table corresponding to the target video according to the age, emotion type, and emotional intensity threshold of each user among the at least one user.
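The table-building step in claim 6 is a group-and-average over (age, emotion type). A small sketch, with fabricated viewer records:

```python
# Sketch of claim 6: build the rating information table by averaging
# the maximum emotional intensity of past viewers grouped by
# (age, emotion type). Viewer records below are fabricated samples.
from collections import defaultdict

def build_rating_table(viewers):
    """viewers: iterable of (age, emotion_type, max_intensity) tuples.
    Returns {(age, emotion_type): emotional intensity threshold}."""
    groups = defaultdict(list)
    for age, emotion, intensity in viewers:
        groups[(age, emotion)].append(intensity)
    # Threshold for each group = mean of the maximum intensities.
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

table = build_rating_table([
    (10, "fear", 0.5),
    (10, "fear", 0.7),
    (12, "joy", 0.9),
])
print(table[(10, "fear")])  # mean of 0.5 and 0.7
```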
7. The method according to claim 6, further comprising:
in response to receiving a modification request that includes a target emotional intensity threshold, modifying the rating information table corresponding to at least one video according to the target emotional intensity threshold.
8. The method according to one of claims 2-7, further comprising:
determining, among at least one user watching the target video, the users whose age is the minimum age of the permitted age range corresponding to the target video as candidate users; recording the moment at which the emotional intensity of a candidate user is detected to exceed the emotional intensity threshold corresponding to the candidate user's emotion type as the emotion start time; and, in response to detecting that the difference between the current time and the start time exceeds a predetermined time threshold, determining the candidate user as a target user; and
in response to determining that the ratio of the number of target users to the number of candidate users exceeds a predetermined ratio, increasing the minimum age of the permitted age range.
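The age-adjustment rule in claim 8 can be sketched as follows; the time threshold, ratio threshold, and age range below are illustrative values, not from the disclosure:

```python
# Sketch of claim 8: if too large a fraction of minimum-age viewers
# stay above their emotional intensity threshold for longer than a
# predetermined time, raise the minimum of the permitted age range.

def adjust_min_age(candidate_overshoot_secs, time_threshold=30.0,
                   ratio_threshold=0.5, permitted_range=(6, 14)):
    """candidate_overshoot_secs: per-candidate seconds spent above the
    emotional intensity threshold. Returns the updated age range."""
    # Candidates whose overshoot exceeded the time threshold are targets.
    targets = [t for t in candidate_overshoot_secs if t > time_threshold]
    low, high = permitted_range
    if candidate_overshoot_secs and \
            len(targets) / len(candidate_overshoot_secs) > ratio_threshold:
        low += 1  # raise the minimum permitted age
    return (low, high)

print(adjust_min_age([40.0, 50.0, 10.0]))  # 2 of 3 targets: raise minimum
print(adjust_min_age([40.0, 10.0, 5.0]))   # 1 of 3: unchanged
```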
9. A video playing apparatus, comprising:
a first acquisition unit configured to, in response to receiving a message in which a user requests to watch a target video, acquire a facial image of the user, wherein the target video corresponds to a permitted age range;
a first classification unit configured to input the facial image into a pre-trained age classifier to obtain the age of the user, wherein the age classifier characterizes the correspondence between facial images and ages;
a first determination unit configured to determine whether the age of the user falls within the permitted age range corresponding to the target video; and
a playback unit configured to, if the age of the user falls within the permitted age range corresponding to the target video, play the target video, acquire physiological parameters of the user while the user watches the target video, and adjust a video display effect according to the physiological parameters, wherein adjusting the video display effect comprises at least one of: adjusting the volume of the target video, adjusting the picture brightness of the target video, closing the target video, and switching the video content.
10. The apparatus according to claim 9, wherein the physiological parameters comprise heart rate and/or breathing rate, and the apparatus further comprises:
a second classification unit configured to input the facial image into a pre-trained emotion classifier to obtain the emotion type of the user, wherein the emotion classifier characterizes the correspondence between facial images and emotion types;
a second acquisition unit configured to determine the emotional intensity of the user according to the heart rate and/or breathing rate of the user;
a second determination unit configured to acquire a rating information table corresponding to the target video and determine, according to the rating information table, the emotional intensity threshold corresponding to the age of the user and the emotion type of the user, wherein the rating information table characterizes the correspondence among ages, emotion types, and emotional intensity thresholds; and
a video adjustment unit configured to, in response to detecting that the emotional intensity of the user is greater than the determined emotional intensity threshold, reduce the volume and/or picture brightness of the target video.
11. The apparatus according to claim 10, wherein the video adjustment unit is further configured to:
after reducing the volume and/or picture brightness of the target video, in response to determining that the emotional intensity of the user falls to or below the determined emotional intensity threshold within a predetermined time, restore the initial volume and/or initial picture brightness of the target video.
12. The apparatus according to claim 10, wherein the video adjustment unit is further configured to:
after reducing the volume and/or picture brightness of the target video, in response to determining that the emotional intensity of the user remains greater than the determined emotional intensity threshold after a predetermined time, turn off the volume and/or picture of the target video.
13. The apparatus according to claim 12, further comprising an output unit configured to:
output prompt information that includes the heart rate and/or breathing rate of the user.
14. The apparatus according to claim 10, further comprising a generation unit configured to:
for each user of at least one user who has watched the target video, acquire the facial image, the maximum heart rate and/or maximum breathing rate of the user; input the facial image of the user into the emotion classifier to obtain the emotion type of the user; input the facial image of the user into the age classifier to obtain the age of the user; and determine the maximum emotional intensity of the user according to the maximum heart rate and/or maximum breathing rate of the user;
for each emotion type of at least one emotion type at each user age among the at least one user, determine the average of the maximum emotional intensities of the users of that age who belong to that emotion type as the emotional intensity threshold for users of that age and emotion type; and
generate the rating information table corresponding to the target video according to the age, emotion type, and emotional intensity threshold of each user among the at least one user.
15. The apparatus according to claim 14, further comprising a modification unit configured to:
in response to receiving a modification request that includes a target emotional intensity threshold, modify the rating information table corresponding to at least one video according to the target emotional intensity threshold.
16. The apparatus according to one of claims 10-15, further comprising an age permission adjustment unit configured to:
determine, among at least one user watching the target video, the users whose age is the minimum age of the permitted age range corresponding to the target video as candidate users; record the moment at which the emotional intensity of a candidate user is detected to exceed the emotional intensity threshold corresponding to the candidate user's emotion type as the emotion start time; and, in response to detecting that the difference between the current time and the start time exceeds a predetermined time threshold, determine the candidate user as a target user; and
in response to determining that the ratio of the number of target users to the number of candidate users exceeds a predetermined ratio, increase the minimum age of the permitted age range.
17. An electronic device, comprising:
one or more processors; and
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-8.
18. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810725262.2A CN108900908A (en) | 2018-07-04 | 2018-07-04 | Video broadcasting method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810725262.2A CN108900908A (en) | 2018-07-04 | 2018-07-04 | Video broadcasting method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108900908A true CN108900908A (en) | 2018-11-27 |
Family
ID=64348420
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810725262.2A Pending CN108900908A (en) | 2018-07-04 | 2018-07-04 | Video broadcasting method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108900908A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004357173A (en) * | 2003-05-30 | 2004-12-16 | Matsushita Electric Ind Co Ltd | Channel selecting device, measurement data analyzer, and television signal transceiver system |
CN104166530A (en) * | 2013-05-16 | 2014-11-26 | 中兴通讯股份有限公司 | Display parameter adjustment method and device and terminal |
CN104284254A (en) * | 2014-10-22 | 2015-01-14 | 天津三星电子有限公司 | Display device and method for adjusting video playing parameters |
CN105872617A (en) * | 2015-12-28 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | Program grading play method and device based on face recognition |
CN106507168A (en) * | 2016-10-09 | 2017-03-15 | 乐视控股(北京)有限公司 | A kind of video broadcasting method and device |
CN107085512A (en) * | 2017-04-24 | 2017-08-22 | 广东小天才科技有限公司 | A kind of audio frequency playing method and mobile terminal |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109982124A (en) * | 2019-03-26 | 2019-07-05 | 深圳创维-Rgb电子有限公司 | User's scene intelligent analysis method, device and storage medium |
CN110543607A (en) * | 2019-08-05 | 2019-12-06 | 平安科技(深圳)有限公司 | Page data generation method and device, computer equipment and storage medium |
CN110929190A (en) * | 2019-09-23 | 2020-03-27 | 平安科技(深圳)有限公司 | Page playing method and device, electronic equipment and storage medium |
CN110868634A (en) * | 2019-11-27 | 2020-03-06 | 维沃移动通信有限公司 | Video processing method and electronic equipment |
CN110868634B (en) * | 2019-11-27 | 2023-08-22 | 维沃移动通信有限公司 | Video processing method and electronic equipment |
CN111723758B (en) * | 2020-06-28 | 2023-10-31 | 腾讯科技(深圳)有限公司 | Video information processing method and device, electronic equipment and storage medium |
CN111723758A (en) * | 2020-06-28 | 2020-09-29 | 腾讯科技(深圳)有限公司 | Video information processing method and device, electronic equipment and storage medium |
CN111782878A (en) * | 2020-07-06 | 2020-10-16 | 聚好看科技股份有限公司 | Server, display equipment and video searching and sorting method thereof |
CN111782878B (en) * | 2020-07-06 | 2023-09-19 | 聚好看科技股份有限公司 | Server, display device and video search ordering method thereof |
CN114257191B (en) * | 2020-09-24 | 2024-05-17 | 达发科技股份有限公司 | Equalizer adjusting method and electronic device |
CN114257191A (en) * | 2020-09-24 | 2022-03-29 | 原相科技股份有限公司 | Equalizer adjustment method and electronic device |
CN114969431A (en) * | 2021-04-13 | 2022-08-30 | 中移互联网有限公司 | Image processing method and device and electronic equipment |
CN113556603A (en) * | 2021-07-21 | 2021-10-26 | 维沃移动通信(杭州)有限公司 | Method and device for adjusting video playing effect and electronic equipment |
CN113556603B (en) * | 2021-07-21 | 2023-09-19 | 维沃移动通信(杭州)有限公司 | Method and device for adjusting video playing effect and electronic equipment |
CN113727171A (en) * | 2021-08-27 | 2021-11-30 | 维沃移动通信(杭州)有限公司 | Video processing method and device and electronic equipment |
CN113724544B (en) * | 2021-08-30 | 2023-08-22 | 安徽淘云科技股份有限公司 | Playing method and related equipment thereof |
CN113724544A (en) * | 2021-08-30 | 2021-11-30 | 安徽淘云科技股份有限公司 | Playing method and related equipment thereof |
CN114116112A (en) * | 2021-12-08 | 2022-03-01 | 深圳依时货拉拉科技有限公司 | Page processing method and device for mobile terminal and computer equipment |
CN116369920A (en) * | 2023-06-05 | 2023-07-04 | 深圳市心流科技有限公司 | Electroencephalogram training device, working method, electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108900908A (en) | Video broadcasting method and device | |
US11430260B2 (en) | Electronic display viewing verification | |
US11887352B2 (en) | Live streaming analytics within a shared digital environment | |
US11271765B2 (en) | Device and method for adaptively providing meeting | |
US9894415B2 (en) | System and method for media experience data | |
JP6752819B2 (en) | Emotion detection system | |
US20180124459A1 (en) | Methods and systems for generating media experience data | |
CN103760968B (en) | Method and device for selecting display contents of digital signage | |
US20180115802A1 (en) | Methods and systems for generating media viewing behavioral data | |
US20180124458A1 (en) | Methods and systems for generating media viewing experiential data | |
US20200053312A1 (en) | Intelligent illumination and sound control in an internet of things (iot) computing environment | |
US10939165B2 (en) | Facilitating television based interaction with social networking tools | |
CN109919079A (en) | Method and apparatus for detecting learning state | |
Yazdani et al. | Multimedia content analysis for emotional characterization of music video clips | |
US11483618B2 (en) | Methods and systems for improving user experience | |
US20180109828A1 (en) | Methods and systems for media experience data exchange | |
US20200342979A1 (en) | Distributed analysis for cognitive state metrics | |
CN108882032A (en) | Method and apparatus for output information | |
CN110447232A (en) | For determining the electronic equipment and its control method of user emotion | |
CN109982124A (en) | User's scene intelligent analysis method, device and storage medium | |
US20220101146A1 (en) | Neural network training with bias mitigation | |
US20200077136A1 (en) | DYNAMIC MODIFICATION OF MEDIA CONTENT IN AN INTERNET OF THINGS (IoT) COMPUTING ENVIRONMENT | |
WO2011031932A1 (en) | Media control and analysis based on audience actions and reactions | |
US11750866B2 (en) | Systems and methods for generating adapted content depictions | |
CN105159990B (en) | A kind of method and apparatus of media data grading control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-11-27 |