CN108875652A - User's scene analysis device and method - Google Patents

User's scene analysis device and method

Info

Publication number
CN108875652A
CN108875652A (application CN201810657585.2A)
Authority
CN
China
Prior art keywords
user
scene
face
video image
validated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810657585.2A
Other languages
Chinese (zh)
Inventor
陈洁 (Chen Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Phicomm Shanghai Co Ltd
Original Assignee
Sichuan Feixun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Feixun Information Technology Co Ltd filed Critical Sichuan Feixun Information Technology Co Ltd
Priority to CN201810657585.2A priority Critical patent/CN108875652A/en
Publication of CN108875652A publication Critical patent/CN108875652A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 Human faces: estimating age from the face image; using age information to improve recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention provides a user scene analysis device and method, comprising: a video image acquisition module for extracting at least one frame of video image from a video stream according to preset rules; a valid user identification module for identifying the current valid users from the video images extracted by the video image acquisition module; and a user scene matching module for matching a user scene according to the valid users identified by the valid user identification module, thereby completing the user scene analysis. The invention is applicable not only to single-user application scenarios but also to multi-user scenarios, and performs valid-user identification on all users appearing in the video image. The scene is thus understood more truthfully and accurately, making it easier to provide customized services for users, for example a smart TV recommending programs according to the matched application scenario, or a smart home collecting data on user behavior.

Description

User's scene analysis device and method
Technical field
The present invention relates to the field of smart home technology, and more particularly to a user scene analysis device and method.
Background technique
At present, most household appliances are still controlled through control panels, remote controls, and the like. With the arrival of the AI (Artificial Intelligence) era, and in particular the rapid development of computer vision and speech recognition technology, home control is making substantive progress toward intelligence and is gradually becoming widespread, improving the safety, convenience, comfort, and energy efficiency of the home environment. As people's demand for smart homes keeps growing, computer vision technologies are being applied ever more widely in the smart home. For example, image-based face recognition is used in smart TVs to complete related control of the television, program recommendation, and so on.
Currently, there are many methods that use face recognition for home control, for example monitoring the user watching TV in real time through a camera mounted on the television, analyzing that user's identity with face recognition technology, and then recommending suitable TV programs. However, existing technical solutions are usually designed for a single-user scenario and only meet the needs of that single scenario, which clearly does not match the demands of real usage scenarios.
Summary of the invention
The object of the present invention is to provide a user scene analysis device and method that effectively solve the technical problem in the prior art that user scene analysis methods for smart TVs target only a single user and therefore do not match real usage scenarios.
The technical solution provided by the present invention is as follows:
A user scene analysis device, comprising:
a video image acquisition module for extracting at least one frame of video image from a video stream according to preset rules;
a valid user identification module for identifying the current valid users from the video images extracted by the video image acquisition module; and
a user scene matching module for matching a user scene according to the valid users identified by the valid user identification module, thereby completing the user scene analysis.
Unlike prior-art solutions that treat whoever appears as the user, in this technical solution a video image is first obtained from the video stream, the current valid users are then determined from the video image, and a preset user scene is matched accordingly. The solution is applicable not only to single-user application scenarios but also to multi-user scenarios, and performs valid-user identification on all users appearing in the video image. The scene is thereby understood more truthfully and accurately, making it easier to provide customized services for users, for example a smart TV recommending programs according to the matched application scenario, or a smart home collecting data on user behavior.
Further preferably, the valid user identification module comprises:
a face recognition unit for recognizing the faces appearing in the video image; and
a valid user judging unit for determining the current valid users according to the faces recognized by the face recognition unit.
Further preferably, the valid user identification module further comprises:
an age estimation unit for estimating the age of each face recognized by the face recognition unit; and
a size acquisition unit for acquiring the size of each face recognized by the face recognition unit;
wherein the valid user judging unit judges whether the corresponding user is a valid user according to the face age estimated by the age estimation unit and the face size acquired by the size acquisition unit.
In this technical solution, whether a user is valid is judged from the age and size of the face appearing in the video image. This removes invalid users who are far away and prevents them from interfering with the user scene analysis.
Further preferably, the valid user identification module further comprises:
an angle acquisition unit for acquiring the deviation angle of each face recognized by the face recognition unit relative to the application device;
wherein the valid user judging unit judges whether the corresponding user is a valid user according to the face deviation angle acquired by the angle acquisition unit.
In this technical solution, whether a user is valid is judged from the deviation angle of the face in the video image, filtering out invalid users whose deviation angle is too large. The deviation angle here is specifically the angle of the face relative to the video-stream acquisition device (i.e., the application device), for example the camera.
Further preferably, the face recognition unit is also used to recognize closed-eye faces appearing in the video image;
the valid user identification module further comprises a statistics unit, which counts, over consecutive frames of video images, the time for which the user corresponding to a closed-eye face has continuously kept the eyes closed; and
the valid user judging unit judges whether the user is valid according to the continuous eye-closing time counted by the statistics unit.
In this technical solution, the time for which a closed-eye user appearing in the video stream keeps the eyes continuously closed is counted; if the user is in a closed-eye state for a long time, the user is determined to be invalid.
Further preferably, the statistics unit is also used to count, over consecutive frames of video images, the time for which the user corresponding to a face has continuously appeared; and
the valid user judging unit judges whether the user is valid according to the continuous appearance time counted by the statistics unit.
In this technical solution, the time for which a user continuously appears in the video stream is counted; if the appearance time is short, the user is determined to be invalid.
Further preferably, the user scene matching module comprises a judging unit and a matching unit, wherein:
the judging unit is used to judge whether the current valid users identified by the valid user identification module are identical to the users set in a preset user scene;
when the judging unit determines that the valid users exactly match the users set in a preset user scene, the matching unit directly matches that user scene; and
when the judging unit determines that the valid users are not identical to the users set in any preset user scene, the matching unit matches the user scene whose users overlap most with the current valid users.
In this technical solution, a preset user scene is matched according to the identified current valid users, making it easier to provide customized services for the users.
Further preferably, the user scene matching module further comprises a prompt unit and a user scene creation unit, wherein:
when the judging unit determines that the valid users are not identical to the users set in any preset user scene, the prompt unit prompts the user whether to create a new user scene; and
if a user scene creation instruction is received, the user scene creation unit creates a new user scene; otherwise, the matching unit matches the user scene whose users overlap most with the current valid users.
In this technical solution, if the identified current valid users cannot be exactly matched with any preset user scene, the user is supported in creating a new user scene to refine the configuration.
The present invention also provides a user scene analysis method, comprising:
acquiring a video stream and extracting at least one frame of video image from it according to preset rules;
identifying the current valid users according to the extracted video image; and
matching a user scene according to the identified valid users, thereby completing the user scene analysis.
Unlike prior-art solutions that treat whoever appears as the user, in this technical solution a video image is first obtained from the video stream, the current valid users are then determined from the video image, and a preset user scene is matched accordingly. The method is applicable not only to single-user application scenarios but also to multi-user scenarios, and performs valid-user identification on all users appearing in the video image, understanding the scene more truthfully and accurately and making it easier to provide customized services for users, for example a smart TV recommending programs according to the matched application scenario, or a smart home collecting data on user behavior.
Further preferably, in step S20, identifying the current valid users according to the extracted video image comprises:
recognizing the faces appearing in the video image;
estimating the age of each recognized face;
acquiring the size of each recognized face; and
judging whether the corresponding user is a valid user according to the age and size of the face;
and/or, in step S20, identifying the current valid users according to the extracted video image comprises:
recognizing the faces appearing in the video image;
acquiring the deviation angle of each recognized face relative to the application device; and
judging whether the corresponding user is a valid user according to the acquired face deviation angle;
and/or, in step S20, identifying the current valid users according to the extracted video image comprises:
recognizing the closed-eye faces appearing in the video image;
counting, over consecutive frames of video images, the time for which the user corresponding to a closed-eye face has continuously kept the eyes closed; and
judging whether the user is valid according to the counted continuous eye-closing time;
and/or, in step S20, identifying the current valid users according to the extracted video image comprises:
recognizing the faces appearing in the video image;
counting, over consecutive frames of video images, the time for which the user corresponding to a face has continuously appeared; and
judging whether the user is valid according to the counted continuous appearance time.
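The four "and/or" criteria of step S20 can be combined into a single validity check. The following Python sketch is purely illustrative (the patent specifies no code); the field names and default thresholds are assumptions chosen to match the example values elsewhere in the description:

```python
def identify_valid_users(faces, angle_limit=45.0, size_limit=(25, 25),
                         max_closed_s=60.0, min_present_s=10.0):
    """Return the IDs of faces passing all four validity criteria.

    Each entry in `faces` is assumed to be a dict produced by upstream
    face recognition, with hypothetical keys: user_id, deviation_deg,
    width, height, closed_eye_s, present_s.
    """
    valid = []
    for f in faces:
        if (f["deviation_deg"] <= angle_limit            # facing the device
                and f["width"] > size_limit[0]            # close enough
                and f["height"] > size_limit[1]
                and f["closed_eye_s"] <= max_closed_s     # not asleep
                and f["present_s"] > min_present_s):      # not just passing by
            valid.append(f["user_id"])
    return valid
```

In practice the description allows any subset of these conditions to be used, so a real implementation would likely make each check optional.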
Further preferably, in step S30, matching a user scene according to the identified valid users comprises:
judging whether the identified current valid users are identical to the users set in a preset user scene; if so, directly matching that user scene; otherwise,
matching the user scene whose users overlap most with the current valid users.
Further preferably, in the step of judging whether the identified current valid users are identical to the users set in a preset user scene, after it is determined that the valid users are not identical to the users set in any preset user scene, the method further comprises:
prompting the user whether to create a new user scene; and
if a user scene creation instruction is received, entering the step of creating a new user scene; otherwise, going to the step of matching the user scene whose users overlap most with the current valid users.
In this technical solution, a preset user scene is matched according to the identified current valid users, making it easier to provide customized services for the users; and if the identified current valid users cannot be exactly matched with any preset user scene, the user is supported in creating a new user scene to refine the configuration.
Detailed description of the invention
The above characteristics, technical features, advantages, and implementations are further described below in a clear and understandable manner with reference to the accompanying drawings, using preferred embodiments.
Fig. 1 is a schematic diagram of an embodiment of the user scene analysis device of the present invention;
Fig. 2 is a schematic diagram of an embodiment of the valid user identification module of the present invention;
Fig. 3 is a schematic diagram of another embodiment of the valid user identification module of the present invention;
Fig. 4 is a schematic diagram of yet another embodiment of the valid user identification module of the present invention;
Fig. 5 is a schematic diagram of an embodiment of the user scene matching module of the present invention;
Fig. 6 is a schematic flowchart of an embodiment of the user scene analysis method of the present invention.
Description of reference numerals:
100 - user scene analysis device; 110 - video image acquisition module; 120 - valid user identification module; 130 - user scene matching module; 121 - face recognition unit; 122 - valid user judging unit; 123 - age estimation unit; 124 - size acquisition unit; 125 - angle acquisition unit; 126 - statistics unit; 131 - judging unit; 132 - matching unit.
Specific embodiment
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, specific embodiments of the present invention are described below with reference to the accompanying drawings. It should be evident that the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings, and other embodiments, from these drawings without creative effort.
For simplicity of presentation, each figure only schematically shows the parts related to the present invention; the figures do not represent the actual structure of the product.
Fig. 1 is a schematic diagram of an embodiment of the user scene analysis device provided by the present invention. As can be seen from the figure, the user scene analysis device 100 comprises a video image acquisition module 110, a valid user identification module 120, and a user scene matching module 130, wherein the valid user identification module 120 is connected to the video image acquisition module 110 and the user scene matching module 130, respectively.
In this embodiment, after the user scene analysis device 100 starts working, the video stream in front of the camera lens is obtained by an imaging device such as a camera; the video image acquisition module 110 then extracts at least one frame of video image from the video stream according to preset rules. Next, the valid user identification module 120 identifies the current valid users from the video images extracted by the video image acquisition module 110. Finally, the user scene matching module 130 matches a user scene according to the valid users identified by the valid user identification module 120, completing the user scene analysis. The preset rule by which the video image acquisition module 110 extracts video images from the video stream is, specifically, to select one frame for subsequent face detection every certain number of frames. For example, when the camera shoots 25 frames per second, one video image may be chosen every 12 or 13 frames; when the camera shoots 30 frames per second, one video image may be chosen every 15 frames, and so on. This is not specifically limited here and can be chosen according to the actual situation.
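The sampling rule above amounts to deriving a frame stride from the camera's frame rate. A minimal sketch, assuming a target of roughly two sampled frames per second (which reproduces both the 25 fps and 30 fps examples); the function name and parameter are hypothetical:

```python
def frame_stride(frame_rate, target_per_second=2):
    """Return how many frames to skip between sampled video images.

    With frame_rate=25 this gives 12 (i.e. roughly every 12-13 frames),
    and with frame_rate=30 it gives 15, matching the examples in the text.
    """
    return max(1, frame_rate // target_per_second)
```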
A valid user refers to a user who is actually using the application device, i.e., the core user to whom the user scene analysis device 100 subsequently provides targeted services. In the process of identifying valid users, the valid user identification module 120 considers all faces in the video image: every face that appears in the video image is treated as a candidate for valid-user identification. The solution is therefore applicable not only to single-user application scenarios but also to multi-user scenarios, so the scene can be understood more truthfully and accurately, making it easier to provide customized services for users. For example, in one example, the user scene analysis device is applied to a smart TV: a camera is mounted on the smart TV facing the viewing users. When a user enables the user scene analysis function, the camera starts shooting the viewing users to obtain a video stream; video images are then selected from the obtained video stream as the basis for user scene analysis; the valid users watching the smart TV are analyzed; a preset application scenario is matched according to these valid users; and finally programs are recommended to the viewing users according to the matched application scenario. Of course, the device can also be applied to other smart home devices, for example recording the operating habits of the identified valid users.
In this embodiment, the valid user identification module 120 comprises a face recognition unit 121 and a valid user judging unit 122, wherein the face recognition unit 121 is used to detect and recognize the faces appearing in the video image. In this process, faces meeting a basic image-quality requirement are detected, so that the valid user judging unit 122 can determine the current valid users. When the number of faces detected by the face recognition unit 121 is 1, the recognition operation for that face is entered directly: the user identity of the face is identified and matched against the preset user scenes. When the number of detected faces is greater than or equal to 2, a further judgment is made on whether each face corresponds to a valid user.
The present embodiment is obtained by improving the above embodiment. As shown in Fig. 2, in this embodiment the valid user identification module 120 further comprises, in addition to the face recognition unit 121 and the valid user judging unit 122, an age estimation unit 123 and a size acquisition unit 124, wherein the age estimation unit 123 is used to estimate the age of each face recognized by the face recognition unit 121, and the size acquisition unit 124 is used to acquire the size of each face recognized by the face recognition unit 121. The valid user judging unit 122 then judges whether the corresponding user is a valid user according to the face age and the face size.
In this embodiment, a user who is far from the imaging device, and whose face therefore appears small in the shot, is determined to be an invalid user, i.e., a user not currently using the application device. Since the face sizes of children and adults differ, before this judgment is made, the age of the corresponding user is first estimated from the recognized face and the user is grouped accordingly. For example, when the age of the user corresponding to a face is judged to be 6, the face is classified into the children's group; when the age is judged to be 20, the face is classified into the adult group. The age boundary between the children's group and the adult group can be set according to the actual situation, for example to 15 years, or alternatively to 18 years, and so on. A size threshold is preset for each group: when the size of a face appearing in the video image is greater than the corresponding size threshold, the user is determined to be a valid user; otherwise the user is determined to be invalid.
In one example, the display screen of the smart TV is 5 megapixels and the camera is mounted on top of the smart TV, facing the viewing users. The size threshold of the children's group is set to 20×20 pixels and the size threshold of the adult group is set to 25×25 pixels. Then, when a certain face is classified into the children's group and its size in the video image is 50×60 pixels, it is determined to be a valid user, i.e., a current viewer of the smart TV. When a certain face is classified into the adult group and its size in the video image is 15×25 pixels, it is determined to be an invalid user (for example a user sitting at a dining table far from the smart TV), not a viewer of the smart TV.
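The grouped size check in this example can be sketched as follows. This is an illustrative sketch only; the age boundary (15 years) and the per-group pixel thresholds are the example values from the text, and the function name is an assumption:

```python
AGE_BOUNDARY = 15                                 # children vs. adult split
SIZE_THRESHOLDS = {"child": (20, 20), "adult": (25, 25)}  # min width, height

def is_valid_by_size(estimated_age, face_w, face_h):
    """Judge validity from estimated age group and face size in pixels."""
    group = "child" if estimated_age < AGE_BOUNDARY else "adult"
    min_w, min_h = SIZE_THRESHOLDS[group]
    return face_w > min_w and face_h > min_h
```

With these values, a 6-year-old's 50×60-pixel face passes, while an adult's 15×25-pixel face (someone sitting far from the TV) is rejected, as in the example.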
The present embodiment is obtained by improving the above embodiment. As shown in Fig. 3, in this embodiment the valid user identification module 120 further comprises, in addition to the face recognition unit 121 and the valid user judging unit 122, an angle acquisition unit 125 for acquiring the deviation angle of each face recognized by the face recognition unit 121 relative to the application device. The valid user judging unit 122 then judges whether the corresponding user is a valid user according to the face deviation angle acquired by the angle acquisition unit 125.
In this embodiment, whether a user is valid is judged from the deviation angle of the face in the video image, making it easy to filter out invalid users whose deviation angle is too large. The deviation angle is specifically the angle by which the face deviates from facing the application device; its specific setting can be chosen according to the actual situation, for example 30° or 45°. In one example, the user scene analysis device is applied to a smart TV, with the camera mounted above the smart TV facing the viewing users, and the deviation angle threshold (set to 45°) is the angle by which the user's face deviates from directly facing the smart TV. Then, when the detected deviation angle of a face reaches 60°, the probability that this user is watching TV is considered small and the user is determined to be invalid; when the detected deviation angle of a face is 20°, the user is determined to be valid. In other embodiments, in addition to limiting the deviation angle, the time for which the face continuously remains at that angle can also be limited: for example, if the deviation angle of the user corresponding to a face is 50° and the offset lasts for more than 1 min, the user is judged to be invalid.
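The variant that combines the angle threshold with a sustained-offset duration can be sketched as a small state machine. A minimal sketch under stated assumptions: the 45° threshold and 1 min limit are the example values from the text, and the class and method names are hypothetical:

```python
ANGLE_THRESHOLD = 45.0      # degrees; example value from the text
MAX_OFFSET_SECONDS = 60.0   # 1 min of sustained offset marks the user invalid

class DeviationTracker:
    """Track how long a face has continuously exceeded the angle threshold."""

    def __init__(self):
        self.offset_since = None  # timestamp when the offset began, if any

    def update(self, deviation_deg, timestamp):
        """Return True while the user is still considered valid."""
        if deviation_deg <= ANGLE_THRESHOLD:
            self.offset_since = None          # facing the device again
            return True
        if self.offset_since is None:
            self.offset_since = timestamp     # offset just started
        return (timestamp - self.offset_since) <= MAX_OFFSET_SECONDS
```

A user glancing away briefly stays valid; only when the offset persists past the limit is the user filtered out.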
The present embodiment is obtained by improving the above embodiment. As shown in Fig. 4, in this embodiment the valid user identification module 120 further comprises, in addition to the face recognition unit 121 and the valid user judging unit 122, a statistics unit 126 for counting, over consecutive frames of video images, the time for which the user corresponding to a closed-eye face has continuously kept the eyes closed.
In this embodiment, the face recognition unit 121 recognizes the closed-eye faces appearing in the video image, and the statistics unit 126 counts the time for which the user corresponding to such a face has continuously kept the eyes closed. When the continuous eye-closing time is greater than a first preset time, the user is judged to be invalid. The first preset time can be set according to the actual situation, for example to 1 min or 3 min (minutes); it is not specifically limited here.
In another embodiment, the statistics unit 126 is also used to count the time for which the user corresponding to each face in the video image has continuously appeared. If the continuous appearance time is greater than a second preset time, the user is judged to be valid; otherwise the user is judged to be invalid. The second preset time can be set according to the actual situation, for example to 10 s or 20 s (seconds); it is not specifically limited here.
It should be noted that, in practical applications, the above four embodiments (judging validity from face age and face size, judging validity from the deviation angle, judging validity from the eye-closing duration, and judging validity from the continuous appearance time) can each be used alone as the criterion for a valid user, or they can be used in any combination according to the application scenario, making it easier to analyze valid users effectively and truthfully. For example, in one example, a user must satisfy the condition that the deviation angle is less than the angle threshold while also satisfying the face age and face size conditions before being determined to be valid; in another example, all four conditions must be satisfied at the same time.
In addition, if the application device has just been powered on for the first time and is using the application scenario analysis device to judge the current valid users, the valid-user judgment is made on the first frame captured by the camera, using static conditions such as face age, face size, and deviation angle. When the user has been active for a while, for example has watched the smart TV for some time, and valid users need to be judged again in order to recommend programs, the judgment can be based on the video stream obtained over the preceding period, including the eye-closing time, the continuous appearance time, and so on.
After the valid user identification module 120 identifies the current valid users, the user scene matching module 130 matches a user scene according to the valid users identified by the valid user identification module 120. In one embodiment, as shown in Fig. 5, the user scene matching module 130 comprises a judging unit 131 and a matching unit 132, wherein the judging unit 131 is used to judge whether the identified current valid users are identical to the users set in a preset user scene.
In this embodiment, after the valid user identification module 120 identifies the current valid users, the judging unit 131 first identifies their identities and judges whether known users are among them. If so, the normal-use mode is entered and the user scene matching step proceeds; otherwise a safety prompt is sent to the owner's intelligent terminal (the face image can also be sent to the owner), notifying the owner that the current valid user is a stranger.
During user scene matching, it is first judged whether the current valid users exactly match the users in a user scene; if so, the matching unit 132 directly matches that user scene. When the judging unit 131 determines that the valid users are not identical to the users set in any preset user scene, the matching unit 132 matches the user scene whose users overlap most with the current valid users.
In another embodiment, the user scene matching module 130 further comprises a prompt unit and a user scene creation unit. Specifically, when the judging unit 131 determines that the valid users are not identical to the users set in any preset user scene, the prompt unit prompts the user whether to create a new user scene. If a user scene creation instruction is received, the user scene creation unit creates a new user scene; otherwise the matching unit 132 matches the user scene whose users overlap most with the current valid users. After an application scenario is matched, customized services for that user scene can be provided, for example recommending the users' favorite programs based on the user scene, or customizing the desktop and system settings.
In one example, the preset user scenes are: Set 1: user A; Set 2: user B; Set 3: user A and user B; Set 4: user B, user C, and user D. Then, when the current valid users are judged to include user A, the application scenario corresponding to Set 1 is matched. When the current valid users are judged to include user B and user C, the user is prompted whether a new application scenario needs to be created; if the user confirms that none is needed, the user scene corresponding to Set 4 is matched. When the current valid users are judged to include user F, a safety prompt is sent to the owner's intelligent terminal, notifying the owner that the current valid user F is a stranger; at the same time, the user is prompted whether a new user scene needs to be created and, if so, a new user scene is created.
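The matching rule in this example (exact match first, otherwise the preset scene with the largest user overlap) can be sketched as:

```python
# Illustrative sketch only: the scene names and user labels follow the
# Set 1-4 example in the text; the function name is an assumption.
PRESET_SCENES = {
    "set1": {"A"},
    "set2": {"B"},
    "set3": {"A", "B"},
    "set4": {"B", "C", "D"},
}

def match_scene(current_valid_users):
    current = set(current_valid_users)
    for name, users in PRESET_SCENES.items():
        if users == current:
            return name                       # exact match wins
    # otherwise fall back to the scene sharing the most users
    return max(PRESET_SCENES, key=lambda n: len(PRESET_SCENES[n] & current))
```

So {A, B} matches Set 3 exactly, while {B, C} has no exact match and falls back to Set 4, which shares two users, as in the example.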
Fig. 6 is a flow diagram of one embodiment of the user scene analysis method provided by the present invention. As shown in the figure, the user scene analysis method includes: S10, obtaining a video stream and extracting at least one frame of video image from it according to a preset rule; S20, identifying the current validated users according to the extracted video image; and S30, matching a user scene according to the identified validated users, thereby completing the user scene analysis.
In this embodiment, after start-up, an imaging device such as a camera captures the video stream of the scene in front of its lens. The preset rule for extracting video images from the video stream is to select one frame every fixed number of frames for subsequent face detection. For example, when the camera shoots 25 frames per second, one video image may be selected every 12 or 13 frames; when the camera shoots 30 frames per second, one video image may be selected every 15 frames. The rule is not specifically limited here and may be chosen according to the actual situation.
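The sampling rule above can be sketched as follows. This is a minimal illustration, assuming a target interval of roughly half a second between sampled frames; the helper name and the interval are not taken from the patent.

```python
def sample_frames(frames, fps, interval_seconds=0.5):
    """Select roughly one frame every interval_seconds for face detection.

    At 25 fps this picks every 12th frame; at 30 fps, every 15th,
    matching the examples given in the text.
    """
    step = max(1, round(fps * interval_seconds))
    return frames[::step]
```

In practice `frames` would be an iterator over decoded camera frames rather than a list; only the sampled frames are passed on to face detection.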
A validated user is a user who is actually using the application apparatus, i.e. the core user for whom the user scene analysis method subsequently provides targeted services. When identifying validated users, all faces in the video image are considered: every face appearing in the video image is treated as a candidate validated user, so the method applies not only to single-user application scenes but also to multi-user scenes. In this way the scene can be understood more truly and accurately, making it easier to provide precisely customized services to users. For example, in one example the method is applied to a smart television, with a camera arranged in the smart television facing the viewing users. When a user enables the user scene analysis function, the camera starts shooting the viewing users to obtain a video stream; video images are then selected from the obtained video stream as the basis of the user scene analysis; the validated users watching the smart television are analyzed; these validated users are matched against the preset application scenes; and finally programs are recommended to the viewing users according to the matched application scene. Of course, the method may also be applied to other smart home devices, for instance recording the operating habits of the identified validated users.
In one embodiment, judging validated users includes: identifying the faces appearing in the video image; estimating the age of each identified face; obtaining the size of each identified face; and judging whether the corresponding user is a validated user according to the age and size of the face. In this process, only detected faces that meet a basic image quality requirement are considered when judging the current validated users. When the number of detected faces is 1, recognition of that face is entered directly: the user identity of the face is identified and matched against the preset user scenes. When the number of detected faces is 2 or more, all faces are further judged as to whether they belong to validated users.
In this embodiment, a user who is far from the camera, so that the captured face is small, is determined to be an invalid user who is not currently using the application apparatus. Since the face sizes of children and adults differ, before this judgment the age of the corresponding user is first estimated from the identified face and the user is grouped accordingly. For example, when the age corresponding to a face is judged to be 6, the user is classified into the children's group; when the age is judged to be 20, the user is classified into the adult group. The age boundary between the children's group and the adult group may be set according to the actual situation, for example to 15 years, or to 18 years. A size threshold is preset for each group; when the size of a face appearing in the video image is greater than the corresponding threshold, the user is determined to be a validated user, otherwise an invalid user.
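The age-grouped size test can be sketched as below. The boundary age, the pixel-area thresholds, and the function name are illustrative assumptions chosen for the example; the patent leaves all of them configurable.

```python
CHILD_ADULT_BOUNDARY = 15            # years; the text also suggests 18
SIZE_THRESHOLDS = {"child": 900,     # face area in pixels, hypothetical values
                   "adult": 1600}

def is_valid_by_size(face_age, face_area):
    """A small face (user far from the camera) is judged invalid,
    with a lower area threshold for the children's group."""
    group = "child" if face_age < CHILD_ADULT_BOUNDARY else "adult"
    return face_area > SIZE_THRESHOLDS[group]
```

With these assumed thresholds, a 6-year-old with a 1000-pixel face passes, while an adult at the same distance (same face area) does not.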
In another embodiment, judging validated users includes: identifying the faces appearing in the video image; obtaining the deviation angle of each identified face relative to the application apparatus; and judging whether the corresponding user is a validated user according to the obtained face deviation angle.
In this embodiment, whether a user is a validated user is judged according to the deviation angle of the face in the video image, so that invalid users whose deviation angle is too large can be filtered out. The deviation angle is specifically the angle by which the face is turned away from the application apparatus; its threshold may be set according to the actual situation, for example to 30° or 45°. In other embodiments, in addition to the angle itself, the time for which the face continuously deviates by that angle may also be limited. For example, if a face deviates by 50° and the deviation lasts for more than 1 min, the corresponding user is judged to be an invalid user.
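The angle-plus-duration filter can be sketched as follows. The 45° threshold and the 1 min limit follow the examples in the text; the per-frame sampling interval and the function name are illustrative assumptions.

```python
ANGLE_THRESHOLD_DEG = 45.0
MAX_DEVIATION_SECONDS = 60.0  # 1 min, as in the example above

def is_valid_by_angle(angle_samples, sample_interval):
    """angle_samples: face deviation angle (degrees) per sampled frame.
    The user is judged invalid once the angle stays above the threshold
    continuously for longer than MAX_DEVIATION_SECONDS."""
    run = 0.0
    for angle in angle_samples:
        if abs(angle) > ANGLE_THRESHOLD_DEG:
            run += sample_interval
            if run > MAX_DEVIATION_SECONDS:
                return False
        else:
            run = 0.0  # the face turned back; reset the run
    return True
```

Resetting the run when the face turns back means brief glances away never accumulate, which matches the "continuously deviates" wording.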
In another embodiment, judging validated users includes: identifying closed-eye faces appearing in the video image; counting, over consecutive frames of video images, the time for which the user corresponding to a closed-eye face continuously keeps the eyes closed; and judging whether the user is a validated user according to the counted continuous eye-closure time. In this embodiment, when the continuous eye-closure time exceeds a first preset time, the user is judged to be an invalid user. The first preset time may be set according to the actual situation, for example to 1 min or 3 min (minutes), and is not specifically limited here.
In another embodiment, judging validated users includes: identifying the faces appearing in the video image; counting, over consecutive frames of video images, the time for which the user corresponding to a face continuously appears; and judging whether the user is a validated user according to the counted continuous appearance time. If the time for which the user corresponding to a face continuously appears is greater than a second preset time, the user is judged to be a validated user, otherwise an invalid user. The second preset time may be set according to the actual situation, for example to 10 s or 20 s (seconds), and is not specifically limited here.
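The two duration rules (eye-closure and continuous appearance) can be combined in one sketch. The frame interval and both preset times are illustrative assumptions taken from the examples in the text.

```python
FRAME_INTERVAL = 0.5    # seconds between sampled frames, assumed
MAX_EYES_CLOSED = 60.0  # first preset time: 1 min
MIN_PRESENCE = 10.0     # second preset time: 10 s

def longest_run(flags, frame_interval=FRAME_INTERVAL):
    """Longest consecutive run of True flags, converted to seconds."""
    best = run = 0
    for f in flags:
        run = run + 1 if f else 0
        best = max(best, run)
    return best * frame_interval

def is_valid_by_duration(eyes_closed_flags, present_flags):
    """Invalid if the eyes stay closed past the first preset time;
    otherwise valid only if the face persists past the second."""
    if longest_run(eyes_closed_flags) > MAX_EYES_CLOSED:
        return False
    return longest_run(present_flags) > MIN_PRESENCE
```

The per-frame flags would come from the face identification unit; counting runs over consecutive sampled frames is how the statistic unit described above would convert frame observations into durations.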
It is noted that in practical applications, judging validated user, root according to face age and facial size for above-mentioned Validated user is judged according to deviation angle, and validated user is judged according to the duration of closing one's eyes and is had according to the time judgement continuously occurred Four kinds of embodiments at effectiveness family, can be used alone the distinguishing rule as validated user, can also be according to different applications Scene is used in any combination, convenient for effectively really analyze validated user.Such as, in one example, user is meeting face year It needs to meet the condition that deviation angle is less than angle threshold while age and facial size condition, determines that it is validated user;Again Such as, in one example, while meeting aforementioned four condition, determine that it is validated user etc..
In addition, if the application apparatus has just been powered on and uses this application scene analysis method to judge the current validated users for the first time, the validated-user judgment is performed on the first frame image captured by the camera, using static conditions such as face age, face size and deviation angle. When the user has been using the apparatus for a while, for example after watching the smart television for a period of time, and the validated users need to be judged again in order to recommend programs, the video stream obtained over the preceding period can be used for the judgment, including the eye-closure time, the continuous appearance time, and so on.
After the current validated users are identified, a user scene is matched according to the identified validated users. Specifically, the identities of the users are first recognized and it is judged whether they include known users. If so, the normal use mode is entered and the user scene matching step is carried out; otherwise, a safety prompt is sent to the owner's intelligent terminal (the face image may also be sent to the administrator), informing the owner that the current user is a stranger.
When matching a user scene, in one embodiment it is first judged whether the current validated users exactly match the users in a user scene; if so, that user scene is matched directly. When the validated users do not exactly match the users set in any preset user scene, the user scene with the largest number of users in common with the current validated users is matched. In another embodiment, when the validated users do not exactly match the users set in any preset user scene, the prompt unit asks the user whether to create a user scene; if a scene creation instruction is received, the user scene creation unit creates a new user scene, otherwise the user scene with the largest overlap in users with the current validated users is matched. After an application scene is matched, customized services for that user scene can be provided, for example recommending the users' favorite programs based on the user scene, or applying a customized desktop and system settings.
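The matching rule can be sketched as below, using the example scene sets from earlier in the description: an exact match wins, otherwise the scene sharing the most users with the current validated users is chosen. The scene names and the helper are illustrative; tie-breaking between scenes with equal overlap is not specified in the text and here simply falls to dictionary order.

```python
def match_scene(valid_users, preset_scenes):
    valid_users = set(valid_users)
    # Exact match: the validated users coincide with a scene's user set.
    for name, members in preset_scenes.items():
        if members == valid_users:
            return name
    # Otherwise: the scene with the most users in common.
    return max(preset_scenes,
               key=lambda n: len(preset_scenes[n] & valid_users))

scenes = {"set1": {"A"}, "set2": {"B"},
          "set3": {"A", "B"}, "set4": {"B", "C", "D"}}
```

With these sets, validated users {B, C} match set4 (overlap 2) rather than set2 or set3 (overlap 1 each), as in the worked example above.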
It should be noted that the above embodiments may be freely combined as needed. The above are only preferred embodiments of the present invention; it should be pointed out that those of ordinary skill in the art may make several improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (12)

1. A user scene analysis device, characterized in that the user scene analysis device comprises:
a video image acquisition module, configured to extract at least one frame of video image from a video stream according to a preset rule;
a validated user identification module, configured to identify the current validated users according to the video image extracted by the video image acquisition module; and
a user scene matching module, configured to match a user scene according to the validated users identified by the validated user identification module, thereby completing the user scene analysis.
2. The user scene analysis device according to claim 1, characterized in that the validated user identification module comprises:
a face identification unit, configured to identify the faces appearing in the video image; and
a validated user judging unit, configured to judge the current validated users according to the faces identified by the face identification unit.
3. The user scene analysis device according to claim 2, characterized in that the validated user identification module further comprises:
an age estimation unit, configured to estimate the age of the faces identified by the face identification unit; and
a size acquiring unit, configured to obtain the size of the faces identified by the face identification unit;
wherein the validated user judging unit judges whether the corresponding user is a validated user according to the face age estimated by the age estimation unit and the face size obtained by the size acquiring unit.
4. The user scene analysis device according to claim 2 or 3, characterized in that the validated user identification module further comprises:
an angle acquiring unit, configured to obtain the deviation angle, relative to the application apparatus, of the faces identified by the face identification unit;
wherein the validated user judging unit judges whether the corresponding user is a validated user according to the face deviation angle obtained by the angle acquiring unit.
5. The user scene analysis device according to claim 2 or 3, characterized in that:
the face identification unit is further configured to identify closed-eye faces appearing in the video image;
the validated user identification module further comprises a statistic unit, which counts, over consecutive frames of video images, the time for which the user corresponding to a closed-eye face continuously keeps the eyes closed; and
the validated user judging unit judges whether the user is a validated user according to the continuous eye-closure time counted by the statistic unit.
6. The user scene analysis device according to claim 5, characterized in that:
the statistic unit is further configured to count, over consecutive frames of video images, the time for which the user corresponding to a face continuously appears; and
the validated user judging unit judges whether the user is a validated user according to the continuous appearance time counted by the statistic unit.
7. The user scene analysis device according to claim 1, 2, 3 or 6, characterized in that the user scene matching module comprises a judging unit and a matching unit, wherein:
the judging unit is configured to judge whether the current validated users identified by the validated user identification module are identical to the users set in a preset user scene;
when the judging unit judges that the validated users exactly match the users set in a preset user scene, the matching unit directly matches that user scene; and
when the judging unit judges that the validated users do not exactly match the users set in any preset user scene, the matching unit matches the user scene with the largest number of users in common with the current validated users.
8. The user scene analysis device according to claim 7, characterized in that the user scene matching module further comprises a prompt unit and a user scene creation unit, wherein:
when the judging unit judges that the validated users do not exactly match the users set in any preset user scene, the prompt unit asks the user whether to create a user scene; and
if a user scene creation instruction is received, the user scene creation unit creates a new user scene; otherwise the matching unit matches the user scene with the largest number of users in common with the current validated users.
9. A user scene analysis method, characterized in that the user scene analysis method comprises:
obtaining a video stream and extracting at least one frame of video image from it according to a preset rule;
identifying the current validated users according to the extracted video image; and
matching a user scene according to the identified validated users, thereby completing the user scene analysis.
10. The user scene analysis method according to claim 9, characterized in that:
in step S20, identifying the current validated users according to the extracted video image comprises:
identifying the faces appearing in the video image;
estimating the age of the identified faces;
obtaining the size of the identified faces; and
judging whether the corresponding user is a validated user according to the age and size of the face; and/or
in step S20, identifying the current validated users according to the extracted video image comprises:
identifying the faces appearing in the video image;
obtaining the deviation angle of the identified faces relative to the application apparatus; and
judging whether the corresponding user is a validated user according to the obtained face deviation angle; and/or
in step S20, identifying the current validated users according to the extracted video image comprises:
identifying closed-eye faces appearing in the video image;
counting, over consecutive frames of video images, the time for which the user corresponding to a closed-eye face continuously keeps the eyes closed; and
judging whether the user is a validated user according to the counted continuous eye-closure time; and/or
in step S20, identifying the current validated users according to the extracted video image comprises:
identifying the faces appearing in the video image;
counting, over consecutive frames of video images, the time for which the user corresponding to a face continuously appears; and
judging whether the user is a validated user according to the counted continuous appearance time.
11. The user scene analysis method according to claim 9 or 10, characterized in that, in step S30, matching a user scene according to the identified validated users comprises:
judging whether the identified current validated users are identical to the users set in a preset user scene; if so, directly matching that user scene; otherwise,
matching the user scene with the largest number of users in common with the current validated users.
12. The user scene analysis method according to claim 11, characterized in that, in the step of judging whether the identified current validated users are identical to the users set in a preset user scene, after it is judged that the validated users do not exactly match the users set in any preset user scene, the method further comprises:
prompting the user whether to create a user scene; and
if a user scene creation instruction is received, entering the step of creating a new user scene; otherwise going to the step of matching the user scene with the largest number of users in common with the current validated users.
CN201810657585.2A 2018-06-26 2018-06-26 User's scene analysis device and method Pending CN108875652A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810657585.2A CN108875652A (en) 2018-06-26 2018-06-26 User's scene analysis device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810657585.2A CN108875652A (en) 2018-06-26 2018-06-26 User's scene analysis device and method

Publications (1)

Publication Number Publication Date
CN108875652A true CN108875652A (en) 2018-11-23

Family

ID=64294282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810657585.2A Pending CN108875652A (en) 2018-06-26 2018-06-26 User's scene analysis device and method

Country Status (1)

Country Link
CN (1) CN108875652A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046227A (en) * 2015-07-24 2015-11-11 上海依图网络科技有限公司 Key frame acquisition method for human image video system
US20170201791A1 (en) * 2015-08-28 2017-07-13 Shenzhen Skyworth-Rgb Electronic Co., Ltd Interactive method on intelligent home appliance based on smart tv video scenes and the system thereof
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN107948754A (en) * 2017-11-29 2018-04-20 成都视达科信息技术有限公司 A kind of video recommendation method and system


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580390A (en) * 2019-09-27 2021-03-30 百度在线网络技术(北京)有限公司 Security monitoring method and device based on intelligent sound box, sound box and medium
CN112580390B (en) * 2019-09-27 2023-10-17 百度在线网络技术(北京)有限公司 Security monitoring method and device based on intelligent sound box, sound box and medium
CN111461931A (en) * 2020-05-12 2020-07-28 深圳市汇智通咨询有限公司 Automatic control system and control method for intelligent cell
CN111461931B (en) * 2020-05-12 2021-04-27 深圳市汇智通咨询有限公司 Automatic control system and control method for intelligent cell

Similar Documents

Publication Publication Date Title
CN108322788B (en) Advertisement display method and device in live video
CN105654471B (en) Augmented reality AR system and method applied to internet video live streaming
CN108846365B (en) Detection method and device for fighting behavior in video, storage medium and processor
CN101860704B (en) Display device for automatically closing image display and realizing method thereof
CN101588443A (en) Statistical device and detection method for television audience ratings based on human face
CN103024521A (en) Program screening method, program screening system and television with program screening system
WO2017177903A1 (en) Online verification method and system for real-time gesture detection
CN108063979A (en) Video playing control method, device and computer readable storage medium
CN103079034A (en) Perception shooting method and system
CN107404670A (en) A kind of video playing control method and device
CN113138705A (en) Method, device and equipment for adjusting display mode of display interface
CN107480265B (en) Data recommendation method, device, equipment and storage medium
CN107702273B (en) Air conditioner control method and device
CN106470357A (en) barrage message display method and device
CN107948737A (en) The recommendation method and device of TV programme
CN110087131A (en) TV control method and main control terminal in television system
CN108875652A (en) User's scene analysis device and method
CN109905757A (en) The method that video caption broadcasts is controlled by recognition of face
CN111405363A (en) Method and device for identifying current user of set top box in home network
CN103986971A (en) Internet television with parental lock-out function
CN112752153A (en) Video playing processing method, intelligent device and storage medium
CN115396705A (en) Screen projection operation verification method, platform and system
CN114898443A (en) Face data acquisition method and device
EP3941075A1 (en) Multimedia data processing method and apparatus
CN108985244B (en) Television program type identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200805

Address after: 201616 Shanghai city Songjiang District Sixian Road No. 3666

Applicant after: Phicomm (Shanghai) Co.,Ltd.

Address before: 610100 125 Longquan Street Park Road, Longquanyi District, Chengdu, Sichuan.

Applicant before: SICHUAN PHICOMM INFORMATION TECHNOLOGY Co.,Ltd.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181123