WO2020192222A1 - Method, apparatus and storage medium for intelligent analysis of user scenes - Google Patents

Method, apparatus and storage medium for intelligent analysis of user scenes

Info

Publication number
WO2020192222A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
scene
face
age range
preset
Prior art date
Application number
PCT/CN2019/130304
Other languages
English (en)
French (fr)
Inventor
何腾飞
董飞洋
孙雷
Original Assignee
Shenzhen Skyworth-RGB Electronic Co., Ltd. (深圳创维-Rgb电子有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Skyworth-RGB Electronic Co., Ltd. (深圳创维-Rgb电子有限公司)
Publication of WO2020192222A1 publication Critical patent/WO2020192222A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/178 Estimating age from face image; using age information for improving recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/42201 Biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
    • H04N 21/42203 Sound input device, e.g. microphone
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44213 Monitoring of end-user related data
    • H04N 21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/4508 Management of client data or end-user data
    • H04N 21/4532 Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences

Definitions

  • This application relates to the field of television technology, and in particular to a method, device and storage medium for intelligent analysis of user scenarios.
  • Smart TVs on the market use only the speech recognition part of AI technology to enhance the intelligence of the television. Owing to the inherent defects of speech recognition technology, such TVs can only "passively" perform related operations according to voice commands input by the user, which is not intelligent enough to meet growing user demand.
  • the main purpose of the present application is to provide a method, device and storage medium for intelligent analysis of user scenarios, aiming to solve the technical problem that televisions in the prior art cannot meet the intelligent needs of users.
  • this application provides a method for intelligent analysis of user scenarios, including the following steps:
  • Acquiring scene data including image information collected by the intelligent sensing hardware in real time, and recognizing the scene data to determine the user's face attributes in the image information and the user scene displayed by the image information;
  • the face attributes include face feature points, face area brightness, and face size
  • the step of determining the age range of the user according to the user's face attributes includes:
  • the age range of the user is determined to be middle age.
  • the scene data further includes sound information
  • the step of identifying the scene data to determine the user scene displayed by the image information includes:
  • the step of performing a corresponding scene operation according to the age range of the user and the user scene includes:
  • the step of performing a corresponding scene operation according to the age range of the user and the user scene includes:
  • the age range of the user is determined to be middle age, obtaining the viewing distance from the user measured by the intelligent sensing hardware;
  • the step of performing a corresponding scene operation according to the age range of the user and the user scene includes:
  • the method further includes:
  • a TV program is randomly selected from the TV program set to be played.
  • the method further includes:
  • the broadcast image quality of the TV program is improved and the high-frequency volume is increased.
  • the method further includes:
  • The present application also provides a device, the device including: a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor, where the computer-readable instructions, when executed by the processor, implement the steps of the method for intelligent analysis of user scenes as described above.
  • The present application also provides a non-volatile computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the method for intelligent analysis of user scenes as described above.
  • This application discloses a method, device, and non-volatile computer-readable storage medium for intelligent analysis of user scenes, which acquire in real time scene data including image information collected by intelligent sensing hardware and recognize the scene data to determine the user's face attributes in the image information and the user scene displayed by the image information; determine the user's age range according to the face attributes; and perform a corresponding scene operation according to the user's age range and the user scene.
  • the smart TV “actively” triggers the corresponding scene operation according to the captured age range and scene of the user, thereby enhancing the intelligence of the TV and achieving the purpose of meeting the needs of the user.
  • FIG. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the present application;
  • FIG. 2 is a schematic flowchart of an embodiment of a method for intelligent analysis of user scenarios of this application
  • FIG. 3 is a schematic flowchart of another embodiment of a user scenario intelligent analysis method of this application.
  • FIG. 4 is a schematic flowchart of another embodiment of the method for intelligent analysis of user scenarios according to this application.
  • FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment involved in a solution of an embodiment of the present application.
  • the terminal of this application is a device, and the device can be a television, or a terminal device with a storage function such as a server, a computer, a smart phone, a tablet computer, and a portable computer.
  • the terminal may include a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the memory 1005 can be a high-speed RAM memory or a stable memory (non-volatile memory), such as disk storage.
  • the memory 1005 may also be a storage device independent of the foregoing processor 1001.
  • the terminal may also include a camera, a Wi-Fi module, etc., which will not be repeated here.
  • terminal structure shown in FIG. 1 does not constitute a limitation on the terminal, and may include more or fewer components than shown in the figure, or combine some components, or arrange different components.
  • the network interface 1004 is mainly used to connect to a back-end server and perform data communication with the back-end server;
  • the user interface 1003 mainly includes an input unit such as a keyboard, which may be wireless or wired and is used to connect to the client and perform data communication with the client; and the processor 1001 can be used to call computer-readable instructions stored in the memory 1005 and perform the following operations:
  • Acquiring scene data including image information collected by the intelligent sensing hardware in real time, and recognizing the scene data to determine the user's face attributes in the image information and the user scene displayed by the image information;
  • processor 1001 may call computer-readable instructions stored in the memory 1005, and also perform the following operations:
  • the age range of the user is determined to be middle age.
  • processor 1001 may call computer-readable instructions stored in the memory 1005, and also perform the following operations:
  • the step of identifying the scene data to determine the user scene displayed by the image information includes:
  • processor 1001 may call computer-readable instructions stored in the memory 1005, and also perform the following operations:
  • the age range of the user is determined to be middle age, obtaining the viewing distance from the user measured by the intelligent sensing hardware;
  • processor 1001 may call computer-readable instructions stored in the memory 1005, and also perform the following operations:
  • processor 1001 may call computer-readable instructions stored in the memory 1005, and also perform the following operations:
  • a TV program is randomly selected from the TV program set to be played.
  • processor 1001 may call computer-readable instructions stored in the memory 1005, and also perform the following operations:
  • the broadcast image quality of the TV program is improved and the high-frequency volume is increased.
  • processor 1001 may call computer-readable instructions stored in the memory 1005, and also perform the following operations:
  • the optional embodiments of the device of the present application are basically the same as the following embodiments of the user scenario intelligent analysis method, and will not be repeated here.
  • FIG. 2 is a schematic flowchart of an embodiment of a user scenario intelligent analysis method of this application.
  • the user scenario intelligent analysis method provided in this embodiment includes the following steps:
  • Step S10 acquiring scene data including image information collected by the intelligent sensing hardware in real time, and recognizing the scene data to determine the user's facial attributes in the image information and the user scene displayed by the image information;
  • the application of the user scenario intelligent analysis method on a TV is taken as an example for illustration. It should be understood that the user scenario intelligent analysis method can also be applied to other intelligent terminals such as computers.
  • the smart TV to which this method is applied has built-in and/or externally connected smart sensing hardware.
  • the smart sensing hardware at least includes an infrared sensor, a distance sensor, a sound collector, and a camera.
  • the above scene data includes at least image information, sound information, infrared sensing information of users around the smart TV, and distance information between the user and the smart TV.
  • In this embodiment, the above-mentioned built-in recognition algorithm may be a deep-learning-based face recognition algorithm.
  • A deep-learning-based face recognition algorithm uses a convolutional neural network trained on a large number of face images, so it can distinguish the relevant features of different faces more accurately and improve the accuracy of face recognition.
  • Step S20 Determine the age range of the user according to the face attributes of the user
  • the age range of the user can be determined through the face attributes.
  • the age range of the user may be determined by the correlation between the size of the photographed face, the brightness of the face, and the extracted feature points of the face.
  • Step S30 Perform a corresponding scene operation according to the age range of the user and the user scene
  • the current scene of the user is determined from the image information, sound information, and/or other scene data.
  • the smart TV executes the corresponding preset scene operation according to the age range and user scene of the user.
  • This embodiment acquires scene data including image information collected by intelligent sensing hardware in real time, and recognizes the scene data to determine the user's face attributes in the image information and the user scene displayed by the image information; determine according to the user's face attributes The age range of the user; perform corresponding scene operations according to the age range of the user and the user scene.
  • the smart TV “actively” triggers the corresponding scene operation according to the captured age range and scene of the user through the above-mentioned method, thereby enhancing the intelligence of the TV and achieving the purpose of meeting the needs of the user.
  • the step of determining the age range of the user according to the facial attributes of the user includes:
  • Step S21 judging whether the face size of the current user is larger than the preset adult face size
  • Step S22 When the face size of the current user is smaller than a preset adult face size, determine the age range of the user as a young age;
  • Step S23 when the face size of the current user is greater than or equal to the preset adult face size, determine whether the user's facial feature points and the brightness of the face area meet the corresponding preset aging feature standard and preset aging face brightness standard;
  • Step S24 if yes, determine the age range of the user as old age
  • Step S25 if not, determine the age range of the user as middle age.
  • the face attributes determined from the image information include the face size, and the age range of the user is determined according to the aforementioned face attributes.
  • the adult face size is also preset in this embodiment.
  • the current user’s face size is compared with the preset adult face size.
  • When the current user’s face size is smaller than the preset adult face size, the user’s age range is determined as young.
  • the face attributes determined from the image information also include face feature points.
  • Face recognition technology is used to scan the face in the image information to obtain face feature points that reflect the contour of the face.
  • This embodiment also presets an aging feature standard. It is easy to understand that the above face feature points can reflect the full picture of the user's face; therefore, whether the user is aged can be determined by comparing the face feature points with the aging feature standard.
  • a certain number of face models can also be preset, and the extracted face feature points can be compared with preset multiple face models for similarity, so as to determine whether the age range of the user is old.
  • the face attributes determined from the image information also include the brightness of the face area, which can reflect the gloss of the face skin.
  • This embodiment uses an exposure compensation method to appropriately adjust the exposure of the image and extract the brightness of the face region in the image information, preventing the brightness of the face region from being ill-defined due to overexposure or underexposure of the image.
  • This embodiment also presets an aged-face brightness standard. It is easy to understand that the skin of elderly users tends to be dull and wrinkled; based on this principle, the extracted face-region brightness is compared with the preset aged-face brightness standard to determine whether the user's age range is old.
  • the definition of the user as an elderly user in this embodiment not only requires that the facial feature points meet the preset aging feature standard, but also the brightness of the face area must meet the preset aging facial brightness standard.
  • the user's age range is determined as old.
  • the user's age range is determined to be middle age.
  • By verifying whether the face size, face feature points, and face-region brightness in the face attributes meet the corresponding standards, the user's age range is accurately classified as young, middle-aged, or old, so that the smart TV can trigger different operations according to the user's age range.
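The three-way decision described above can be sketched in Python. This is an illustrative sketch only; `ADULT_FACE_SIZE`, `AGING_FEATURE_THRESHOLD`, and `AGED_BRIGHTNESS_THRESHOLD` are assumed placeholder values, not thresholds disclosed in the application.

```python
# Hypothetical thresholds, invented for illustration.
ADULT_FACE_SIZE = 120 * 120      # assumed minimum adult face area, in pixels
AGING_FEATURE_THRESHOLD = 0.7    # assumed similarity score against aged-face feature standard
AGED_BRIGHTNESS_THRESHOLD = 90   # assumed mean brightness at or below which skin reads as dull

def classify_age_range(face_size, aging_feature_score, face_brightness):
    """Return 'young', 'middle', or 'old' following the embodiment's rules."""
    if face_size < ADULT_FACE_SIZE:
        return "young"
    # Adult-sized face: "old" requires BOTH the aging feature standard
    # and the aged-face brightness standard to be met; otherwise "middle".
    meets_features = aging_feature_score >= AGING_FEATURE_THRESHOLD
    meets_brightness = face_brightness <= AGED_BRIGHTNESS_THRESHOLD
    if meets_features and meets_brightness:
        return "old"
    return "middle"
```

Note that the conjunction matters: an adult-sized face meeting only one of the two aging standards is classified as middle-aged, matching the "both standards" requirement in the text.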
  • FIG. 3 is a schematic flowchart of another embodiment of the user scene intelligent analysis method of this application.
  • the step of identifying the scene data to determine the user scene displayed by the image information includes:
  • Step S11 Identify whether the sound information in the scene data includes crying sounds
  • Step S12 if yes, judge whether the user has a crying expression according to the face attributes of the user;
  • Step S13 when the user has a crying expression, determine that the user scene displayed in the image information is a crying scene;
  • the step of performing a corresponding scene operation according to the age range of the user and the user scene includes:
  • Step S31 When the age range of the user is a young age and the user scene is a crying scene, play a comfort program.
  • the step of performing a corresponding scene operation according to the age range of the user and the user scene includes:
  • Step S32 when the age range of the user is determined to be middle age, obtain the viewing distance from the user measured by the intelligent sensing hardware;
  • Step S33 Determine whether the viewing distance belongs to a preset healthy viewing distance range
  • Step S34 If the viewing distance does not belong to the preset healthy viewing distance range, a distance reminder message is generated.
  • the step of performing a corresponding scene operation according to the age range of the user and the user scene includes:
  • Step S35 when the age range of the user is old and the user scene is an emergency scene, a remote alarm prompt message is issued.
  • the user scenario is determined by analyzing the scenario data.
  • the scene data includes sound information.
  • A crying sound database is preset, and the sound information is matched against it. If crying is detected in the sound information, the method further detects through face recognition whether the user's expression is a crying expression.
  • The user's expression may be determined by delineating the face feature points and then comparing the result with preset expression models of multiple emotions to determine whether the expression is a crying expression.
  • the above-mentioned method of detecting whether the sound information includes a crying sound and detecting whether the user's expression is a crying expression is not limited to one of the above-mentioned methods, and relevant technical personnel may also implement it in other ways.
  • When the user scene is determined to be a crying scene and the user's age range is determined to be young, the preset comfort program for young users is played to soothe the young user.
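The crying-scene logic (crying detected in the audio, a crying expression confirmed by face recognition, and a young age range) might be sketched as follows; the function names and string labels are illustrative assumptions, not identifiers from the application.

```python
def detect_crying_scene(sound_has_crying, expression):
    """A crying scene requires both crying in the audio and a crying expression."""
    return sound_has_crying and expression == "crying"

def scene_operation(age_range, sound_has_crying, expression):
    """Trigger the comfort program only for a young user in a crying scene."""
    if age_range == "young" and detect_crying_scene(sound_has_crying, expression):
        return "play_comfort_program"
    return "no_op"
```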
  • the smart sensing hardware can also measure the distance between the user and the smart TV. Therefore, the scene data includes the viewing distance measured by the smart sensing hardware.
  • A healthy viewing distance range is preset. After the viewing distance is obtained, it is determined whether the viewing distance falls within this preset range. If it does not, and the user's age range has been determined to be middle-aged, a distance reminder message is generated to remind the user to keep a certain distance from the TV. It is easy to understand that the distance reminder may be a warning sound from the smart TV's speaker or a message displayed on the smart TV's screen.
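A minimal sketch of the distance-reminder check follows; the healthy range of 2.5 to 6 metres is a hypothetical value, since the application does not disclose concrete distances.

```python
HEALTHY_RANGE_M = (2.5, 6.0)  # assumed healthy viewing range in metres (illustrative)

def distance_reminder(age_range, viewing_distance_m):
    """Generate a reminder for middle-aged users sitting outside the healthy range."""
    lo, hi = HEALTHY_RANGE_M
    if age_range == "middle" and not (lo <= viewing_distance_m <= hi):
        return "Please keep a healthy distance from the TV."
    return None  # no reminder needed
```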
  • the smart sensing hardware includes a camera and an infrared sensor. If the smart sensing hardware detects that the user falls on the ground and maintains the same posture for more than a certain period of time, the user scene is determined as an emergency scene.
  • the smart sensing hardware may also include a smart bracelet device worn by the user. The smart bracelet detects the heart rate of the user, and when the heart rate of the user is abnormal, the user scene is determined as the emergency scene.
  • The smart TV can call its built-in address book, in which emergency contacts are preset, and send the remote alarm prompt information by calling a preset emergency contact.
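The emergency-scene decision (a prolonged fall detected by the camera and infrared sensor, or an abnormal heart rate reported by a wearable bracelet) could be sketched as below; the thresholds are invented for illustration and are not specified in the application.

```python
FALL_HOLD_SECONDS = 30         # assumed: same posture after a fall for 30 s counts as prolonged
NORMAL_HEART_RATE = (50, 110)  # assumed normal resting heart-rate range, bpm

def is_emergency(fallen_still_seconds, heart_rate_bpm):
    """Emergency when a prolonged fall is detected or the heart rate is abnormal."""
    lo, hi = NORMAL_HEART_RATE
    prolonged_fall = fallen_still_seconds >= FALL_HOLD_SECONDS
    abnormal_heart_rate = not (lo <= heart_rate_bpm <= hi)
    return prolonged_fall or abnormal_heart_rate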
  • FIG. 4 is a schematic flowchart of another embodiment of the user scenario intelligent analysis method of this application. After the step S20 determines the user's age range according to the user's face attributes, the method further includes:
  • Step S40 input the age range of the user into a preset program database to obtain a collection of TV programs corresponding to the age range in the preset program database;
  • Step S50 randomly selecting a TV program from the TV program set to play.
  • the method further includes:
  • step S60 when the age range of the user is old, the playback image quality of the TV program is improved and the high-frequency volume is increased.
  • a program database is preset. After the user's age range is determined, it is input into the preset program database to obtain a corresponding TV program collection, and a TV program is randomly selected from the TV program collection to play.
  • A variety of TV programs are preset according to age range: children's programs are set for users whose age range is young; theater programs are set for users whose age range is middle-aged; and warm family shows are set for users whose age range is old.
  • different TV programs are played, thus enriching the user’s viewing experience.
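The program-selection step above can be sketched as a lookup into a preset program database followed by a random choice; the database contents below are invented examples, not programs named in the application.

```python
import random

# Assumed example contents of the preset program database (one program set per age range).
PROGRAM_DATABASE = {
    "young":  ["Cartoon Hour", "Sing-Along Time"],
    "middle": ["Theater Night", "Evening Drama"],
    "old":    ["Family Warmth", "Classic Opera"],
}

def pick_program(age_range, rng=random):
    """Look up the program set for the age range and randomly pick one to play."""
    return rng.choice(PROGRAM_DATABASE[age_range])
```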
  • When the user's age range is old, in order to give elderly users a better viewing experience, the playback image quality of the TV program is enhanced and the high-frequency volume is increased.
  • the method further includes:
  • Step S70 Obtain a specific gesture and/or ambient brightness collected by the intelligent sensing hardware, and perform a corresponding operation according to the specific gesture and/or adjust the brightness of a TV program according to the ambient brightness.
  • the intelligent sensing hardware can also collect user gestures and environmental brightness.
  • A specific gesture database is preset in this embodiment, and collected user gestures are input into it for query. The mapping relationship between specific gestures and their corresponding operations is stored in the specific gesture database. If the user's gesture matches a specific gesture in the preset database, the smart TV performs the corresponding operation, making the way the user operates the TV more intelligent.
  • this embodiment also presets a brightness adjustment table.
  • the above brightness adjustment table reflects the mapping relationship between the brightness of the environment and the brightness of the TV program.
  • The smart TV adjusts the brightness of the TV program according to the acquired ambient brightness, thereby improving the user's viewing experience.
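One plausible way to realize the preset brightness adjustment table is a stepwise lookup from ambient brightness to a backlight level; the lux breakpoints and percentages below are assumptions for illustration only.

```python
import bisect

# Assumed brightness adjustment table: ambient brightness (lux) -> backlight level (%).
AMBIENT_LUX   = [0, 50, 200, 500]
BACKLIGHT_PCT = [30, 50, 75, 100]

def adjust_backlight(ambient_lux):
    """Step through the preset table and return the backlight level for this ambient brightness."""
    i = bisect.bisect_right(AMBIENT_LUX, ambient_lux) - 1
    return BACKLIGHT_PCT[max(i, 0)]
```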
  • The embodiment of the present application also proposes a non-volatile computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the operations of the method for intelligent analysis of user scenes as described above.
  • The methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
  • The technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a non-volatile computer-readable storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to make a terminal device (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) execute the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Neurosurgery (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a method, apparatus, and storage medium for intelligent analysis of user scenes. The method includes the following steps: acquiring, in real time, scene data including image information collected by intelligent sensing hardware, and recognizing the scene data to determine the face attributes of the user in the image information and the user scene shown by the image information; determining the user's age range according to the user's face attributes; and performing a corresponding scene operation according to the user's age range and the user scene. Through the above approach, this application enhances the intelligence of the television and meets growing user demand.

Description

Method, apparatus and storage medium for intelligent analysis of user scenes
This application claims priority to the Chinese patent application filed with the China Patent Office on March 26, 2019, with application number 201910240946.8 and entitled "Method, apparatus and storage medium for intelligent analysis of user scenes", the entire contents of which are incorporated into this application by reference.
Technical Field
This application relates to the field of television technology, and in particular to a method, apparatus and storage medium for intelligent analysis of user scenes.
Background
With the arrival of the AI (Artificial Intelligence) era, the use of household appliances has become increasingly intelligent. The television is the most common household appliance, and the way users operate televisions is also developing toward intelligence and gradually becoming widespread, which greatly improves the safety, convenience, comfort, and energy efficiency of the home environment.
However, smart TVs on the market use only the speech recognition part of AI technology to enhance the intelligence of the television. Owing to the inherent defects of speech recognition technology, such TVs can only "passively" perform related operations according to voice commands input by the user, which is not intelligent enough to meet growing user demand.
Technical Solution
The main purpose of this application is to provide a method, apparatus and storage medium for intelligent analysis of user scenes, aiming to solve the technical problem in the prior art that televisions cannot meet users' demand for intelligence.
To achieve the above purpose, this application provides a method for intelligent analysis of user scenes, including the following steps:
acquiring, in real time, scene data including image information collected by intelligent sensing hardware, and recognizing the scene data to determine the face attributes of the user in the image information and the user scene shown by the image information;
determining the user's age range according to the user's face attributes;
performing a corresponding scene operation according to the user's age range and the user scene.
Optionally, the face attributes include face feature points, face-region brightness, and face size, and the step of determining the user's age range according to the user's face attributes includes:
judging whether the current user's face size is larger than a preset adult face size;
when the current user's face size is smaller than the preset adult face size, determining the user's age range as young;
when the current user's face size is greater than or equal to the preset adult face size, judging whether the user's face feature points and face-region brightness meet the corresponding preset aging feature standard and preset aged-face brightness standard;
if so, determining the user's age range as old;
if not, determining the user's age range as middle-aged.
Optionally, the scene data further includes sound information;
the step of recognizing the scene data to determine the user scene shown by the image information includes:
recognizing whether the sound information in the scene data includes crying;
if so, judging, according to the user's face attributes, whether the user has a crying expression;
when the user has a crying expression, determining that the user scene shown by the image information is a crying scene;
the step of performing a corresponding scene operation according to the user's age range and the user scene includes:
when the user's age range is young and the user scene is a crying scene, playing a comfort program.
Optionally, the step of performing a corresponding scene operation according to the user's age range and the user scene includes:
when the user's age range is determined to be middle-aged, obtaining the viewing distance to the user measured by the intelligent sensing hardware;
judging whether the viewing distance falls within a preset healthy viewing distance range;
if the viewing distance does not fall within the preset healthy viewing distance range, generating a distance reminder message.
Optionally, the step of performing a corresponding scene operation according to the user's age range and the user scene includes:
when the user's age range is old and the user scene is an emergency scene, issuing remote alarm prompt information.
Optionally, after the step of determining the user's age range according to the user's facial attributes, the method further includes:
inputting the user's age range into a preset program database to obtain the television program set in the preset program database corresponding to the age range;
randomly selecting one television program from the television program set for playback.
Optionally, after the step of determining the user's age range according to the user's facial attributes, the method further includes:
when the user's age range is elderly, enhancing the picture quality of the television program being played and increasing the high-frequency volume.
Optionally, the method further includes:
acquiring a specific gesture and/or the ambient brightness collected by the intelligent sensing hardware, and correspondingly performing the operation corresponding to the specific gesture and/or adjusting the television program brightness according to the ambient brightness.
In addition, to achieve the above purpose, this application further provides a device including a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor, where the computer-readable instructions, when executed by the processor, implement the steps of the intelligent user scenario analysis method described above.
In addition, to achieve the above purpose, this application further provides a non-volatile computer-readable storage medium storing computer-readable instructions that, when executed by a processor, implement the steps of the intelligent user scenario analysis method described above.
This application discloses an intelligent user scenario analysis method, a device, and a non-volatile computer-readable storage medium. Scene data including image information and collected by intelligent sensing hardware is acquired in real time and recognized to determine the facial attributes of a user in the image information and the user scenario shown by the image information; the user's age range is determined according to the facial attributes; and a corresponding scene operation is performed according to the age range and the user scenario. In this way, the smart TV "actively" triggers the corresponding scene operation according to the captured age range and scenario of the user, thereby making the television more intelligent and meeting user demands.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of the device in the hardware operating environment involved in the embodiments of this application;
Fig. 2 is a schematic flowchart of an embodiment of the intelligent user scenario analysis method of this application;
Fig. 3 is a schematic flowchart of another embodiment of the intelligent user scenario analysis method of this application;
Fig. 4 is a schematic flowchart of yet another embodiment of the intelligent user scenario analysis method of this application.
The realization of the purpose, functional features, and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
To make the purpose, technical solution, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the optional embodiments described here are only used to explain this application and are not intended to limit it.
As shown in Fig. 1, Fig. 1 is a schematic structural diagram of the terminal in the hardware operating environment involved in the embodiments of this application.
The terminal of this application is a device, which may be a television, or a terminal device with a storage function such as a server, a computer, a smartphone, a tablet, or a portable computer.
As shown in Fig. 1, the terminal may include a processor 1001 (e.g. a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 implements connection and communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally further include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory) such as a disk memory, and may optionally be a storage device independent of the aforementioned processor 1001.
Optionally, the terminal may further include a camera, a Wi-Fi module, and so on, which are not detailed here.
Those skilled in the art will understand that the terminal structure shown in Fig. 1 does not limit the terminal, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
In the terminal shown in Fig. 1, the network interface 1004 is mainly used to connect to and exchange data with a backend server; the user interface 1003 mainly includes an input unit such as a keyboard (wireless or wired) used to connect to and exchange data with a client; and the processor 1001 may be used to call the computer-readable instructions stored in the memory 1005 and perform the following operations:
acquiring, in real time, scene data collected by the intelligent sensing hardware and including image information, and recognizing the scene data to determine facial attributes of a user in the image information and the user scenario shown by the image information;
determining the user's age range according to the user's facial attributes;
performing a corresponding scene operation according to the user's age range and the user scenario.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and further perform the following operations:
determining whether the current user's face size is larger than a preset adult face size;
when the current user's face size is smaller than the preset adult face size, determining the user's age range as child;
when the current user's face size is larger than or equal to the preset adult face size, determining whether the user's facial feature points and face-region brightness meet the corresponding preset elderly feature standard and preset elderly face brightness standard;
if so, determining the user's age range as elderly;
if not, determining the user's age range as middle-aged.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and further perform the following operations:
the step of recognizing the scene data to determine the user scenario shown by the image information includes:
recognizing whether the sound information in the scene data includes crying;
if so, determining, according to the user's facial attributes, whether the user has a crying expression;
when the user has a crying expression, determining that the user scenario shown by the image information is a crying scene;
when the user's age range is child and the user scenario is a crying scene, playing a soothing program.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and further perform the following operations:
when the user's age range is determined as middle-aged, acquiring the viewing distance to the user measured by the intelligent sensing hardware;
determining whether the viewing distance falls within a preset healthy viewing distance range;
if the viewing distance does not fall within the preset healthy viewing distance range, generating distance reminder information.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and further perform the following operations:
when the user's age range is elderly and the user scenario is an emergency scene, sending a remote alarm prompt.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and further perform the following operations:
inputting the user's age range into a preset program database to obtain the television program set in the preset program database corresponding to the age range;
randomly selecting one television program from the television program set for playback.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and further perform the following operations:
when the user's age range is elderly, enhancing the picture quality of the television program being played and increasing the high-frequency volume.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and further perform the following operations:
acquiring a specific gesture and/or the ambient brightness collected by the intelligent sensing hardware, and correspondingly performing the operation corresponding to the specific gesture and/or adjusting the television program brightness according to the ambient brightness.
The optional embodiments of the device of this application are substantially the same as the embodiments of the intelligent user scenario analysis method described below and are not repeated here.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of an embodiment of the intelligent user scenario analysis method of this application. The method provided by this embodiment includes the following steps:
Step S10: acquiring, in real time, scene data collected by the intelligent sensing hardware and including image information, and recognizing the scene data to determine facial attributes of a user in the image information and the user scenario shown by the image information.
This embodiment is described taking the application of the method to a television as an example; it should be understood that the method may also be applied to other intelligent terminals such as computers.
In this embodiment, the smart TV applying the method has built-in and/or external intelligent sensing hardware, which includes at least an infrared sensor, a distance sensor, a sound collector, and a camera. On this basis, it is easy to understand that the smart TV can collect scene data of the current environment through the intelligent sensing hardware; the scene data includes at least image information of the smart TV's surroundings, sound information, infrared sensing information of the user, and distance information between the user and the smart TV. After acquiring the scene data, the smart TV calls built-in algorithms to recognize it and determine the facial attributes of the user in the image information and the user scenario.
It should be understood that the built-in algorithm in this embodiment may be a deep-learning-based face recognition algorithm. Such an algorithm learns from massive face images using a convolutional neural network and can therefore distinguish the features of different faces more precisely, improving the accuracy of face recognition.
Step S20: determining the user's age range according to the user's facial attributes.
After the facial attributes of the user in the image information are determined, the user's age range can be determined from them. Optionally, the age range may be determined from the correlation between the captured face size, the face brightness, and the extracted facial feature points.
Step S30: performing a corresponding scene operation according to the user's age range and the user scenario.
In this embodiment, after the user's age range is determined, the user's current scenario is determined from the image information, the sound information, and/or other scene data. When the user's age range and user scenario meet the conditions for the smart TV to perform a scene operation, the smart TV performs the corresponding preset scene operation according to them.
In this embodiment, scene data including image information and collected by the intelligent sensing hardware is acquired in real time and recognized to determine the facial attributes of the user in the image information and the user scenario shown by it; the user's age range is determined from the facial attributes; and the corresponding scene operation is performed according to the age range and the user scenario. In this way, the smart TV "actively" triggers the corresponding scene operation according to the captured age range and scenario of the user, thereby making the television more intelligent and meeting user demands.
Further, the step of determining the user's age range according to the user's facial attributes includes:
Step S21: determining whether the current user's face size is larger than a preset adult face size;
Step S22: when the current user's face size is smaller than the preset adult face size, determining the user's age range as child;
Step S23: when the current user's face size is larger than or equal to the preset adult face size, determining whether the user's facial feature points and face-region brightness meet the corresponding preset elderly feature standard and preset elderly face brightness standard;
Step S24: if so, determining the user's age range as elderly;
Step S25: if not, determining the user's age range as middle-aged.
It should be understood that the facial attributes determined from the image information include the face size, from which the user's age range is determined. Optionally, the relationship between the face size recognized in the image information and the preset adult face size is judged first. It is easy to understand that during face recognition, if face information exists in the image information, a face bounding box is generated from it; the box not only reflects the size of the face but also locates it within the image, which facilitates further analysis of the face to obtain the other facial attributes. This embodiment also presets an adult face size; after the current user's face size is determined, it is compared with the preset adult face size, and when it is smaller, the user's age range is determined as child.
In addition, the facial attributes determined from the image information include facial feature points. This embodiment applies face recognition techniques to scan the face in the image information and obtain feature points that reflect the facial contour. A preset elderly feature standard is also provided. It is easy to understand that the feature points reflect the full appearance of the user's face, so whether the user is elderly can be judged by comparing the feature points against the elderly feature standard; alternatively, a number of face models may be preset, and the extracted feature points compared for similarity against these models to determine whether the user's age range is elderly.
Besides the comparison of facial feature points, the facial attributes determined from the image information also include the face-region brightness, which reflects the lustre of the facial skin. When extracting the face-region brightness, this embodiment adjusts the image exposure appropriately through exposure compensation, preventing over- or under-exposure from leaving the face-region brightness ill-defined. A preset elderly face brightness standard is also provided. It is easy to understand that elderly skin tends to be dull and wrinkled; on this basis, the extracted face-region brightness is compared with the preset elderly face brightness standard to determine whether the user's age range is elderly.
It should be particularly noted that this embodiment identifies a user as elderly only when the facial feature points meet the preset elderly feature standard and the face-region brightness also meets the preset elderly face brightness standard; only when both attributes meet their corresponding standards is the user's age range determined as elderly. Otherwise, the user's age range is determined as middle-aged.
By verifying whether the face size, facial feature points, and face-region brightness among the facial attributes meet their corresponding standards, this embodiment classifies the user's age range as child, middle-aged, or elderly, defining the age range accurately so that the smart TV triggers different operations for different age ranges.
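The three-way decision procedure above can be sketched as a small function. The numeric thresholds here (face-box size, brightness on a 0-255 scale) are hypothetical placeholders, since the patent leaves the preset adult face size and elderly standards unspecified:

```python
def classify_age_range(face_size, features_match_elderly, face_brightness,
                       adult_face_size=120, elderly_brightness_max=110):
    """Classify a user as 'child', 'middle-aged', or 'elderly' from face attributes.

    face_size: detected face-box size (hypothetical pixel units).
    features_match_elderly: whether the facial feature points match the preset
        elderly feature standard (e.g. wrinkle contours).
    face_brightness: mean face-region brightness after exposure compensation,
        on a hypothetical 0-255 scale; duller skin scores lower.
    """
    if face_size < adult_face_size:
        return "child"
    # Adult-sized face: elderly only when BOTH criteria are met, as the text requires.
    if features_match_elderly and face_brightness <= elderly_brightness_max:
        return "elderly"
    return "middle-aged"
```

Note that an adult-sized face matching only one of the two elderly criteria still falls through to "middle-aged", mirroring the conjunction stated in the embodiment.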
Further, referring to Fig. 3, a schematic flowchart of another embodiment of the intelligent user scenario analysis method of this application, the step of recognizing the scene data to determine the user scenario shown by the image information includes:
Step S11: recognizing whether the sound information in the scene data includes crying;
Step S12: if so, determining, according to the user's facial attributes, whether the user has a crying expression;
Step S13: when the user has a crying expression, determining that the user scenario shown by the image information is a crying scene.
The step of performing a corresponding scene operation according to the user's age range and the user scenario includes:
Step S31: when the user's age range is child and the user scenario is a crying scene, playing a soothing program.
Further, the step of performing a corresponding scene operation according to the user's age range and the user scenario includes:
Step S32: when the user's age range is determined as middle-aged, acquiring the viewing distance to the user measured by the intelligent sensing hardware;
Step S33: determining whether the viewing distance falls within a preset healthy viewing distance range;
Step S34: if the viewing distance does not fall within the preset healthy viewing distance range, generating distance reminder information.
Further, the step of performing a corresponding scene operation according to the user's age range and the user scenario includes:
Step S35: when the user's age range is elderly and the user scenario is an emergency scene, sending a remote alarm prompt.
After the user's age range is determined as child, middle-aged, or elderly, the user scenario is determined by analyzing the scene data.
It is easy to understand that since the intelligent sensing hardware can also collect the ambient sound around the television, the scene data includes sound information. In this embodiment, whether the sound information in the scene data includes crying is recognized. Optionally, a crying-sound database is preset; the sound information is matched against it, and if crying is detected, face recognition is further used to detect whether the user's expression is a crying expression. Optionally, the expression may be determined by tracing the facial feature points and comparing it with preset expression models for various emotions. It should be understood that the methods for detecting crying in the sound information and for detecting a crying expression are not limited to those mentioned above; those skilled in the art may implement them in other ways.
If the sound information includes crying and the user's expression is a crying expression, the user scenario is defined as a crying scene; when the user's age range is also determined as child, a preset soothing program for children is played to comfort the child.
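The crying-scene rule described above amounts to a conjunction of an audio test and an expression test, gated by the age range. A minimal sketch — the labels and the action name are illustrative, not taken from the patent:

```python
def detect_crying_scene(sound_includes_cry, expression):
    """The scenario is 'crying' only when the audio contains crying AND
    the recognized facial expression is a crying expression."""
    return sound_includes_cry and expression == "crying"

def scene_operation(age_range, scenario):
    """Return the TV's action for an (age range, scenario) pair, or None
    when no preset rule applies (hypothetical rule table)."""
    if age_range == "child" and scenario == "crying":
        return "play_soothing_program"
    return None
```

Requiring both signals keeps a crying sound from another room, or a non-crying grimace, from triggering the soothing program on its own.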
It is easy to understand that the intelligent sensing hardware can also measure the distance between the user and the smart TV, so the scene data includes the viewing distance it measures. In this embodiment a healthy viewing distance range is preset; after the viewing distance is acquired, it is checked against that range, and if it falls outside the range while the user's age range is determined as middle-aged, distance reminder information is generated to prompt the user to keep a proper distance from the television. The reminder may be a warning sound from the TV's speakers or a message shown on the TV screen.
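The distance-reminder check can be sketched as below; the (min, max) healthy range in metres is a hypothetical value standing in for the unspecified preset:

```python
def distance_reminder(age_range, viewing_distance_m, healthy_range=(2.0, 6.0)):
    """Generate reminder text when a middle-aged viewer sits outside the
    healthy viewing range; return None otherwise.

    healthy_range: hypothetical (min, max) distance in metres; the patent
    only states that a preset range exists.
    """
    lo, hi = healthy_range
    if age_range == "middle-aged" and not (lo <= viewing_distance_m <= hi):
        return "Please keep a healthy distance from the screen."
    return None
```

The caller would route a non-None result either to the speakers or to an on-screen message, as the embodiment suggests.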
It is easy to understand that the intelligent sensing hardware includes a camera and an infrared sensor. If the intelligent sensing hardware detects that the user has collapsed on the floor and remains in the same posture beyond a certain time, the user scenario is determined as an emergency scene. In addition, the intelligent sensing hardware may include a smart wristband worn by the user that monitors the heart rate; when the heart rate becomes abnormal, the user scenario is likewise determined as an emergency scene. When the user's age range is determined as elderly and the user scenario is an emergency scene, the smart TV may call its built-in contact list, in which emergency contacts are preset, and send a remote alarm prompt by placing a call to a preset emergency contact.
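The posture-persistence rule for the emergency scene can be sketched as a small stateful detector; the 30-second hold time is a hypothetical stand-in for the patent's "certain time":

```python
class FallDetector:
    """Flag an emergency when a prone posture persists past a hold time."""

    def __init__(self, hold_seconds=30.0):
        self.hold_seconds = hold_seconds  # hypothetical threshold
        self._prone_since = None          # timestamp when prone posture began

    def update(self, is_prone, now):
        """Feed one observation (prone or not, at time `now` in seconds);
        return True once the prone posture has lasted past the hold time."""
        if not is_prone:
            self._prone_since = None      # posture changed: reset the timer
            return False
        if self._prone_since is None:
            self._prone_since = now
        return (now - self._prone_since) >= self.hold_seconds
```

A heart-rate check from the wristband would simply OR into the same emergency flag; the timer exists so that briefly lying down does not trigger the remote alarm.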
In this way, when the user's age range and the user scenario satisfy a given relationship, this embodiment "actively" triggers the corresponding scene operation, improving the television's intelligence.
Further, referring to Fig. 4, a schematic flowchart of yet another embodiment of the intelligent user scenario analysis method of this application, after step S20 of determining the user's age range according to the user's facial attributes, the method further includes:
Step S40: inputting the user's age range into a preset program database to obtain the television program set in the preset program database corresponding to the age range;
Step S50: randomly selecting one television program from the television program set for playback.
Further, after step S20 of determining the user's age range according to the user's facial attributes, the method further includes:
Step S60: when the user's age range is elderly, enhancing the picture quality of the television program being played and increasing the high-frequency volume.
In this embodiment a program database is preset. After the user's age range is determined, it is input into the preset program database to obtain the corresponding television program set, from which one television program is randomly selected for playback. Multiple kinds of television programs are preset by age range: children's programs for users in the child range, theatre programs for users in the middle-aged range, and heart-warming family programs for users in the elderly range.
Playing different television programs for different age ranges enriches the user's viewing experience. In particular, when the user's age range is elderly, the picture quality of the program is enhanced and the high-frequency volume increased so that elderly users get a better viewing experience.
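The age-keyed selection can be sketched with a dictionary standing in for the preset program database; the program names are placeholders:

```python
import random

# Hypothetical preset program database keyed by age range.
PROGRAM_DB = {
    "child": ["cartoon A", "cartoon B"],
    "middle-aged": ["theatre drama A", "theatre drama B"],
    "elderly": ["family show A", "family show B"],
}

def pick_program(age_range, db=PROGRAM_DB, rng=random):
    """Look up the program set for the age range and pick one at random."""
    return rng.choice(db[age_range])
```

Passing a seeded `random.Random` as `rng` makes the choice reproducible for testing.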
Further, the method further includes:
Step S70: acquiring a specific gesture and/or the ambient brightness collected by the intelligent sensing hardware, and correspondingly performing the operation corresponding to the specific gesture and/or adjusting the television program brightness according to the ambient brightness.
In this embodiment, the intelligent sensing hardware can also collect the user's gestures and the ambient brightness. Optionally, a specific-gesture database is preset that stores the mapping between specific gestures and their corresponding operations; the collected user gesture is queried in this database, and if a matching specific gesture exists, the smart TV performs the corresponding operation, making the use of the television ever more intelligent.
In addition, a brightness adjustment table is preset in this embodiment that reflects the mapping between the ambient brightness and the television program brightness. The smart TV adjusts the program brightness according to the acquired ambient brightness, improving the user's viewing experience.
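One way to realize such a brightness adjustment table is a threshold lookup. The lux thresholds and backlight levels below are hypothetical values, since the patent only states that the mapping exists:

```python
import bisect

# Hypothetical brightness adjustment table: ambient-light thresholds (lux,
# ascending) mapped to screen backlight levels (percent). There is one more
# level than thresholds, covering brightness above the last threshold.
AMBIENT_THRESHOLDS = [10, 50, 200, 1000]
BACKLIGHT_LEVELS = [20, 40, 60, 80, 100]

def backlight_for(ambient_lux):
    """Return the backlight level for the measured ambient brightness by
    finding which threshold band the reading falls into."""
    return BACKLIGHT_LEVELS[bisect.bisect_right(AMBIENT_THRESHOLDS, ambient_lux)]
```

A step table like this keeps the adjustment stable against sensor noise, compared with scaling the backlight continuously with every lux reading.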
In addition, an embodiment of this application further provides a non-volatile computer-readable storage medium storing computer-readable instructions that, when executed by a processor, implement the operations of the intelligent user scenario analysis method described above.
The optional embodiments of the non-volatile computer-readable storage medium of this application are substantially the same as the embodiments of the intelligent user scenario analysis method described above and are not repeated here.
It should be noted that in this document the terms "comprise", "include", or any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or system that includes it.
The serial numbers of the above embodiments of this application are for description only and do not indicate the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is the better implementation. On this understanding, the essence of the technical solution of this application, or the part that contributes to the prior art, may be embodied as a software product stored in a non-volatile computer-readable storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and including instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to perform the methods described in the embodiments of this application.
The above are only optional embodiments of this application and do not thereby limit its patent scope. Any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. An intelligent user scenario analysis method, applied to a smart TV having built-in and/or external intelligent sensing hardware, the method comprising the following steps:
    acquiring, in real time, scene data collected by the intelligent sensing hardware and comprising image information, and recognizing the scene data to determine facial attributes of a user in the image information and a user scenario shown by the image information;
    determining the user's age range according to the user's facial attributes;
    performing a corresponding scene operation according to the user's age range and the user scenario.
  2. The intelligent user scenario analysis method of claim 1, wherein the facial attributes comprise facial feature points, face-region brightness, and face size, and the step of determining the user's age range according to the user's facial attributes comprises:
    determining whether the current user's face size is larger than a preset adult face size;
    when the current user's face size is smaller than the preset adult face size, determining the user's age range as child;
    when the current user's face size is larger than or equal to the preset adult face size, determining whether the user's facial feature points and face-region brightness meet a corresponding preset elderly feature standard and preset elderly face brightness standard;
    when the user's facial feature points and face-region brightness meet the corresponding preset elderly feature standard and preset elderly face brightness standard, determining the user's age range as elderly;
    when the user's facial feature points and face-region brightness do not meet the corresponding preset elderly feature standard and preset elderly face brightness standard, determining the user's age range as middle-aged.
  3. The intelligent user scenario analysis method of claim 1, wherein the scene data further comprises sound information;
    the step of recognizing the scene data to determine the user scenario shown by the image information comprises:
    recognizing whether the sound information in the scene data includes crying;
    when the sound information in the scene data includes crying, determining, according to the user's facial attributes, whether the user has a crying expression;
    when the user has a crying expression, determining that the user scenario shown by the image information is a crying scene;
    the step of performing a corresponding scene operation according to the user's age range and the user scenario comprises:
    when the user's age range is child and the user scenario is a crying scene, playing a soothing program.
  4. The intelligent user scenario analysis method of claim 1, wherein the step of performing a corresponding scene operation according to the user's age range and the user scenario comprises:
    when the user's age range is determined as middle-aged, acquiring a viewing distance to the user measured by the intelligent sensing hardware;
    determining whether the viewing distance falls within a preset healthy viewing distance range;
    if the viewing distance does not fall within the preset healthy viewing distance range, generating distance reminder information.
  5. The intelligent user scenario analysis method of claim 1, wherein the step of performing a corresponding scene operation according to the user's age range and the user scenario comprises:
    when the user's age range is elderly and the user scenario is an emergency scene, sending a remote alarm prompt.
  6. The intelligent user scenario analysis method of claim 1, wherein after the step of determining the user's age range according to the user's facial attributes, the method further comprises:
    inputting the user's age range into a preset program database to obtain a television program set in the preset program database corresponding to the age range;
    randomly selecting one television program from the television program set for playback.
  7. The intelligent user scenario analysis method of claim 6, wherein after the step of determining the user's age range according to the user's facial attributes, the method further comprises:
    when the user's age range is elderly, enhancing the picture quality of the television program being played and increasing the high-frequency volume.
  8. The intelligent user scenario analysis method of claim 1, wherein the method further comprises:
    acquiring a specific gesture and/or ambient brightness collected by the intelligent sensing hardware, and correspondingly performing an operation corresponding to the specific gesture and/or adjusting the television program brightness according to the ambient brightness.
  9. A device, comprising a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor, wherein the computer-readable instructions, when executed by the processor, perform the following steps:
    acquiring, in real time, scene data collected by the intelligent sensing hardware and comprising image information, and recognizing the scene data to determine facial attributes of a user in the image information and a user scenario shown by the image information;
    determining the user's age range according to the user's facial attributes;
    performing a corresponding scene operation according to the user's age range and the user scenario.
  10. The device of claim 9, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    determining whether the current user's face size is larger than a preset adult face size;
    when the current user's face size is smaller than the preset adult face size, determining the user's age range as child;
    when the current user's face size is larger than or equal to the preset adult face size, determining whether the user's facial feature points and face-region brightness meet a corresponding preset elderly feature standard and preset elderly face brightness standard;
    when the user's facial feature points and face-region brightness meet the corresponding preset elderly feature standard and preset elderly face brightness standard, determining the user's age range as elderly;
    when the user's facial feature points and face-region brightness do not meet the corresponding preset elderly feature standard and preset elderly face brightness standard, determining the user's age range as middle-aged.
  11. The device of claim 9, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    recognizing whether the sound information in the scene data includes crying;
    when the sound information in the scene data includes crying, determining, according to the user's facial attributes, whether the user has a crying expression;
    when the user has a crying expression, determining that the user scenario shown by the image information is a crying scene;
    when the user's age range is child and the user scenario is a crying scene, playing a soothing program.
  12. The device of claim 9, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    when the user's age range is determined as middle-aged, acquiring a viewing distance to the user measured by the intelligent sensing hardware;
    determining whether the viewing distance falls within a preset healthy viewing distance range;
    if the viewing distance does not fall within the preset healthy viewing distance range, generating distance reminder information.
  13. The device of claim 9, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    when the user's age range is elderly and the user scenario is an emergency scene, sending a remote alarm prompt.
  14. The device of claim 9, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    inputting the user's age range into a preset program database to obtain a television program set in the preset program database corresponding to the age range;
    randomly selecting one television program from the television program set for playback.
  15. A non-volatile computer-readable storage medium, storing computer-readable instructions that, when executed by a processor, perform the following steps:
    acquiring, in real time, scene data collected by the intelligent sensing hardware and comprising image information, and recognizing the scene data to determine facial attributes of a user in the image information and a user scenario shown by the image information;
    determining the user's age range according to the user's facial attributes;
    performing a corresponding scene operation according to the user's age range and the user scenario.
  16. The non-volatile computer-readable storage medium of claim 15, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    determining whether the current user's face size is larger than a preset adult face size;
    when the current user's face size is smaller than the preset adult face size, determining the user's age range as child;
    when the current user's face size is larger than or equal to the preset adult face size, determining whether the user's facial feature points and face-region brightness meet a corresponding preset elderly feature standard and preset elderly face brightness standard;
    when the user's facial feature points and face-region brightness meet the corresponding preset elderly feature standard and preset elderly face brightness standard, determining the user's age range as elderly;
    when the user's facial feature points and face-region brightness do not meet the corresponding preset elderly feature standard and preset elderly face brightness standard, determining the user's age range as middle-aged.
  17. The non-volatile computer-readable storage medium of claim 15, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    recognizing whether the sound information in the scene data includes crying;
    when the sound information in the scene data includes crying, determining, according to the user's facial attributes, whether the user has a crying expression;
    when the user has a crying expression, determining that the user scenario shown by the image information is a crying scene;
    when the user's age range is child and the user scenario is a crying scene, playing a soothing program.
  18. The non-volatile computer-readable storage medium of claim 15, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    when the user's age range is determined as middle-aged, acquiring a viewing distance to the user measured by the intelligent sensing hardware;
    determining whether the viewing distance falls within a preset healthy viewing distance range;
    if the viewing distance does not fall within the preset healthy viewing distance range, generating distance reminder information.
  19. The non-volatile computer-readable storage medium of claim 15, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    when the user's age range is elderly and the user scenario is an emergency scene, sending a remote alarm prompt.
  20. The non-volatile computer-readable storage medium of claim 15, wherein the computer-readable instructions, when executed by the processor, further perform the following steps:
    inputting the user's age range into a preset program database to obtain a television program set in the preset program database corresponding to the age range;
    randomly selecting one television program from the television program set for playback.
PCT/CN2019/130304 2019-03-26 2019-12-31 Intelligent user scenario analysis method, device, and storage medium WO2020192222A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910240946.8A CN109982124A (zh) 2019-03-26 2019-03-26 Intelligent user scenario analysis method, device, and storage medium
CN201910240946.8 2019-03-26

Publications (1)

Publication Number Publication Date
WO2020192222A1 true WO2020192222A1 (zh) 2020-10-01

Family

ID=67081128

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130304 WO2020192222A1 (zh) 2019-03-26 2019-12-31 Intelligent user scenario analysis method, device, and storage medium

Country Status (2)

Country Link
CN (1) CN109982124A (zh)
WO (1) WO2020192222A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113220128A (zh) * 2021-05-27 2021-08-06 Adaptive intelligent interaction method and device, and electronic apparatus
CN113867162A (zh) * 2021-09-27 2021-12-31 Household appliance control method, intelligent terminal, and computer-readable storage medium
CN114443182A (zh) * 2020-10-30 2022-05-06 Interface switching method, storage medium, and terminal device
CN115942063A (zh) * 2022-11-10 2023-04-07 Viewing position prompt method and device, television device, and readable storage medium
WO2023155590A1 (zh) * 2022-02-17 2023-08-24 Device control method and apparatus, electronic device, and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109982124A (zh) 2019-03-26 2019-07-05 Intelligent user scenario analysis method, device, and storage medium
CN111586333A (zh) 2020-04-15 2020-08-25 Network video live-streaming method and system
CN112037010A (zh) 2020-08-12 2020-12-04 Application method, device, and storage medium of an SSR-Net-based multi-scenario risk rating model for personal loans
CN115119056A (zh) 2022-06-08 2022-09-27 Method, device, equipment, and storage medium for controlling the signal source of a playback device
CN114938475A (zh) 2022-06-23 2022-08-23 Playback control method and device, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104917896A (zh) * 2015-06-12 2015-09-16 Data pushing method and terminal device
CN107992199A (zh) * 2017-12-19 2018-05-04 Emotion recognition method and system for an electronic device, and electronic device
CN108521606A (zh) * 2018-04-25 2018-09-11 Television-viewing monitoring method and device, storage medium, and smart TV
WO2018194243A1 (en) * 2017-04-17 2018-10-25 Hyperconnect, Inc. Video communication device, video communication method, and video communication mediating method
CN109982124A (zh) * 2019-03-26 2019-07-05 Intelligent user scenario analysis method, device, and storage medium

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7127736B2 (en) * 2000-11-17 2006-10-24 Sony Corporation Content processing apparatus and content processing method for digest information based on input of a content user
KR20090011685A (ko) 2007-07-27 2009-02-02 KT Corporation Karaoke service system and method using image inference, and karaoke service server therefor
JP2011217209A (ja) 2010-03-31 2011-10-27 Sony Corp Electronic apparatus, content recommendation method, and program
CN201830355U (zh) * 2010-09-29 2011-05-11 Konka Group Co., Ltd. Television with a baby-soothing function
CN103903389A (zh) * 2012-12-30 2014-07-02 Qingdao Haier Software Co., Ltd. Home elderly-care system using a pyroelectric infrared detector
CN103324729B (zh) * 2013-06-27 2017-03-08 Xiaomi Inc. Method and device for recommending multimedia resources
CN103580968A (zh) * 2013-11-12 2014-02-12 IoT Research Institute of China United Network Communications Co., Ltd. Smart home system based on Internet-of-Things cloud computing
CN105163139B (zh) * 2014-05-28 2018-06-01 Qingdao Haier Electronics Co., Ltd. Information pushing method, information pushing server, and smart TV
CN104077276B (zh) * 2014-06-28 2017-04-26 Qingdao Goertek Acoustics Technology Co., Ltd. Portable intelligent infant care device
CN104618464A (zh) * 2015-01-16 2015-05-13 Shanghai Institute of Microsystem and Information Technology, CAS IoT-based intelligent home elderly-care service system
CN204406615U (zh) * 2015-03-04 2015-06-17 Nanjing University of Information Science and Technology Infant sleep monitoring device
CN106162244A (zh) * 2015-04-20 2016-11-23 ZTE Corporation Program pushing method and device
CN106878364A (zh) * 2015-12-11 2017-06-20 BYD Co., Ltd. Information pushing method and system for a vehicle, cloud server, and vehicle
CN106303699A (zh) * 2016-08-24 2017-01-04 Samsung Electronics (China) R&D Center Method and device for playing television programs
CN108900908A (zh) * 2018-07-04 2018-11-27 Samsung Electronics (China) R&D Center Video playback method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104917896A (zh) * 2015-06-12 2015-09-16 Data pushing method and terminal device
WO2018194243A1 (en) * 2017-04-17 2018-10-25 Hyperconnect, Inc. Video communication device, video communication method, and video communication mediating method
CN107992199A (zh) * 2017-12-19 2018-05-04 Emotion recognition method and system for an electronic device, and electronic device
CN108521606A (zh) * 2018-04-25 2018-09-11 Television-viewing monitoring method and device, storage medium, and smart TV
CN109982124A (zh) * 2019-03-26 2019-07-05 Intelligent user scenario analysis method, device, and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114443182A (zh) * 2020-10-30 2022-05-06 Interface switching method, storage medium, and terminal device
CN113220128A (zh) * 2021-05-27 2021-08-06 Adaptive intelligent interaction method and device, and electronic apparatus
CN113220128B (zh) * 2021-05-27 2022-11-04 Adaptive intelligent interaction method and device, and electronic apparatus
CN113867162A (zh) * 2021-09-27 2021-12-31 Household appliance control method, intelligent terminal, and computer-readable storage medium
WO2023155590A1 (zh) 2022-02-17 2023-08-24 Device control method and apparatus, electronic device, and storage medium
CN115942063A (zh) * 2022-11-10 2023-04-07 Viewing position prompt method and device, television device, and readable storage medium

Also Published As

Publication number Publication date
CN109982124A (zh) 2019-07-05

Similar Documents

Publication Publication Date Title
WO2020192222A1 (zh) Intelligent user scenario analysis method, device, and storage medium
US20220317641A1 (en) Device control method, conflict processing method, corresponding apparatus and electronic device
WO2020192400A1 (zh) Playback control method, apparatus, and device for a playback terminal, and computer-readable storage medium
CN105118257B (zh) Intelligent control system and method
US11330321B2 (en) Method and device for adjusting video parameter based on voiceprint recognition and readable storage medium
CN107155133B (zh) Volume adjustment method, audio playback terminal, and computer-readable storage medium
TWI639114B (zh) Electronic device with intelligent voice service function and method for adjusting output sound
WO2020135334A1 (zh) Television application theme switching method, television, readable storage medium, and device
CN107484034A (zh) Subtitle display method, terminal, and computer-readable storage medium
CN108903521B (zh) Human-computer interaction method applied to an intelligent picture frame, and intelligent picture frame
WO2017141530A1 (ja) Information processing apparatus, information processing method, and program
JP7231638B2 (ja) Video-based information acquisition method and device
CN108924452A (zh) Partial screen recording method and device, and computer-readable storage medium
US20220405375A1 (en) User identity verification method and electronic device
CN107809654A (zh) Television system and television control method
KR20170094745A (ko) Video encoding method and electronic device supporting the same
US20230316685A1 (en) Information processing apparatus, information processing method, and program
CN111442464B (zh) Air conditioner and control method therefor
JP6973380B2 (ja) Information processing apparatus and information processing method
CN113794934A (zh) Anti-addiction guidance method, television, and computer-readable storage medium
CN114501144A (zh) Image-based television control method, apparatus, device, and storage medium
CN109986553B (zh) Actively interacting robot, system, method, and storage device
CN113709629A (zh) Frequency response parameter adjustment method, apparatus, device, and storage medium
CN108153568B (zh) Information processing method and electronic device
CN108769799B (zh) Information processing method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19920938

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01-03-2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19920938

Country of ref document: EP

Kind code of ref document: A1