CN113497979A - Interface mode display method, cloud server, television, system and storage medium - Google Patents


Info

Publication number
CN113497979A
CN113497979A (application CN202010202821.9A)
Authority
CN
China
Prior art keywords
user
television
interface mode
detected
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010202821.9A
Other languages
Chinese (zh)
Inventor
陈小平
梁志威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Viomi Electrical Technology Co Ltd
Original Assignee
Foshan Viomi Electrical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Viomi Electrical Technology Co Ltd
Priority to CN202010202821.9A
Publication of CN113497979A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application relates to the field of smart household appliances, and in particular to an interface mode display method, a cloud server, a television, a system, and a storage medium. The method includes: receiving an image to be detected, sent by a television, that contains a user; determining the user characteristics of the user from the image to be detected; determining an interface mode corresponding to the user according to those user characteristics; and sending the interface mode corresponding to the user to the television so that the television displays it. Because the user characteristics are identified from the image to be detected and the interface mode displayed by the television is determined from those characteristics, the user experience is improved.

Description

Interface mode display method, cloud server, television, system and storage medium
Technical Field
The application relates to the technical field of televisions, and in particular to an interface mode display method, a cloud server, a television, a system, and a storage medium.
Background
With the continuous development and improvement of television technology, more and more people choose smart televisions to watch programs. However, most smart televisions offer only a fixed interface mode and display the same interface to every viewer, which is not intelligent enough. Moreover, a user must navigate layer by layer through function directories to reach the television channels or program lists in the interface, so selecting a favorite program takes more time, the operation is cumbersome, and the user experience suffers.
Disclosure of Invention
The application provides an interface mode display method, a cloud server, a television, a system and a storage medium.
In a first aspect, the present application provides an interface mode display method, which is applied to a cloud server, and the method includes:
receiving an image to be detected which is sent by a television and contains a user;
determining the user characteristics of the user according to the image to be detected;
determining an interface mode corresponding to the user according to the user characteristics of the user;
and sending the interface mode corresponding to the user to the television so as to enable the television to display the interface mode.
In a second aspect, the present application provides an interface mode display method, applied to a television, the method including:
when detecting that a user needs to watch television, acquiring an image to be detected, wherein the image to be detected comprises at least one user;
sending the image to be detected to a cloud server so that the cloud server determines an interface mode corresponding to the user according to the image to be detected;
and receiving the interface mode corresponding to the user and sent by the cloud server, and controlling the television to display the interface mode.
In a third aspect, the present application further provides a cloud server, which includes a memory and a processor;
the memory for storing a computer program;
the processor is configured to execute the computer program and implement the interface mode display method when executing the computer program.
In a fourth aspect, the present application further provides a television, including a camera, a memory, and a processor;
the shooting device is used for collecting an image to be detected;
the memory for storing a computer program;
the processor is configured to execute the computer program and implement the interface mode display method when executing the computer program.
In a fifth aspect, the present application further provides an interface mode display system, where the interface mode display system includes a television and a cloud server;
the television is provided with a communication module;
the cloud server is provided with a communication module to establish communication connection with the television;
the television is used for displaying an interface mode corresponding to a user according to an image to be detected of the user acquired by the shooting device, and the cloud server is used for realizing the interface mode display method; or
The cloud server determines an interface mode corresponding to the user according to the image to be detected, and the television is used for realizing the interface mode display method.
In a sixth aspect, the present application further provides a computer-readable storage medium storing a computer program, which when executed by a processor causes the processor to implement the interface mode display method as described above.
The application discloses an interface mode display method, a cloud server, a television, a system, and a storage medium. By receiving an image to be detected that contains a user and is sent by the television, the user characteristics of the user can be determined from the image. The interface mode corresponding to the user is then determined from those user characteristics, so that an interface mode suited to the user is selected; this is more intelligent and meets the user's individual needs. Finally, the interface mode corresponding to the user is sent to the television so that the television displays it; the operation is simple, the user can select television programs more conveniently and quickly, and the experience of watching television programs is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic block diagram of an interface mode display system provided by an embodiment of the present application;
fig. 2 is a schematic block diagram of a television set provided by an embodiment of the present application;
fig. 3 is a schematic block diagram of a cloud server provided by an embodiment of the present application;
FIG. 4 is a flowchart illustrating steps of a method for displaying interface modes according to an embodiment of the present application;
FIG. 5 is a diagram illustrating a prediction result of an image to be detected according to an embodiment of the present application;
fig. 6 is a schematic diagram of determining a user group corresponding to a user according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating steps of another interface mode display method provided by an embodiment of the present application;
FIG. 8 is a schematic view of a scenario in which an infrared sensor provided by an embodiment of the present application detects a user;
fig. 9 is a schematic diagram of a display interface selection box provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an interface mode display system according to an embodiment of the present application. The interface mode display system 100 includes a television 10 and a cloud server 20.
Specifically, the television 10 and the cloud server 20 are respectively provided with a communication module, which may include, but is not limited to, a bluetooth module, a Wi-Fi module, a 4G module, a 5G module, an NB-IoT module, a LoRa module, and the like. For example, the television 10 is communicatively connected to the cloud server 20 through an NB-IoT module.
The television 10 has a fully open platform and runs an operating system. While enjoying ordinary television content, users can install and uninstall application software themselves, continuously expanding and upgrading the television's functions and thereby enjoying a rich, personalized experience.
Illustratively, the television 10 may be an OLED television, an LED television, a curved-surface television, a full-screen television, a 3D television, a smart television, an ultra high definition UHD television, or the like.
Specifically, the television 10 enters an interface mode after being turned on. An interface mode includes multi-level directories and program lists, and may include, but is not limited to, a child mode, a youth mode, a middle-aged mode, an elderly mode, and the like.
It should be noted that the cloud server 20 is a service platform providing simple, efficient, safe, reliable, and elastically scalable processing capability. Each cluster node of the cloud server platform is deployed in a backbone data center of the Internet and can independently provide Internet infrastructure services such as computing, storage, online backup, hosting, and bandwidth.
As shown in fig. 1, the television set 10 includes a camera 11. For example, the camera 11 may be disposed in the frame of the television 10, or may be a separate external camera.
Illustratively, the television set 10 may control the camera 11 to capture an image to be detected including at least one user.
Specifically, the television 10 and the cloud server 20 are cooperatively used to execute the interface mode display method provided in the embodiment of the present application, so as to identify the user characteristics of the user according to the image to be detected, and determine the interface mode corresponding to the television 10 according to the user characteristics, thereby improving the user experience.
Illustratively, in the television 10, when the television 10 detects that the user needs to watch television, an image to be detected is acquired by the shooting device 11; sending the image to be detected to the cloud server 20, so that the cloud server 20 determines an interface mode corresponding to the user according to the image to be detected; the interface mode corresponding to the user is acquired, and the television 10 is controlled to display the interface mode corresponding to the user.
Illustratively, in the cloud server 20, an image to be detected of the user sent by the television 10 may be received; determining user characteristics of a user according to the image to be detected; determining an interface mode corresponding to a user according to the user characteristics of the user; the interface mode corresponding to the user is sent to the television 10, so that the television 10 displays the interface mode corresponding to the user.
Referring to fig. 2, fig. 2 is a schematic block diagram of a television according to an embodiment of the present application. In fig. 2, the television 10 includes a processor 101, a memory 102, and a camera 103, which are connected via a bus, such as an I2C (Inter-Integrated Circuit) bus.
The memory 102 may include, among other things, a non-volatile storage medium and an internal memory. The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause a processor to perform any one of the interface mode display methods.
The camera 103 is used for capturing images and transmitting the captured images to be detected to the processor 101 and the memory 102.
The processor 101 is used to provide computing and control capabilities to support the operation of the entire television 10.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Wherein the processor 101 is configured to run a computer program stored in the memory 102, and when executing the computer program, implement the following steps:
when detecting that a user needs to watch television, acquiring an image to be detected, wherein the image to be detected comprises at least one user; sending the image to be detected to a cloud server so that the cloud server determines an interface mode corresponding to the user according to the image to be detected; and receiving the interface mode corresponding to the user and sent by the cloud server, and controlling the television to display the interface mode.
In some embodiments, the television is equipped with an infrared sensor and a camera, and when acquiring an image to be detected upon detecting that a user needs to watch television, the processor implements:
if the infrared sensor detects that the user exists, a shooting instruction is generated, and the shooting device is controlled to shoot an image to be detected according to the shooting instruction; or if the starting signal of the television is detected, generating a shooting instruction according to the starting signal, and controlling the shooting device to shoot the image to be detected according to the shooting instruction.
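The two capture triggers described above can be sketched as a small predicate (a hypothetical illustration; the embodiment does not prescribe any particular API):

```python
def should_capture(infrared_detected: bool, power_on_signal: bool) -> bool:
    """Return True when a shooting instruction should be generated:
    either the infrared sensor detects a user in front of the set,
    or the television's power-on signal is detected."""
    return infrared_detected or power_on_signal

# A user walks in front of the idle television: the infrared sensor fires.
print(should_capture(True, False))   # capture an image to be detected
```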
In some embodiments, after receiving the interface mode corresponding to the user sent by the cloud server, the processor further implements:
if multiple interface modes exist, displaying an interface selection box to prompt the user to select an interface mode, and determining the interface mode displayed by the television according to the user's selection; or, if multiple interface modes exist, combining the multiple interface modes to obtain the interface mode displayed by the television.
Referring to fig. 3, fig. 3 is a schematic block diagram of a cloud server according to an embodiment of the present disclosure. In fig. 3, the cloud server 20 includes a processor 201 and a memory 202, wherein the processor 201 and the memory 202 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
The memory 202 may include, among other things, a non-volatile storage medium and an internal memory. The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause a processor to perform any one of the interface mode display methods.
Processor 201 is used to provide computing and control capabilities, supporting the operation of the entire cloud server 20.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Wherein the processor 201 is configured to run a computer program stored in the memory 202, and when executing the computer program, implement the following steps:
receiving an image to be detected which is sent by a television and contains a user; determining the user characteristics of the user according to the image to be detected; determining an interface mode corresponding to the user according to the user characteristics of the user; and sending the interface mode corresponding to the user to the television so as to enable the television to display the interface mode.
In some embodiments, the user characteristics include age and gender; when the processor determines the user characteristics of the user according to the image to be detected, the processor realizes that:
determining identity information corresponding to the user in the image to be detected based on the trained face recognition model; and determining the age and the gender of the user according to the identity information corresponding to the user based on an identity database in the cloud server.
In some embodiments, the processor, when implementing determining the identity information corresponding to the user in the image to be detected, implements:
inputting the image to be detected into the trained face recognition model, and outputting a prediction identity corresponding to a user in the image to be detected and a prediction probability corresponding to the prediction identity; and if the prediction probability corresponding to the prediction identity is larger than a preset probability threshold, taking the prediction identity as the identity information of the user.
In some embodiments, the processor, when determining the interface mode corresponding to the user according to the user characteristic of the user, implements:
determining a first sub-population corresponding to the user according to the gender of the user; determining a second sub-population corresponding to the user according to the age of the user; determining a user group corresponding to the user according to the first sub-group and the second sub-group corresponding to the user; and determining the interface mode corresponding to the user based on the preset corresponding relation between the user group and the interface mode.
For convenience of understanding, the interface mode display method provided by the embodiments of the present application will be described in detail below with reference to the television and the cloud server in figs. 1 to 3. It should be noted that the television and the cloud server do not constitute a limitation on the application scenario of the interface mode display method provided in the embodiments of the present application.
Referring to fig. 4, fig. 4 is a flowchart illustrating steps of an interface mode display method according to an embodiment of the present application. The interface mode display method can be applied to the cloud server, and improves the experience degree of users by identifying the user characteristics of the users according to the images to be detected and determining the interface mode displayed by the television according to the user characteristics.
As shown in fig. 4, the interface mode display method includes steps S10 to S40.
And step S10, receiving the image to be detected which is sent by the television and contains the user.
Specifically, an image to be detected containing a user and transmitted by a television is received through a built-in communication module.
Illustratively, the communication module built in the cloud server may include, but is not limited to, a bluetooth module, a Wi-Fi module, a 4G module, a 5G module, an NB-IoT module, a LoRa module, and the like.
Illustratively, the image to be detected comprises at least one user; for example, in the image to be detected, there may be one user or a plurality of users. Wherein the user is a user who needs to watch television. For example, the television can detect whether a user is in front of or around the television; if the user exists, the television can acquire the image to be detected corresponding to the user through the camera.
In some embodiments, after receiving the image to be detected of the user sent by the television, the image to be detected may be preprocessed. The pre-processing may include, but is not limited to, image normalization processing, brightness equalization processing, and contrast enhancement processing.
The image normalization process converts uint8 pixel data in the range 0 to 255 into values between 0 and 1. The brightness equalization processing adjusts the brightness component of the image in the HSV color space. The contrast enhancement processing applies a nonlinear stretch to the image according to a histogram equalization algorithm, redistributing the image's pixel values so that the number of pixels in each gray-scale range is approximately the same.
In this embodiment, the preprocessing of the image to be detected may include one or a combination of image normalization, brightness equalization, and contrast enhancement.
For example, the image to be detected may be subjected to image normalization processing to obtain a normalized image to be detected.
Exemplarily, image normalization processing can be performed on an image to be detected to obtain a normalized image to be detected; and then carrying out contrast enhancement processing on the normalized image to be detected to obtain the image to be detected after contrast enhancement.
By preprocessing the image to be detected, the details in the image to be detected can be more prominent, and the accuracy of determining the user characteristics of the user can be improved.
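Two of the preprocessing steps described above can be sketched in NumPy; this is a minimal grayscale illustration (function names are ours), not the embodiment's implementation, and the HSV brightness equalization step is omitted for brevity:

```python
import numpy as np

def normalize(img: np.ndarray) -> np.ndarray:
    """Image normalization: scale uint8 values from [0, 255] to [0.0, 1.0]."""
    return img.astype(np.float32) / 255.0

def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Contrast enhancement by histogram equalization: build a lookup table
    from the cumulative histogram so pixel values are redistributed and
    each gray-scale range holds roughly the same number of pixels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)            # ignore empty bins
    lut = (cdf_masked - cdf_masked.min()) * 255 // (cdf_masked.max() - cdf_masked.min())
    lut = np.ma.filled(lut, 0).astype(np.uint8)
    return lut[img]

# A tiny 2x2 "image to be detected" with low dynamic-range structure.
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
norm = normalize(img)      # values now in [0.0, 1.0]
eq = equalize_hist(img)    # values stretched across [0, 255]
```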
And step S20, determining the user characteristics of the user according to the image to be detected.
Illustratively, the user characteristics may include, but are not limited to, age and gender.
In some embodiments, determining the user characteristic of the user from the image to be detected may include: determining identity information corresponding to the user in the image to be detected based on the trained face recognition model; and determining the age and the gender of the user according to the identity information corresponding to the user based on an identity database in the cloud server.
Specifically, the image to be detected is input into a trained face recognition model, and a prediction identity corresponding to the user in the image to be detected and a prediction probability corresponding to the prediction identity are output.
Illustratively, the preprocessed image to be detected can be input into a trained face recognition model for recognition, so that the recognition accuracy can be improved.
It should be noted that the face recognition model in the embodiments of the present application may include, but is not limited to, a recognition algorithm based on facial feature points, a recognition algorithm based on the entire face image, a template-based recognition algorithm, a neural-network-based recognition algorithm, or an algorithm based on an illumination estimation model.
In some embodiments, before the image to be detected is input into the trained face recognition model, the initial face recognition model needs to be trained to obtain the trained face recognition model. For example, a preset number of sample images are configured, and the initial face recognition model is trained until convergence, so as to obtain a trained face recognition model. The trained face recognition model may be stored in a cloud server.
Wherein the sample images may comprise images of different users.
In an embodiment of the present application, the face recognition model may include a convolutional neural network. Illustratively, the image to be detected is input into the trained face recognition model, several rounds of convolution and pooling are applied to it, and the result is then passed through fully connected processing and normalization processing (e.g., softmax) to obtain the predicted identity corresponding to the user in the image and the prediction probability corresponding to that predicted identity.
As shown in fig. 5, fig. 5 is a schematic diagram of a prediction result for an image to be detected. For example, if the image contains one user, the prediction result may be: [("Xiaoming", 90%)], where "Xiaoming" is the predicted identity and "90%" is the prediction probability corresponding to the predicted identity "Xiaoming".
After the predicted identity of the user in the image to be detected and its corresponding prediction probability are obtained, it is determined whether the prediction probability exceeds a preset probability threshold. If it does, the predicted identity is taken as the identity information of the user; for example, the user's name is Xiaoming.
The specific value of the preset probability threshold may be set according to an actual situation, and the specific value is not limited herein.
The predicted identity of the user is determined based on the trained face recognition model, so that the identity information of the user can be accurately determined, and the accuracy of subsequently determining the interface mode corresponding to the user is improved.
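The threshold check above can be sketched as follows; the default of 0.8 is an assumed value, since the embodiment leaves the preset probability threshold to the implementer:

```python
def resolve_identity(prediction, threshold=0.8):
    """Keep the predicted identity only when its prediction probability
    exceeds the preset probability threshold; otherwise report unknown
    (None).  `prediction` is an (identity, probability) pair."""
    identity, probability = prediction
    return identity if probability > threshold else None

# The Fig. 5 example: the model predicts ("Xiaoming", 0.90).
print(resolve_identity(("Xiaoming", 0.90)))
```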
And step S30, determining the interface mode corresponding to the user according to the user characteristics of the user.
For example, the interface modes can be divided by age into a child mode, a youth mode, a middle-aged mode, and an elderly mode; they can further be divided by gender into a female youth mode, a male youth mode, a female middle-aged mode, a male middle-aged mode, a female elderly mode, and a male elderly mode.
It will be appreciated that users of different age groups have different requirements for the television's interface mode: for example, young people tend to prefer popular, fresh video content, while the elderly tend to prefer a concise, clear interface. Similarly, users of different genders have different requirements; for example, young women may prefer an interface mode featuring romance dramas and idol dramas, while young men may prefer one featuring martial-arts, military, and war content. In addition, an interface mode with complex operation has little impact on young users but a large impact on elderly users.
In some embodiments, determining the interface mode corresponding to the user according to the user characteristic of the user may include: determining a first sub-group corresponding to the user according to the gender of the user; determining a second sub-population corresponding to the user according to the age of the user; determining a user group corresponding to the user according to the first sub-group and the second sub-group corresponding to the user; and determining the interface mode corresponding to the user based on the preset corresponding relation between the user group and the interface mode.
The user group refers to a plurality of users having the same attribute.
It should be noted that users are first classified by gender to determine the first sub-population to which each user belongs. A second sub-population is then determined according to the age group into which the user's age falls. Finally, the user group corresponding to the user is determined from the combination of the first and second sub-populations.
Illustratively, the first sub-population may include a female population and a male population. For example, if user A is male, user A is assigned to the male population; if user A is female, user A is assigned to the female population.
Specifically, the age ranges may include 5-16 years, 17-35 years, 36-59 years, 60-90 years, and so on. By age, the second sub-population may be divided into a child population, a young population, a middle-aged population and an elderly population. For example, the 5-16 age range corresponds to the child population, and the 17-35 age range corresponds to the young population.
For example, if user A is 25 years old and therefore falls within the 17-35 age range, the second sub-population corresponding to user A can be determined to be the young population.
Specifically, the user group corresponding to the user is determined according to the first sub-group and the second sub-group corresponding to the user.
For example, if the first sub-population corresponding to user A is the male population and the second sub-population corresponding to user A is the young population, the user group corresponding to user A can be determined to be the male young population. As shown in fig. 6, fig. 6 is a schematic diagram of determining the user group corresponding to a user.
Similarly, if the first sub-population corresponding to user A is the female population and the second sub-population corresponding to user A is the middle-aged population, the user group corresponding to user A can be determined to be the female middle-aged population.
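The two-step grouping above can be sketched as follows; the function names and string labels are illustrative assumptions, while the age boundaries follow the ranges given in the text:

```python
def first_subpopulation(gender):
    """First sub-population, determined by gender."""
    return "female" if gender == "female" else "male"

def second_subpopulation(age):
    """Second sub-population, determined by the age range the user falls into."""
    if 5 <= age <= 16:
        return "child"
    if 17 <= age <= 35:
        return "young"
    if 36 <= age <= 59:
        return "middle-aged"
    return "elderly"  # 60 and above

def user_group(gender, age):
    """Combine both sub-populations into the user group.

    Table 1 lists a single child group, so gender is ignored for children.
    """
    second = second_subpopulation(age)
    if second == "child":
        return "child"
    return f"{first_subpopulation(gender)} {second}"

print(user_group("male", 25))    # male young
print(user_group("female", 45))  # female middle-aged
```

A 25-year-old male user thus falls into the male young group, matching the worked example in the text.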
In some embodiments, the interface mode corresponding to the user is determined according to the user group corresponding to the user based on a preset corresponding relationship between the user group and the interface mode.
The preset corresponding relation between the user group and the interface mode is preset and stored in the cloud server. Illustratively, the preset correspondence between the user groups and the interface modes is shown in table 1.
Table 1: Correspondence between user groups and interface modes

User group                 Interface mode
Child group                Child mode
Male youth group           Male youth mode
Female youth group         Female youth mode
Male middle-aged group     Male middle-aged mode
Female middle-aged group   Female middle-aged mode
Male elderly group         Male elderly mode
Female elderly group       Female elderly mode
For example, as shown in Table 1, if the user group corresponding to the user is the male middle-aged group, the interface mode corresponding to the user can be determined to be the male middle-aged mode.
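The preset correspondence of Table 1, stored on the cloud server, amounts to a lookup table. A minimal sketch, with dictionary keys and mode names chosen to mirror the table (the function name is an assumption):

```python
# Preset correspondence between user groups and interface modes (Table 1).
GROUP_TO_MODE = {
    "child":              "child mode",
    "male young":         "male youth mode",
    "female young":       "female youth mode",
    "male middle-aged":   "male middle-aged mode",
    "female middle-aged": "female middle-aged mode",
    "male elderly":       "male elderly mode",
    "female elderly":     "female elderly mode",
}

def interface_mode_for(group):
    """Look up the interface mode for a user group; None if no mapping exists."""
    return GROUP_TO_MODE.get(group)

print(interface_mode_for("male middle-aged"))  # male middle-aged mode
```

Returning None for an unknown group lets the caller fall back to a default interface mode rather than failing.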
A first sub-population is determined according to the user's gender and a second sub-population according to the user's age; the user group corresponding to the user can then be determined from these two sub-populations, and in turn the interface mode corresponding to the user. This makes the system more intelligent and better able to meet users' individual needs.
And step S40, sending the interface mode corresponding to the user to the television so that the television displays the interface mode.
Specifically, after the interface mode corresponding to the user is determined, the interface mode corresponding to the user may be sent to the television through the communication module.
Illustratively, if the interface mode corresponding to the user is a male youth mode, the male youth mode is sent to the television. And after receiving the data of the male youth mode, the television switches the interface of the display screen into the male youth mode.
Sending the interface mode corresponding to the user to the television so that the television displays it makes the user's operation simpler and more convenient, and improves the user's experience of watching television programs.
Referring to fig. 7, fig. 7 is a flowchart illustrating the steps of an interface mode display method according to an embodiment of the present application. The interface mode display method can be applied to the television: the television acquires an image to be detected and sends it to the cloud server, which determines the interface mode corresponding to the user; the interface mode is then displayed on the television, meeting the user's personalized needs and improving the user experience.
As shown in fig. 7, the interface mode display method includes steps S50 to S70.
And step S50, when it is detected that the user needs to watch the television, acquiring an image to be detected, wherein the image to be detected comprises at least one user.
Illustratively, an infrared sensor and a camera may be installed in the television.
It should be noted that when a user enters the sensing range, the infrared sensor detects the change in the human infrared spectrum and automatically switches on the load; the load stays on while the user remains in the sensing range, and is switched off after a delay once the user leaves.
The shooting device may be a camera arranged in the frame of the television, or an independent external camera. The shooting device can acquire images within its shooting range, i.e. the maximum range covered by the camera's field of view.
For example, the shooting device may capture one image at fixed intervals, or continuously record a video comprising multiple images. An image may be captured every 30 s, for instance, though other intervals may be used; this is not limited herein.
For example, while the infrared sensor detects that a user is present, it remains connected to the television; when no user is detected, or the user leaves the sensing range, the connection to the television is broken.
When the user enters the sensing range of the infrared sensor, this indicates that the user needs or wants to watch television; the infrared sensor then remains connected to the television, and the television can control the shooting device to capture an image to be detected containing the user. The image to be detected acquired by the shooting device may contain one user or several users.
In some embodiments, acquiring the image to be detected may include: and if the infrared sensor detects that the user exists, generating a shooting instruction, and controlling a shooting device to shoot the image to be detected according to the shooting instruction.
Illustratively, as shown in fig. 8, fig. 8 is a schematic view of a scene in which the infrared sensor detects a user. When the user is positioned in front of the television, the infrared sensor detects the user's presence and remains connected to the television; after the connection is established, the television generates a shooting instruction to control the shooting device. The shooting device can then capture images in front of the television according to this instruction, obtaining an image to be detected containing at least one user.
In some embodiments, acquiring the image to be detected may include: and if the starting signal of the television is detected, generating a shooting instruction according to the starting signal, and controlling a shooting device to shoot the image to be detected according to the shooting instruction.
It is understood that the turn-on signal refers to the electrical signal generated when the television is switched on, or a power-on instruction received by the television from the remote controller.
For example, the user starts the television through a physical key switch in the television; or, the user transmits a starting instruction to the television by pressing a starting button in the remote controller so as to start the television.
Specifically, when the television receives the turn-on signal, a shooting instruction for controlling the shooting device to work is generated according to the turn-on signal. The shooting device can acquire images in front of the television according to a shooting instruction generated by the television to obtain an image to be detected containing at least one user.
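The two capture triggers described above (infrared detection and the power-on signal) amount to a simple either/or decision. A minimal sketch, assuming boolean sensor inputs and an illustrative function name:

```python
def should_generate_shooting_instruction(user_detected_by_ir, power_on_signal_received):
    """A shooting instruction is generated when either trigger fires:
    the infrared sensor detects a user in front of the television, or the
    television receives a turn-on signal (power switch or remote controller).
    """
    return user_detected_by_ir or power_on_signal_received

print(should_generate_shooting_instruction(True, False))   # True: IR trigger
print(should_generate_shooting_instruction(False, True))   # True: power-on trigger
print(should_generate_shooting_instruction(False, False))  # False: no capture
```

In a real implementation the inputs would come from the sensor and power-management hardware; the point of the sketch is only that either event independently initiates the capture.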
In some embodiments, a fill light may be provided in the shooting device. When the shooting device captures images, the fill light can be turned on to supplement the lighting for the camera, after which the supplemented image to be detected is acquired.
The fill light may be a white or red light built into the shooting device, or an independent white or red light.
For example, when the ambient light around the shooting device is weak, the fill light is turned on to improve the quality of the captured image. With the fill light, the captured image has high color fidelity, looks lifelike and has a high signal-to-noise ratio, and backlight compensation is also supported.
Detecting whether a user is present via the infrared sensor allows the image to be detected containing the user to be acquired accurately, making the method more intelligent.
And step S60, sending the image to be detected to a cloud server so that the cloud server can determine the interface mode corresponding to the user according to the image to be detected.
After the image to be detected containing at least one user is acquired by the shooting device, the television can send the image to be detected to the cloud server through the communication module, so that the cloud server can determine an interface mode corresponding to the user according to the image to be detected.
The process of determining the interface mode corresponding to the user by the cloud server according to the image to be detected may refer to the detailed description of the above embodiment, which is not repeated herein.
And step S70, receiving the interface mode corresponding to the user and sent by the cloud server, and controlling the television to display the interface mode.
Specifically, after determining the interface mode corresponding to the user, the cloud server may actively send it to the television through the communication module, or send it through the communication module in response to a request from the television.
For example, after the image to be detected is sent to the cloud server by the television, a request instruction for requesting the cloud server to send the interface mode corresponding to the user to the television may be generated, and the request instruction is sent to the cloud server through the communication module.
Specifically, after the interface mode corresponding to the user is received, the television is controlled to display the interface mode corresponding to the user.
In some embodiments, if multiple interface modes exist, an interface selection box is displayed to remind a user of selecting the interface mode; and determining the interface mode displayed by the television according to the selection operation of the user.
It can be understood that if the image to be detected contains multiple users, the cloud server may determine an interface mode for each of them. For example, the image to be detected contains user A and user B; the interface mode corresponding to user A is the male youth mode, and the interface mode corresponding to user B is the female youth mode. The television will therefore receive two interface modes, but its display screen can only show one interface mode at a time.
Specifically, when multiple interface modes occur, one interface selection box can be displayed in the display screen; the interface selection box includes options for two interface modes. As shown in fig. 9, fig. 9 is a schematic diagram showing an interface selection box.
For example, the user may perform a selection operation via a remote controller to determine an interface mode to be displayed. The television can determine the interface mode to be displayed by the television according to the selection operation of the user in the remote controller.
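The selection-box flow above can be sketched as follows; the `choose` callback stands in for the remote-controller selection and is an assumption, as are the function and mode names:

```python
def mode_to_display(received_modes, choose):
    """Decide which interface mode the television should display.

    With a single mode, display it directly; with several, show an
    interface selection box and let the user pick via `choose`.
    """
    if len(received_modes) == 1:
        return received_modes[0]
    return choose(received_modes)

# Simulated remote-controller choice: the user picks the second option.
picked = mode_to_display(["male youth mode", "female youth mode"],
                         choose=lambda options: options[1])
print(picked)  # female youth mode
```

Injecting the choice as a callback keeps the decision logic testable independently of the remote-controller input handling.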
By displaying the interface selection box on the display screen, the displayed interface mode can be determined according to the user's selection, making the method more intelligent and improving the user experience.
In some embodiments, if multiple interface modes exist, the multiple interface modes are combined to obtain the interface mode displayed by the television.
For example, intersection processing may be performed on the same content in multiple interface modes, and union processing may be performed on different content to obtain a new interface mode.
Illustratively, if a male youth mode and a female youth mode exist, the two modes are merged: intersection processing is applied to the shared content and union processing to the differing content, yielding an interface mode suited to multiple users, such as a couple mode.
Illustratively, if there is a child mode, a female middle-aged mode, and a female elderly mode, then after merging, a whole-family mode may result.
For example, if there is a child mode and a male middle-aged mode, the parent-child mode can be obtained after the combination.
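Treating each mode's content as a set, the merge described above keeps shared content once (intersection) and adds each mode's distinct content (union). A sketch under that interpretation; the content items and function name are illustrative:

```python
def merge_modes(mode_contents):
    """Merge several interface modes into one combined mode.

    mode_contents: dict mapping mode name -> set of content items.
    Returns (shared, merged): content common to every mode, and the full
    content set of the combined mode.
    """
    sets = list(mode_contents.values())
    shared = set.intersection(*sets)  # content present in all modes
    merged = set.union(*sets)         # shared plus each mode's distinct content
    return shared, merged

shared, couple_mode = merge_modes({
    "male youth mode":   {"martial arts", "war", "variety shows"},
    "female youth mode": {"romance dramas", "idol dramas", "variety shows"},
})
print(sorted(couple_mode))
```

The same function merges three or more modes unchanged, e.g. a child mode, a female middle-aged mode and a female elderly mode into a whole-family mode.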
By combining multiple interface modes, the resulting target interface mode better meets the needs of multiple users. Displaying the interface mode corresponding to the user on the television keeps operation simple, lets users select television programs more conveniently, and improves the experience of watching television programs.
According to the interface mode display method provided by this embodiment, the image to be detected containing the user, sent by the television, is received, and the identity information of the user can be determined based on the trained face recognition model, improving the accuracy of subsequently determining the interface mode corresponding to the user. A first sub-population is determined according to the user's gender and a second sub-population according to the user's age; the user group corresponding to the user can then be determined from these two sub-populations, and in turn the interface mode corresponding to the user. This makes the system more intelligent and better able to meet users' individual needs. Detecting the presence of a user via the infrared sensor allows the image to be detected containing the user to be acquired accurately. Displaying an interface selection box on the display screen allows the displayed interface mode to be determined by the user's selection, improving the user experience. Combining multiple interface modes yields a target interface mode that better meets the needs of multiple users. Displaying the interface mode corresponding to the user on the television keeps operation simple, lets users select programs more conveniently, and improves the experience of watching television programs.
The embodiment of the application further provides a computer-readable storage medium storing a computer program. The computer program comprises program instructions, and a processor executes the program instructions to implement any interface mode display method provided by the embodiments of the present application. For example, the computer program is loaded by a processor and may perform the following steps:
receiving an image to be detected which is sent by a television and contains a user; determining the user characteristics of the user according to the image to be detected; determining an interface mode corresponding to the user according to the user characteristics of the user; and sending the interface mode corresponding to the user to the television so as to enable the television to display the interface mode.
For example, the computer program is loaded by a processor and may perform the following steps:
when detecting that a user needs to watch television, acquiring an image to be detected, wherein the image to be detected comprises at least one user; sending the image to be detected to a cloud server so that the cloud server determines an interface mode corresponding to the user according to the image to be detected; and receiving the interface mode corresponding to the user and sent by the cloud server, and controlling the television to display the interface mode.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
The computer-readable storage medium may be an internal storage unit of the television and the cloud server described in the foregoing embodiment, for example, a hard disk or a memory of the television and the cloud server. The computer readable storage medium may also be an external storage device of the television and the cloud server, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital Card (SD), a Flash memory Card (Flash Card), and the like, which are equipped on the television and the cloud server.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. An interface mode display method is applied to a cloud server, and is characterized by comprising the following steps:
receiving an image to be detected which is sent by a television and contains a user;
determining the user characteristics of the user according to the image to be detected;
determining an interface mode corresponding to the user according to the user characteristics of the user;
and sending the interface mode corresponding to the user to the television so as to enable the television to display the interface mode.
2. The interface mode display method of claim 1, wherein the user characteristics include age and gender; the determining the user characteristics of the user according to the image to be detected comprises:
determining identity information corresponding to the user in the image to be detected based on the trained face recognition model;
and determining the age and the gender of the user according to the identity information corresponding to the user based on an identity database in the cloud server.
3. The interface mode display method according to claim 2, wherein the determining the identity information corresponding to the user in the image to be detected comprises:
inputting the image to be detected into the trained face recognition model, and outputting a prediction identity corresponding to a user in the image to be detected and a prediction probability corresponding to the prediction identity;
and if the prediction probability corresponding to the prediction identity is larger than a preset probability threshold, taking the prediction identity as the identity information of the user.
4. The interface mode display method according to claim 2, wherein the determining the interface mode corresponding to the user according to the user characteristic of the user includes:
determining a first sub-population corresponding to the user according to the gender of the user;
determining a second sub-population corresponding to the user according to the age of the user;
determining a user group corresponding to the user according to the first sub-group and the second sub-group corresponding to the user;
and determining the interface mode corresponding to the user based on the preset corresponding relation between the user group and the interface mode.
5. An interface mode display method applied to a television is characterized by comprising the following steps:
when detecting that a user needs to watch television, acquiring an image to be detected, wherein the image to be detected comprises at least one user;
sending the image to be detected to a cloud server so that the cloud server determines an interface mode corresponding to the user according to the image to be detected;
and receiving the interface mode corresponding to the user and sent by the cloud server, and controlling the television to display the interface mode.
6. The interface mode display method according to claim 5, wherein an infrared sensor and a camera are installed in the television, and the acquiring an image to be detected when it is detected that the user needs to watch the television comprises:
if the infrared sensor detects that the user exists, a shooting instruction is generated, and the shooting device is controlled to shoot an image to be detected according to the shooting instruction; or
if the starting signal of the television is detected, generating a shooting instruction according to the starting signal, and controlling the shooting device to shoot the image to be detected according to the shooting instruction.
7. The interface mode display method according to claim 5, further comprising, after receiving the interface mode corresponding to the user sent by the cloud server:
if a plurality of interface modes exist, displaying an interface selection frame to remind the user to select an interface mode, and determining the interface mode displayed by the television according to the selection operation of the user; or
if a plurality of interface modes exist, combining the plurality of interface modes to obtain the interface mode displayed by the television.
8. A cloud server, wherein the cloud server comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is used for executing the computer program and realizing the following when the computer program is executed:
the interface mode display method of any one of claims 1 to 4.
9. A television set, characterized in that the television set comprises a camera, a memory and a processor;
the shooting device is used for collecting an image to be detected;
the memory is used for storing a computer program;
the processor is used for executing the computer program and realizing the following when the computer program is executed:
the interface mode display method of any one of claims 5 to 7.
10. An interface mode display system is characterized by comprising a television and a cloud server;
the television is provided with a shooting device and a communication module;
the cloud server is provided with a communication module to establish a communication connection with the television; wherein
the television is used for acquiring an image to be detected via the shooting device and displaying an interface mode corresponding to a user, and the cloud server is used for realizing the interface mode display method according to any one of claims 1 to 4; or
the cloud server is used for determining an interface mode corresponding to a user according to the image to be detected, and the television is used for realizing the interface mode display method according to any one of claims 5 to 7.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement:
the interface mode display method of any one of claims 1 to 4, or
the interface mode display method of any one of claims 5 to 7.
CN202010202821.9A 2020-03-20 2020-03-20 Interface mode display method, cloud server, television, system and storage medium Pending CN113497979A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010202821.9A CN113497979A (en) 2020-03-20 2020-03-20 Interface mode display method, cloud server, television, system and storage medium


Publications (1)

Publication Number Publication Date
CN113497979A true CN113497979A (en) 2021-10-12

Family

ID=77993818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010202821.9A Pending CN113497979A (en) 2020-03-20 2020-03-20 Interface mode display method, cloud server, television, system and storage medium

Country Status (1)

Country Link
CN (1) CN113497979A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102395013A (en) * 2011-11-07 2012-03-28 康佳集团股份有限公司 Voice control method and system for intelligent television
US20170188103A1 (en) * 2015-12-29 2017-06-29 Le Holdings (Beijing) Co., Ltd. Method and device for video recommendation based on face recognition
CN107231569A (en) * 2017-07-12 2017-10-03 易视腾科技股份有限公司 Individual character interface creating method and system
CN107818110A (en) * 2016-09-13 2018-03-20 青岛海尔多媒体有限公司 A kind of information recommendation method, device
CN108419118A (en) * 2018-05-03 2018-08-17 深圳Tcl新技术有限公司 Generation method, television set and the readable storage medium storing program for executing of TV user interface
CN110636354A (en) * 2019-06-10 2019-12-31 青岛海信电器股份有限公司 Display device
CN110659412A (en) * 2019-08-30 2020-01-07 三星电子(中国)研发中心 Method and apparatus for providing personalized service in electronic device


Non-Patent Citations (1)

Title
ZHANG DUO: "Analysis of Application Cases of Automatic Identification Technology", Wuhan University Press, page 183.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211012