CN110933501B - Child eye protection method for TV device, TV device with child eye protection function and system - Google Patents

Info

Publication number
CN110933501B
CN110933501B
Authority
CN
China
Prior art keywords
equipment
mobile terminal
eye protection
user
parameters
Prior art date
Legal status
Active
Application number
CN202010072844.2A
Other languages
Chinese (zh)
Other versions
CN110933501A (en)
Inventor
李小波
谢程明
Current Assignee
Hengxin Shambala Culture Co ltd
Original Assignee
Hengxin Shambala Culture Co ltd
Priority date
Filing date
Publication date
Application filed by Hengxin Shambala Culture Co ltd filed Critical Hengxin Shambala Culture Co ltd
Priority to CN202010072844.2A
Publication of CN110933501A
Application granted
Publication of CN110933501B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/4424Monitoring of the internal components or processes of the client device, e.g. CPU or memory load, processing speed, timer, counter or percentage of the hard disk space used
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4854End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4882Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a child eye protection method for a TV device, a TV device with a child eye protection function, and a child eye protection system, and relates to the field of intelligent devices. The child eye protection method for the TV device comprises the following steps: step S1, presetting parameters on the TV device; step S2, resetting the TV device to initialize the parameters; step S3, monitoring the environmental conditions with the TV device; and step S4, adjusting the display parameters of the TV device according to the environmental conditions. Because detailed eye protection settings are configured in advance, the display of the TV end can be adjusted to match the conditions of use, which overcomes the technical problem that children, lacking the awareness to adjust the display on their own, easily damage their eyes. At the same time, the display can be corrected autonomously for different environments according to environmental thresholds, so the display effect is better and healthier for the eyes.

Description

Child eye protection method for TV device, TV device with child eye protection function and system
Technical Field
The invention relates to the field of intelligent equipment, in particular to a child eye protection method of TV equipment, the TV equipment with the child eye protection function and a system.
Background
With the development of smart televisions, television program content has become increasingly rich, and protecting eyesight and keeping television viewing time reasonable have attracted attention. In particular, protecting the eyesight of children and teenagers has become a long-standing concern, because their eyesight is greatly damaged when they watch television for a long time. At present, screen eye protection mainly relies on blue-light-reduction technology to reduce the eye fatigue caused by blue light. The main approaches in the prior art are: 1. anti-blue-light design of the television screen; 2. anti-blue-light design of the mobile phone screen; 3. anti-blue-light glasses.
However, the anti-blue-light designs used in the prior art generally require manual setting by the user and can neither actively remind the user nor operate automatically, so good eye protection cannot be achieved. This is especially true for children, who lack the awareness to control the device themselves, and the settings cannot change as the environment changes.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a child eye protection method for a TV device, a TV device with a child eye protection function, and a system, so as to avoid the problem that a child user who cannot control the TV device independently suffers harm to his or her eyesight.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a method of eye protection for a child of a TV device, the method comprising:
step S1, setting parameters in advance for the TV equipment;
step S2, resetting the TV device to initialize parameters;
step S3, the TV device monitors the environment condition;
step S4, the TV device adjusts the display parameters according to the environment.
The parameters preset in step S1 include an eye protection duration and eye protection prompting music;
the eye protection duration can be set to an integral multiple of 30 seconds;
the eye protection prompting music includes music preset in the TV device and music obtained externally; an audio storage space is provided in a memory of the TV device, and the preset music is stored in the audio storage space;
the ways in which the TV device obtains music externally include USB reading, online downloading and serial-port communication downloading.
In step S1, the method for setting parameters of the TV device through the mobile terminal includes:
step S110, setting fixed value data and active value data in the mobile terminal;
step S120, the mobile terminal interacts with the TV equipment to obtain current activity data;
step S130, changing the activity value data at the mobile terminal;
and step S140, interacting with the TV equipment through the mobile terminal to change the parameters of the TV equipment.
Specifically, in step S110, the fixed value data includes: eye protection state, timing eye protection time, eye protection audio selection and audio classification;
the activity value data includes: whether the eye protection state is switched on or off, the specific time for regularly protecting the eyes, the playing mode, the name and the duration of the played audio.
In step S3, when the TV device monitors the environmental condition, a timer in the TV device is started, and the running time of the TV device and the video/audio playing time are recorded and determined by the timer.
Specifically, after the TV device performs step S4 and the display parameter is adjusted according to the environmental condition, if the timer exceeds the threshold, the following steps are performed:
step S5, starting a forced eye protection mode;
and step S6, playing preset audio and video.
More specifically, when step S5 is executed, the TV device sends an early warning signal to the mobile terminal at the same time, which is specifically as follows:
the early warning signal comprises a sound early warning and a pop-up window early warning.
More specifically, in performing step S6, the screen of the TV device is turned off while only the preset audio is played.
Specifically, in the step S3, the monitoring of the environmental condition includes whether the environment changes and the distance from the user to the screen.
More specifically, the distance from the user to the screen is calculated in step S3 using the following formula:

R_ij(τ) = ∫ ψ(f) · X_i(f) · X_j*(f) · e^(j2πfτ) df

where X_i(f) denotes the Fourier transform of the signal x_i(t) received by microphone i, X_j*(f) denotes the conjugate of the Fourier transform of the signal x_j(t) received by microphone j, and ψ(f) is a weighting function.
More specifically, the distance from the user to the screen is secondarily verified through the signal strength of the remote control used by the user;
in step S4, the adjusting, by the TV device, the display parameter according to the environmental condition specifically includes:
s410, analyzing each environmental data;
step S420, calculating corresponding optimal display parameters for each environmental data;
step S430, selecting an optimal value from the plurality of optimal display parameters;
step S440, the TV device is adjusted using the optimal value of the display parameter.
Specifically, in the step S410, each environment data includes environment light data, user-to-screen distance data, and user viewing duration.
Specifically, in step S430, the optimal value is obtained by taking variance values of a plurality of optimal display parameters.
Specifically, in the step S430, the optimal value is a lowest value of the optimal display parameters.
A TV device with an eye-protection function for children, comprising: the system comprises a processor, a camera, a microphone array, a memory, a communication interface and a serial bus, wherein the camera, the microphone array, the memory and the communication interface are connected with the processor through the serial bus;
the camera is used for acquiring the position and image parameters of a user and sending the parameters to the processor for processing;
the memory is used for storing user setting conditions and songs preset in the TV equipment;
the microphone array is used for acquiring position parameters of a user and simultaneously acquiring a voice instruction of the user;
the communication interface is used for communication between the TV equipment and a mobile terminal or a screen;
the processor is used for analyzing parameters acquired by the camera and the microphone array and controlling other elements.
The system also comprises a timer which is electrically connected with the processor through a serial bus;
the timer is used for monitoring the use time of the user and the operation time of the equipment and sending data to the processor.
The device also comprises a plurality of sensors, wherein the sensors comprise a light sensor and a motion sensor;
the light sensor is used for detecting the light condition of the surrounding environment and converting the light condition of the surrounding environment into a control signal;
the motion sensor is used for detecting the motion situation around and converting the motion situation around into a control signal.
Wherein, the memory comprises a read-only memory and a random access memory;
the read-only memory is used for storing user setting conditions in the TV equipment;
the random access memory is used to store other files such as songs preset in the TV device.
Specifically, the read-only memory is an electrically erasable programmable read-only memory (EEPROM).
The device further comprises a network interface, wherein the network interface is connected with the memory; the TV device connects to external equipment through the network interface and can download screen-saver images and music from the external equipment;
and the TV equipment establishes connection with the mobile terminal through the network interface.
A child eye protection system, characterized in that it comprises a mobile terminal, a TV device terminal and a TV terminal, wherein the TV device terminal is connected with the mobile terminal and the TV terminal respectively;
the mobile terminal is connected with the TV device terminal and is used for configuring the user settings and eye protection settings of the TV device terminal;
the TV terminal is connected with the TV device terminal and is used for displaying the content from the TV device terminal;
the TV device terminal is used for detecting the surrounding environment and changing the display settings of the TV terminal according to the surrounding environment.
With the above technical solution, since detailed eye protection settings are configured in advance, the display of the TV end can be adjusted to match the conditions of use; this overcomes the technical problem that child users, who cannot consciously adjust the display on their own, easily suffer eye damage. At the same time, the display can be corrected autonomously for different environments according to environmental thresholds, so the display effect is better and healthier for the eyes.
Drawings
FIG. 1 is a method flow diagram of a method of protecting a child's eyes of a TV apparatus of the present invention;
FIG. 2 is a schematic diagram of a TV apparatus with eye protection function for children according to the present invention;
FIG. 3 is a schematic diagram of the connection of the TV apparatus with eye protection function for children according to the present invention;
fig. 4 is a schematic structural view of the child eye protection system of the present invention.
In the figure, 100-processor, 200-camera, 300-microphone array, 400-memory, 500-communication interface, 600-serial bus, 700-timer, 800-sensor, 810-light sensor, 820-motion sensor, 900-network interface, 1000-mobile terminal, 1010-TV device terminal, 1020-TV terminal.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
As a first embodiment of the present invention, there is provided a method of protecting eyes of a child of a TV device, as shown in fig. 1, the method including:
firstly, executing step S1, and carrying out preset parameter setting on TV equipment;
the preset parameters in step S1 include an eye protection duration and an eye protection prompting music, where the eye protection duration may be preset, for example, set to be an integral multiple of 30 seconds; preferably, the video or picture displayed in eye is also preset in the TV device. Specifically, in step S1, the preset parameter setting is implemented by the following steps:
Step S110, firstly, fixed value data and activity value data are set in the mobile terminal; the setting data in step S110 are shown in Table 1 below:
TABLE 1
Fixed value data: eye protection state; timed eye protection time; eye protection audio playback setting; eye protection audio selection (selection 1 to selection 5); audio classification (type 1 to type 4).
Activity value data: on/off switch of the eye protection state; specific time of the timed eye protection; play mode; played audio name (song information 1 to song information 5); duration (duration 1 to duration 5).
In Table 1 above, the fixed value data include: the "eye protection state", the "timed eye protection time", the "eye protection audio playback setting", the eye protection audio selections "selection 1" to "selection 5", and the audio classifications "type 1" to "type 4".
Further, the audio classification in Table 1 above may specifically be: stories, famous songs, children's songs, poems and the like, which a person skilled in the art can select routinely according to conventional classification methods.
In Table 1 above, the activity value data include: the on/off switch of the eye protection state, the specific time of the timed eye protection, the play mode, the played audio names "song information 1" to "song information 5", and the durations "duration 1" to "duration 5".
Specifically, the specific time of the timed eye protection is set, for example, to an integral multiple of 30 seconds, which makes the system convenient for the user to set, for example 30 seconds, 1 minute, 5 minutes, 10 minutes, 15 minutes, 30 minutes, 1 hour or 2 hours.
The activity value data correspond to the fixed value data: the fixed value "eye protection state" corresponds to the on/off switch of the eye protection state, the fixed value "timed eye protection time" corresponds to the specific time of the timed eye protection, and the fixed value "eye protection audio playback setting" corresponds to the play mode.
Further, the eye protection prompting music comprises music preset in the TV equipment or the mobile terminal and music obtained in an external obtaining mode; an audio storage space is arranged in a memory of the TV equipment or the mobile terminal, and preset music is stored in the audio storage space; the TV equipment or the mobile terminal adopts various modes of acquiring music, including USB reading, online downloading, serial port communication downloading and the like, and meets various personalized requirements of users.
Step S120, the mobile terminal interacts with the TV equipment to display current activity data in the TV equipment;
specifically, after the mobile terminal is connected to the TV device, for example, after HDMI high-definition connection, the mobile terminal transmits current activity data to the TV device through an HDMI high-definition data line, and displays content such as the corresponding activity data in the TV device, that is, after the TV device interacts with the mobile terminal, the current data of the mobile terminal can be completely displayed on the TV device during the interaction.
Step S130, changing the activity value data at the mobile terminal; that is, the controls on the mobile terminal can be used to adjust various parameters, such as the master control for switching the eye protection state, and different selection controls for choosing the eye protection duration, the play mode, the played content and the like.
Step S140, interacting with the TV device through the mobile terminal to change the parameters of the TV device: the settings are sent to the TV device for adjustment and display through a second interaction between the mobile terminal and the TV device.
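As an illustrative sketch only (the patent does not define a concrete data format, so the field names below are assumptions), the fixed-value and activity-value data exchanged in steps S110 to S140 could be organized roughly as follows:

```python
# Illustrative sketch: field names and structure are assumptions, not the
# patent's own data format.

FIXED_VALUE_DATA = {
    "eye_protection_state": "on/off switch",   # paired with the on/off activity value
    "timed_eye_protection": "specific time",   # set in multiples of 30 s
    "audio_playback_setting": "play mode",
    "audio_selection": ["selection 1", "selection 2", "selection 3",
                        "selection 4", "selection 5"],
    "audio_classification": ["type 1", "type 2", "type 3", "type 4"],
}

def build_activity_values(state_on: bool, interval_s: int, play_mode: str,
                          song_name: str, duration_s: int) -> dict:
    """Assemble the activity-value data changed by the mobile terminal in step S130."""
    if interval_s % 30 != 0:
        raise ValueError("timed eye protection is set in integral multiples of 30 seconds")
    return {
        "eye_protection_state": state_on,
        "timed_eye_protection": interval_s,
        "play_mode": play_mode,
        "audio_name": song_name,
        "audio_duration": duration_s,
    }

# Example: eye protection every 20 minutes, playing a preset song.
settings = build_activity_values(True, 20 * 60, "single", "song information 1", 180)
# In step S140 these settings would be sent to the TV device over the
# mobile-terminal/TV interaction channel (e.g. HDMI or network), which is
# outside the scope of this sketch.
print(settings)
```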
Then step S2 is executed, the TV device is reset to initialize the parameters;
some set settings exist in the TV equipment and conflict with the settings in the mobile terminal, and when the mobile terminal is used for eye protection of the TV equipment, the automatic settings and self-protection in the TV equipment need to be closed in advance, so that the phenomena of misjudgment and the like in the eye protection process are avoided.
In the next step S3, the TV device monitors the environmental conditions;
the TV equipment is connected through the data of the mobile terminal, and the mobile terminal is used for monitoring the environmental conditions, particularly, the mobile terminal provides a plurality of groups of sensors and collectors for the TV equipment. Furthermore, the mobile terminal uses a video collector, a motion sensor, a light sensor and an audio collector to monitor the user distance, the environmental change and the real using environment and condition in the using process respectively.
The motion sensor of the mobile terminal can sense whether a person or an animal is moving in the space where the mobile terminal is used, so that the protection function is started or stopped accordingly. The light sensor of the mobile terminal can sense natural light and artificial light sources in the usage environment and thereby drives the output frequency and output power of the mobile terminal: the light sensor produces, over the serial bus, two groups of parallel data (data 1 and data 2) that respectively contain and do not contain the visible-light brightness of the surroundings, converted into digital signals. To display the gray level of an image on the TV device, the colour of each pixel is quantized according to its brightness; comparing the two groups of numbers yields the digital brightness information of the surrounding visible light. This information is compared with the ambient brightness value corresponding to the collected working state of the TV device, and if the brightness value falls within the adjustment interval, a new brightness value is sent to the control side of the TV device, namely the mobile terminal, so that the brightness of the TV device is adjusted automatically. The adjustment interval is set to avoid wasting system resources on adjusting for small changes in the ambient brightness value. The video collector of the mobile terminal can collect video information of the environment and of the user, and the mobile terminal uses this video information to calculate the distance between the user and the TV device, specifically as follows:
the mobile terminal firstly needs to collect the user profile in the video information collected by the video collector, secondary modeling is carried out on the user profile information, and after the secondary modeling, the model is compared with the inherent model to obtain the position relation between the user and the TV equipment. In the two-dimensional image collected by the video collector, the outline of the target user can be described in the form of a point set. Points for describing the contour are generally selected at places capable of representing the contour features of the target object, such as corner points, T-shaped joints, and the like, and meanwhile, among the points having the contour features, other intermediate points are selected in an equidistant sampling manner, and together form a feature point set of the target contour, which can be represented as a multi-dimensional vector, as shown in the following formula (1):
X = (x_1, y_1, x_2, y_2, …, x_n, y_n)^T    (1)

where (x_i, y_i) are the coordinates of a single feature point, x_1 being the abscissa of the first feature point and y_1 its ordinate. Extracting the contour feature points of all the images in this way gives a sample set {X_1, X_2, …, X_N}, where each X_i is a multi-dimensional vector of the form of formula (1) and N is the number of samples. Then, because the statistical characteristics of the feature-point coordinates in the sample set need to be obtained when establishing the model, and in order to make the corresponding feature points in different samples comparable, one of the images is selected as a reference image, and the other images are transformed so that the appearance of the target object in them approaches that of the reference image.
The vector corresponding to the reference image is recorded as X̄, and the vector corresponding to any image G to be transformed in the sample set is recorded as X. A similarity transformation T is applied so that the error

E = | T(X) − X̄ |²

is minimized. T can be expressed in matrix form (2) as:

T(x_i, y_i) = [ s·cosθ  −s·sinθ ] [ x_i ] + [ t_x ]    (2)
              [ s·sinθ   s·cosθ ] [ y_i ]   [ t_y ]

where s is the scaling factor, θ is the rotation factor, (t_x, t_y) is the displacement factor, x_i and y_i are the X-axis and Y-axis coordinates of point i, and t_x and t_y are the displacement changes on the X axis and Y axis respectively. The transformation minimizes the sum of the squared distances between the corresponding feature points of the two images, thereby aligning the target objects.
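A minimal sketch of the similarity alignment of equation (2), assuming the contours are given as (n, 2) NumPy arrays of feature-point coordinates, could look as follows; it is an illustration of the technique, not the patent's own implementation:

```python
import numpy as np

def align_to_reference(x: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Find the scale/rotation/translation mapping contour x onto ref in the
    least-squares sense (Procrustes analysis) and return the transformed contour."""
    xm, rm = x.mean(axis=0), ref.mean(axis=0)
    xc, rc = x - xm, ref - rm
    # Optimal rotation from the 2x2 cross-covariance of the centred point sets.
    u, sv, vt = np.linalg.svd(rc.T @ xc)
    rot = u @ vt
    if np.linalg.det(rot) < 0:           # keep a proper rotation (no reflection)
        u[:, -1] *= -1
        rot = u @ vt
    # Optimal scale for that rotation.
    scale = np.trace(rot @ xc.T @ rc) / (xc ** 2).sum()
    return scale * (xc @ rot.T) + rm     # translate onto the reference centroid

# Toy example: a square contour scaled, rotated and shifted away from the reference.
ref = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
theta = np.deg2rad(30.0)
r = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
x = 2.0 * (ref @ r.T) + np.array([3.0, -1.0])
print(np.round(align_to_reference(x, ref), 3))   # ~= ref up to numerical error
```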
After alignment, the contours of the target object in the images tend to be normalized in shape and position, and principal component analysis is then performed on the samples. The model of the samples can be expressed by the following equation (3):

X = X̄ + P·b    (3)

where X̄ is the average profile of the samples, P is the matrix formed by the eigenvectors corresponding to the first t eigenvalues obtained by principal component analysis, and b is the parameter of possible variation about the average profile,

b = P^T · (X − X̄).

By varying the parameter b, a new model instance can be obtained.
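A corresponding sketch of the point-distribution model of equation (3), again assuming NumPy arrays of flattened, already aligned contours, could be:

```python
import numpy as np

def build_shape_model(samples: np.ndarray, t: int):
    """samples: (N, 2n) array of flattened aligned contours; keep the first t modes."""
    mean_shape = samples.mean(axis=0)
    cov = np.cov(samples - mean_shape, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:t]      # t largest eigenvalues
    p = eigvecs[:, order]                      # (2n, t) matrix P of equation (3)
    return mean_shape, p

def shape_from_params(mean_shape, p, b):
    return mean_shape + p @ b                  # X = X̄ + P·b, a new model instance

def params_from_shape(mean_shape, p, x):
    return p.T @ (x - mean_shape)              # b = P^T (X − X̄)

# Toy example: 20 noisy square contours, keeping 2 modes of variation.
rng = np.random.default_rng(0)
base = np.array([0, 0, 1, 0, 1, 1, 0, 1], dtype=float)
samples = base + 0.05 * rng.standard_normal((20, 8))
mean_shape, p = build_shape_model(samples, t=2)
b = params_from_shape(mean_shape, p, samples[0])
# Residual of reconstructing the first sample from only 2 modes.
print(np.round(shape_from_params(mean_shape, p, b) - samples[0], 3))
```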
The existing model X is placed in the image to be calculated and iterated using the gray-level characteristics of the contour; in each iteration the parameter b is adjusted, changing the position and shape of the model and generating a new model instance Y, until the model matches the contour of the test image. The distance information associated with the matched model is then taken as the distance between the user in the current image and the TV device.
In addition, the audio collector of the mobile terminal can collect the user's voice or instructions in the environment, and the distance between the user and the TV device is determined from the direction and magnitude of the voice and instructions. Specifically, the audio collector is a microphone array: when the user speaks, the direction and distance between the user and the TV device can be located clearly through the microphone array. Locating the direction and distance of the user relative to the TV device with a microphone array is mainly achieved by calculating the time-delay difference between microphones; a sound source localization algorithm based on the time-delay difference consists of two steps, time-delay difference calculation and sound source localization, and its precision depends mainly on the time-delay difference calculation. In particular, the signal received by a single microphone of the microphone array can be expressed as the following equation (4):
x_i(t) = a_i · s(t − τ_i) + n_i(t)    (4)

where s(t) is the sound source signal, a_i is the attenuation coefficient of the signal received by the i-th microphone relative to the sound source signal, τ_i is the delay time of the signal received by the i-th microphone relative to the sound source signal, and n_i(t) is the other noise on the i-th microphone.
The correlation function between any two associated microphones can then be expressed as the following equation (5):
R_ij(τ) = E[ x_i(t) · x_j(t − τ) ]    (5)

where τ is the delay of the signal received by the i-th microphone relative to the signal received by the j-th microphone, and x_j(t) is the signal received by the j-th microphone.
In order to reduce the calculation period and improve the calculation efficiency, discrete fourier transform is usually performed on the signals, and then correlation functions of the signals received by the two microphones are performed in the frequency domain. The correlation function of the associated arbitrary two microphone received signals in the frequency domain can be expressed as the following equation (6):
R_ij(τ) = ∫ X_i(f) · X_j*(f) · e^(j2πfτ) df    (6)

where τ is the time delay of the microphone's received signal relative to the sound source signal, f is the frequency of the sound source signal, X_i(f) denotes the Fourier transform of the signal x_i(t) received by the i-th microphone, and X_j*(f) denotes the conjugate of the Fourier transform of the signal x_j(t) received by the j-th microphone.
Further, in order to suppress the adverse effects caused by noise, reflection, etc., the frequency domain weighting is usually performed before the inverse fourier transform, and then the cross-correlation function of the signals can be expressed as the following formula (7):
R_ij(τ) = ∫ ψ(f) · X_i(f) · X_j*(f) · e^(j2πfτ) df    (7)

The weighting function ψ(f) in the above equation can be given empirically; preferably ψ(f) = 1 / |X_i(f) · X_j*(f)|, which makes the estimate more resistant to noise, reverberation, reflections and the like.
With the above algorithm, even without the weighting the calculation is simple, and the tracking and positioning performance is far superior to that of traditional positioning methods. By combining the microphone-array positioning with the video-information positioning, the distance between the user and the TV device can be obtained accurately and with high calculation precision.
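The delay-difference estimate behind equations (4) to (7) can be sketched as a PHAT-weighted generalized cross-correlation between two microphone signals; the sketch below assumes plain NumPy arrays sampled at a rate fs and is only an illustration of the technique:

```python
import numpy as np

def gcc_phat_delay(sig_i: np.ndarray, sig_j: np.ndarray, fs: float) -> float:
    """Return the estimated delay (seconds) of sig_i relative to sig_j."""
    n = len(sig_i) + len(sig_j)
    xi = np.fft.rfft(sig_i, n=n)
    xj = np.fft.rfft(sig_j, n=n)
    cross = xi * np.conj(xj)                 # X_i(f) X_j*(f)
    cross /= np.abs(cross) + 1e-12           # PHAT weighting 1 / |X_i X_j*|
    cc = np.fft.irfft(cross, n=n)            # cross-correlation over lags
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Toy check: the same noisy signal arriving 25 samples later at one microphone.
fs = 16000.0
rng = np.random.default_rng(1)
src = rng.standard_normal(1024)
mic_j = src + 0.05 * rng.standard_normal(1024)
mic_i = np.roll(src, 25) + 0.05 * rng.standard_normal(1024)
print(gcc_phat_delay(mic_i, mic_j, fs) * fs)   # ~ 25 samples
```

The estimated delay differences from several microphone pairs would then be fed into the sound source localization step to obtain the direction and distance of the user.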
Further, in order to verify the distance from the user to the TV device, the signal strength of the remote control used by the user is monitored as a secondary verification. The specific steps are as follows: the TV device or the mobile terminal receives a remote control instruction sent by a remote controller or a user terminal; the TV device or the mobile terminal parses the remote control instruction to obtain the average signal strength of the communication signal carried in the instruction; and the distance between the TV device or mobile terminal and the remote controller or user terminal is calculated from the average signal strength.
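The patent does not give a formula for converting the average signal strength into a distance; one common possibility, shown purely as an assumption, is a log-distance path-loss model:

```python
# Assumed log-distance path-loss model; the calibration constants are illustrative.
def distance_from_rssi(rssi_dbm: float, rssi_at_1m_dbm: float = -45.0,
                       path_loss_exponent: float = 2.2) -> float:
    """Estimate distance in metres from an average received signal strength."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: a -60 dBm average reading corresponds to roughly 4.8 m
# with the assumed calibration constants above.
print(round(distance_from_rssi(-60.0), 1))
```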
In step S4, the TV device adjusts the display parameters according to the environmental conditions. The mobile terminal combines the data sensed by the sensors with the data gathered by the collectors to obtain TV device settings suitable for the user, and interacts with the TV device to apply those settings.
In step S4, the TV device adjusting the display parameters according to the environmental situation specifically includes:
s410, analyzing each environmental data;
specifically, each environment data includes ambient light data, user-to-screen distance data, and user viewing duration.
Step S420, calculating corresponding optimal display parameters for each environmental data;
step S430, carrying out optimal value selection on the plurality of optimal display parameters;
specifically, the value of the optimum value includes one of a variance value of the optimal display parameters and a lowest value of the optimal display parameters.
Step S440, the TV device is adjusted using the optimal value of the display parameter.
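A sketch of steps S410 to S440 is shown below; the per-factor rules are illustrative assumptions, and the combination step uses the "lowest value" option mentioned above:

```python
# Illustrative sketch only: the per-factor rules are assumptions; the
# combination takes the lowest of the per-factor optimal brightnesses.

def brightness_from_ambient(ambient_ratio: float) -> float:
    """ambient_ratio: measured ambient brightness as a fraction of the threshold."""
    return max(0.3, min(1.0, ambient_ratio))

def brightness_from_distance(distance_m: float) -> float:
    return 1.0 if distance_m >= 3.0 else 0.5     # dimmer when the child sits close

def brightness_from_watch_time(minutes: float) -> float:
    return 1.0 if minutes < 20 else 0.7          # dim after prolonged viewing

def optimal_display_brightness(ambient_ratio, distance_m, minutes) -> float:
    candidates = [
        brightness_from_ambient(ambient_ratio),
        brightness_from_distance(distance_m),
        brightness_from_watch_time(minutes),
    ]
    return min(candidates)                       # lowest of the per-factor optima

print(optimal_display_brightness(0.8, 2.0, 25))  # -> 0.5
```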
Preferably, the whole area in front of the TV device may also be divided into a plurality of blocks by the mobile terminal. As an example in this embodiment, the mobile terminal divides the area in front of the TV device into 20 blocks arranged 5 × 4: the blocks closest to the TV device are blocks 1–5, the next row is blocks 6–10, then blocks 11–15, and the blocks farthest from the TV device are blocks 16–20. When the user is in blocks 1–5, the display setting used is 50% brightness with the eye protection mode triggered every 20 minutes; a person skilled in the art can routinely set the relationship between the display settings and the blocks according to practical experience.
As another example in this embodiment, when the ambient brightness is 100% of the threshold value, the display is set to 100% brightness with the eye protection mode triggered every 20 minutes; when the ambient brightness is 50% of the threshold value, the display is set to 50% brightness with the eye protection mode triggered every 40 minutes. A person skilled in the art can routinely set the relationship between the display settings and the ambient brightness according to practical experience.
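The zone and threshold examples above, together with the adjustment interval described for the light sensor, could be sketched as follows; the block boundaries and percentages follow the illustrative values from the text, and the row depth and dead-band width are assumptions:

```python
# Illustrative sketch: row_depth_m and dead_band are assumed values.

def block_index(distance_m: float, row_depth_m: float = 1.0) -> int:
    """Map a viewer distance to one of the 5x4 = 20 blocks (block 1 = nearest row)."""
    row = min(3, int(distance_m // row_depth_m))   # rows 0..3 away from the TV
    return row * 5 + 1                             # first block of that row

def settings_for_ambient(ambient_ratio: float) -> tuple[float, int]:
    """Return (brightness fraction, eye-protection interval in minutes)."""
    if ambient_ratio >= 1.0:
        return 1.0, 20                             # 100% brightness, rest every 20 min
    return 0.5, 40                                 # 50% brightness, rest every 40 min

def maybe_adjust(last_sent: float, measured: float, dead_band: float = 0.1):
    """Only emit a new brightness when the change exceeds the adjustment interval."""
    return measured if abs(measured - last_sent) > dead_band else None

print(block_index(0.5), settings_for_ambient(0.5), maybe_adjust(0.5, 0.52))
```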
With the above technical solution, since detailed eye protection settings are configured in advance, the display of the TV end can be adjusted to match the conditions of use; this overcomes the technical problem that child users, who cannot consciously adjust the display on their own, easily suffer eye damage. At the same time, the display can be corrected autonomously for different environments according to environmental thresholds, so the display effect is better and healthier for the eyes.
Example 2
As a second embodiment of the present invention, another child eye protection method for a TV device is proposed. On the basis of the first embodiment, when the TV device monitors the environmental conditions in step S3, a timer in the TV device is started, and the running time of the TV device and the video/audio playing time are recorded and determined by the timer.
By monitoring the running time of the TV device, further eye protection mode setting can be performed in combination with the type of the user, for example, when the user type is children, the eye protection mode is forcibly executed after the single running time of the TV device reaches 20 minutes.
Meanwhile, when the video or audio of the eye protection mode is played, the playing time of the video or audio is recorded, and the eye protection mode is automatically closed after the playing time threshold is reached.
Further, after step S4 is executed and the TV device has adjusted the display parameters according to the environmental conditions, if the timer exceeds the threshold, the following steps are executed:
step S5, starting a forced eye protection mode;
when the step S5 is executed to perform the forced eye protection mode, the TV device further sends an early warning signal to the mobile terminal, where the early warning signal includes a sound early warning and a pop-up window early warning, and the sound early warning is sent out through the mobile terminal or directly from the TV device to remind the user of paying attention. And the popup window early warning starts countdown in an eye protection mode, and the popup window enters the eye protection mode after countdown playing is finished.
Further, when the popup window early warning is started, the background playing of the video information is suspended.
And step S6, playing preset audio and video.
Further, in order to let the user fully relax the eye muscles, when step S6 is performed the screen of the TV device may be turned off or blacked out while only the preset audio is played, forcing the user to look away from the TV device.
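A sketch of the timer-driven forced eye protection flow of this embodiment is given below; the callback names are assumptions, and a real device would hook them to its own warning, display and audio interfaces:

```python
import time

def run_viewing_session(threshold_s: float, rest_s: float,
                        warn, start_eye_protection, stop_eye_protection,
                        poll_s: float = 1.0, clock=time.monotonic):
    """Warn and force the eye protection mode once viewing exceeds the threshold."""
    start = clock()
    while clock() - start < threshold_s:
        time.sleep(poll_s)                 # normal viewing continues
    warn()                                 # sound / pop-up early warning
    start_eye_protection()                 # screen off, preset audio plays
    time.sleep(rest_s)                     # rest period
    stop_eye_protection()

# Example with a 3-second "viewing" threshold and 2-second rest, for demonstration.
run_viewing_session(
    threshold_s=3, rest_s=2,
    warn=lambda: print("early warning sent to mobile terminal"),
    start_eye_protection=lambda: print("forced eye-protection mode: audio only"),
    stop_eye_protection=lambda: print("eye-protection mode ended"),
)
```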
The forced eye protection mode prevents minors and other users who lack self-control from skipping eye protection during actual use, effectively improving the eye health of this group of users, and the user experience is good.
Example 3
As a third embodiment of the present invention, a TV device with a child eye protection function is proposed, which applies the child eye protection method of the first embodiment or the second embodiment. As shown in fig. 2 and 3, it comprises: a processor 100, a camera 200, a microphone array 300, a memory 400, a communication interface 500 and a serial bus 600, wherein the camera 200, the microphone array 300, the memory 400 and the communication interface 500 are connected with the processor 100 through the serial bus 600;
the camera 200 is used for acquiring parameters such as the position and the image of a user and sending the parameters to the processor for processing;
the memory 400 is used to store user settings and songs preset in the TV device;
wherein, the memory comprises a read-only memory and a random access memory;
the read-only memory is used for storing user setting conditions in the TV device;
the random access memory is used to store songs and other files preset in the TV device.
Specifically, the read-only memory is an electrically erasable programmable read-only memory (EEPROM).
The microphone array 300 is used for acquiring the position parameters of the user and acquiring the voice instructions of the user;
the communication interface 500 is used for communication between the TV device and the mobile terminal or screen;
the processor 100 is used for analyzing the parameters collected by the camera 200 and the microphone array 300 and controlling other elements.
The processor analyzing the parameters of the camera 200 and the microphone array 300 specifically includes the following methods: firstly, the distance between the user and the TV device is calculated by collecting video information through the camera 200, which is specifically as follows:
the mobile terminal firstly needs to collect the user profile in the video information collected by the video collector, secondary modeling is carried out on the user profile information, and after the secondary modeling, the model is compared with the inherent model to obtain the position relation between the user and the TV equipment. In the two-dimensional image collected by the video collector, the outline of the target user can be described in the form of a point set. Points for describing the contour are generally selected at places capable of representing the contour features of the target object, such as corner points, T-shaped joints, and the like, and meanwhile, among the points having the contour features, other intermediate points are selected in an equidistant sampling manner, and together form a feature point set of the target contour, which can be represented as a multi-dimensional vector, as shown in the following formula (1):
X = (x_1, y_1, x_2, y_2, …, x_n, y_n)^T    (1)

where (x_i, y_i) are the coordinates of a single feature point, x_1 being the abscissa of the first feature point and y_1 its ordinate. Extracting the contour feature points of all the images in this way gives a sample set {X_1, X_2, …, X_N}, where each X_i is a multi-dimensional vector of the form of formula (1) and N is the number of samples. Then, because the statistical characteristics of the feature-point coordinates in the sample set need to be obtained when establishing the model, and in order to make the corresponding feature points in different samples comparable, one of the images is selected as a reference image, and the other images are transformed so that the appearance of the target object in them approaches that of the reference image.
The vector corresponding to the reference image is recorded as X̄, and the vector corresponding to any image G to be transformed in the sample set is recorded as X. A similarity transformation T is applied so that the error

E = | T(X) − X̄ |²

is minimized. T can be expressed in matrix form (2) as:

T(x_i, y_i) = [ s·cosθ  −s·sinθ ] [ x_i ] + [ t_x ]    (2)
              [ s·sinθ   s·cosθ ] [ y_i ]   [ t_y ]

where s is the scaling factor, θ is the rotation factor, (t_x, t_y) is the displacement factor, x_i and y_i are the X-axis and Y-axis coordinates of point i, and t_x and t_y are the displacement changes on the X axis and Y axis respectively. The transformation minimizes the sum of the squared distances between the corresponding feature points of the two images, thereby aligning the target objects.
After alignment, the contours of the target object in the images tend to be normalized in shape and position, and principal component analysis is then performed on the samples. The model of the samples can be expressed by the following equation (3):

X = X̄ + P·b    (3)

where X̄ is the average profile of the samples, P is the matrix formed by the eigenvectors corresponding to the first t eigenvalues obtained by principal component analysis, and b is the parameter of possible variation about the average profile,

b = P^T · (X − X̄).

By varying the parameter b, a new model instance can be obtained.
The existing model X is placed in the image to be calculated and iterated using the gray-level characteristics of the contour; in each iteration the parameter b is adjusted, changing the position and shape of the model and generating a new model instance Y, until the model matches the contour of the test image. The distance information associated with the matched model is then taken as the distance between the user in the current image and the TV device.
In addition, the audio collector of the mobile terminal can collect the user's voice or instructions in the environment, and the distance between the user and the TV device is determined from the direction and magnitude of the voice and instructions. Specifically, the audio collector is a microphone array: when the user speaks, the direction and distance between the user and the TV device can be located clearly through the microphone array. Locating the direction and distance of the user relative to the TV device with a microphone array is mainly achieved by calculating the time-delay difference between microphones; a sound source localization algorithm based on the time-delay difference consists of two steps, time-delay difference calculation and sound source localization, and its precision depends mainly on the time-delay difference calculation. In particular, the signal received by a single microphone of the microphone array can be expressed as the following equation (4):

x_i(t) = a_i · s(t − τ_i) + n_i(t)    (4)

where s(t) is the sound source signal, a_i is the attenuation coefficient of the signal received by the i-th microphone relative to the sound source signal, τ_i is the delay time of the signal received by the i-th microphone relative to the sound source signal, and n_i(t) is the other noise on the i-th microphone.
The correlation function between any two associated microphones can then be expressed as the following equation (5):
R_ij(τ) = E[ x_i(t) · x_j(t − τ) ]    (5)

where τ is the delay of the signal received by the i-th microphone relative to the signal received by the j-th microphone, and x_j(t) is the signal received by the j-th microphone.
In order to reduce the calculation period and improve the calculation efficiency, discrete fourier transform is usually performed on the signals, and then correlation functions of the signals received by the two microphones are performed in the frequency domain. The correlation function of the associated arbitrary two microphone received signals in the frequency domain can be expressed as the following equation (6):
R_ij(τ) = ∫ X_i(f) · X_j*(f) · e^(j2πfτ) df    (6)

where τ is the time delay of the microphone's received signal relative to the sound source signal, f is the frequency of the sound source signal, X_i(f) denotes the Fourier transform of the signal x_i(t) received by the i-th microphone, and X_j*(f) denotes the conjugate of the Fourier transform of the signal x_j(t) received by the j-th microphone.
Further, in order to suppress the adverse effects caused by noise, reflection, etc., the frequency domain weighting is usually performed before the inverse fourier transform, and then the cross-correlation function of the signals can be expressed as the following formula (7):
R_ij(τ) = ∫ ψ(f) · X_i(f) · X_j*(f) · e^(j2πfτ) df    (7)

The weighting function ψ(f) in the above equation can be given empirically; preferably ψ(f) = 1 / |X_i(f) · X_j*(f)|, which makes the estimate more resistant to noise, reverberation, reflections and the like.
With the above algorithm, even without the weighting the calculation is simple, and the tracking and positioning performance is far superior to that of traditional positioning methods. By combining the microphone-array positioning with the video-information positioning, the distance between the user and the TV device can be obtained accurately and with high calculation precision.
Further, in order to verify the distance from the user to the TV device, the signal strength of the remote control used by the user is monitored as a secondary verification. The specific steps are as follows: the TV device or the mobile terminal receives a remote control instruction sent by a remote controller or a user terminal; the TV device or the mobile terminal parses the remote control instruction to obtain the average signal strength of the communication signal carried in the instruction; and the distance between the TV device or mobile terminal and the remote controller or user terminal is calculated from the average signal strength.
The system further comprises a timer 700, wherein the timer 700 is electrically connected with the processor 100 through a serial bus 600;
the timer 700 is used for monitoring the user's usage time and the device operation time and transmitting data to the processor.
Wherein, a plurality of sensors 800 are also included, and the sensors 800 include a light sensor 810 and a motion sensor 820;
the light sensor 810 is used for detecting the light condition of the surrounding environment and converting the light condition of the surrounding environment into a control signal;
the motion sensor 820 is used to detect the motion of the surroundings and convert the motion into a control signal.
The device further comprises a network interface 900, wherein the network interface 900 is connected with the memory 400; the device connects to external equipment through the network interface 900 and can download screen-saver images and music from the external equipment;
the TV device may also establish a connection with the mobile terminal through a network interface.
By adopting the eye protection method of the first or second embodiment, detailed eye protection settings are preset; in use, the display of the TV end can be adjusted to match the conditions of use, which solves the technical problem that a child user who cannot adjust the display on his or her own easily suffers eye damage. At the same time, the display can be corrected automatically for different environments according to environmental thresholds, so the display effect is better and healthier for the eyes.
As a fourth embodiment of the present invention, a child eye protection system is proposed, as shown in fig. 4, which includes a mobile terminal 1000, a TV device terminal 1010 and a TV terminal 1020, wherein the TV device terminal 1010 is connected to the mobile terminal 1000 and the TV terminal 1020 respectively,
the mobile terminal 1000 is connected to the TV device terminal 1020 and is used for setting user settings and eye protection settings of the TV device terminal 1020;
the TV end 1020 is connected to the TV device end 1010 and is configured to display content in the TV device end 1010;
the TV device side 1010 is used to detect the surrounding environment and change the display setting in the TV side according to the surrounding environment.
With the above technical solution, since detailed eye protection settings are configured in advance, the display of the TV end can be adjusted to match the conditions of use; this overcomes the technical problem that child users, who cannot consciously adjust the display on their own, easily suffer eye damage. At the same time, the display can be corrected autonomously for different environments according to environmental thresholds, so the display effect is better and healthier for the eyes.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, and such variants still fall within the scope of protection of the invention.

Claims (9)

1. A child eye protection method for a TV device, the method comprising:
step S1, setting parameters in advance for the TV equipment;
in step S1, the method for setting parameters of the TV device through the mobile terminal includes:
step S110, setting fixed value data and active value data in the mobile terminal;
step S120, the mobile terminal interacts with the TV device so that the current activity data is presented on the TV device;
step S130, changing the data of the activity value at the mobile terminal, and selecting an eye protection state, an eye protection duration, a playing mode and playing contents at the mobile terminal by using a controller;
step S140, changing parameters of the TV equipment through interaction between the mobile terminal and the TV equipment, and sending the setting condition to the TV equipment for adjustment and display through secondary interaction between the mobile terminal and the TV equipment;
when eye protection is performed on the TV equipment by using the mobile terminal, automatic setting and self protection in the TV equipment are closed in advance;
step S2, resetting the TV device to initialize parameters;
step S3, the TV device monitors the environment condition to obtain the distance between the user and the TV device;
the method specifically comprises the following steps:
placing the existing model X in an image to be calculated, changing the position and the shape of the model by adjusting the parameter b, and generating a new model instance Y until the model is matched with the outline of the image to be calculated;
wherein the existing model is:

X = X̄ + P·b

where X̄ is the average profile of the samples, P is the matrix formed by the eigenvectors corresponding to the first t eigenvalues obtained by principal component analysis, and b is the parameter of possible variation about the average profile, b = P^T · (X − X̄);

the distance information between the user corresponding to the matched model and the TV device is the distance between the user corresponding to the image to be calculated and the TV device;
step S4, the TV device adjusts the display parameters according to the environment.
2. A method for protecting eyes of children on a TV device according to claim 1, wherein the parameters preset in step S1 include eye protection duration and eye protection prompting music;
wherein the eye protection duration can be preset;
the eye protection prompting music comprises music preset in the TV equipment and music acquired in an external acquisition mode; setting an audio storage space in a memory of the TV device, wherein preset music is stored in the audio storage space;
the modes of the TV equipment for obtaining music through the outside comprise USB reading, online downloading and serial port communication downloading.
3. A method of protecting a child's eyes of a TV device according to claim 1, wherein: and in the step S3, when the TV device monitors the environmental condition, a timer in the TV device is started, and the running time of the TV device and the playing time of the video and audio are recorded and determined by the timer.
4. A child eye protection method for a TV device according to claim 1, wherein the distance from the user to the screen is calculated in step S3 using the following formula:
R_ij(τ) = ∫ ψ(ω) X_i(ω) X_j*(ω) e^(jωτ) dω

wherein τ is the time delay of the signal received by a microphone relative to the sound source signal, ω is the frequency of the sound source signal, X_i(ω) is the Fourier transform of the signal received by the i-th microphone, X_j(ω) is the Fourier transform of the signal received by the j-th microphone, and ψ(ω) is a weighting function.
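As a rough illustration of how the time delay in the formula of claim 4 could be estimated, the Python sketch below computes a generalized cross-correlation with the common PHAT weighting ψ(ω) = 1/|X_i(ω)X_j*(ω)|. The function names, the choice of PHAT weighting and the speed-of-sound constant are assumptions of this sketch; converting the delay into an absolute user-to-screen distance additionally needs the microphone array geometry, which the claim leaves open.

import numpy as np

def gcc_phat_delay(sig_i, sig_j, fs, eps=1e-15):
    # Delay (in seconds) of microphone i's signal relative to microphone j's,
    # estimated with a PHAT-weighted generalized cross-correlation.
    n = len(sig_i) + len(sig_j)                 # zero-pad to avoid circular wrap
    X_i = np.fft.rfft(sig_i, n=n)
    X_j = np.fft.rfft(sig_j, n=n)
    cross = X_i * np.conj(X_j)
    weighted = cross / (np.abs(cross) + eps)    # PHAT weighting function
    cc = np.fft.irfft(weighted, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)

def path_length_difference(tau, speed_of_sound=343.0):
    # The delay corresponds to a difference in path length from the sound source
    # (the user) to the two microphones; triangulating the distance to the screen
    # additionally requires the positions of the microphones.
    return tau * speed_of_sound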
5. A child eye protection method for a TV device according to claim 1, wherein in step S4, the adjusting of the display parameters by the TV device according to the environmental conditions specifically comprises:
step S410, analyzing each item of environmental data;
step S420, calculating corresponding optimal display parameters for each environmental data;
step S430, selecting the optimal values of the optimal display parameters;
step S440, the TV device is adjusted using the optimal value of the display parameter.
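A minimal Python sketch of steps S410 to S440 follows. The mappings from each item of environmental data to a brightness candidate, and the rule of taking the most protective (lowest) candidate as the optimal value, are illustrative assumptions; the claim does not fix either of them.

def brightness_from_ambient_light(lux):
    # Hypothetical mapping: brighter rooms tolerate a brighter picture.
    return max(10.0, min(100.0, 0.08 * lux + 15.0))

def brightness_from_viewing_distance(distance_m):
    # Hypothetical mapping: the closer the child sits, the dimmer the picture.
    return max(10.0, min(100.0, 25.0 * distance_m))

def choose_display_brightness(lux, distance_m):
    # Step S420: compute one candidate per item of environmental data.
    candidates = [
        brightness_from_ambient_light(lux),
        brightness_from_viewing_distance(distance_m),
    ]
    # Steps S430/S440: this sketch selects the most protective (lowest) candidate,
    # which would then be applied to the display.
    return min(candidates)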
6. A TV device with a child eye protection function, comprising a processor, a camera, a microphone array, a memory, a communication interface and a serial bus, wherein the camera, the microphone array, the memory and the communication interface are connected to the processor through the serial bus;
the camera is used for acquiring the position and image parameters of a user and sending the parameters to the processor for processing;
the memory is used for storing user setting conditions and songs preset in the TV equipment;
the microphone array is used for acquiring position parameters of a user and simultaneously acquiring a voice instruction of the user;
the communication interface is used for communication between the TV equipment and a mobile terminal or a screen;
the communication interface receives the fixed value data and active value data set in the mobile terminal; the mobile terminal interacts with the TV device so that the current active value data take effect in the TV device; the active value data changed on the mobile terminal are received, and a controller is used to select the eye protection state, the eye protection duration, the playing mode and the playing content; the parameters of the TV device are changed through interaction between the mobile terminal and the TV device, and the settings are sent to the TV device for adjustment and display through a secondary interaction between the mobile terminal and the TV device; when eye protection is performed on the TV device by using the mobile terminal, the automatic setting and self-protection functions of the TV device are disabled in advance;
the processor is used for analyzing parameters acquired by the camera and the microphone array to obtain the distance between the user and the TV equipment;
the method specifically comprises the following steps:
placing the existing model X in an image to be calculated, changing the position and the shape of the model by adjusting the parameter b, and generating a new model instance Y until the model is matched with the outline of the image to be calculated;
wherein the existing model is:

X = X̄ + P·b

where X̄ is the average profile of the sample, P is a matrix formed by the eigenvectors corresponding to the first t eigenvalues obtained by principal component analysis, and b is a parameter that varies the shape about the average profile;
the distance information associated with the user corresponding to the matched model is taken as the distance between the user in the image to be calculated and the TV device;
the processor also controls the other elements.
7. The TV device with a child eye protection function as claimed in claim 6, further comprising a timer, the timer being electrically connected to the processor through the serial bus;
the timer is used for monitoring the use time of the user and the operation time of the equipment and sending data to the processor.
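To show how the processor of claim 6 and the timer of claim 7 might cooperate, here is a minimal Python control-loop sketch. The class name, the thresholds and the use of a monotonic clock are assumptions of this sketch, not features recited in the claims.

import time

class EyeProtectionController:
    # Combines the measured viewing distance with the timer readings to decide
    # when the TV device should switch into its eye protection state.

    def __init__(self, min_distance_m=2.0, max_session_s=30 * 60):
        self.min_distance_m = min_distance_m   # closest allowed viewing distance
        self.max_session_s = max_session_s     # allowed continuous playing time
        self.session_start = time.monotonic()  # plays the role of the claim-7 timer

    def should_protect(self, distance_m):
        too_close = distance_m < self.min_distance_m
        too_long = time.monotonic() - self.session_start > self.max_session_s
        return too_close or too_long

    def reset_session(self):
        # Called after a rest break, e.g. once the eye protection prompt ends.
        self.session_start = time.monotonic()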
8. The TV device with a child eye protection function as claimed in claim 6, further comprising a plurality of sensors, the sensors comprising a light sensor and a motion sensor;
the light sensor is used for detecting the light condition of the surrounding environment and converting the light condition of the surrounding environment into a control signal;
the motion sensor is used for detecting the motion situation around and converting the motion situation around into a control signal.
9. A child eye protection system, characterized by comprising a mobile terminal, a TV terminal and the TV device of any one of claims 6 to 8, the TV device being connected to the mobile terminal and the TV terminal respectively,
the mobile terminal is connected to the TV device and is used for configuring the user settings and eye protection settings of the TV device;
the TV terminal is connected to the TV device and is used for displaying the content from the TV device;
the TV device is used for detecting the surrounding environment and changing the display settings of the TV terminal according to the surrounding environment.
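As an illustration of the setting exchange between the mobile terminal and the TV device (claim 9, and step S1 of claim 1), the Python sketch below sends a payload split into fixed value data and active value data and reads back the echoed configuration as the secondary interaction. Every field name and the JSON-over-socket transport are assumptions of this sketch, not part of the patent.

import json
import socket

# Hypothetical payload; the split into fixed and active value data mirrors the
# claims, but every field name here is illustrative.
settings = {
    "fixed_values": {"device_id": "tv-livingroom", "child_mode": True},
    "active_values": {
        "eye_protection_state": "on",
        "eye_protection_duration_min": 30,
        "play_mode": "cartoon",
        "play_content": "approved_list",
    },
}

def push_settings(host, port, payload):
    with socket.create_connection((host, port)) as sock:
        # First interaction: push the changed active values to the TV device.
        sock.sendall(json.dumps(payload).encode("utf-8"))
        # Secondary interaction: the TV device echoes the configuration it will
        # actually apply, so the mobile terminal can confirm and display it.
        return json.loads(sock.recv(65536).decode("utf-8"))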
CN202010072844.2A 2020-01-22 2020-01-22 Child eye protection method for TV device, TV device with child eye protection function and system Active CN110933501B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010072844.2A CN110933501B (en) 2020-01-22 2020-01-22 Child eye protection method for TV device, TV device with child eye protection function and system

Publications (2)

Publication Number Publication Date
CN110933501A CN110933501A (en) 2020-03-27
CN110933501B (en) 2020-05-22

Family

ID=69854403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010072844.2A Active CN110933501B (en) 2020-01-22 2020-01-22 Child eye protection method for TV device, TV device with child eye protection function and system

Country Status (1)

Country Link
CN (1) CN110933501B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709533A (en) * 2021-08-20 2021-11-26 深圳市酷开网络科技股份有限公司 Eye protection processing method and device based on television, intelligent terminal and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103780934A (en) * 2012-10-23 2014-05-07 华为技术有限公司 Method of controlling television content by mobile terminal and related device
WO2014088971A1 (en) * 2012-12-06 2014-06-12 Microsoft Corporation Multi-touch interactions on eyewear
CN104052871A (en) * 2014-05-27 2014-09-17 上海电力学院 Eye protecting device and method for mobile terminal
CN105242894A (en) * 2015-09-24 2016-01-13 徐向霞 Smart device display method and system
CN106331809A (en) * 2016-08-31 2017-01-11 北京酷云互动科技有限公司 Television control method and television control system
CN107122150A (en) * 2017-04-19 2017-09-01 北京小米移动软件有限公司 Display control method and device, electronic equipment, computer-readable recording medium
CN207020811U (en) * 2017-07-25 2018-02-16 北华航天工业学院 Intelligent child eye protection system
CN108156495A (en) * 2016-12-06 2018-06-12 宋杰 Smart television eye protection system and eye protection method
CN108156516A (en) * 2016-12-06 2018-06-12 宋杰 Eye protection system and method based on interaction between a smartphone and a smart television
CN108490797A (en) * 2018-03-20 2018-09-04 北京百度网讯科技有限公司 Search result display method and device for a smart device
CN108810623A (en) * 2017-05-03 2018-11-13 深圳市创行智能科技有限公司 Intelligent control method and control system for a display terminal
CN208780947U (en) * 2017-11-30 2019-04-23 苏州腾茂电子科技有限公司 Eye-protecting LCD television for eye health

Similar Documents

Publication Publication Date Title
CN109361865B (en) Shooting method and terminal
CN103369274A (en) Intelligent television regulating system and television regulating method thereof
CN109889901A (en) Playback control method, device, equipment and storage medium for a playback terminal
TWI729983B (en) Electronic device, system and method for adjusting display device
CN105187712B (en) Image pickup method applied to mobile terminal
CN103414952A (en) Display apparatus, control apparatus, television receiver, method of controlling display apparatus, program, and recording medium
CN104656257A (en) Information processing method and electronic equipment
CN104780466A (en) Method and device for adjusting television display brightness
CN112666705A (en) Eye movement tracking device and eye movement tracking method
CN111442464B (en) Air conditioner and control method thereof
CN111667798A (en) Screen adjusting method and device
CN105657500A (en) Video playing control method and device
CN110933501B (en) Child eye protection method for TV device, TV device with child eye protection function and system
CN105208443A (en) Method, device and system for achieving television volume adjustment
CN105007415B (en) Image preview method and apparatus
CN117032612A (en) Interactive teaching method, device, terminal and medium based on high beam imaging learning machine
CN108876731A (en) Image processing method and device
CN112333541B (en) Method, device and equipment for controlling startup and shutdown of display terminal and readable storage medium
CN103686009A (en) Method and device for intelligent perception on television
CN112133261A (en) Brightness adjusting method, device and system of display equipment
CN106231079A (en) Automatic brightness adjustment method for video playback and mobile terminal
CN110708600A (en) Method and apparatus for identifying valid viewers of a television
CN115145525A (en) Screen brightness adjustment model training method and device, storage medium and electronic equipment
CN113038257B (en) Volume adjusting method and device, smart television and computer readable storage medium
CN104683867A (en) Method and device for reconfiguring video playing parameter and video playing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant