Television monitoring platform
This application is a divisional application of the patent application with application number 2017111424307, filed on November 17, 2017, and entitled "Intelligent television monitoring platform".
Technical Field
The invention relates to the field of televisions, in particular to an intelligent television monitoring platform.
Background
The TV signal system comprises three parts: the common signal channel, the sound channel and the final video stage circuit. Their main functions are to amplify and process the high-frequency signal (comprising the image signal and the sound signal) received by the antenna, and finally to reproduce the image on the screen and the sound in the loudspeaker. The high-frequency tuner consists of three parts: a high-frequency amplifier, a mixer and a local oscillator.
The high-frequency tuner selects and amplifies the high-frequency television program signals received by the antenna; frequency mixing then yields the 38 MHz picture intermediate-frequency and 31.5 MHz sound intermediate-frequency (first IF) signals, and the surface acoustic wave filter shapes the amplitude-frequency characteristic of the picture IF amplification. The pre-amplification stage amplifies the signal (about 20 dB of gain) to compensate for the loss that the surface acoustic wave filter introduces, and the filter provides impedance matching between the tuner and the picture IF amplifier. The AGC (automatic gain control) circuit controls the gain of the IF amplifier and the tuner circuit so that the voltage amplitude of the video signal output by the detector remains basically stable; the ANC (automatic noise canceling) circuit reduces the influence of noise signals from outside the television set.
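The stabilizing action of the AGC circuit described above amounts to a simple feedback loop. The circuit itself is analog, so the following numerical sketch is only illustrative; the target level and loop rate are assumptions, not values from the embodiment:

```python
def agc_step(gain: float, detected_amplitude: float,
             target: float = 1.0, rate: float = 0.1) -> float:
    """One step of an automatic gain control loop: nudge the tuner/IF
    gain so the detected video amplitude stays basically stable around
    the target level (target and loop rate are illustrative)."""
    error = target - detected_amplitude
    return max(0.0, gain * (1.0 + rate * error))
```

Iterating this step on a constant input signal drives the gain toward the value at which the detected amplitude equals the target, which is the "basically stable" behavior the circuit provides.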
In the prior art, a television focuses only on its own structural design and signal processing: an effective mechanism for detecting the current state of the user watching the television is lacking, and the anti-addiction system is limited to restrictions based on viewing time, a design idea that is too simple.
Disclosure of Invention
In order to solve the above problems, the invention provides an intelligent television monitoring platform that modifies the existing structure of a television: a field shooting device is arranged on the outer frame of the television to acquire viewing-environment image data facing the viewer, so as to obtain and output a viewing-environment image, on which various targeted image-processing and adaptive deep-neural-network image-recognition operations are performed, thereby accurately determining the current state of the viewer.
According to an aspect of the present invention, an intelligent television monitoring platform is provided, comprising:
the field shooting device, arranged on the outer frame of the television, for acquiring viewing-environment image data facing the audience, so as to obtain and output a viewing-environment image;
the brightness measuring device, arranged on the outer frame of the television near the field shooting device, for detecting in real time the light brightness of the environment in which the field shooting device is located, so as to obtain and output a real-time light brightness;
the illumination light source, arranged on the outer frame of the television near the field shooting device and connected with the brightness measuring device, for receiving the real-time light brightness and providing auxiliary illumination light for the viewing-environment image data acquisition of the field shooting device when the real-time light brightness exceeds the limit;
the scene detection device, connected with the field shooting device and located on an integrated circuit board of the television, for receiving the viewing-environment image, acquiring the R-channel, G-channel and B-channel pixel values of each pixel point in the viewing-environment image, determining the gradient of the R-channel pixel value of each pixel point in each direction as the R-channel gradient, the gradient of the G-channel pixel value of each pixel point in each direction as the G-channel gradient, and the gradient of the B-channel pixel value of each pixel point in each direction as the B-channel gradient, and determining the scene complexity corresponding to the viewing-environment image based on the R-channel, G-channel and B-channel gradients of each pixel point;
the recognition decision device, connected with the scene detection device, for selecting, when the received scene complexity is greater than or equal to a preset complexity threshold, a number of training images corresponding to the scene complexity as the preset training number, wherein the higher the scene complexity, the larger the number of training images, and for selecting a fixed number of training images as the preset training number when the received scene complexity is smaller than the preset complexity threshold;
the training image acquisition device, connected with the recognition decision device, for selecting, for each type of scene, the preset training number of images as training images, and converting the training images of all types of scenes into the YUV color space to obtain a plurality of training color images;
the image preprocessing device, connected with the training image acquisition device, for receiving the training color images and normalizing each of them to obtain a plurality of standard training images of fixed size;
the feature extraction device, connected with the scene detection device and the image preprocessing device respectively, for determining the input-quantity type of the selected model according to the scene complexity, and performing feature extraction on each standard training image according to the selected input-quantity type to obtain the training feature quantity of the standard training image conforming to the selected input-quantity type, wherein the higher the scene complexity, the larger the data processing quantity corresponding to the input-quantity type of the selected model;
the model training device, connected with the feature extraction device, for receiving the training feature quantity of each standard training image and feeding each training feature quantity into the model to complete the training of the model parameters, wherein the model comprises an input layer, a hidden layer and an output layer, and the output quantity of the output layer of the model is an eyelid image;
and the model execution device, connected with the feature extraction device and the scene detection device respectively, for receiving the viewing-environment image, performing YUV color space conversion and normalization on it in sequence, extracting features according to the selected input-quantity type to obtain the recognition feature quantity of the viewing-environment image conforming to the selected input-quantity type, taking this recognition feature quantity as the input of the input layer of the trained model to obtain the eyelid image of the viewer, and determining the eyelid droop amplitude of the viewer based on the position and occupation proportion of the eyelid image within the viewing-environment image and on the size of the eyelid image.
The invention has at least the following three important points:
(1) the scene complexity of the image is determined from the R-channel, G-channel and B-channel gradients of each pixel point, improving the measurement precision of the scene complexity;
(2) a scene-complexity-based training scheme for the neural network is established, ensuring the validity of each parameter of the neural network;
(3) the hardware structure of the existing television is improved, enriching the functions of the television.
Drawings
Embodiments of the invention will now be described with reference to the accompanying drawings, in which:
fig. 1 is a schematic structural diagram of a field shooting device of an intelligent television monitoring platform according to an embodiment of the present invention.
Fig. 2 is a block diagram illustrating the structure of an intelligent television monitoring platform according to an embodiment of the present invention.
Reference numerals: 1, camera; 2, long-focus lens; 3, focusing transmission unit; 4, lens converter; 5, focusing rotating motor; 6, motor driving unit; 7, calculation processing unit; 21, focusing ring.
Detailed Description
The following describes embodiments of the intelligent television monitoring platform according to the present invention in detail with reference to the accompanying drawings.
At present, the intelligentization of the television is limited to the upgrading of its own structure, and a mechanism for detecting the state of the audience facing the screen is lacking. In order to overcome these defects, the invention builds an intelligent television monitoring platform; the specific implementation scheme is as follows.
Fig. 1 is a schematic structural diagram of a field shooting device of an intelligent television monitoring platform according to an embodiment of the present invention.
The field shooting device comprises the following parts: a camera 1, a long-focus lens 2, a focusing transmission unit 3, a lens converter 4, a focusing rotating motor 5, a motor driving unit 6 and a calculation processing unit 7. The camera 1 and the long-focus lens 2 are connected through the lens converter 4; the focusing transmission unit 3 connects the focusing ring 21 on the lens 2 with the focusing rotating motor 5; the focusing rotating motor 5 is electrically connected with the motor driving unit 6; the calculation processing unit 7 is in signal connection with the motor driving unit 6 and can control the rotation of the focusing rotating motor 5 through the motor driving unit 6; the calculation processing unit 7 is also connected to the camera 1 and processes the images from the camera 1.
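The cooperation of the calculation processing unit 7, the motor driving unit 6 and the camera 1 amounts to a contrast-based autofocus loop. A minimal sketch follows; the variance-of-Laplacian sharpness metric and the `capture`/`move_to` interfaces are illustrative assumptions, not part of the embodiment:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a Laplacian approximation: higher means sharper focus."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return float(lap.var())

def autofocus(capture, move_to, positions):
    """Step the focusing motor through candidate focus-ring positions,
    grab a frame at each, and settle on the position with maximal
    sharpness.  `capture()` and `move_to(pos)` stand in for the camera 1
    and the motor driving unit 6; both are hypothetical interfaces."""
    best_pos, best_score = positions[0], -1.0
    for pos in positions:
        move_to(pos)                    # motor driving unit turns focusing ring 21
        score = sharpness(capture())    # frame from camera 1
        if score > best_score:
            best_pos, best_score = pos, score
    move_to(best_pos)
    return best_pos
```

In practice the search over positions would be coarse-to-fine rather than exhaustive, but the feedback structure is the same.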
Fig. 2 is a block diagram illustrating the structure of an intelligent television monitoring platform according to an embodiment of the present invention, where the platform includes:
the field shooting device, arranged on the outer frame of the television, for acquiring viewing-environment image data facing the audience, so as to obtain and output a viewing-environment image;
and the brightness measuring device, arranged on the outer frame of the television near the field shooting device, for detecting in real time the light brightness of the environment in which the field shooting device is located, so as to obtain and output a real-time light brightness.
Next, the detailed structure of the intelligent television monitoring platform of the present invention will be further described.
The intelligent television monitoring platform can further comprise:
and the illumination light source is arranged on the outer frame of the television and near the field shooting equipment, is connected with the brightness measuring equipment, is used for receiving the real-time light brightness, and provides auxiliary illumination light for the image data acquisition of the watching environment of the field shooting equipment when the real-time light brightness exceeds the limit.
The intelligent television monitoring platform can further comprise:
the scene detection device, connected with the field shooting device and located on an integrated circuit board of the television, for receiving the viewing-environment image, acquiring the R-channel, G-channel and B-channel pixel values of each pixel point in the viewing-environment image, determining the gradient of the R-channel pixel value of each pixel point in each direction as the R-channel gradient, the gradient of the G-channel pixel value of each pixel point in each direction as the G-channel gradient, and the gradient of the B-channel pixel value of each pixel point in each direction as the B-channel gradient, and determining the scene complexity corresponding to the viewing-environment image based on the R-channel, G-channel and B-channel gradients of each pixel point.
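A minimal sketch of this complexity measure follows. The embodiment fixes neither the gradient directions nor the rule for combining the per-channel gradients into a single scene complexity, so horizontal and vertical gradients and a mean of gradient magnitudes are assumed here:

```python
import numpy as np

def scene_complexity(image: np.ndarray) -> float:
    """Estimate the scene complexity of an H x W x 3 RGB
    viewing-environment image.  For each of the R, G and B channels,
    gradients are taken in each direction (assumed: horizontal and
    vertical); the per-pixel gradient magnitudes of the three channels
    are then averaged (the aggregation rule is an assumption)."""
    image = image.astype(np.float64)
    total = 0.0
    for c in range(3):                            # R, G, B channel pixel values
        gy, gx = np.gradient(image[:, :, c])      # vertical, horizontal gradients
        total += np.mean(np.hypot(gx, gy))        # mean gradient magnitude
    return total / 3.0
```

A uniform image yields a complexity of zero, while a textured viewing environment yields a larger value, which is what drives the adaptive choices made by the downstream devices.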
The intelligent television monitoring platform can further comprise:
the recognition decision device, connected with the scene detection device, for selecting, when the received scene complexity is greater than or equal to a preset complexity threshold, a number of training images corresponding to the scene complexity as the preset training number, wherein the higher the scene complexity, the larger the number of training images, and for selecting a fixed number of training images as the preset training number when the received scene complexity is smaller than the preset complexity threshold;
the training image acquisition device, connected with the recognition decision device, for selecting, for each type of scene, the preset training number of images as training images, and converting the training images of all types of scenes into the YUV color space to obtain a plurality of training color images;
the image preprocessing device, connected with the training image acquisition device, for receiving the training color images and normalizing each of them to obtain a plurality of standard training images of fixed size;
the feature extraction device, connected with the scene detection device and the image preprocessing device respectively, for determining the input-quantity type of the selected model according to the scene complexity, and performing feature extraction on each standard training image according to the selected input-quantity type to obtain the training feature quantity of the standard training image conforming to the selected input-quantity type, wherein the higher the scene complexity, the larger the data processing quantity corresponding to the input-quantity type of the selected model;
the model training device, connected with the feature extraction device, for receiving the training feature quantity of each standard training image and feeding each training feature quantity into the model to complete the training of the model parameters, wherein the model comprises an input layer, a hidden layer and an output layer, and the output quantity of the output layer of the model is an eyelid image;
and the model execution device, connected with the feature extraction device and the scene detection device respectively, for receiving the viewing-environment image, performing YUV color space conversion and normalization on it in sequence, extracting features according to the selected input-quantity type to obtain the recognition feature quantity of the viewing-environment image conforming to the selected input-quantity type, taking this recognition feature quantity as the input of the input layer of the trained model to obtain the eyelid image of the viewer, and determining the eyelid droop amplitude of the viewer based on the position and occupation proportion of the eyelid image within the viewing-environment image and on the size of the eyelid image.
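The training-data preparation carried out by the recognition decision device, the training image acquisition device and the image preprocessing device can be sketched as follows. The complexity threshold, the image counts, the 64 x 64 target size and the nearest-neighbour resize are illustrative assumptions; the RGB-to-YUV matrix is the standard BT.601 conversion:

```python
import numpy as np

def preset_training_number(complexity: float, threshold: float = 10.0,
                           fixed_number: int = 500, per_unit: int = 100) -> int:
    """Recognition decision: above the preset complexity threshold the
    training-image count grows with scene complexity; below it a fixed
    count is used.  All three constants are illustrative."""
    if complexity >= threshold:
        return int(complexity * per_unit)
    return fixed_number

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to the YUV color space (BT.601)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb.astype(np.float64) @ m.T

def normalize(img: np.ndarray, size=(64, 64)) -> np.ndarray:
    """Image preprocessing: nearest-neighbour resize to a fixed size and
    scaling to the [-1, 1] range (the target size is an assumption)."""
    h, w = img.shape[:2]
    out = img[np.arange(size[0]) * h // size[0]][:, np.arange(size[1]) * w // size[1]]
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out
```

A training color image is thus obtained with `rgb_to_yuv`, and `normalize` then yields the standard training image of fixed size fed to the feature extraction device.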
The intelligent television monitoring platform can further comprise:
an SD memory card, connected with the recognition decision device, for pre-storing the preset complexity threshold and for storing the preset training number output by the recognition decision device.
In the intelligent television monitoring platform:
the provision by the illumination light source of auxiliary illumination light for the viewing-environment image data acquisition of the field shooting device when the real-time light brightness exceeds the limit comprises: providing corresponding auxiliary illumination light of different intensities based on the degree to which the real-time light brightness exceeds the limit.
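A minimal sketch of this intensity mapping follows, assuming that "exceeding the limit" means the measured brightness falls below a lower threshold (i.e. the room is too dark for the field shooting device) and that the mapping is linear; neither assumption is fixed by the embodiment:

```python
def auxiliary_light_intensity(brightness: float,
                              limit: float = 50.0,
                              max_intensity: float = 100.0) -> float:
    """Map the real-time light brightness to an auxiliary illumination
    intensity: the further the brightness falls past the limit, the
    stronger the light.  The limit value, the linear mapping and the
    intensity scale are illustrative assumptions."""
    if brightness >= limit:
        return 0.0                              # within limits: lamp off
    overrun = (limit - brightness) / limit      # degree of over-limit in [0, 1]
    return max_intensity * overrun
```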
In the intelligent television monitoring platform:
the model execution device is further connected with the display screen of the television, for transmitting the determined eyelid droop amplitude of the viewer to the display screen of the television for real-time display.
By adopting the intelligent television monitoring platform described above, and in view of the technical problem that the intelligentization of televisions in the prior art is limited, the eyelid image of the viewer is obtained by means of image recognition, the eyelid droop amplitude of the viewer is determined based on the position and occupation proportion of the eyelid image within the image and on the size of the eyelid image, and the determined eyelid droop amplitude is sent to the display screen of the television for real-time display, thereby solving the technical problem.
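The determination of the droop amplitude from the eyelid image's position, occupation proportion and size can be sketched as follows. The model's output is assumed here to be a binary eyelid mask, and the equal-weight combination of the three cues is purely illustrative; the embodiment does not fix the formula:

```python
import numpy as np

def eyelid_droop_amplitude(eyelid_mask: np.ndarray) -> float:
    """Combine the position, occupation proportion and size of the
    eyelid region within the viewing-environment image into a droop
    amplitude in [0, 1].  The mask convention (1 = eyelid pixel) and
    the weighting are assumptions for illustration only."""
    ys = np.nonzero(eyelid_mask)[0]
    if ys.size == 0:
        return 0.0                                   # no eyelid detected
    h = eyelid_mask.shape[0]
    center_y = ys.mean() / h                         # position: lower centre -> more droop
    proportion = ys.size / eyelid_mask.size          # occupation proportion in the image
    box_h = (ys.max() - ys.min() + 1) / h            # vertical size of the eyelid region
    return float(min(1.0, (center_y + box_h + min(1.0, 10.0 * proportion)) / 3.0))
```

A small eyelid region high in the frame thus yields a small amplitude, while a large region low in the frame yields an amplitude approaching 1, which is the quantity sent to the display screen.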
It is to be understood that, while the present invention has been described in conjunction with its preferred embodiments, it is not intended to be limited to those embodiments. It will be apparent to those skilled in the art from this disclosure that many changes, modifications and equivalent substitutions can be made to the embodiments without departing from the scope of the invention. Therefore, any simple modification, equivalent change or alteration made to the above embodiments according to the technical essence of the present invention remains within the scope of protection of the technical solution of the present invention, provided it does not depart from the content of that technical solution.