CN112399235B - Camera shooting effect enhancement method and display device of intelligent television - Google Patents


Info

Publication number: CN112399235B
Application number: CN202010403233.1A
Other versions: CN112399235A (Chinese-language publication)
Inventors: 王大勇, 于文钦, 朱铄
Assignee: Hisense Visual Technology Co Ltd (applicant and current assignee)
Legal status: Active
Prior art keywords: image, display, compensation value, camera, determining

Classifications

    • H04N 21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 23/62 — Control of camera parameters via user interfaces
    • H04N 5/142 — Picture signal circuitry for the video frequency region: edging; contouring
    • H04N 5/57 — Control of contrast or brightness

Abstract

In the embodiments of the present application, frame images with different compensation values in the preview images are analyzed to determine the compensation value of the frame image whose shooting effect is best, and the display is controlled to adjust its brightness using that compensation value, so that the screen brightness of the smart television meets the shooting requirement. This solves the prior-art problem that, because the screen of a smart television is large, light emitted by the screen interferes with the shooting effect of the camera.

Description

Camera shooting effect enhancement method and display device of intelligent television
The present application claims priority to Chinese Patent Application No. 201910762187.1, filed with the Chinese Patent Office on 18 August 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of display technology, and in particular to a method for enhancing the camera shooting effect of a smart television and a display device.
Background
Currently, display devices such as smart televisions receive a great deal of attention from users because they can present audio, video, pictures, and other playback content.
With the development of big data and artificial intelligence, users' functional demands on display devices are increasing. For example, a user may want multiple video-chat windows presented while a program picture is playing; or, in a game scene, real-time pictures of the participants displayed; or, in an education application, to learn the current picture content while interacting with parents or teachers remotely through audio/video in real time.
Accordingly, it is desirable to provide a display device capable of implementing the above functions, so that the user's picture can be presented on the display device.
Disclosure of Invention
In view of the above technical problems, an object of the present application is to provide a method for enhancing the camera shooting effect of a smart television and a display device.
A first aspect of an embodiment of the present application shows a display device, including:
a camera configured to: collect image data;
a display configured to: display a user interface and/or the image data;
a controller configured to: after receiving a camera-on instruction, acquire a plurality of frame images with different compensation values from the preview images collected by the camera; determine an optimal compensation value according to the plurality of frame images; and control the display to adjust its brightness according to the optimal compensation value.
With reference to the first aspect, in a first possible implementation manner of the first aspect,
the controller is configured to determine an optimal compensation value according to a plurality of frame images according to the following steps:
extracting a characteristic image from each frame image;
calculating the edge definition of each characteristic image;
and determining the optimal compensation value according to the edge definitions.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect,
the controller is configured to perform the calculating of the edge sharpness of the feature image according to the steps of:
performing a graying operation on the characteristic image to obtain a gray characteristic image;
and determining the edge definition of the gray characteristic image through Sobel edge detection.
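The two steps above can be sketched in Python/NumPy. This is an illustrative sketch, not the patent's actual implementation: the BT.601 luma weights for graying and the use of the mean Sobel gradient magnitude as the edge-definition score are assumptions.

```python
import numpy as np

def to_gray(rgb):
    """Graying operation: convert an HxWx3 RGB frame to a gray image
    (ITU-R BT.601 luma weights, a common convention assumed here)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def sobel_edge_sharpness(gray):
    """Edge definition of a gray image via Sobel edge detection, scored
    here as the mean gradient magnitude (an assumed metric)."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T  # vertical Sobel kernel
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Correlate the 3x3 kernels with the image using shifted slices.
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return float(np.mean(np.hypot(gx, gy)))
```

A uniformly lit frame scores zero, while a frame containing a sharp edge scores higher, which is what lets the controller rank frames by sharpness.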
With reference to the first possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect,
the controller is configured to determine an optimal compensation value according to the edge definition according to the following steps:
comparing the edge definitions and determining the maximum edge definition;
and determining the compensation value of the frame image corresponding to the maximum edge definition as the optimal compensation value.
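The selection step above amounts to an argmax over (compensation value, edge definition) pairs. A minimal illustrative sketch (tie-breaking behavior is not specified by the source and is left to Python's `max`):

```python
def best_compensation(scored_frames):
    """Pick the compensation value whose frame image has the maximum
    edge definition.

    scored_frames: iterable of (compensation_value, edge_definition) pairs,
    one per frame image taken from the preview.
    """
    value, _ = max(scored_frames, key=lambda pair: pair[1])
    return value
```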
With reference to the first aspect, in a fourth possible implementation manner of the first aspect,
the controller is configured to perform the steps of controlling the display to adjust brightness according to the following steps:
the display is controlled to adjust brightness through the UI of the OSD layer.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, the controller is further configured to:
and after receiving a photographing instruction, obtaining a photographed preview image, and executing the step of obtaining a plurality of frame images with different compensation values in the preview image obtained by the camera.
A second aspect of the embodiments of the present application shows a method for enhancing the camera shooting effect of a smart television, including:
after receiving a camera-on instruction, acquiring a plurality of frame images with different compensation values from the preview images collected by the camera;
determining an optimal compensation value according to the plurality of frame images;
and controlling the display to adjust its brightness according to the optimal compensation value.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the determining an optimal compensation value according to a plurality of frame images includes:
extracting a characteristic image from each frame image;
calculating the edge definition of each characteristic image;
and determining the optimal compensation value according to the edge definitions.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the calculating the edge sharpness of the feature image includes:
performing a graying operation on the characteristic image to obtain a gray characteristic image;
and determining the edge definition of the gray characteristic image through Sobel edge detection.
With reference to the first possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the step of determining an optimal compensation value according to the edge sharpness includes:
comparing the edge definitions and determining the maximum edge definition;
and determining the compensation value of the frame image corresponding to the maximum edge definition as the optimal compensation value.
As can be seen from the foregoing, the embodiments of the present application show a method for enhancing the camera shooting effect of a smart television and a display device. The method includes: after receiving a camera-on instruction, acquiring a plurality of frame images with different compensation values from the preview images collected by the camera; determining an optimal compensation value according to the plurality of frame images; and controlling the display to adjust its brightness according to the optimal compensation value. In the embodiments of the present application, frame images with different compensation values in the preview images are analyzed to determine the compensation value of the frame image whose shooting effect is best, and the display is controlled to adjust its brightness using that compensation value, so that the screen brightness of the smart television adapts to the shooting requirement. This solves the prior-art problem that, because the screen of a smart television is large, light emitted by the screen interferes with the shooting effect of the camera.
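The overall flow described in the embodiments can be sketched as a single skeleton function. This is a hypothetical outline: `extract_feature` and `edge_definition` stand in for the face-region extraction and Sobel-based scoring described above, and `set_display_brightness` stands in for the OSD-layer brightness adjustment; none of these names come from the patent itself.

```python
def enhance_shooting_effect(preview_frames, extract_feature,
                            edge_definition, set_display_brightness):
    """Choose the compensation value whose preview frame scores the
    highest edge definition, then tell the display to adjust its
    brightness with it.

    preview_frames: iterable of (compensation_value, frame_image) pairs
    captured after the camera-on instruction.
    """
    best_value, best_score = None, float("-inf")
    for value, frame in preview_frames:
        score = edge_definition(extract_feature(frame))
        if score > best_score:
            best_value, best_score = value, score
    set_display_brightness(best_value)
    return best_value
```

With stubbed-in scoring, the skeleton selects the sharpest frame's compensation value and forwards it to the display exactly once.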
Drawings
In order to more clearly illustrate the embodiments of the present application or prior-art implementations, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings described below show only some embodiments of the present application, and that a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
A schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment is exemplarily shown in fig. 1;
a hardware configuration block diagram of the display device 200 in accordance with the embodiment is exemplarily shown in fig. 2;
a hardware configuration block diagram of the control device 100 in accordance with the embodiment is exemplarily shown in fig. 3;
a functional configuration diagram of the display device 200 according to the embodiment is exemplarily shown in fig. 4;
a schematic diagram of the software configuration in the display device 200 according to an embodiment is exemplarily shown in fig. 5 a;
a schematic configuration of an application in the display device 200 according to an embodiment is exemplarily shown in fig. 5 b;
a schematic diagram of a user interface in a display device 200 according to an embodiment is exemplarily shown in fig. 6;
fig. 7 is a flowchart of a method for enhancing a photographing effect of a camera of an intelligent television according to an alternative embodiment;
Fig. 8 is a flowchart of a method for enhancing a photographing effect of a camera of a smart television according to still another alternative embodiment;
FIG. 9 is a frame image taken by a camera according to an alternative embodiment;
fig. 10 is a face image extracted from the frame image in fig. 9;
FIG. 11 is a gray feature image corresponding to the facial image of FIG. 10;
fig. 12 is a display effect diagram of a smart tv screen with a UI of an OSD layer according to an alternative embodiment;
fig. 13 is a display effect diagram of a smart tv screen with a UI of an OSD layer according to still another alternative embodiment;
fig. 14 is a schematic structural diagram of a camera shooting effect enhancing device of an intelligent television according to an alternative embodiment.
Detailed Description
To make the objects, embodiments, and advantages of the exemplary embodiments of the present application clearer, the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. The exemplary embodiments described are, however, only some and not all of the exemplary embodiments of the present application.
All other embodiments obtained by a person of ordinary skill in the art from the exemplary embodiments shown in the present application without inventive effort fall within the scope of the present application. Furthermore, while the disclosure is presented in the context of one or more exemplary embodiments, it should be appreciated that individual aspects of the disclosure may separately constitute a complete embodiment.
It should be understood that the terms "first," "second," "third," and the like in the description, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, for example so that the embodiments of the present application can be implemented in an order other than that illustrated or described.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this application refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
The term "remote control" as used in this application refers to a component of an electronic device (such as the display device disclosed in this application) that can typically control the device wirelessly over a relatively short distance. Typically, the remote control is connected to the electronic device using infrared and/or radio-frequency (RF) signals and/or Bluetooth, and may also include functional modules such as WiFi, wireless USB, Bluetooth, and motion sensors. For example, a hand-held touch remote control replaces most of the physical built-in hard keys of a conventional remote control device with a touch-screen user interface.
The term "gesture" as used herein refers to a user behavior by which a user expresses an intended idea, action, purpose, and/or result through a change in hand shape or movement of a hand, etc.
A schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment is exemplarily shown in fig. 1. As shown in fig. 1, a user may operate the display apparatus 200 through a control device.
The control device may be a remote controller 100A, which communicates with the display device 200 through infrared protocol communication, Bluetooth protocol communication, other short-distance communication modes, or the like, and controls the display device 200 wirelessly or by other wired means. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc. For example, the user can input corresponding control instructions through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input keys, menu keys, and power key on the remote controller to control the functions of the display device 200.
The control device may also be an intelligent device, such as a mobile terminal 100B, a tablet computer, a notebook computer, or the like. For example, the display device 200 is controlled using an application running on a smart device. The application may provide various controls to the user through an intuitive User Interface (UI) on a screen associated with the smart device.
By way of example, the mobile terminal 100B may install a software application with the display device 200, implement connection communication through a network communication protocol, and achieve the purpose of one-to-one control operation and data communication. Such as: the mobile terminal 100B and the display device 200 may be caused to establish a control instruction protocol, synchronize a remote control keyboard to the mobile terminal 100B, and control the functions of the display device 200 by controlling a user interface on the mobile terminal 100B. The audio/video content displayed on the mobile terminal 100B may also be transmitted to the display device 200, so as to implement a synchronous display function.
As shown in fig. 1, the display device 200 also communicates data with the server 300 through a variety of communication means. The display device 200 may be permitted to make communication connections via a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 300 may provide various contents and interactions to the display device 200. By way of example, display device 200 receives software program updates, or accesses a remotely stored digital media library by sending and receiving information, as well as Electronic Program Guide (EPG) interactions. The servers 300 may be one group, may be multiple groups, and may be one or more types of servers. Other web service content such as video on demand and advertising services are provided through the server 300.
The display device 200 may be a liquid crystal display, an OLED display, or a projection display device. The particular display device type, size, resolution, etc. are not limited, and those skilled in the art will appreciate that the performance and configuration of the display device 200 may be changed as desired.
In addition to the broadcast-receiving television function, the display device 200 may additionally provide the smart network television function of a computer support function. Examples include web televisions, smart televisions, and Internet Protocol televisions (IPTV).
As shown in fig. 1, a camera may be connected to or disposed on the display device, so as to present a picture surface captured by the camera on a display interface of the display device or other display devices, so as to implement interactive chat between users. Specifically, the picture shot by the camera can be displayed in a full screen, a half screen or any optional area on the display device.
As a connection mode in an exemplary embodiment, the camera is connected to the display rear shell through a connection plate and is fixedly installed in the middle of the upper side of the display rear shell. As an alternative mounting mode, the camera may be fixedly installed at any position of the display rear shell, as long as its image acquisition area is not blocked by the rear shell; for example, the image acquisition area faces the same direction as the display device.
As another connection mode in an exemplary embodiment, the camera is connected to the display rear shell in a liftable manner through a connection plate or another conceivable connector, on which a lifting motor is installed. When the user wants to use the camera, or an application program needs to use it, the camera rises out of the display; when it is not needed, it can be embedded behind the rear shell so as to protect it from damage.
As an embodiment, the camera adopted in the present application may have 16 megapixels, so as to achieve ultra-high-definition display. In practical use, a camera with more or fewer than 16 megapixels may also be used.
After the camera is installed on the display device, the contents displayed in different application scenes of the display device can be fused in a plurality of different modes, so that the function which cannot be realized by the traditional display device is achieved.
For example, a user may conduct a video chat with at least one other user while watching a video program. The video program can be presented as a background picture over which the video-chat window is displayed. This function may figuratively be called "chatting while watching".
In an exemplary embodiment, in a "watch while chat" scenario, at least one video chat is performed across terminals while live video or network video is being viewed.
In other embodiments, the user may engage in a video chat with at least one other user while studying in an education application. For example, students can interact remotely with teachers while learning content in education applications. This function may figuratively be called "chatting while learning".
In other embodiments, the user may conduct a video chat with other players while playing a card game. For example, a player can interact remotely with other players when entering a game application to participate in a game. This function may figuratively be called "watching while playing".
In an exemplary embodiment, the game scene is fused with the video picture, the portrait in the video picture is scratched, and the portrait is displayed in the game picture, so that the user experience is improved.
In an exemplary embodiment, in a somatosensory game (such as playing ball, boxing, running, dancing, etc.), the body posture and action are obtained through a camera, limb detection and tracking, detection of key point data of the skeleton of the body, and then the body posture and action are fused with animation in the game, so that the game of scenes such as sports, dance, etc. is realized.
In other embodiments, the user may interact in video and voice with at least one other user in a karaoke application. This function may figuratively be called "singing while watching". In an exemplary embodiment, when at least one other user enters the application in the chat scene, several users can jointly complete the recording of one song.
In other embodiments, the user may open the local camera to take pictures and videos; this function may be called "looking in the mirror".
In other examples, more functions may be added or the above functions may be reduced. The function of the display device is not particularly limited in this application.
A hardware configuration block diagram of the display device 200 according to an exemplary embodiment is illustrated in fig. 2. As shown in fig. 2, a modem 220, a communicator 230, a detector 240, an external device interface 250, a controller 210, a memory 290, a user input interface, a video processor 260-1, an audio processor 260-2, a display 280, an audio output interface 270, a power supply may be included in the display apparatus 200.
The modem 220 receives broadcast television signals in a wired or wireless manner, and may perform processes such as amplification, mixing, and resonance for modulation and demodulation, in order to demodulate, from the many wireless or wired broadcast television signals, the audio/video signal carried on the frequency of the television channel selected by the user, together with additional information (e.g., an EPG data signal).
The modem 220 is responsive, under the control of the controller 210, to the television channel frequency selected by the user and the television signal carried on that frequency.
The modem 220 can receive signals in various ways according to the broadcasting system of the television signal, such as terrestrial broadcasting, cable broadcasting, satellite broadcasting, or internet broadcasting; according to the modulation type, digital modulation or analog modulation may be adopted; and according to the kind of television signal received, both analog and digital signals can be demodulated.
In other exemplary embodiments, the modem 220 may also be in an external device, such as an external set-top box, or the like. In this way, the set-top box outputs the television audio and video signals after modulation and demodulation, and inputs the television audio and video signals to the display device 200 through the external device interface 250.
Communicator 230 is a component for communicating with external devices or external servers according to various communication protocol types. For example: the communicator 230 may include a WIFI module 231, a bluetooth communication protocol module 232, a wired ethernet communication protocol module 233, and other network communication protocol modules or a near field communication protocol module.
The display device 200 may establish a connection of control signals and data signals with an external control device or a content providing device through the communicator 230. For example, the communicator may receive a control signal of the remote controller 100A according to the control of the controller.
The detector 240 is a component of the display device 200 for collecting signals of an external environment or interaction with the outside. The detector 240 may include a light receiver 242, a sensor for capturing the intensity of ambient light, a display parameter change that may be adapted by capturing ambient light, etc.; the system can also comprise an image collector 241, such as a camera, a video camera and the like, which can be used for collecting external environment scenes, collecting attributes of a user or interacting gestures with the user, adaptively changing display parameters and identifying the gestures of the user so as to realize the interaction function with the user.
In some exemplary embodiments, the image collector 241 is a camera. When the camera is in the on state, it may collect image data around the display device, and the collected image data may be displayed on the display. The user can trigger a photographing instruction through voice, a gesture, a remote controller, or the like, and one frame of data is obtained from the camera; the user can likewise trigger a video-recording instruction, and multiple frames of data from the camera are stored.
In some exemplary embodiments, the controller controls the raising or lowering of the camera by a lift power device, which may be a motor, for example.
In other exemplary embodiments, the detector 240 may further include a temperature sensor; by sensing the ambient temperature, the display device 200 may adaptively adjust the display color temperature of the image. For example, when the ambient temperature is relatively high, the display device 200 may be adjusted to display the image in a cooler color tone; when the temperature is relatively low, it may be adjusted to display the image in a warmer color tone.
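A toy sketch of such an adaptation is shown below. The thresholds and tone labels are invented purely for illustration; the patent does not specify any of them.

```python
def display_tone(ambient_celsius, hot_threshold=28.0, cold_threshold=18.0):
    """Map ambient temperature to a display color-temperature tone:
    hot ambient -> cooler tone, cold ambient -> warmer tone.
    Thresholds are hypothetical illustrative values."""
    if ambient_celsius >= hot_threshold:
        return "cool"
    if ambient_celsius <= cold_threshold:
        return "warm"
    return "neutral"
```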
In other exemplary embodiments, the detector 240 may further include a sound collector, such as a microphone, that may be used to receive a user's sound, including a voice signal of a control instruction of the user controlling the display device 200, or collect an ambient sound for identifying an ambient scene type, and the display device 200 may adapt to ambient noise.
An external device interface 250 provides a component for the controller 210 to control data transmission between the display apparatus 200 and external other apparatuses. The external device interface may be connected to an external device such as a set-top box, a game device, a notebook computer, etc., in a wired/wireless manner, and may receive data such as a video signal (e.g., a moving image), an audio signal (e.g., music), additional information (e.g., an EPG), etc., of the external device.
Among other things, the external device interface 250 may include: any one or more of a High Definition Multimedia Interface (HDMI) terminal 251, a Composite Video Blanking Sync (CVBS) terminal 252, an analog or digital component terminal 253, a Universal Serial Bus (USB) terminal 254, a Red Green Blue (RGB) terminal (not shown), and the like.
The controller 210 controls the operation of the display device 200 and responds to the user's operations by running various software control programs (e.g., an operating system and various application programs) stored on the memory 290.
As shown in fig. 2, the controller 210 includes a random access memory RAM214, a read only memory ROM213, a graphics processor 216, a CPU processor 212, a communication interface, and a communication bus. The RAM214 and the ROM213, and the graphics processor 216, the CPU processor 212, and the communication interface are connected by a bus.
The ROM 213 stores instructions for various system start-ups. When a power-on signal is received and the display device 200 starts up, the CPU processor 212 executes the system start-up instructions in the ROM 213 and copies the operating system stored in the memory 290 into the RAM 214 so that the operating system begins to run. After the operating system is started, the CPU processor 212 copies the various applications in the memory 290 into the RAM 214 and then starts running them.
The graphics processor 216 generates various graphical objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It comprises an arithmetic unit, which operates on the various interaction instructions input by the user and displays the various objects according to their display attributes, and a renderer, which generates the various objects based on the results of the arithmetic unit and displays the rendered results on the display 280.
CPU processor 212 is operative to execute operating system and application program instructions stored in memory 290. And executing various application programs, data and contents according to various interactive instructions received from the outside, so as to finally display and play various audio and video contents.
In some exemplary embodiments, the CPU processor 212 may include multiple processors: one main processor and one or more sub-processors. The main processor performs some operations of the display apparatus 200 in the pre-power-up mode and/or displays pictures in the normal mode; the sub-processors perform operations in the standby mode and the like.
The communication interfaces may include first interface 218-1 through nth interface 218-n. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 210 may control the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
The object may be any selectable object, such as a hyperlink or an icon. Operations related to the selected object include, for example, displaying the page, document, or image connected to a hyperlink, or executing the program corresponding to an icon. The user command for selecting the UI object may be entered through various input means connected to the display device 200 (e.g., a mouse, keyboard, or touch pad) or may be a voice command corresponding to speech uttered by the user.
Memory 290 stores various software modules for driving and controlling display device 200, including: a basic module, a detection module, a communication module, a display control module, a browser module, various service modules, and the like.
The base module is a bottom software module for signal communication between the various hardware in the display device 200 and for sending processing and control signals to the upper modules. The detection module is a management module for collecting various information from various sensors or user input interfaces, and performing digital-to-analog conversion and analysis management.
For example: the voice recognition module comprises a voice analysis module and a voice instruction database module. The display control module is a module for controlling the display 280 to display image content, and can be used for playing multimedia image content, UI interface and other information. The communication module is used for controlling and data communication with external equipment. The browser module is a module for performing data communication between the browsing servers. The service module is used for providing various services and various application programs.
Meanwhile, the memory 290 is also used to store received external data and user data, images of various items in various user interfaces, visual effect maps of focus objects, and the like.
A user input interface transmits the user's input signals to the controller 210 or transmits signals output from the controller to the user. Illustratively, the control device (e.g., a mobile terminal or remote control) may send input signals entered by the user, such as power switch, channel selection, or volume adjustment signals, to the user input interface, which forwards them to the controller; alternatively, the control device may receive output signals such as audio, video, or data that the controller routes through the user input interface, and may display the received signals or output them in the form of audio or vibration.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 280, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
The video processor 260-1 is configured to receive a video signal, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image composition according to a standard codec protocol of an input signal, so as to obtain a video signal that is directly displayed or played on the display 280.
The video processor 260-1, by way of example, includes a demultiplexing module, a video decoding module, an image compositing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is used for demultiplexing the input audio/video data stream, such as the input MPEG-2, and demultiplexes the input audio/video data stream into video signals, audio signals and the like.
And the video decoding module is used for processing the demultiplexed video signal, including decoding, scaling and the like.
An image synthesis module, such as an image synthesizer, superimposes the GUI signal, input by the user or generated by the graphics generator, on the scaled video image to generate an image signal for display.
A frame rate conversion module converts the frame rate of the input video, for example converting 24 Hz, 25 Hz, 30 Hz, or 60 Hz input video to a frame rate of 60 Hz, 120 Hz, or 240 Hz, where the input frame rate may be related to the source video stream and the output frame rate to the refresh rate of the display. The conversion is usually carried out by frame insertion.
A display formatting module converts the signal output by the frame rate conversion module into a signal conforming to the display's format, for example converting the output of the frame rate conversion module into RGB data signals.
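The frame-insertion style of rate conversion described above can be sketched as follows. This is a hypothetical illustration using simple frame repetition (real frame rate conversion modules typically interpolate new frames); `repeat_to_rate` and the variable names are not from the patent.

```python
# Hypothetical sketch of frame-rate conversion by frame insertion (simple
# repetition; real modules typically interpolate): one second of 24 fps
# video becomes one second of 60 fps video in a fixed 2-3 cadence.
def repeat_to_rate(frames, src_fps, dst_fps):
    """Upconvert by repeating source frames so src_fps frames fill dst_fps slots."""
    return [frames[i * src_fps // dst_fps] for i in range(dst_fps)]

one_second_24 = list(range(24))  # 24 source frames
one_second_60 = repeat_to_rate(one_second_24, 24, 60)
```

Each source frame appears two or three times in the output, so the total duration is preserved while the output rate matches the display's refresh rate.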
A display 280 receives the image signals from the video processor 260-1 and displays video content, images, and the menu manipulation interface. The display 280 includes a display assembly for presenting pictures and a drive assembly for driving the display of images. The displayed video content may come from the broadcast signal received by the modem 220 or from video input through the communicator or the external device interface. The display 280 also simultaneously displays the user manipulation interface (UI) generated in the display device 200 for controlling the display device 200.
Depending on the type of the display 280, a drive assembly for driving the display is also included. If the display 280 is a projection display, a projection device and a projection screen may be included as well.
The audio processor 260-2 is configured to receive the audio signal, decompress and decode according to the standard codec protocol of the input signal, and perform audio data processing such as noise reduction, digital-to-analog conversion, and amplification processing, so as to obtain an audio signal that can be played in the speaker 272.
An audio output interface 270 receives the audio signal output by the audio processor 260-2 under the control of the controller 210. The audio output interface may include a speaker 272, or an external audio output terminal 274 for output to a sound-producing device of an external apparatus, such as an external sound terminal or an earphone output terminal.
In other exemplary embodiments, video processor 260-1 may include one or more chip components. The audio processor 260-2 may also include one or more chip components.
And, in other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips or integrated with the controller 210 in one or more chips.
And a power supply for providing power supply support for the display device 200 with power inputted from an external power supply under the control of the controller 210. The power supply may include a built-in power circuit installed inside the display apparatus 200, or may be a power supply installed outside the display apparatus 200, such as a power interface providing an external power supply in the display apparatus 200.
A block diagram of the configuration of the control device according to an exemplary embodiment is exemplarily shown in fig. 3. As shown in fig. 3, the control device includes a controller 110, a communicator 130, a user input/output interface 140, a memory 190, and a power supply 180.
A schematic diagram of the functional configuration of the display device 200 according to an exemplary embodiment is illustrated in fig. 4. As shown in fig. 4, the memory 290 is used to store an operating system, application programs, contents, user data, and the like, and performs system operations for driving the display device 200 and various operations in response to a user under the control of the controller 210. Memory 290 may include volatile and/or nonvolatile memory.
The memory 290 is specifically used for storing an operation program for driving the controller 210 in the display device 200, and storing various application programs built in the display device 200, various application programs downloaded by a user from an external device, various graphic user interfaces related to the application programs, various objects related to the graphic user interfaces, user data information, and various internal data supporting the application programs. The memory 290 is used to store system software such as an Operating System (OS) kernel, middleware and applications, and to store input video data and audio data, as well as other user data.
The memory 290 is specifically configured to store drivers and related data for the video processor 260-1 and the audio processor 260-2, the display 280, the communicator 230, the modem 220, the detector 240, the input/output interface, and the like.
In some embodiments, memory 290 may store software and/or programs, the software programs used to represent an Operating System (OS) including, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. For example, the kernel may control or manage system resources, or functions implemented by other programs (such as the middleware, APIs, or application programs), and the kernel may provide interfaces to allow the middleware and APIs, or applications to access the controller to implement control or management of system resources.
By way of example, the memory 290 includes a broadcast receiving module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, an audio control module 2906, an external instruction recognition module 2907, a communication control module 2908, a light receiving module 2909, a power control module 2910, an operating system 2911, and other applications 2912, a browser module, and the like. The controller 210 executes various software programs in the memory 290 such as: broadcast television signal receiving and demodulating functions, television channel selection control functions, volume selection control functions, image control functions, display control functions, audio control functions, external instruction recognition functions, communication control functions, optical signal receiving functions, power control functions, software control platforms supporting various functions, browser functions and other applications.
A block diagram of the configuration of the software system in the display device 200 according to an exemplary embodiment is illustrated in fig. 5 a.
As shown in FIG. 5a, operating system 2911, which includes executing operating software for handling various basic system services and for performing hardware-related tasks, acts as a medium for completing data processing between application programs and hardware components.
In some embodiments, portions of the operating system kernel may contain a series of software to manage display device hardware resources and to serve other programs or software code.
In other embodiments, portions of the operating system kernel may contain one or more device drivers, which may be a set of software code in the operating system that helps operate or control the devices or hardware associated with the display device. The drivers may contain code to operate video, audio, and/or other multimedia components. Examples include display, camera, flash, WiFi, and audio drivers.
Wherein, accessibility module 2911-1 is configured to modify or access an application program to realize accessibility of the application program and operability of display content thereof.
The communication module 2911-2 is used for connecting with other peripheral devices via related communication interfaces and communication networks.
User interface module 2911-3 is configured to provide an object for displaying a user interface for access by each application program, so as to implement user operability.
Control applications 2911-4 are used to control process management, including runtime applications, and the like.
The event delivery system 2914 may be implemented within the operating system 2911 or in the application 2912. In some embodiments it is implemented partly in the operating system 2911 and partly in the application 2912, where it monitors various user input events and, based on the recognition result of each event or sub-event, invokes a handler that carries out one or more sets of predefined operations.
The event monitoring module 2914-1 is configured to monitor a user input interface to input an event or a sub-event.
The event recognition module 2914-2 applies the definitions of various events to the inputs from the user input interfaces, recognizes the events or sub-events, and transmits them to the process that executes the corresponding set or sets of handlers.
An event or sub-event refers to an input detected by one or more sensors in the display device 200 or an input from an external control device (such as the control apparatus), for example: sub-events input by voice, gesture-input sub-events recognized by gesture recognition, sub-events of remote-control key instructions from a control device, and so on. By way of example, sub-events on the remote control include, but are not limited to, one or a combination of up/down/left/right key presses, the OK key, a key hold, and the like, as well as operations of non-physical keys, such as move, hold, and release.
The interface layout management module 2913 directly or indirectly receives the user input events or sub-events from the event transmission system 2914, and is used for updating the layout of the user interface, including but not limited to the positions of the controls or sub-controls in the interface, and various execution operations related to the interface layout, such as the size or position of the container, the level, and the like.
As shown in fig. 5a, the application layer 2912 contains various applications that may be executed on the display device 200. Applications may include, but are not limited to, one or more applications such as: live television applications, video on demand applications, media center applications, application centers, gaming applications, etc.
Live television applications can provide live television through different signal sources. For example, a live television application may provide television signals using inputs from cable television, radio broadcast, satellite services, or other types of live television services. And, the live television application may display video of the live television signal on the display device 200.
Video on demand applications may provide video from different storage sources. Unlike live television applications, video-on-demand provides video displays from some storage sources. For example, video-on-demand may come from the server side of cloud storage, from a local hard disk storage containing stored video programs.
The media center application may provide various applications for playing multimedia content. For example, a media center may be a different service than live television or video on demand, and a user may access various images or audio through a media center application.
An application center may be provided to store various applications. The application may be a game, an application, or some other application associated with a computer system or other device but operable on a display device. The application center may obtain these applications from different sources, store them in local storage, and then be run on the display device 200.
A schematic diagram of a user interface in the display device 200 according to an exemplary embodiment is schematically shown in fig. 6. As shown in fig. 6, the user interface includes a plurality of view display areas, illustratively, a first view display area 201 and a play screen 202, wherein the play screen includes a layout of one or more different items. And a selector in the user interface indicating that an item is selected, the position of the selector being movable by user input to change selection of a different item.
It should be noted that the multiple view display areas may present different levels of display images. For example, a first view display region may present system layer item content (e.g., current attributes, etc.), and a second view display region may present application layer item content (e.g., web video, VOD presentation, application screen, etc.).
In one exemplary embodiment, the view display areas are presented with different priorities, and view display areas with different priorities differ in display priority. For example, if the priority of the system layer is higher than that of the application layer, then when the user operates the selector and switches pictures within the application layer, the picture displayed in the system layer's view display area is not blocked; and when the size and position of the application layer's view display area change according to the user's selection, the size and position of the system layer's view display area are not affected.
The same level of display may be presented, in which case the selector may switch between the first view display region and the second view display region, and the size and position of the second view display region may change as the size and position of the first view display region changes.
In the related art, various applications can run on a smart television. Applications with a photographing function can photograph the surroundings of the smart television through a camera built into the television or through an external camera connected via its USB interface.
The screen of a smart television is large relative to that of a mobile terminal device, so when its camera takes a photograph, the brightness of the subject is easily affected by the light emitted from the screen, and the photographing effect suffers. To illustrate: when a user stands within the shooting range of the camera of a smart television, the light emitted by the screen falls on the user, so the brightness on the user is affected by the screen's output, and the photograph taken by the camera is degraded. How to achieve a better photographing effect with the camera of a smart television has therefore become a problem to be solved urgently by those skilled in the art.
In order to solve the technical problem existing in the prior art, a first aspect of an embodiment of the present application shows a method for enhancing a photographing effect of a camera of an intelligent television, as shown in fig. 7, including:
S100, after receiving a camera opening instruction, acquiring a plurality of frame images with different compensation values in a preview image acquired by a camera;
in the smart television, the camera can be opened, and a camera opening instruction sent, by the television's own applications or by a third-party application with a photographing function.
It should be noted that, the preview image is acquired through a camera in the process of continuously changing the brightness of the screen of the smart television, and the preview image includes a plurality of frames of images. In the embodiment of the application, the continuous change of the screen brightness of the intelligent television can be realized by adjusting the compensation value of the screen brightness.
In the process of continuously changing the compensation value, the change frequency of the compensation value and the capture frequency of the camera may differ. For example, if the compensation value changes once every 0.25 s and the camera captures 20 frames per second, several frames correspond to a single compensation value: frames 1-5 have a compensation value of 0, frames 6-10 have 0.3, frames 11-15 have 0.5, and frames 16-20 have 0.8. Therefore, in the embodiment of the application, a plurality of frames with different compensation values are screened out of the preview images captured while the compensation value changes, so that preview frames with the same compensation value are not analyzed repeatedly. For example, one frame may be selected from frames 1-5, one from frames 6-10, one from frames 11-15, and one from frames 16-20.
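The screening step above can be sketched as follows. This is a hypothetical illustration (the function name and the assumption that each frame is paired with the compensation value active when it was captured are not from the patent):

```python
# Hypothetical sketch of the screening step: keep one frame per distinct
# compensation value so frames with the same value are not analyzed twice.
def select_frames_by_compensation(frames, compensation_values):
    """Return {compensation value: first frame captured under that value}."""
    selected = {}
    for frame, comp in zip(frames, compensation_values):
        selected.setdefault(comp, frame)  # keep only the first frame per value
    return selected

# 20 preview frames in one second, compensation changing every 5 frames,
# matching the 0 / 0.3 / 0.5 / 0.8 example in the text
frames = list(range(20))
comps = [0.0] * 5 + [0.3] * 5 + [0.5] * 5 + [0.8] * 5
picked = select_frames_by_compensation(frames, comps)
```

With the example cadence, 20 preview frames reduce to 4 frames for analysis, one per compensation value.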
S200, determining an optimal compensation value according to a plurality of frame images;
according to the embodiment of the application, the optimal compensation value is determined by analyzing the frame images corresponding to different compensation values, and the display is controlled to adjust the brightness by utilizing the optimal compensation value, so that the photographing effect of the camera is optimal.
In an exemplary embodiment, as shown in fig. 8, the step of determining the optimal compensation value according to the plurality of frame images includes:
s201, respectively extracting characteristic images of each frame image;
the feature image is the image that needs to be displayed clearly: if the feature image is captured clearly, the photographing effect is better. As an example, if a user is located within the shooting range of the camera and a clear face needs to be captured, the face image is set as the feature image. Fig. 9 is a frame image captured by the camera, and fig. 10 is the face image extracted from the frame image in fig. 9. Of course, the feature image may also be an object; the specific content of the feature image is not limited herein.
S202, calculating the edge definition of the characteristic image;
in the embodiment of the application, the photographing effect is measured through edge definition. The greater the edge definition, the better the photographing effect.
In an exemplary embodiment, the step of calculating the edge sharpness of the feature image includes:
executing graying operation on the characteristic image to obtain a gray characteristic image;
and determining the edge definition of the gray feature image through Sobel edge detection.
Specifically, the process of calculating the edge definition of the feature image is described taking a face image as an example. The face image is first subjected to a graying operation; fig. 11 shows the gray feature image corresponding to the face image in fig. 10.
The gray feature image is processed with Sobel edge detection, and the edge definition is determined as follows:

Taking a face image of 64 pixels by 48 pixels as an example, the gray value of each pixel forms the input image A. Sobel edge detection yields a 64 x 48 array a, whose elements correspond to the pixel values of the gray feature image after convolution:

a[0][0] = G0,0 = sqrt(Gx0,0*Gx0,0 + Gy0,0*Gy0,0);

a[63][47] = G63,47 = sqrt(Gx63,47*Gx63,47 + Gy63,47*Gy63,47);

wherein sqrt denotes the square root, and the convolutions are taken with the standard Sobel kernels:

Gx = [[-1 0 +1], [-2 0 +2], [-1 0 +1]] * A, the convolution of the gray feature image in the x direction;

Gy = [[-1 -2 -1], [0 0 0], [+1 +2 +1]] * A, the convolution of the gray feature image in the y direction;

G = sqrt(Gx*Gx + Gy*Gy), the pixel value after each pixel in the gray feature image has been convolved.

The edge definition is the standard deviation of these convolved values:

s = sqrt(((x1 - m)^2 + (x2 - m)^2 + ... + (xn - m)^2) / n)

wherein s denotes the edge definition, x1 = G0,0, x2 = G0,1, ..., xn = G63,47; n = 64 * 48 = 3072; and m = (G0,0 + G0,1 + ... + G63,47) / 3072.
In the embodiment of the application, the larger the edge definition s is, the clearer the feature image is, and the better the photographing effect is.
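The Sobel-based edge definition above can be sketched in NumPy as follows. This is a minimal sketch, not the patent's implementation; as a simplifying assumption it skips the border pixels, which lack a full 3x3 neighborhood, whereas the patent's formulas index all 64 x 48 pixels.

```python
import numpy as np

def edge_sharpness(gray):
    """Sobel edge definition per the formulas above: convolve with the 3x3
    Sobel kernels, form G = sqrt(Gx^2 + Gy^2) per pixel, then return the
    standard deviation s of the G values (border pixels skipped: they lack
    a full 3x3 neighborhood, a simplifying assumption of this sketch)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
    h, w = gray.shape
    g = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3].astype(float)
            gx = (kx * patch).sum()          # x-direction convolution
            gy = (ky * patch).sum()          # y-direction convolution
            g[i, j] = np.sqrt(gx * gx + gy * gy)
    return g.std()  # sqrt of the mean squared deviation from the mean m

flat = np.full((48, 64), 128, dtype=np.uint8)   # uniform image: no edges
edged = flat.copy()
edged[:, 32:] = 255                             # add a vertical edge
flat_s, edged_s = edge_sharpness(flat), edge_sharpness(edged)
```

A uniform image yields s = 0, while the image containing an edge yields a larger s, matching the rule that a larger edge definition means a clearer feature image.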
And S203, determining an optimal compensation value according to the edge definition.
In an exemplary embodiment, determining the optimal compensation value according to the edge definition includes:
comparing the edge definition and determining the maximum edge definition;
and determining the compensation value of the frame image corresponding to the maximum edge definition as an optimal compensation value.
It should be noted that, the greater the edge definition is, the better the shooting effect is, so in this embodiment of the present application, the edge definition of the feature image in each frame image is compared, and the compensation value of the frame image corresponding to the maximum edge definition is determined to be the optimal compensation value.
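The comparison step can be sketched in a few lines (hypothetical function and variable names; the sharpness scores are made-up illustration data):

```python
# Minimal sketch of picking the optimal compensation value: the one whose
# frame produced the largest edge definition s.
def optimal_compensation(sharpness_by_comp):
    """sharpness_by_comp maps compensation value -> edge definition s."""
    return max(sharpness_by_comp, key=sharpness_by_comp.get)

scores = {0.0: 1.8, 0.3: 2.6, 0.5: 3.1, 0.8: 2.2}  # illustrative s values
best = optimal_compensation(scores)
```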
S300, controlling the display to adjust brightness according to the optimal compensation value.
To give the camera the best photographing effect, the optimal compensation value is set as the compensation value of the screen brightness of the smart television, and the display is dynamically controlled to adjust its brightness using this optimal compensation value.
In an exemplary embodiment, the step of controlling the display to adjust the brightness includes: the display is controlled to adjust brightness through the UI of the OSD layer.
In the embodiment of the present application, the brightness of the screen of the smart tv is adjusted by using the OSD layer, for example, the UI of the OSD layer may be displayed in gray, and at this time, the brightness of the screen of the smart tv is dimmed; the brightness of the screen of the intelligent television can also be changed by setting the color of the UI part position of the OSD layer and setting the rest part to be transparent. As shown in fig. 12 and 13, fig. 12 and 13 are respectively display effect diagrams of a smart tv screen with UIs of OSD layers corresponding to different compensation values, and a frame image in fig. 9 is a display effect of the smart tv screen without the compensation value. It should be noted that, in the embodiment of the present application, the smart tv screen is a display.
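The dimming effect of a gray OSD-layer UI can be illustrated with a simple alpha-blending model. This is a hypothetical analogy, not the patent's OSD implementation: rendering a semi-transparent dark overlay over the video layer lowers the effective screen brightness, while fully transparent regions (such as the area over the feature image) leave the picture unchanged.

```python
import numpy as np

# Hypothetical model of an OSD-layer overlay dimming the screen.
def blend_osd(video, overlay_value, alpha):
    """Alpha-blend a uniform OSD overlay over a luminance image; alpha = 0
    means fully transparent (no dimming)."""
    return (1 - alpha) * video + alpha * overlay_value

frame = np.full((4, 4), 200.0)           # bright video layer
dimmed = blend_osd(frame, 40.0, 0.5)     # 50%-opaque dark-gray OSD UI
untouched = blend_osd(frame, 40.0, 0.0)  # fully transparent region
```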
The circular images in fig. 12 are not images of the photographed subject but part of the UI of the OSD layer in the embodiment of the present application; their color and number are determined according to the optimal compensation value. Likewise, the large, approximately white area over the user's body in fig. 13 is not an image of the subject but part of the UI of the OSD layer. In addition, when the color of part of the OSD-layer UI is changed, the color at the position corresponding to the feature image is left unchanged, so the user's view of the image on the screen is not affected. In fig. 12, for example, the feature image is a face image, so no color is set at the position corresponding to the face.
Alternatively, in the embodiment of the application, the screen brightness of the smart television can be adjusted by adjusting its backlight. Any method of adjusting the screen brightness may be used without departing from the spirit and scope of the present application.
In an exemplary embodiment, the method further comprises: and after receiving a photographing instruction, obtaining a photographed preview image, and executing the step of obtaining a plurality of frame images with different compensation values in the preview image obtained by the camera.
Because the photographed subject may keep changing (for example, when the subject is indoors, its brightness changes when an indoor light is turned on or off), always controlling the display brightness with the optimal compensation value determined when the camera was opened may degrade the photographing effect. Therefore, in the embodiment of the present application, after each photograph, a plurality of frames with different compensation values in the preview image acquired by the camera are acquired again.
Note that the preview image obtained after receiving the photographing instruction is the preview image acquired by the camera after the photograph is taken, which better reflects the characteristics of the current subject. According to the embodiment of the application, the latest optimal compensation value is determined by analyzing the plurality of frames with different compensation values in the acquired preview image, and the display is controlled to adjust its brightness according to this latest optimal compensation value.
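Steps S100 to S300 can be tied together in an end-to-end sketch. The `Display` and `Camera` classes below are stand-ins (hypothetical, not the patent's API); the camera stub simply pretends that preview sharpness peaks at compensation value 0.5.

```python
# Hedged end-to-end sketch of S100-S300: vary the compensation value, score
# a preview frame at each value, then apply the best one to the display.
class Display:
    def __init__(self):
        self.compensation = None
    def set_compensation(self, value):
        self.compensation = value  # would change screen brightness via the OSD UI

class Camera:
    """Stand-in whose preview sharpness peaks at compensation 0.5."""
    def __init__(self, display):
        self.display = display
    def preview_sharpness(self):
        return 1.0 - abs(self.display.compensation - 0.5)

def enhance(camera, display, compensation_values):
    scores = {}
    for comp in compensation_values:            # S100: frames at different values
        display.set_compensation(comp)
        scores[comp] = camera.preview_sharpness()
    best = max(scores, key=scores.get)          # S200: optimal compensation value
    display.set_compensation(best)              # S300: adjust display brightness
    return best

d = Display()
best = enhance(Camera(d), d, [0.0, 0.3, 0.5, 0.8])
```

In the patent's flow, `enhance` would run once after the camera-opening instruction and again after each photographing instruction, so the compensation value tracks changes in the subject's lighting.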
In one possible implementation manner, by using the deep learning technology, an optimal compensation value can be quickly determined, so that the photographing effect is best when a user photographs on the smart television.
A second aspect of the embodiments of the present application shows a display device, including:
a camera configured to: collecting image data;
a display configured to: displaying a user interface, and/or the image data;
a controller configured to: after receiving a camera opening instruction, acquiring a plurality of frame images with different compensation values in a preview image acquired by a camera; determining an optimal compensation value according to a plurality of frame images; and controlling the display to adjust the brightness according to the optimal compensation value.
It should be noted that, the controller in the embodiment of the present application may be the controller 210 in fig. 2, and in addition, the display device in the embodiment of the present application may include a plurality of controllers, and one or more of the plurality of controllers may be configured to obtain a plurality of frame images with different compensation values in the preview images obtained by the camera after receiving the camera opening instruction; determining an optimal compensation value according to a plurality of frame images; and controlling the display to adjust the brightness according to the optimal compensation value.
In an exemplary embodiment, the controller is configured to perform the determining the optimal compensation value from the plurality of frame images according to the following steps:
extracting characteristic images of each frame image respectively;
calculating the edge definition of the characteristic image;
and determining an optimal compensation value according to the edge definition.
In an exemplary embodiment, the controller is configured to perform the calculating the edge sharpness of the feature image according to the following steps:
executing a graying operation on the characteristic image to obtain a gray characteristic image;
and determining the edge definition of the gray feature image through Sobel edge detection.
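A minimal sketch of the graying and Sobel steps, in Python with NumPy. The BT.601 gray weights and the use of the mean gradient magnitude as the "edge definition" score are assumptions; the patent specifies only graying followed by Sobel edge detection:

```python
import numpy as np


def to_gray(rgb):
    """Gray a feature image using the common ITU-R BT.601 weights (an assumption)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114


def sobel_sharpness(gray):
    """Edge definition of a grayscale image: mean Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Cross-correlate the 3x3 Sobel kernels over the valid interior region.
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return float(np.mean(np.hypot(gx, gy)))
```

A flat frame scores zero, while a frame with a hard edge scores high, which is the property the embodiment relies on to rank frames.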
In an exemplary embodiment, the controller is configured to determine an optimal compensation value according to the edge definition according to the following steps:
comparing the edge definition and determining the maximum edge definition;
and determining the compensation value of the frame image corresponding to the maximum edge definition as an optimal compensation value.
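The comparison step then reduces to an argmax over the candidate compensation values. In this sketch the edge-definition score is a simple mean-absolute-difference proxy (an assumption, kept short; the patent's embodiment uses Sobel edge detection):

```python
import numpy as np


def edge_definition(gray):
    """Proxy edge-definition score: mean absolute neighbour difference.
    (The embodiment uses Sobel edge detection; this stand-in keeps the sketch short.)"""
    return float(np.abs(np.diff(gray, axis=0)).mean()
                 + np.abs(np.diff(gray, axis=1)).mean())


def optimal_compensation(frames):
    """frames maps compensation value -> grayscale frame (2-D array).
    Returns the compensation value whose frame scores the highest edge definition."""
    return max(frames, key=lambda ev: edge_definition(frames[ev]))
```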
In an exemplary embodiment, the controller is configured to control the display to adjust the brightness according to the following step:
controlling the display to adjust the brightness through the UI of the OSD layer.
In an exemplary embodiment, the controller is further configured to:
after receiving a photographing instruction, obtaining a photographed preview image, and performing the step of obtaining a plurality of frame images with different compensation values in the preview image obtained by the camera.
A third aspect of the embodiments of the present application shows a camera shooting effect enhancement apparatus of an intelligent television, as shown in fig. 14, including:
an obtaining unit 400, configured to obtain, after receiving a camera opening instruction, a plurality of frame images with different compensation values in the preview image acquired by the camera;
a determining unit 500, configured to determine an optimal compensation value according to the plurality of frame images;
an adjusting unit 600, configured to control the display to adjust the brightness according to the optimal compensation value.
In an exemplary embodiment, the determining unit includes:
an extracting unit, configured to extract a characteristic image of each frame image respectively;
a calculation unit, configured to calculate the edge definition of the characteristic image;
and an optimal compensation value determining unit, configured to determine an optimal compensation value according to the edge definition.
In an exemplary embodiment, the calculation unit includes:
a gray characteristic image determining unit, configured to perform a graying operation on the characteristic image to obtain a gray characteristic image;
and an edge definition determining unit, configured to determine the edge definition of the gray characteristic image through Sobel edge detection.
In an exemplary embodiment, the optimal compensation value determining unit includes:
a comparing unit, configured to compare the edge definitions and determine the maximum edge definition;
and an optimal compensation value determining unit, configured to determine the compensation value of the frame image corresponding to the maximum edge definition as the optimal compensation value.
In an exemplary embodiment, the brightness of the smart tv screen is adjusted through the UI of the OSD layer.
In an exemplary embodiment, the apparatus further comprises: a repeated execution unit, configured to obtain the photographed preview image after receiving a photographing instruction, and to perform the step of obtaining a plurality of frame images with different compensation values in the preview image acquired by the camera.
As can be seen from the foregoing embodiments, the embodiments of the present application show a camera shooting effect enhancement method of an intelligent television and a display device, where the method includes: after receiving a camera opening instruction, obtaining a plurality of frame images with different compensation values in the preview image acquired by the camera; determining an optimal compensation value according to the plurality of frame images; and controlling the display to adjust the brightness according to the optimal compensation value. In the embodiment of the present application, by analyzing the frame images with different compensation values in the preview image, the compensation value of the frame image with the best shooting effect is determined, and the display is controlled to adjust the brightness using this compensation value, so that the brightness of the smart television screen is adapted to the photographing requirement. This solves the prior-art problem that the light emitted by the relatively large screen of a smart television affects the shooting effect of the camera.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. The method for enhancing the photographing effect of the camera of the intelligent television is characterized by comprising the following steps of:
after receiving a camera opening instruction, acquiring a plurality of frame images with different compensation values in a preview image acquired by a camera;
determining an optimal compensation value according to the plurality of frame images;
and controlling the display to adjust the color around the characteristic image corresponding to the frame image through the UI of the OSD layer according to the optimal compensation value so as to adjust the brightness of the display.
2. The method of claim 1, wherein the step of determining an optimal compensation value based on the plurality of frame images comprises:
extracting characteristic images of each frame image respectively;
calculating the edge definition of the characteristic image;
and determining an optimal compensation value according to the edge definition.
3. The method of claim 2, wherein the step of calculating edge sharpness of the feature image comprises:
executing graying operation on the characteristic image to obtain a gray characteristic image;
and determining the edge definition of the gray feature image through Sobel edge detection.
4. The method of claim 2, wherein the step of determining an optimal compensation value based on the edge definition comprises:
comparing the edge definition and determining the maximum edge definition;
and determining the compensation value of the frame image corresponding to the maximum edge definition as an optimal compensation value.
5. The method according to claim 1, wherein after receiving the photographing instruction, obtaining a photographed preview image, and performing the step of obtaining a plurality of frame images with different compensation values in the preview image obtained by the camera.
6. A display device, characterized by comprising: a camera, a display, a controller for performing the method steps of claim 1, wherein:
the camera is configured to: collecting image data;
the display is configured to: displaying a user interface, and/or the image data;
the controller is configured to: after receiving a camera opening instruction, acquiring a plurality of frame images with different compensation values in a preview image acquired by a camera; determining an optimal compensation value according to the plurality of frame images; and controlling the display to adjust the color around the characteristic image corresponding to the frame image through the UI of the OSD layer according to the optimal compensation value so as to adjust the brightness of the display.
7. The display device of claim 6, wherein the controller is configured to perform the determining the optimal compensation value from the plurality of frame images according to:
extracting characteristic images of each frame image respectively;
calculating the edge definition of the characteristic image;
and determining an optimal compensation value according to the edge definition.
8. The display device of claim 7, wherein the controller is configured to perform the computing the edge sharpness of the feature image according to the steps of:
executing a graying operation on the characteristic image to obtain a gray characteristic image;
and determining the edge definition of the gray feature image through Sobel edge detection.
9. The display device of claim 7, wherein the controller is configured to determine an optimal compensation value based on the edge definition according to:
comparing the edge definition and determining the maximum edge definition;
and determining the compensation value of the frame image corresponding to the maximum edge definition as an optimal compensation value.
CN202010403233.1A 2019-08-18 2020-05-13 Camera shooting effect enhancement method and display device of intelligent television Active CN112399235B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019107621871 2019-08-18
CN201910762187 2019-08-18

Publications (2)

Publication Number Publication Date
CN112399235A CN112399235A (en) 2021-02-23
CN112399235B true CN112399235B (en) 2023-05-09

Family

ID=74603789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010403233.1A Active CN112399235B (en) 2019-08-18 2020-05-13 Camera shooting effect enhancement method and display device of intelligent television

Country Status (1)

Country Link
CN (1) CN112399235B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114302200A (en) * 2021-06-28 2022-04-08 海信视像科技股份有限公司 Display device and photographing method based on user posture triggering

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004040272A (en) * 2002-07-01 2004-02-05 Cosmo:Kk Network camera, remote monitor / control system, and control method employing the same
CN106331714A (en) * 2010-05-18 2017-01-11 索尼公司 Image processing device and image processing method
CN107591134A (en) * 2017-08-16 2018-01-16 深圳创维-Rgb电子有限公司 The compensation method of MURA phenomenons, TV and computer-readable recording medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8988578B2 (en) * 2012-02-03 2015-03-24 Honeywell International Inc. Mobile computing device with improved image preview functionality
KR101917650B1 (en) * 2012-08-03 2019-01-29 삼성전자 주식회사 Method and apparatus for processing a image in camera device
CN104581099B (en) * 2014-12-04 2018-03-02 广东欧珀移动通信有限公司 A kind of image pickup method and terminal
CN105872402A (en) * 2016-06-15 2016-08-17 深圳Tcl新技术有限公司 Photographing method and system of smart television
CN107172364B (en) * 2017-04-28 2021-01-26 努比亚技术有限公司 Image exposure compensation method and device and computer readable storage medium
CN108377336A (en) * 2018-04-26 2018-08-07 青岛海信移动通信技术股份有限公司 A kind of method and apparatus that screen light filling is taken pictures
CN108900780A (en) * 2018-06-28 2018-11-27 努比亚技术有限公司 A kind of screen light compensation method, mobile terminal and storage medium
CN109302564B (en) * 2018-10-31 2021-01-08 维沃移动通信(杭州)有限公司 Shooting method and mobile terminal

Also Published As

Publication number Publication date
CN112399235A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN113330736B (en) Display and image processing method
CN112073797B (en) Volume adjusting method and display device
CN112073788B (en) Video data processing method and device and display equipment
CN112073774A (en) Image quality processing method and display device
CN112399213A (en) Display device and remote controller key multiplexing method
CN112073795B (en) Video data processing method and device and display equipment
CN116260999A (en) Display device and video communication data processing method
CN112399220B (en) Camera physical switch locking state display method and display equipment
CN112068741B (en) Display device and display method for Bluetooth switch state of display device
WO2021031620A1 (en) Display device and backlight brightness adjustment method
CN111385631B (en) Display device, communication method and storage medium
CN112399235B (en) Camera shooting effect enhancement method and display device of intelligent television
CN112399233A (en) Display device and position self-adaptive adjusting method of video chat window
CN111078926A (en) Method for determining portrait thumbnail image and display equipment
CN112783380A (en) Display apparatus and method
CN113448529B (en) Display apparatus and volume adjustment method
CN112073663B (en) Audio gain adjusting method, video chat method and display device
CN112073803B (en) Sound reproduction method and display device
CN112073777A (en) Voice interaction method and display device
CN112073776A (en) Voice control method and display device
CN112073808A (en) Color space switching method and display device
CN112073763B (en) Display equipment
CN113645502B (en) Method for dynamically adjusting control and display device
CN112399223B (en) Method for improving moire fringe phenomenon and display device
CN113630633B (en) Display device and interaction control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant