CN113973216A - Video collection generation method and display device - Google Patents
- Publication number
- CN113973216A (application CN202010710550.8A)
- Authority
- CN
- China
- Prior art keywords
- video
- controller
- display device
- preset time
- segments
- Prior art date
- Legal status: Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Abstract
The present application relates to the field of intelligent devices and video detection technologies, and in particular, to a video album generation method and a display device. It can, to a certain extent, solve the problems that video segments cannot be acquired automatically, editing is slow, splicing is error-prone, and video highlights cannot be generated intelligently. The display device includes: a camera; a microphone; a display screen for displaying a user interface; and a first controller configured to: in response to a received trigger signal, control the camera to acquire video clips; perform image recognition on the video clips within a preset time period to obtain a first video set, where the first video set is a set of video clips containing the same element; and splice the video clips in the first video set into a first video album file.
Description
Technical Field
The present application relates to the field of intelligent devices and video detection technologies, and in particular, to a method for generating a video album and a display device.
Background
A video album assembles different videos, or video segments with similar content, into a single video. For example, a video album about a pet may be obtained by stitching together videos of the pet's indoor activities at different times.

In some implementations of video albums, a user needs a professional photography tool to obtain video clips, must watch and screen each video, and must manually cut the videos before splicing them into the album.

However, when the user cannot operate the camera, when the number of video segments is large, or when the video scenes are complex, the user may fail to acquire the clips at all, editing is slow, and splicing is prone to omissions and errors.
Disclosure of Invention
In order to solve the problems that video clips cannot be acquired automatically, editing is slow, splicing is error-prone, and video highlights cannot be generated intelligently, the present application provides a video album generation method and a display device.

The embodiments of the present application are implemented as follows:
A first aspect of the embodiments of the present application provides a display device, including: a camera; a microphone; a display screen for displaying a user interface; and a first controller configured to: in response to a received trigger signal, control the camera to acquire video clips; perform image recognition on the video clips within a preset time period to obtain a first video set, where the first video set is a set of video clips containing the same element; and splice the video clips in the first video set into a first video album file.

A second aspect of the embodiments of the present application provides a video album generation method, including: in response to a received trigger signal, controlling the camera to acquire video clips; performing image recognition on the video clips within a preset time period to obtain a first video set, where the first video set is a set of video clips containing the same element; and splicing the video clips in the first video set into a first video album file.

A third aspect of the embodiments of the present application provides a display device, including: a camera; a microphone; a display screen for displaying a user interface; and a first controller configured to: in response to a received trigger signal, control the camera to acquire video clips containing a shooting target within a preset time period to obtain a first video set; and splice the video clips in the first video set to generate a first video album.

A fourth aspect of the embodiments of the present application provides a video album generation method, including: in response to a received trigger signal, controlling the camera to acquire video clips containing a shooting target within a preset time period to obtain a first video set; and splicing the video clips in the first video set to generate a first video album.
The beneficial effects of the present application are as follows: constructing the first video set classifies the video clips; constructing a first video sequence screens out the clips that meet the requirements; constructing a target video enables a second round of screening; deleting the video clips after the album file is obtained frees the television's storage resources; constructing a preset time point and a preset time period automates the acquisition of album files; controlling the camera to dynamically track the shooting target ensures that valid clips are captured; and constructing a first threshold and a second threshold bounds the length of the album file. Together, these allow video clips to be acquired automatically, speed up editing, reduce video-splicing errors, and generate video album files intelligently.
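For concreteness, the following is a minimal Python sketch of the flow of the second aspect. The `Clip` record, its `elements` label set, and the idea that an external tool (e.g. an ffmpeg concat over the clip paths) performs the final splice are illustrative assumptions, not part of the patent text.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Set

@dataclass
class Clip:
    path: str              # recorded file on local storage or NAS (assumed layout)
    recorded_at: datetime  # capture time of the clip
    elements: Set[str]     # labels from image recognition, e.g. {"pet", "smile"}

def build_first_video_set(clips: List[Clip], element: str,
                          start: datetime, end: datetime) -> List[Clip]:
    """Select clips in the preset time period that contain the same element,
    then order them by recording time; a splicing step (e.g. an ffmpeg concat
    of clip.path entries) would consume the returned list to produce the
    first video album file."""
    selected = [c for c in clips
                if start <= c.recorded_at <= end and element in c.elements]
    selected.sort(key=lambda c: c.recorded_at)  # the sequencing step
    return selected
```

The first/second thresholds mentioned above would simply bound `len(selected)` or the total duration of the returned list before splicing.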
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus according to an embodiment;
fig. 2 is a block diagram exemplarily showing a hardware configuration of a display device 200 according to an embodiment;
fig. 3 is a block diagram exemplarily showing a hardware configuration of the control apparatus 100 according to the embodiment;
fig. 4 is a diagram exemplarily showing a functional configuration of the display device 200 according to the embodiment;
fig. 5a schematically shows a software configuration in the display device 200 according to an embodiment;
fig. 5b schematically shows a configuration of an application in the display device 200 according to an embodiment;
FIG. 6A is a schematic diagram of a television application UI according to an embodiment of the application;
- FIG. 6B is a schematic diagram of a UI for selecting the video highlight application according to an embodiment of the present application;
FIG. 6C is a schematic diagram illustrating a UI after video highlights are generated according to an embodiment of the application;
FIG. 6D shows a schematic diagram of a UI after generation of a video highlight according to another embodiment of the present application;
FIG. 6E is a schematic diagram of a UI for playing a first video album according to an embodiment of the present application;
fig. 7 is a schematic flow chart illustrating a video highlight generation method according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a video album generation method according to another embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, but not all the embodiments.
All other embodiments that a person skilled in the art can derive from the exemplary embodiments shown in the present application without inventive effort shall fall within the scope of protection of the present application. Moreover, while the disclosure herein is presented through one or more exemplary embodiments, it should be understood that each aspect of the disclosure can also be utilized separately and independently from the other aspects to constitute a complete solution.
It should be understood that the terms "first," "second," "third," and the like in the description and in the claims of the present application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances and can be implemented in sequences other than those illustrated or otherwise described herein with respect to the embodiments of the application, for example.
Furthermore, the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module" as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
The term "remote control" as used in this application refers to a component of an electronic device, such as the display device disclosed in this application, that is typically wirelessly controllable over a short range of distances. Typically using infrared and/or Radio Frequency (RF) signals and/or bluetooth to connect with the electronic device, and may also include WiFi, wireless USB, bluetooth, motion sensor, etc. For example: the hand-held touch remote controller replaces most of the physical built-in hard keys in the common remote control device with the user interface in the touch screen.
The term "gesture" as used in this application refers to a user's behavior through a change in hand shape or an action such as hand motion to convey a desired idea, action, purpose, or result.
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the display device 200 through the mobile terminal 300 and the control apparatus 100.
The control apparatus 100 may control the display device 200 wirelessly or in another wired manner, for example as a remote controller using infrared protocol communication, Bluetooth protocol communication, or other short-range communication. The user may control the display apparatus 200 by inputting user commands through keys on the remote controller, voice input, control panel input, and the like. For example, the user can input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, power on/off key, and so on, to control the display device 200.
In some embodiments, mobile terminals, tablets, computers, laptops, and other smart devices may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device. The application, through configuration, may provide the user with various controls in an intuitive User Interface (UI) on a screen associated with the smart device.
For example, the mobile terminal 300 and the display device 200 may each install a software application, so that connection and communication between them are implemented through a network communication protocol, achieving one-to-one control operation and data communication. For instance, a control instruction protocol can be established between the mobile terminal 300 and the display device 200, a remote-control keyboard can be synchronized to the mobile terminal 300, and the display device 200 can be controlled through the user interface on the mobile terminal 300. Audio and video content displayed on the mobile terminal 300 can also be transmitted to the display device 200 to realize a synchronized display function.

As also shown in fig. 1, the display apparatus 200 also performs data communication with the server 400 through various communication means. The display device 200 may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display apparatus 200. Illustratively, the display device 200 receives software program updates or accesses a remotely stored digital media library by sending and receiving information and exchanging electronic program guide (EPG) data. The server 400 may be one group or multiple groups of servers, and may be of one or more types. The server 400 also provides other web service contents such as video on demand and advertisement services.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device. The particular display device type, size, resolution, etc. are not limiting, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired.
The display apparatus 200 may additionally provide an intelligent network tv function that provides a computer support function in addition to the broadcast receiving tv function. Examples include a web tv, a smart tv, an Internet Protocol Tv (IPTV), and the like.
A hardware configuration block diagram of the display device 200 according to an exemplary embodiment is shown in fig. 2. As shown in fig. 2, the display device 200 includes a controller 210, a tuning demodulator 220, a communication interface 230, a detector 240, an input/output interface 250, a video processor 260-1, an audio processor 260-2, a display 280, an audio output 270, a memory 290, a power supply, and an infrared receiver.
The display 280 receives image signals from the video processor 260-1 and displays video content, images, and components of the menu manipulation interface. The display 280 includes a display screen assembly for presenting pictures and a driving assembly for driving the display of images. The displayed video content may come from broadcast television content, or from broadcast signals received via wired or wireless communication protocols; alternatively, various image contents sent from a network server may be received via network communication protocols and displayed.
Meanwhile, the display 280 simultaneously displays a user manipulation UI interface generated in the display apparatus 200 and used to control the display apparatus 200.
The driving component drives the display according to the type of the display 280. Alternatively, if the display 280 is a projection display, it may also comprise a projection device and a projection screen.
The communication interface 230 is a component for communicating with an external device or an external server according to various communication protocol types. For example, the communication interface 230 may be a WiFi chip 231, a Bluetooth communication protocol chip 232, a wired Ethernet communication protocol chip 233, or another network communication protocol chip or near-field communication protocol chip, as well as an infrared receiver (not shown).
The display apparatus 200 may establish transmission and reception of control signals and data signals with an external control apparatus or content-providing apparatus through the communication interface 230. The infrared receiver is an interface device for receiving infrared control signals from the control apparatus 100 (e.g., an infrared remote controller).
The detector 240 is a component used by the display device 200 to collect signals from the external environment or to interact with the outside. The detector 240 includes a light receiver 242, a sensor for collecting ambient light intensity, so that display parameters can adapt to changes in the ambient light.

The image acquisition device 241, such as a camera, may be used to collect external environment scenes, to collect user attributes or interaction gestures, to adaptively change display parameters, and to recognize user gestures so as to implement interaction with the user.

In some other exemplary embodiments, the detector 240 may include a temperature sensor; by sensing the ambient temperature, the display device 200 may adaptively adjust the display color temperature of the image. For example, the display apparatus 200 may be adjusted toward a cool tone when the ambient temperature is high, or toward a warm tone when the ambient temperature is low.

In other exemplary embodiments, the detector 240 may include a sound collector, such as a microphone, which may be used to receive the user's voice, including voice signals carrying control instructions for the display device 200, or to collect ambient sound for identifying the ambient scene type, so that the display device 200 can adapt to the ambient noise.
Under the control of the controller 210, the input/output interface 250 handles data transmission between the display device 200 and other external devices, such as receiving video and audio signals or command instructions from an external device.
Input/output interface 250 may include, but is not limited to, the following: any one or more of high definition multimedia interface HDMI interface 251, analog or data high definition component input interface 253, composite video input interface 252, USB input interface 254, RGB ports (not shown in the figures), etc.
In some other exemplary embodiments, the input/output interface 250 may also form a composite input/output interface with the above-mentioned plurality of interfaces.
The tuning demodulator 220 receives broadcast television signals in a wired or wireless manner, may perform processing such as amplification, frequency mixing, and resonance, and demodulates, from among a plurality of wireless or wired broadcast television signals, the television audio/video signals carried on the channel frequency selected by the user, as well as the EPG data signals.

Under the control of the controller 210, the tuner-demodulator 220 responds to the television signal frequency selected by the user and the television signal carried on that frequency.

The tuner-demodulator 220 may receive signals in various ways according to the broadcasting system of the television signal, such as terrestrial broadcast, cable broadcast, satellite broadcast, or internet broadcast; and, according to the modulation type, the modulation may be digital or analog. Depending on the type of television signal received, both analog and digital signals may be processed.
In other exemplary embodiments, the tuner/demodulator 220 may be in an external device, such as an external set-top box. In this way, the set-top box outputs television audio/video signals after modulation and demodulation, and the television audio/video signals are input into the display device 200 through the input/output interface 250.
The video processor 260-1 is configured to receive an external video signal, and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image synthesis, and the like according to a standard codec protocol of the input signal, so as to obtain a signal that can be displayed or played on the direct display device 200.
Illustratively, the video processor 260-1 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module demultiplexes the input audio/video data stream; for example, if an MPEG-2 stream is input, the demultiplexing module demultiplexes it into a video signal and an audio signal.
And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like.
The image synthesis module superimposes and mixes the GUI signal, generated by the graphics generator in response to user input, with the scaled video image, to produce an image signal for display.

The frame rate conversion module converts the input video frame rate, for example converting a 60 Hz frame rate into a 120 Hz or 240 Hz frame rate, typically by frame interpolation (a toy illustration follows this list).
The display format module converts the received video output signal after frame rate conversion into a signal conforming to the display format, such as an RGB data signal output.
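As a toy illustration of the frame-rate conversion step, the sketch below blends adjacent frames to double the rate. This naive midpoint blend is an assumption for illustration only; production converters use motion-compensated interpolation.

```python
import numpy as np

def double_frame_rate(frames: list) -> list:
    """Convert, e.g., 60 Hz to 120 Hz by inserting one interpolated frame
    between each pair of source frames (uint8 arrays of shape H x W x 3)."""
    if not frames:
        return []
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        # Upcast before averaging to avoid uint8 overflow; a midpoint blend
        # stands in here for true motion-compensated interpolation.
        mid = ((a.astype(np.uint16) + b.astype(np.uint16)) // 2).astype(np.uint8)
        out.append(mid)
    out.append(frames[-1])
    return out
```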
The audio processor 260-2 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, amplification processing, and the like to obtain an audio signal that can be played in the speaker.
In other exemplary embodiments, video processor 260-1 may comprise one or more chips. The audio processor 260-2 may also comprise one or more chips.
And, in other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips or may be integrated together with the controller 210 in one or more chips.
The audio output 270 receives the sound signal output by the audio processor 260-2 under the control of the controller 210. It includes the speaker 272 carried by the display device 200 itself, as well as an external sound output terminal 274 that can output to a sound-producing device of an external apparatus, such as an external sound interface or an earphone interface.
The power supply provides power supply support for the display device 200 from the power input from the external power source under the control of the controller 210. The power supply may include a built-in power supply circuit installed inside the display device 200, or may be a power supply interface installed outside the display device 200 to provide an external power supply in the display device 200.
A user input interface for receiving an input signal of a user and then transmitting the received user input signal to the controller 210. The user input signal may be a remote controller signal received through an infrared receiver, and various user control signals may be received through the network communication module.
For example, the user inputs a user command through the remote controller 100 or the mobile terminal 300, the user input interface responds to the user input through the controller 210 according to the user input, and the display device 200 responds to the user input.
In some embodiments, a user may enter a user command on a Graphical User Interface (GUI) displayed on the display 280, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
The controller 210 controls the operation of the display apparatus 200 and responds to the user's operation through various software control programs stored in the memory 290.
As shown in fig. 2, the controller 210 includes a RAM 213, a ROM 214, a graphics processor 216, a CPU processor 212, a communication interface 218 (e.g., a first interface 218-1 through an nth interface 218-n), and a communication bus. The RAM 213, the ROM 214, the graphics processor 216, the CPU processor 212, and the communication interface 218 are connected via the bus.

The ROM 214 stores instructions for various system boots. When the display apparatus 200 is powered on upon receipt of the power-on signal, the CPU processor 212 executes the system boot instructions in the ROM, copies the operating system stored in the memory 290 to the RAM 213, and starts running the boot operating system. After the operating system has started, the CPU processor 212 copies the various application programs in the memory 290 to the RAM 213 and then starts running the various application programs.
The graphics processor 216 generates various graphics objects, such as icons, operation menus, and graphics displayed for user input instructions. It comprises an arithmetic unit, which performs operations on the various interactive instructions input by the user and displays the various objects according to their display attributes, and a renderer, which generates the various objects based on the arithmetic unit's output and displays the rendered result on the display 280.
The CPU processor 212 executes operating system and application program instructions stored in the memory 290, and executes various application programs, data, and contents according to the various interactive instructions received from outside, so as to finally display and play various audio/video contents.

In some exemplary embodiments, the CPU processor 212 may include a plurality of processors: one main processor and one or more sub-processors. The main processor performs some operations of the display apparatus 200 in the pre-power-up mode and/or displays the screen in normal mode; the sub-processors handle operations in standby mode and the like.
The controller 210 may control the overall operation of the display apparatus 200. For example, in response to receiving a user command for selecting a UI object to be displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
Wherein the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object, such as: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to the icon. The user command for selecting the UI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch pad, etc.) connected to the display apparatus 200 or a voice command corresponding to a voice spoken by the user.
The memory 290 includes a memory for storing various software modules for driving the display device 200. Such as: various software modules stored in memory 290, including: the system comprises a basic module, a detection module, a communication module, a display control module, a browser module, various service modules and the like.
The basic module is a bottom-layer software module for signal communication among the various hardware components in the display device 200 and for sending processing and control signals to the upper-layer modules. The detection module collects various information from the sensors or user input interfaces, and the management module performs digital-to-analog conversion and analysis management.
For example: the voice recognition module comprises a voice analysis module and a voice instruction database module. The display control module is a module for controlling the display 280 to display image content, and may be used to play information such as multimedia image content and UI interface. And the communication module is used for carrying out control and data communication with external equipment. And the browser module is used for executing a module for data communication between browsing servers. And the service module is used for providing various services and modules including various application programs.
Meanwhile, the memory 290 is also used to store visual effect maps and the like for receiving external data and user data, images of respective items in various user interfaces, and a focus object.
A block diagram of the configuration of the control apparatus 100 according to an exemplary embodiment is exemplarily shown in fig. 3. As shown in fig. 3, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory 190, and a power supply 180.
The control apparatus 100 is configured to control the display device 200: it can receive the user's input operation instructions and convert them into instructions that the display device 200 can recognize and respond to, acting as an intermediary for interaction between the user and the display device 200. For example, when the user operates the channel up/down keys on the control apparatus 100, the display device 200 responds to the channel up/down operation.
In some embodiments, the control device 100 may be a smart device. Such as: the control apparatus 100 may install various applications that control the display apparatus 200 according to user demands.
In some embodiments, as shown in fig. 1, a mobile terminal 300 or other intelligent electronic device may function similar to the control device 100 after installing an application that manipulates the display device 200. Such as: the user may implement the functions of controlling the physical keys of the device 100 by installing applications, various function keys or virtual buttons of a graphical user interface available on the mobile terminal 300 or other intelligent electronic device.
The controller 110 includes a processor 112, a RAM 113, a ROM 114, a communication interface 130, and a communication bus. The controller 110 controls the running and operation of the control device 100, the communication and coordination among its internal components, and the external and internal data processing functions.
The communication interface 130 enables communication of control signals and data signals with the display apparatus 200 under the control of the controller 110. Such as: the received user input signal is transmitted to the display apparatus 200. The communication interface 130 may include at least one of a WiFi chip, a bluetooth module, an NFC module, and other near field communication modules.
The user input/output interface 140 includes, on the input side, at least one of a microphone 141, a touch pad 142, a sensor 143, keys 144, and other input interfaces. For example, the user can input user instructions through actions such as voice, touch, gestures, and key presses; the input interface converts the received analog signal into a digital signal, converts the digital signal into a corresponding instruction signal, and sends it to the display device 200.

The output interface includes an interface that transmits the received user instruction to the display apparatus 200. In some embodiments, it may be an infrared interface or a radio-frequency interface. For example, with an infrared signal interface, the user input instruction is converted into an infrared control signal according to the infrared control protocol and sent to the display device 200 through the infrared sending module. As another example, with a radio-frequency signal interface, the user input instruction is converted into a digital signal, modulated according to the radio-frequency control signal modulation protocol, and then sent to the display device 200 through the radio-frequency transmitting terminal.

In some embodiments, the control device 100 includes at least one of the communication interface 130 and the output interface. With the communication interface 130 configured, e.g., with WiFi, Bluetooth, or NFC modules, the user input commands can be encoded and sent to the display device 200 via the WiFi, Bluetooth, or NFC protocol.
The memory 190 stores the various operation programs, data, and applications that drive and control the control apparatus 100, under the control of the controller 110. The memory 190 may store various control signal commands input by the user.
The power supply 180 provides operational power support for the various elements of the control device 100 under the control of the controller 110, and may include a battery and associated control circuitry.
Fig. 4 is a diagram schematically illustrating a functional configuration of the display device 200 according to an exemplary embodiment. As shown in fig. 4, the memory 290 is used to store an operating system, an application program, contents, user data, and the like, and performs system operations for driving the display device 200 and various operations in response to a user under the control of the controller 210. The memory 290 may include volatile and/or nonvolatile memory.
The memory 290 is specifically configured to store an operating program for driving the controller 210 in the display device 200, and to store various application programs installed in the display device 200, various application programs downloaded by a user from an external device, various graphical user interfaces related to the applications, various objects related to the graphical user interfaces, user data information, and internal data of various supported applications. The memory 290 is used to store system software such as an OS kernel, middleware, and applications, and to store input video data and audio data, and other user data.
The memory 290 is specifically used for storing drivers and related data such as the audio/video processors 260-1 and 260-2, the display 280, the communication interface 230, the tuning demodulator 220, the input/output interface of the detector 240, and the like.
In some embodiments, memory 290 may store software and/or programs, software programs for representing an Operating System (OS) including, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. For example, the kernel may control or manage system resources, or functions implemented by other programs (e.g., the middleware, APIs, or applications), and the kernel may provide interfaces to allow the middleware and APIs, or applications, to access the controller to implement controlling or managing system resources.
The memory 290, for example, includes a broadcast receiving module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, an audio control module 2906, an external instruction recognition module 2907, a communication control module 2908, a light receiving module 2909, a power control module 2910, an operating system 2911, and other applications 2912, a browser module, and the like. The controller 210 performs functions such as: a broadcast television signal reception demodulation function, a television channel selection control function, a volume selection control function, an image control function, a display control function, an audio control function, an external instruction recognition function, a communication control function, an optical signal reception function, an electric power control function, a software control platform supporting various functions, a browser function, and the like.
A block diagram of a configuration of a software system in a display device 200 according to an exemplary embodiment is exemplarily shown in fig. 5 a.
As shown in fig. 5a, the operating system 2911 includes operating software for handling various basic system services and performing hardware-related tasks, and acts as an intermediary between application programs and hardware components for data processing. In some embodiments, parts of the operating system kernel may contain a series of software to manage the display device hardware resources and provide services to other programs or software code.
In other embodiments, portions of the operating system kernel may include one or more device drivers, which may be a set of software code in the operating system that assists in operating or controlling the devices or hardware associated with the display device. The drivers may contain code that operates the video, audio, and/or other multimedia components. Examples include a display screen, a camera, Flash, WiFi, and audio drivers.
The accessibility module 2911-1 is configured to modify or access the application program to achieve accessibility and operability of the application program for displaying content.
A communication module 2911-2 for connection to other peripherals via associated communication interfaces and a communication network.
The user interface module 2911-3 is configured to provide an object for displaying a user interface, so that each application program can access the object, and user operability can be achieved.
Control applications 2911-4 for controllable process management, including runtime applications and the like.
The event transmission system 2914 may be implemented within the operating system 2911 or within the application program 2912; in some embodiments it is implemented partly in each. It listens for various user input events and, based on the recognition of the various types of events or sub-events, invokes handlers that perform one or more sets of predefined operations.
The event monitoring module 2914-1 is configured to monitor an event or a sub-event input by the user input interface.
The event identification module 2914-2 holds definitions of the various types of events for the various user input interfaces, identifies the events or sub-events, and dispatches them to the processes that execute the corresponding one or more sets of handlers.

An event or sub-event refers to an input detected by one or more sensors in the display device 200, or an input from an external control device (e.g., the control apparatus 100): for example, various voice sub-event inputs, gesture inputs via gesture recognition, and sub-event inputs via remote-control key commands from the control device. Illustratively, sub-events from the remote control include, but are not limited to, one or a combination of up/down/left/right key presses, the OK key, long key presses, and the like, as well as non-physical-key operations such as move, hold, and release.
The interface layout manager 2913, directly or indirectly receiving the user input events or sub-events monitored by the event transmission system 2914, updates the layout of the user interface, including but not limited to the position of each control or child control in the interface, the size, position, and level of containers, and other operations related to interface layout.
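The listener/handler structure described above can be pictured with a toy dispatcher; the class, method, and event names below are illustrative assumptions, not the patent's module names.

```python
from collections import defaultdict
from typing import Any, Callable, DefaultDict, List

class EventBus:
    """Toy event transmission system: handlers register per event type
    (monitoring), and dispatch routes a recognized event to them."""
    def __init__(self) -> None:
        self._handlers: DefaultDict[str, List[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[Any], None]) -> None:
        self._handlers[event_type].append(handler)  # register a listener

    def dispatch(self, event_type: str, payload: Any) -> None:
        for handler in self._handlers[event_type]:
            handler(payload)  # run the predefined operation for this event type

bus = EventBus()
bus.subscribe("remote_key", lambda key: print("key pressed:", key))
bus.dispatch("remote_key", "OK")  # e.g. forwarded on to the interface layout manager
```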
As shown in fig. 5b, the application layer 2912 contains various applications that may also be executed at the display device 200. The application may include, but is not limited to, one or more applications such as: live television applications, video-on-demand applications, media center applications, application centers, gaming applications, and the like.
The live television application program can provide live television through different signal sources. For example, a live television application may provide television signals using input from cable television, radio broadcasts, satellite services, or other types of live television services. And, the live television application may display video of the live television signal on the display device 200.
A video-on-demand application may provide video from different storage sources. Unlike live television applications, video on demand provides a video display from some storage source. For example, the video on demand may come from a server side of the cloud storage, from a local hard disk storage containing stored video programs.
The media center application program can provide various applications for playing multimedia contents. For example, a media center, which may be other than live television or video on demand, may provide services that a user may access to various images or audio through a media center application.
The application program center can provide and store various application programs. The application may be a game, an application, or some other application associated with a computer system or other device that may be run on the smart television. The application center may obtain these applications from different sources, store them in local storage, and then be operable on the display device 200.
The embodiment of the application can be applied to various types of display devices (including but not limited to smart televisions, set-top boxes and the like). The technical solution will be explained below in relation to the UI related to the generation of video highlights at the tv-side.
Fig. 6A to 6E show schematic diagrams of a video highlight operation interface in a television according to an embodiment of the present application.
Fig. 6A shows a schematic diagram of a UI of a television application according to an embodiment of the present application.
The figure shows the application UI displayed on the television screen. For example, the UI includes four applications installed on the television: news headlines, on-demand cinema, video highlights, and karaoke. Different applications, or other function buttons, can be selected by moving the focus on the display screen with a controller such as a remote control.
In some embodiments, the television display, while presenting the application UI interface, is also configured to present other interactive elements, which may include, for example, television home page controls, search controls, message button controls, mailbox controls, browser controls, favorites controls, signal bar controls, and the like.
To improve the convenience and appearance of the television UI, in some embodiments the first controller of the display device controls the television UI in response to operations on the interactive elements. For example, when a user clicks a search control through a controller such as a remote control, the search UI can be displayed on top of other UIs; that is, the UI of the application component to which the interactive element is mapped can be enlarged, or run and displayed full-screen.
In some embodiments, the interactive element may also be operated by a sensor, which may be, but is not limited to, an acoustic input sensor, such as a microphone, which may detect a voice command including an indication of the desired interactive element. For example, a user may identify a desired interactive element, such as a search control, using a "video highlights" or any other suitable identification, and may also describe a desired action to be performed in relation to the desired interactive element. The first controller may recognize the voice command and submit data characterizing the interaction to the UI or its processing component or engine.
Fig. 6B shows a schematic diagram of a UI for selecting the video highlight application according to an embodiment of the present application.

In some embodiments, the user may control the focus of the display screen via a remote control to select the video highlight application, so that its icon is highlighted on the display screen; then, by clicking the highlighted icon, the application mapped to the icon is opened.
It should be noted that the icons and the texts on the UI interface in the embodiment of the present application are only used as examples to describe the video highlight generation technical solution, and the icons and the texts on the UI interface in the drawings may also be implemented as other contents, and the drawings in the present application are not specifically limited.
Fig. 6C shows a UI diagram after the video highlights are generated according to the embodiment of the present application.
In some embodiments, the first controller of the display device provided herein generates a plurality of video highlights according to the difference of the elements, as shown in fig. 6C.
The first video album file is generated with pets as the element, the second with highlight moments, the third with smiles, and the fourth with other content as its elements.

In some embodiments, the video album icon includes a thumbnail and an edit area defined to display the element type, generation time, or other information of the album. For the selected first video album file, the first controller highlights it and renders the other video albums grey.

It should be noted that the element types of the video albums in the drawings are only examples: the technical scheme and the display device for generating video album files are explained with pets, highlight moments, and smiles as topics, and video albums of element types other than these are grouped into the fourth, "other" album file. In some embodiments, the subject type of the first video album file may also be "father", "stranger", or a designated person; the first controller of the display device builds the album file by means of an image recognition algorithm.
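One way to picture how the four album buckets could be populated is sketched below, reusing the `Clip` record from the earlier sketch; the theme label names are illustrative assumptions.

```python
from collections import defaultdict
from typing import Dict, List

THEMES = {"pet", "highlight_moment", "smile"}  # illustrative element labels

def group_by_element(clips: List["Clip"]) -> Dict[str, List["Clip"]]:
    """Bucket clips into themed album sets; clips whose recognized elements
    match none of the known themes fall into the 'other' album."""
    albums: Dict[str, List["Clip"]] = defaultdict(list)
    for clip in clips:
        matched = clip.elements & THEMES
        for label in (matched or {"other"}):  # default bucket if nothing matched
            albums[label].append(clip)
    return albums
```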
Fig. 6D shows a UI diagram after generation of a video highlight according to another embodiment of the present application.
And a first controller of the display equipment generates a video collection file according to the selected part of videos which are obtained from the video clips and have uniform time sequence.
In some embodiments, the generation date of the first video album file is June 2, that of the second is June 3, and that of the third is June 4. The first controller automatically generates and displays video highlight files on the UI at regular time intervals, which may be implemented, for example, as 8:30 a.m. each day.
In some embodiments, the element type of the first video highlight file may be preset, or by default its constituent video clips are videos of moving targets captured opportunistically by the television camera. For example, clips of an active person and an active pet captured by the camera may together constitute the first video album file.
In some embodiments, the display device is configured so that its display screen shows the latest 30 video highlights; when the number of generated video highlight files exceeds 30, the first controller deletes the files whose generation times are earliest.
In some embodiments, after the current day's video highlight file is generated, the first controller actively deletes the video segments it was built from, so as to free the television's storage resources.
In some embodiments, the video clips collected by the display device and the generated video album files may be stored on a NAS (Network Attached Storage). For example, the NAS may be allowed to hold at most 30 video album files to conserve storage; in some embodiments, if NAS space runs low, the video album files with the earliest generation times may be deleted.
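By way of illustration only, such a retention policy could be sketched in Python as follows; the single flat directory, the ".mp4" extension, and the use of file modification time as a stand-in for generation time are assumptions, not details of this application:

```python
import os

MAX_ALBUMS = 30  # retention cap from this embodiment


def prune_old_albums(album_dir: str, max_albums: int = MAX_ALBUMS) -> None:
    """Keep only the most recently generated video album files."""
    files = [os.path.join(album_dir, name)
             for name in os.listdir(album_dir) if name.endswith(".mp4")]
    # Newest first; modification time stands in for generation time.
    files.sort(key=os.path.getmtime, reverse=True)
    for stale in files[max_albums:]:
        os.remove(stale)  # drop the files with the earliest generation times
```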
Fig. 7 shows a schematic flow chart of a video highlight generation method according to an embodiment of the present application.
In step 701, the camera is controlled to acquire a video clip in response to the received trigger signal.
The display device provided by the present application includes a camera, a microphone, a display screen and a first controller. The camera is the television's image collector and can be used to capture the external environment; the display screen displays the user interface; and the first controller controls the camera to acquire video clips within the camera's monitoring range according to the trigger signal received by the display device.
In some embodiments, the trigger signal comprises a signal indicating movement of an object within the detection range monitored by the camera, or a sound signal within the detection range monitored by the microphone.
The first controller controls the camera to capture a preset moving target and record the corresponding video clip. When the television camera detects a moving target, it starts video recording to obtain a video clip. For example, the camera and the microphone monitor object movement and sound within the detection range, and the first controller adjusts the camera so that the shooting target is located at the center of the clip's picture. After the camera captures a moving image in its field of view, the first controller can adjust the camera's orientation and angle to achieve automatic focus-following of the shooting target, keeping it dynamically centered in the picture throughout the recording stage and improving the image quality of the video clip.
As another example, the microphone is the television's sound collector, used to receive user speech, control instructions, or ambient sound. For sound detected by the microphone, the first controller adjusts the orientation and angle of the camera according to a sound source localization algorithm so that the shooting target enters the camera's field of view and is accurately captured. In some embodiments, different users are first distinguished by collecting their voiceprint features: because every person's vocal apparatus and speech frequencies differ, each person forms a unique voiceprint.
For another example, the voiceprint feature can be extracted from the audio collected by the microphone using, for example, Mel-Frequency Cepstral Coefficients (MFCCs), short-time energy, short-time average amplitude, short-time average zero-crossing rate, formants, or Linear Predictive Cepstral Coefficients (LPCCs). The speaker's position in space is then determined using sound source localization, for example by measuring the arrival-time delays of the audio across multiple sound collection modules. Finally, based on the localized position, the first controller adjusts the camera so that the speaker is centered in the camera's picture; the adjustment includes changing the shooting angle and/or the focal length. From the localized position, the direction and distance of the speaker relative to the camera can be determined.
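A minimal sketch of the feature-extraction step, using the librosa library; pooling mean MFCCs together with the short-time zero-crossing rate and an RMS energy proxy into one fixed-length vector is an assumed simplification of the voiceprint described above:

```python
import librosa
import numpy as np


def extract_voiceprint(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Summarize an utterance as one fixed-length feature vector."""
    y, sr = librosa.load(wav_path, sr=16000)                # mono audio at 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    zcr = librosa.feature.zero_crossing_rate(y)             # short-time zero-crossing rate
    rms = librosa.feature.rms(y=y)                          # short-time energy proxy
    # Average each feature over time and concatenate into a single vector.
    return np.concatenate([mfcc.mean(axis=1), zcr.mean(axis=1), rms.mean(axis=1)])
```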
When acquiring images, particularly images of a speaker, the camera is adjusted to obtain a clear, easily recognized image of the speaker. The adjustment may change the shooting angle so that the camera is aimed at the speaker corresponding to the audio; it may change the focal length so that the speaker's portrait occupies a sufficient proportion of the captured image and a viewer can accurately identify the speaker; or it may change both at once. Whether the angle, the focal length, or both need adjusting is judged from the determined distance and direction. The shooting target may also be a pet or another element.
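A rough sketch of the adjustment step under stated assumptions: the `camera` object and its `set_pan`/`set_zoom` methods are a hypothetical pan/zoom driver, and the distance-to-zoom heuristic is purely illustrative:

```python
def aim_camera(azimuth_deg: float, distance_m: float, camera) -> None:
    """Aim a hypothetical pan/zoom camera at a localized speaker."""
    camera.set_pan(azimuth_deg)  # turn toward the sound source direction
    # Heuristic: zoom in with distance so the portrait keeps a recognizable
    # share of the frame, clamped to an assumed 1x-4x lens range.
    zoom = max(1.0, min(4.0, distance_m / 1.5))
    camera.set_zoom(zoom)
```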
In some embodiments, the first controller stops acquiring the video clip when the camera is invoked by another application.
When the camera is called by another application, the first controller stops acquiring video clips; when the video collection function is on, the first controller controls the camera to acquire them. With the television's video collection function in the on state, the first controller can control the camera to automatically collect and store video clips; when it detects that the camera has been called by another application, it stops or pauses acquisition. For example, when the camera is used for a video call, the video highlights application pauses or stops capturing clips. If the video collection function of the television is off, the first controller does not control the camera to record clips even when the camera is unoccupied, and it stops the camera's monitoring activity.
With continued reference to fig. 7, in step 702, image recognition is performed on the video segments within a preset time period to obtain a first video set, where the first video set is a set of video segments containing the same elements.
Taking a first video collection file of the pet-cat type as an example, the first controller performs image recognition on the video segments acquired by the camera using an image recognition algorithm and takes all segments containing the pet cat as the first video set. The common content, here the pet cat, is called an element; that is, the first video set is the set of video segments containing the same element.
By identifying the video segments, the first controller may generate a plurality of video sets corresponding to different video compilation files. As shown in fig. 6C, the pet-type first video collection file corresponds to the first video set; the highlight-moment second video collection file corresponds to the second video set; the smile-type third video collection file corresponds to the third video set; and the other-type fourth video collection file corresponds to the fourth video set.
The first video set is a set of video segments containing the same element, which may be implemented as highlight moments, pets, smiles, persons, or others. For example, when the element of the first video collection file is a person, the first controller identifies video clips containing the person and builds the first video set, which can be implemented as a video set of the same person or of different persons.
In some embodiments, a video segment whose element is a highlight moment is identified, for example, by detecting the sound in the segment, i.e., determining that a segment containing audience cheering or applause has the highlight-moment element. As another example, it can be detected whether key frames in the segment contain a person, or whether the motion of an object follows a preset trajectory such as a flip or a preset standing posture. As yet another example, the segment's element can be judged a highlight moment by detecting whether the speech it contains matches a preset speech model, with the model set to phrases such as "awesome" or "well done".
In some embodiments, a segment whose element is a pet is identified by detecting whether an active pet appears in it; a segment whose element is a person, by detecting whether an active person appears; a segment whose element is a smile, by detecting whether a smiling face appears; and segments that belong to none of the pet, smile, person, or highlight-moment elements are classified into the "other" video set.
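For illustration, a rough per-clip classification sketch using OpenCV's stock Haar cascades; the cascades are stand-ins for whatever detectors an implementation would actually use, and pet and highlight-moment detection are omitted for brevity:

```python
import cv2

face_model = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_model = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")


def classify_clip(video_path: str) -> str:
    """Assign a clip to an element category by scanning its frames."""
    cap = cv2.VideoCapture(video_path)
    label = "other"
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_model.detectMultiScale(gray, 1.3, 5):
            label = "person"                       # an active person was seen
            roi = gray[y:y + h, x:x + w]
            if len(smile_model.detectMultiScale(roi, 1.7, 20)) > 0:
                cap.release()
                return "smile"                     # smile outranks plain person
    cap.release()
    return label
```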
In some embodiments, stitching the video segments in the first video set into a first video compilation file comprises the first controller: scoring the importance of the video segments in the first video set to obtain a first video sequence; sorting the first video sequence from high to low by importance score and setting the top preset splice number of video clips as target videos; and splicing the target videos in the first video sequence to generate the first video collection file.
Using a recognition algorithm, the first controller evaluates the importance of the video clips in the first video set and scores them to obtain a first video sequence; it then sorts the sequence from high to low by importance score and sets the top preset splice number of clips as target videos.
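A hedged sketch of this rank-and-splice step using the moviepy library; the `(path, score)` input format, the splice count, and the output file name are assumptions:

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips


def build_album(scored_clips, n_splice=5, out_path="album.mp4"):
    """scored_clips: list of (path, importance_score) pairs, i.e. the first
    video sequence. Keep the n_splice highest-scoring clips and join them."""
    ranked = sorted(scored_clips, key=lambda pair: pair[1], reverse=True)
    targets = [VideoFileClip(path) for path, _ in ranked[:n_splice]]
    concatenate_videoclips(targets).write_videofile(out_path)
```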
Taking the first video set whose element is the highlight moment as an example, suppose it contains 8 video segments. In some embodiments, the first controller scores importance by measuring the length of the cheering in each of the 8 segments (the longer the cheering, the higher the score), determines the top preset splice number of segments in the first video sequence as target videos, and splices them to generate the first video highlight file.
For another example, the first controller feeds the video segments into a trained neural network model, extracts their video-dimension features, and passes those features to a binary classification model to obtain each segment's highlight and non-highlight probabilities, yielding the first video sequence. The neural network can be implemented as an I3D model pre-trained on Kinetics-400, the largest dataset in the action recognition field with roughly 250,000 video clips, so the network can be trained sufficiently and effectively and acquires a degree of generalization ability. The binary classifier outputs 2-dimensional data giving the highlight and non-highlight probabilities of each segment; for example, an output of [0.8, 0.2] means the segment is a highlight moment with probability 0.8 and a non-highlight moment with probability 0.2.
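As an illustrative sketch only, the 2-way head over clip features could look like the following PyTorch fragment; the 1024-dimensional feature size and the separate backbone feature-extraction step are assumptions, not this application's actual model:

```python
import torch
import torch.nn as nn


class HighlightHead(nn.Module):
    """Binary head over per-clip features from a video backbone
    (e.g. an I3D network pre-trained on Kinetics-400)."""

    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 2)  # [highlight, non-highlight] logits

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.fc(feats), dim=-1)

# An output of tensor([[0.8, 0.2]]) reads as: highlight probability 0.8,
# non-highlight probability 0.2, matching the example above.
```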
In some embodiments, taking the first video set whose element is "other" as an example, the first controller may score importance by evaluating color feature values of the video segments: the higher the image quality feature value, the higher the importance score, yielding a first video sequence ordered from high to low. For example, the quality feature value is implemented as a weighted sum of the chrominance, luminance and saturation of the images in the video clip. The chrominance is the sum of the background chrominance and the foreground chrominance, where the background chrominance is that of the image's background area and the foreground chrominance that of its foreground area. The weights can be preset according to actual conditions, for example 0.5, 0.3 and 0.2 for chrominance, luminance and saturation respectively; the weighted sum then gives the image quality feature value of the video clip.
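A small sketch of the weighted quality score, with the caveat that mean HSV channel values are used here as simple stand-ins for the chrominance, luminance and saturation measures described above:

```python
import cv2
import numpy as np

W_CHROMA, W_LUMA, W_SAT = 0.5, 0.3, 0.2  # example weights from this embodiment


def quality_score(frame_bgr: np.ndarray) -> float:
    """Weighted image-quality feature of one frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[..., 0].mean() / 179.0  # OpenCV 8-bit hue spans 0..179
    sat = hsv[..., 1].mean() / 255.0
    val = hsv[..., 2].mean() / 255.0
    return W_CHROMA * hue + W_LUMA * val + W_SAT * sat
```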
As another example, for a first video set whose element is a person, a pet, or a smile, the importance score of a segment may be obtained by counting the frames in which the person, pet, or smile appears (the more frames, the higher the score); the top preset splice number of segments are then set as target videos and spliced to generate the first video collection file.
The first controller's processing of the first video set, and the subsequent steps, may be performed at fixed time intervals so that the television accumulates enough clip material. It may also be implemented to generate the first video compilation immediately upon receiving a user instruction. For example, the first controller may process the clips and generate the first video album within a preset time period after the user's first power-on each day; it may generate the first video highlights at a fixed time of day; or it may generate them at a preset time point each day while the television is powered on.
In some embodiments, the first controller performing image recognition on the video segments within a preset time period to obtain a first video set comprises: the first controller receives an input bright-screen instruction; when the bright-screen instruction is received earlier than a preset time point, it continues controlling the camera to acquire video clips; when the instruction is received later than the preset time point, it performs image recognition on the clips within the preset time period to obtain the first video set and splices the segments in the first video set into a first video collection file.
For example, with the preset time point configured as 20:00 and the preset time period as 6:00-22:00: when a user turns on the television at 19:00 with a remote control, the first controller receives a bright-screen instruction and, since it arrives earlier than the 20:00 preset time point, continues controlling the camera of the display device to acquire video clips. When a user turns on the television at 20:10, the instruction arrives later than the 20:00 preset time point, so the first controller performs image recognition on the clips collected within the 6:00-22:00 preset time period to obtain the first video set and splices its segments into a first video collection file.
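The timing decision in this example can be summarized in a short sketch; the two callbacks are placeholders for the controller actions described above:

```python
from datetime import datetime, time

PRESET_POINT = time(20, 0)                 # 20:00, from this example
PRESET_PERIOD = (time(6, 0), time(22, 0))  # 6:00-22:00, from this example


def on_bright_screen(now: datetime, keep_recording, generate_album) -> None:
    """Dispatch on when the bright-screen instruction arrives."""
    if now.time() < PRESET_POINT:
        keep_recording()                    # before 20:00: keep capturing clips
    else:
        generate_album(*PRESET_PERIOD)      # after 20:00: recognize and splice
```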
In some embodiments, after receiving the bright-screen instruction, the first controller is further configured to: after a generated first video collection file is detected, control the display to display a video collection interface, in which a control for jumping to the playing interface of the first video collection file is provided; and when no generated first video collection file is detected, refrain from controlling the display to display the video collection interface.
For example, after the display device is powered on, if the first controller detects a video highlight file produced the previous day, it displays a video highlight UI comprising a plurality of video highlight files of different element types, or of files generated on different dates. In some embodiments, as shown in fig. 6C, a control for jumping to the playing interface of the first video album file is also provided in the video album UI; for example, clicking the highlighted icon of the first video album file opens its playing UI.
In some embodiments, the first controller performing image recognition on the video segments within a preset time period to obtain the first video set comprises the first controller performing the recognition at a preset time point, where the preset time point falls after the preset time period.
For example, when the preset time point is configured as 20:00 and the preset time period as 8:00-17:00, if the television is in a powered-on state at 20:00 at night, the first controller performs image recognition on the video clips acquired within the 8:00-17:00 preset time period to obtain a first video set and splices its segments into a first video collection file.
The display device may also be configured to generate the first video collection when started by the user: the first controller receives an input bright-screen instruction and, in response, splices the video segments in the first video set into a first video highlight file.
For example, with the preset time period configured as 8:00-17:00, whenever the user turns on the television the first controller receives a bright-screen instruction, performs image recognition on the clips from the 8:00-17:00 preset time period to obtain a first video set, and splices its segments into a first video collection file. When the power-on time is earlier than the preset time period, the first controller recognizes the clips from the previous day's 8:00-17:00 period; when the power-on time falls within the preset time period, it recognizes the clips collected from 8:00 up to the current power-on time; and when the power-on time is later than the preset time period, it recognizes the clips collected within the current day's 8:00-17:00 period. In each case the segments in the resulting first video set are spliced into a first video collection file.
With continued reference to fig. 7, in step 703, the video segments in the first video set are spliced into a first video highlight file.
The method comprises obtaining from the first video set a first video sequence ordered by importance score from high to low, setting the top preset splice number of video clips as target videos, and splicing the target videos to obtain the first video collection file. For example, if the first video album file is composed of 5 video segments and the first video sequence contains 8, the 5 clips that make up the final album are the target videos; splicing them produces the final first video collection file.
In some embodiments, the first controller sorts the first video sequence from high to low by importance score and sets the top preset splice number of video segments as target videos, where the preset splice number is configured as the number of segments composing the first video compilation. For example, with the preset splice number set to 5 and the first video sequence containing 8 segments sorted from high to low by importance score, the first controller selects the first 5 and splices them to generate the first video collection file.
In some embodiments, the first controller deletes video clips after the display device generates the first video highlight file. After the first video sequence is generated, once the preset splice number is determined, the first controller deletes the segments other than the target videos so that the first video set and the first video sequence contain only target videos, saving the television's storage resources; alternatively, after the first video collection file is generated, the first controller deletes all segments in the first video set.
In some embodiments, the first controller configures different background music for the first video highlight file according to its element. When the first video collection is generated, the first controller selects background music by element, i.e., by theme type and video material type. For example, for the pet-themed first video album file in fig. 6C, the background music is configured as light music; for the highlight-moment second video album file, as rock music, and so on.
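A hedged moviepy sketch of attaching element-dependent background music; the element-to-track mapping and the file names are illustrative assumptions:

```python
from moviepy.editor import AudioFileClip, VideoFileClip

# Illustrative mapping from element type to a music track.
MUSIC = {"pet": "light.mp3", "highlight": "rock.mp3", "smile": "pop.mp3"}


def add_background_music(album_path: str, element: str, out_path: str) -> None:
    video = VideoFileClip(album_path)
    track = AudioFileClip(MUSIC.get(element, "default.mp3"))
    # Trim the track to the album length and attach it as the soundtrack.
    video.set_audio(track.subclip(0, video.duration)).write_videofile(out_path)
```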
In some embodiments, after generating the first video album file, the first controller pushes it for display on the display screen and deletes the video clips in the first video set. The first controller pushes the generated first video collection to the user at the television end; note that the video collection file pushed to the user each day is not necessarily the most recently generated one, and can also be implemented as a randomly chosen file from among those generated and stored. After the first video collection file is obtained, the first controller deletes the corresponding video segments.
Fig. 6E shows a UI diagram of playing the first video album according to the embodiment of the present application.
Clicking and confirming the play control of the selected highlighted video collection file plays the video. The figure shows the first video highlight being played; its element is pets. In some embodiments, the playing area of the first video album file is relatively large, for example occupying at least two thirds of the display UI, so that content is displayed more clearly; the television UI may define it as a corresponding application component.
In some embodiments, the playing area of the first video album file may further define a timeline component, and the user may control the playing progress of the first video album file by operating the timeline component.
Based on the embodiments described above, the present application further provides a display device and a method for generating a video album, where the same points as the above embodiments are not described in detail in the following embodiments, and the differences from the above embodiments will be explained below.
Fig. 8 shows a method for generating a video album according to another embodiment of the present application.
In step 801, in response to a received trigger signal, controlling the camera to acquire a video clip containing a shooting target within a preset time period to obtain a first video set.
The first controller responds to the received trigger signal and controls the camera to acquire a video clip containing a shooting target in a preset time period so as to obtain a first video set.
The first controller controls the camera to acquire video clips containing the shooting target within a preset time period to obtain the first video set. The shooting target can be preset, such as a pet, a person, or a smile. The preset time period can be set according to actual conditions and is not specifically limited in the present application; it may be, for example, 0:00-22:00, during which the camera keeps monitoring whenever it is not occupied.
When the shooting target is defined as the pet cat, the first controller recognizes the pet cat in the camera picture and records video, obtaining different video segments over a number of time periods, as shown in fig. 6D.
In some embodiments, the first controller configures a length of time of the video segment to be equal to or less than a second threshold; and the first controller configures the time length of the first video album file to a fixed value.
For example, when the second threshold is 6 seconds, the video segment the first controller records from the moment the pet cat is detected is at most 6 seconds long. When the pet cat remains in the camera's capture range for more than 6 seconds, the first controller acquires only the first 6 seconds, so the segment length is 6 seconds; when the pet cat leaves the capture range within 6 seconds, the first controller acquires only the footage while the cat is in range, so the segment is shorter than 6 seconds.
For another example, when the fixed value is 30 seconds, the first controller checks the length of the generated first video highlight file and removes content beyond 30 seconds, ensuring the file does not exceed the preset fixed duration.
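Both limits can be sketched together as follows, using the example values above (6 seconds per clip, 30 seconds per album); moviepy is an assumed implementation choice:

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

CLIP_CAP = 6    # second threshold: maximum seconds per clip (example value)
ALBUM_CAP = 30  # fixed album length in seconds (example value)


def cap_lengths(paths, out_path="album.mp4"):
    clips, total = [], 0.0
    for path in paths:
        clip = VideoFileClip(path)
        clip = clip.subclip(0, min(clip.duration, CLIP_CAP))  # keep first 6 s
        if total + clip.duration > ALBUM_CAP:
            clip = clip.subclip(0, ALBUM_CAP - total)         # trim the overflow
        clips.append(clip)
        total += clip.duration
        if total >= ALBUM_CAP:
            break
    concatenate_videoclips(clips).write_videofile(out_path)
```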
In some embodiments, the first controller adds a temporal watermark to the video segment. By adding the time watermark to the video segment, the finally generated first video collection file also contains the corresponding time watermark, and a user can know the occurrence time of the video content when watching the video collection file.
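One possible way to burn such a time watermark into a clip, sketched with OpenCV; deriving each frame's wall-clock time from a supplied recording start time is an assumption about how the timestamp would be obtained:

```python
from datetime import datetime, timedelta

import cv2


def watermark_clip(in_path: str, out_path: str, start: datetime) -> None:
    """Overlay a running wall-clock timestamp on every frame."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        stamp = (start + timedelta(seconds=index / fps)).strftime("%Y-%m-%d %H:%M:%S")
        cv2.putText(frame, stamp, (10, height - 10), cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (255, 255, 255), 2)  # white text near bottom-left
        out.write(frame)
        index += 1
    cap.release()
    out.release()
```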
In step 802, the video segments in the first video set are stitched to generate a first video album.
For video clips in the first video set, the first controller determines that part or all of the video clips are target videos, and stitches the target videos to generate a first video collection.
In some embodiments, when the number of video segments in the first video set is greater than a first threshold, the first controller selects, approximately uniformly by shooting order, a number of segments equal to the first threshold as target videos for splicing; otherwise, it splices all the segments in shooting order.
For example, when the first threshold is 5, after the television has been on for 2 minutes the next day, the first controller selects 5 video segments from the first video set and splices them to generate the first video album file. The 5 clips are chosen evenly across shooting time to maximize the time span, so that the first video collection file reflects the shooting target's activity in different periods of the day. When the first video set contains fewer than 5 clips, say 3, the first controller splices all 3 directly. In some embodiments, the first controller splices the segments in the first video set at fixed time intervals to generate the first video compilation.
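A minimal sketch of the approximately uniform selection; evenly spaced indices over the time-ordered clip list are one plausible reading of "approximately and uniformly":

```python
import numpy as np

FIRST_THRESHOLD = 5  # example value from this embodiment


def pick_uniform(clips_by_time: list, k: int = FIRST_THRESHOLD) -> list:
    """Select k clips spread evenly across the shooting order, or all of
    them when there are fewer than k."""
    if len(clips_by_time) <= k:
        return clips_by_time
    idx = np.linspace(0, len(clips_by_time) - 1, k).round().astype(int)
    return [clips_by_time[i] for i in idx]
```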
Based on the above explanation of the video highlight generation method, the present application also provides a display device, including: a camera; a microphone; a display screen for displaying a user interface; a first controller configured to: responding to the received trigger signal, and controlling the camera to acquire a video clip; performing image identification on the video clips in a preset time period to obtain a first video set, wherein the first video set is a video clip set containing the same elements; and splicing the video segments in the first video set into a first video collection file. The specific operation method and steps of the display device have been described in detail in the above corresponding video album generation method, and are not described herein again.
Based on the above explanation of the video highlight generation method, the present application also provides another display device, including: a camera; a microphone; a display screen for displaying a user interface; a first controller configured to: responding to the received trigger signal, controlling the camera to acquire a video clip containing a shooting target in a preset time period to obtain a first video set; and splicing the video segments in the first video set to generate a first video collection. The specific operation method and steps of the display device have been described in detail in the above corresponding video album generation method, and are not described herein again.
The method and device described above have the following advantages: constructing the first video set classifies the video clips; constructing the first video sequence screens out clips that meet the requirements; constructing the target videos performs a second round of screening; deleting video segments after the video collection file is obtained optimizes the television's storage resources; the preset time point and preset time period automate the acquisition of video collection files; controlling the camera to dynamically track the shooting target makes clip acquisition effective; and the first and second thresholds control the length of the video collection file. Together these allow video clips to be acquired automatically, editing to be accelerated, the splicing error rate to be reduced, and video collection files to be generated intelligently.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as "data block", "controller", "engine", "unit", "component", or "system". Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including object-oriented languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python, conventional procedural languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP, dynamic languages such as Python, Ruby and Groovy, or other languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or wide area network (WAN), or to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.
The entire contents of each patent, patent application publication, and other material cited in this application, such as articles, books, specifications, publications and documents, are hereby incorporated by reference, except for application history documents that are inconsistent with or conflict with the content of this application, and except for documents that would limit the broadest scope of the claims (whether now or later appended to this application). If the descriptions, definitions, and/or use of terms in material accompanying this application are inconsistent with or contrary to those stated in this application, the descriptions, definitions, and/or terms of this application shall control.
Claims (15)
1. A display device, comprising:
a camera;
a microphone;
a display screen for displaying a user interface;
a first controller configured to:
in response to the received trigger signal, controlling the camera to acquire a video clip;
performing image identification on the video clips in a preset time period to obtain a first video set, wherein the first video set is a video clip set containing the same elements;
and splicing the video segments in the first video set into a first video collection file.
2. The display device of claim 1, wherein the first controller to stitch the video segments in the first video collection into a first video compilation file comprises the first controller to
Performing importance scoring on the video clips in the first video set to obtain a first video sequence;
sorting the first video sequence from high to low according to the importance score, and setting a preset splice number of top-ranked video clips as target videos;
and splicing the target videos in the first video sequence to generate a first video collection file.
3. The display device of claim 2, wherein the first controller deletes the video clip after the first video highlight file is generated.
4. The display device as claimed in claim 1, wherein the first controller performing image recognition on the video segments within a preset time period to obtain a first video set comprises the first controller
When a preset time point is reached, carrying out image recognition on the video clips in a preset time period to obtain a first video set, wherein the preset time point is positioned after the preset time period;
the first controller stitching the video segments in the first video collection into a first video compilation file comprises the first controller
Receiving an input bright-screen instruction;
splicing the video segments in the first video set into a first video highlight file in response to the bright-screen instruction.
5. The display device as claimed in claim 1, wherein the first controller performs image recognition on the video segments within a preset time period to obtain a first video set, including the first controller
Receiving an input bright-screen instruction;
when the bright-screen instruction is received earlier than a preset time point, continuing to control the camera to acquire video clips;
when the bright-screen instruction is received later than the preset time point, performing image recognition on the video clips within a preset time period to obtain a first video set, and splicing the video segments in the first video set into a first video collection file.
6. The display device of claim 4 or 5, wherein after receiving the bright screen instruction, the first controller is further configured to:
after a generated first video collection file is detected, controlling a display to display a video collection interface, wherein a control for jumping to a playing interface of the first video collection file is provided in the video collection interface;
and when no generated first video collection file is detected, refraining from controlling the display to display the video collection interface.
7. The display device of claim 1, wherein the trigger signal comprises a signal of movement of an object in a detection range monitored by the camera or a sound signal in a detection range monitored by the microphone.
8. The display device of claim 1, wherein the first controller stops acquiring the video clip when the camera is invoked by another application.
9. A method for generating a video album, the method comprising:
in response to the received trigger signal, controlling the camera to acquire a video clip;
performing image identification on the video clips in a preset time period to obtain a first video set, wherein the first video set is a video clip set containing the same elements;
and splicing the video segments in the first video set into a first video collection file.
10. The method of generating a video highlight according to claim 9, wherein stitching said video segments of said first video set into a first video highlight file comprises,
performing importance scoring on the video clips in the first video set to obtain a first video sequence;
sorting the first video sequence from high to low according to the importance score, and setting a preset splice number of top-ranked video clips as target videos;
and splicing the target videos in the first video sequence to generate a first video collection file.
11. The method for generating the video album as recited in claim 9, wherein the image recognition of the video segments within the preset time period to obtain the first video set comprises:
when a preset time point is reached, carrying out image recognition on the video clips in a preset time period to obtain a first video set, wherein the preset time point is positioned after the preset time period;
stitching the video segments in the first video collection into a first video compilation file comprises:
receiving an input bright-screen instruction;
splicing the video segments in the first video set into a first video highlight file in response to the bright-screen instruction.
12. A display device, comprising:
a camera;
a microphone;
a display screen for displaying a user interface;
a first controller configured to:
in response to the received trigger signal, controlling the camera to acquire video clips containing a shooting target within a preset time period so as to obtain a first video set;
and splicing the video segments in the first video set to generate a first video collection.
13. The display device as claimed in claim 12, wherein when the number of the video segments in the first video set is greater than a first threshold, the first controller selects, approximately uniformly according to shooting order, a number of video segments equal to the first threshold for splicing; otherwise, the first controller splices all the video clips according to the shooting sequence.
14. The display device of claim 12, wherein the first controller stitches video segments of the first video collection at fixed time intervals to generate a first video compilation.
15. A method for generating a video album, the method comprising:
in response to the received trigger signal, controlling the camera to acquire video clips containing a shooting target within a preset time period to obtain a first video set;
and splicing the video segments in the first video set to generate a first video collection.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010710550.8A CN113973216A (en) | 2020-07-22 | 2020-07-22 | Video collection generation method and display device |
PCT/CN2021/097699 WO2022007545A1 (en) | 2020-07-06 | 2021-06-01 | Video collection generation method and display device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010710550.8A CN113973216A (en) | 2020-07-22 | 2020-07-22 | Video collection generation method and display device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113973216A true CN113973216A (en) | 2022-01-25 |
Family
ID=79584903
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010710550.8A Pending CN113973216A (en) | 2020-07-06 | 2020-07-22 | Video collection generation method and display device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113973216A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101454811A (en) * | 2006-06-06 | 2009-06-10 | 三星电子株式会社 | Home security applications for television with digital video cameras |
CN104038705A (en) * | 2014-05-30 | 2014-09-10 | 无锡天脉聚源传媒科技有限公司 | Video producing method and device |
WO2018098884A1 (en) * | 2016-11-29 | 2018-06-07 | 深圳Tcl新技术有限公司 | Network video playback information acquisition method and system of mart television |
CN108288475A (en) * | 2018-02-12 | 2018-07-17 | 成都睿码科技有限责任公司 | A kind of sports video collection of choice specimens clipping method based on deep learning |
CN111432124A (en) * | 2020-03-30 | 2020-07-17 | 深圳创维-Rgb电子有限公司 | Photographing method, television and storage medium |
Non-Patent Citations (2)
Title |
---|
CNMO宅秘: "记录每一幅面孔 360智能摄像机云台AI版 让你出门在外更放心", Retrieved from the Internet <URL:https://baijiahao.baidu.com/s?id=1664476332813999618&wfr=spider&for=pc> * |
MIN: "智能摄像机如何开启家人相册", Retrieved from the Internet <URL:https://jingyan.baidu.com/article/08b6a5911fe82d54a9092228.html, min, 20191103> * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111405318B (en) | Video display method and device and computer storage medium | |
CN112333509B (en) | Media asset recommendation method, recommended media asset playing method and display equipment | |
WO2021031623A1 (en) | Display apparatus, file sharing method, and server | |
CN112399213B (en) | Display device and remote controller key multiplexing method | |
CN113259741B (en) | Demonstration method and display device for classical viewpoint of episode | |
CN111343489B (en) | Display device and method for playing music in terminal | |
CN112543359B (en) | Display device and method for automatically configuring video parameters | |
CN111343512B (en) | Information acquisition method, display device and server | |
CN111836109A (en) | Display device, server and method for automatically updating column frame | |
CN111277884A (en) | Video playing method and device | |
CN114079829A (en) | Display device and generation method of video collection file watermark | |
CN111787379B (en) | Interactive method for generating video collection file, display device and intelligent terminal | |
CN111866568B (en) | Display device, server and video collection acquisition method based on voice | |
CN111405221A (en) | Display device and display method of recording file list | |
CN111787376A (en) | Display device, server and video recommendation method | |
CN112788422A (en) | Display device | |
CN112473121A (en) | Display device and method for displaying dodging ball based on limb recognition | |
CN114501158B (en) | Display device, external sound equipment and audio output method of external sound equipment | |
CN111263223A (en) | Media volume adjusting method and display device | |
CN112118476B (en) | Method for rapidly displaying program reservation icon and display equipment | |
CN112929717B (en) | Focus management method and display device | |
CN113973216A (en) | Video collection generation method and display device | |
CN113542878A (en) | Awakening method based on face recognition and gesture detection and display device | |
CN112040299A (en) | Display device, server and live broadcast display method | |
CN113573126A (en) | Display device, mobile terminal and server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |