CN112351323A - Display device and video collection file generation method - Google Patents

Display device and video collection file generation method

Info

Publication number
CN112351323A
CN112351323A (application number CN202011148295.9A)
Authority
CN
China
Prior art keywords
video
camera
display device
controller
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011148295.9A
Other languages
Chinese (zh)
Inventor
鲍姗娟
于文钦
杨鲁明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Publication of CN112351323A
Priority to PCT/CN2021/098617 (published as WO2022007568A1)
Priority to CN202180046688.5A (published as CN116391358A)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8358Generation of protective data, e.g. certificates involving watermark
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application relates to the field of image recognition technologies, and in particular, to a display device and a method for generating a video collection file. It addresses, to a certain extent, the problems that video clips cannot be acquired automatically, that slow manual editing makes splicing error-prone, and that video collection files cannot be generated intelligently. The display device includes: a camera; a display screen for displaying a user interface; and a controller configured to: receive video clips collected by the camera while the target user is within the monitoring range of the display device; and generate a video collection file based on the acquired video clips, wherein the controller controls the user interface to play the video collection file after a confirmation operation is received for the file.

Description

Display device and video collection file generation method
The present application claims priority to the Chinese patent application entitled "A Display Device", filed with the China National Intellectual Property Administration on August 21, 2020, with application number 202010850122.5, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a display device and a method for generating a video album file.
Background
The video collection file is a video file formed by splicing different videos or video clips with similar contents or the same elements.
To produce some video collection files, a user must capture footage with a professional photographic tool, watch and screen each video, manually cut out the relevant clips, and then splice them into a collection.
However, when a video collection file needs to be made for a target user who is a child in the family, the user cannot operate a photographing tool for long periods; the number of video clips is large and the video scenes are complex, so the user struggles to acquire the clips, editing is slow, and errors and omissions occur when splicing the clips.
Disclosure of Invention
To solve the problems that video clips cannot be acquired automatically, that slow manual editing makes splicing error-prone, and that video collection files cannot be generated intelligently, the present application provides a display device and a method for generating a video collection file.
The embodiment of the application is realized as follows:
A first aspect of the embodiments of the present application provides a display device, including: a camera; a display screen for displaying a user interface; and a controller configured to: receive video clips collected by the camera while the target user is within the monitoring range of the display device; and generate a video collection file based on the acquired video clips, wherein the controller controls the user interface to play the video collection file after a confirmation operation is received for the file.
A second aspect of the embodiments of the present application provides a method for generating a video collection file, the method including: recording a video clip while the target user is within the monitoring range; and generating a video collection file based on the acquired video clips, wherein the video collection file is played in the user interface after a confirmation operation is received.
Beneficial effects of the present application: by performing face detection while recording video clips, the clips can be classified; further, by using a preset time, the video collection file can be generated automatically; further, by using a second preset duration, the display device can be prevented from tying up resources when it is switched on; and further, by using a first preset duration, the length of the video collection file can be controlled, so that video clips are acquired automatically and the video collection file is generated intelligently.
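As a rough illustration of the flow described above, the sketch below records a clip only while the target user is detected and splices clips until a first preset duration is reached. It is a minimal sketch only: the class and method names, the duration values, and the use of plain integers in place of video frames are all assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class HighlightRecorder:
    """Hypothetical sketch: record clips only while the target user
    (e.g. a child) is in the monitoring range, and cap the spliced
    collection at a first preset duration."""
    max_total_seconds: int = 60   # first preset duration (assumed value)
    max_clip_seconds: int = 10    # per-clip cap (assumed value)
    clips: list = field(default_factory=list)

    def on_frame(self, face_detected: bool, clip_buffer: list, frame) -> None:
        # Append frames to the current clip only while a face is detected.
        if face_detected:
            clip_buffer.append(frame)

    def save_clip(self, clip_buffer: list) -> None:
        # Keep at most max_clip_seconds worth of frames per clip.
        if clip_buffer:
            self.clips.append(clip_buffer[: self.max_clip_seconds])

    def generate_collection(self) -> list:
        # Splice clips in order until the first preset duration is reached.
        collection, total = [], 0
        for clip in self.clips:
            if total + len(clip) > self.max_total_seconds:
                break
            collection.extend(clip)
            total += len(clip)
        return collection
```

In this toy model, a "frame" is just a number and a clip's length stands in for its duration; a real implementation would gate a camera recording pipeline on a face-detection result instead.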
Drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus according to an embodiment;
fig. 2 is a block diagram exemplarily showing a hardware configuration of a display device 200 according to an embodiment;
fig. 3 is a block diagram exemplarily showing a hardware configuration of the control apparatus 100 according to the embodiment;
fig. 4 is a diagram exemplarily showing a functional configuration of the display device 200 according to the embodiment;
fig. 5 is a diagram exemplarily showing a software configuration in the display device 200 according to the embodiment;
fig. 6 is a diagram exemplarily showing a configuration of an application program in the display device 200 according to the embodiment;
FIG. 7A is a diagram illustrating a display device application interface according to an embodiment of the present application;
FIG. 7B is a schematic diagram illustrating a display device application interface according to another embodiment of the present application;
FIG. 8 shows a novice-guidance UI diagram of an at-home application of the display device according to an embodiment of the present application;
FIG. 9 shows a start-up UI diagram of the "Baby's Day" application of the display device according to an embodiment of the present application;
FIG. 10 shows a NAS prompt diagram of the "Baby's Day" application of the display device according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a video album file of the display device according to an embodiment of the present application;
FIG. 12 is a logic diagram illustrating the video clip recording and synthesizing process of the "Baby's Day" application of the display device according to an embodiment of the present application;
FIG. 13 illustrates a UI diagram for operating a video highlight file of the display device according to an embodiment of the present application;
FIG. 14 shows a recommendation view of a video highlight file on the display device according to an embodiment of the present application;
FIGS. 15A-15B show schematic diagrams of activating the "Baby's Day" application on the mobile phone side according to an embodiment of the present application;
FIG. 15C is a schematic diagram illustrating operation of the "Baby's Day" application on the mobile phone side according to an embodiment of the present application;
FIGS. 16A-16B illustrate operation of the "Baby's Day" application on the mobile phone side according to an embodiment of the present application;
FIG. 16C is a schematic diagram illustrating operation of the "Baby's Day" application on the mobile phone side according to an embodiment of the present application;
FIG. 17 is a schematic diagram illustrating the overall architecture of the "Baby's Day" application of the display device according to an embodiment of the present application;
FIG. 18 is a schematic flowchart illustrating a video highlight file generation method according to an embodiment of the present application;
Fig. 19A shows a schematic watermark diagram of playing a video album file according to an embodiment of the present application;
fig. 19B shows a schematic watermark diagram of playing a video album file according to an embodiment of the present application;
fig. 20A shows a schematic watermark diagram of playing a video album file according to an embodiment of the present application;
fig. 20B shows a schematic watermark diagram of playing a video album file according to an embodiment of the present application;
fig. 21 is a schematic diagram illustrating a watermark played by a video highlight file according to an embodiment of the present application;
fig. 22 is a schematic flow chart illustrating watermarking of a video highlight file according to an embodiment of the present application;
FIG. 23 is a schematic flow chart illustrating recording of a video clip according to an embodiment of the present application;
FIG. 24 is a schematic flow chart illustrating the creation of a watermark according to an embodiment of the present application;
fig. 25 is a flowchart illustrating a method for generating a watermark in a video highlight file according to an embodiment of the present application;
fig. 26 is a schematic view illustrating a flow of turning on a video highlight switch by a display device according to an embodiment of the present application;
FIG. 27 is a schematic diagram illustrating a process of turning off a video highlight switch by a display device according to an embodiment of the present application;
fig. 28 is a flowchart illustrating a method for controlling a camera of a display device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the exemplary embodiments of the present application clearer, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, but not all the embodiments.
All other embodiments, which a person skilled in the art can derive from the exemplary embodiments shown in the present application without inventive effort, shall fall within the scope of protection of the present application. Moreover, while the disclosure herein has been presented in terms of one or more exemplary examples, it is to be understood that each aspect of the disclosure can be utilized independently and separately from the other aspects to provide a complete disclosure.
It should be understood that the terms "first," "second," "third," and the like in the description, claims, and drawings of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can, for example, be implemented in sequences other than those illustrated or described.
Furthermore, the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module" as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment" or the like throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
The term "remote control" as used in this application refers to a component of an electronic device, such as the display device disclosed in this application, that can typically control the device wirelessly over a relatively short distance. It usually connects to the electronic device using infrared and/or radio frequency (RF) signals and/or Bluetooth, and may also include functional modules such as WiFi, wireless USB, Bluetooth, and motion sensors. For example, a hand-held touch remote controller replaces most of the physical built-in hard keys of a common remote control device with a user interface on a touch screen.
The term "gesture" as used in this application refers to a user's behavior through a change in hand shape or an action such as hand motion to convey a desired idea, action, purpose, or result.
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the display device 200 through the mobile terminal 300 and the control apparatus 100.
The control apparatus 100 may be a remote controller that controls the display device 200 wirelessly or in other wired manners, including infrared protocol communication, Bluetooth protocol communication, and other short-distance communication methods. The user may input user commands through keys on the remote controller, voice input, control panel input, and so on to control the display device 200. For example, the user can input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, and power on/off key on the remote controller to control the display device 200.
In some embodiments, mobile terminals, tablets, computers, laptops, and other smart devices may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device. The application, through configuration, may provide the user with various controls in an intuitive User Interface (UI) on a screen associated with the smart device.
For example, the mobile terminal 300 and the display device 200 may each install a software application, so that connection and communication can be implemented through a network communication protocol, achieving one-to-one control operation and data communication. For instance, the mobile terminal 300 and the display device 200 can establish a control instruction protocol and synchronize the remote-control keyboard to the mobile terminal 300, so that the display device 200 is controlled by operating the user interface on the mobile terminal 300. The audio and video content displayed on the mobile terminal 300 can also be transmitted to the display device 200, realizing a synchronous display function.
As also shown in fig. 1, the display device 200 also performs data communication with the server 400 through various communication means. The display device 200 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 400 may provide various content and interactions to the display device 200. Illustratively, by sending and receiving information and interacting with the Electronic Program Guide (EPG), the display device 200 can receive software program updates or access a remotely stored digital media library. The server 400 may be one group or multiple groups of servers, and of one or more types. The server 400 provides other web service content such as video on demand and advertisement services.
The display device 200 may be a liquid crystal display, an OLED display, or a projection display device. The particular display device type, size, and resolution are not limiting; those skilled in the art will appreciate that the performance and configuration of the display device 200 may be changed as desired.
The display device 200 may additionally provide a smart network TV function that offers computer support in addition to the broadcast-receiving TV function. Examples include a network TV, a smart TV, an Internet Protocol TV (IPTV), and the like.
A hardware configuration block diagram of the display device 200 according to an exemplary embodiment is shown in fig. 2. As shown in fig. 2, the display device 200 includes a controller 210, a tuner demodulator 220, a communication interface 230, a detector 240, an input/output interface 250, a video processor 260-1, an audio processor 260-2, a display 280, an audio output 270, a memory 290, a power supply, and an infrared receiver.
The display 280 receives image signals from the video processor 260-1 and displays video content, images, and components of the menu manipulation interface. The display 280 includes a display screen assembly for presenting the picture and a driving assembly for driving image display. The displayed video content may come from broadcast television, from broadcast signals received via wired or wireless communication protocols, or from various image content sent by a network server and received via network communication protocols.
The display 280 also displays a user manipulation UI interface that is generated in the display device 200 and used to control the display device 200.
The driving assembly depends on the type of the display 280. Alternatively, if the display 280 is a projection display, it may also include a projection device and a projection screen.
The communication interface 230 is a component for communicating with external devices or external servers according to various communication protocol types. For example, the communication interface 230 may include a WiFi chip 231, a Bluetooth communication protocol chip 232, a wired Ethernet communication protocol chip 233, or other network communication protocol chips or near-field communication protocol chips, as well as an infrared receiver (not shown).
Through the communication interface 230, the display device 200 may establish transmission and reception of control signals and data signals with an external control apparatus or content-providing apparatus. The infrared receiver is an interface device for receiving infrared control signals from the control apparatus 100 (e.g., an infrared remote controller).
The detector 240 is a component used by the display device 200 to collect signals from the external environment or for interaction with the outside. The detector 240 includes a light receiver 242, a sensor for collecting ambient light intensity, so that display parameters can adapt to changes in ambient light.
The image collector 241, such as a camera or video camera, may be used to collect external environment scenes, to collect user attributes or gestures for interaction with the user, to adaptively change display parameters, and to recognize user gestures, thereby implementing interaction with the user.
In some other exemplary embodiments, the detector 240 may further include a temperature sensor; for example, by sensing the ambient temperature, the display device 200 can adaptively adjust the display color temperature of the image. For instance, the display device 200 may be adjusted to a cooler tone in a high-temperature environment, or to a warmer tone in a low-temperature environment.
In other exemplary embodiments, the detector 240 may further include a sound collector, such as a microphone, used to receive the user's voice, including voice signals carrying the user's control instructions for the display device 200, or to collect ambient sound for identifying the type of ambient scene, so that the display device 200 can adapt to ambient noise.
The input/output interface 250, under the control of the controller 210, manages data transmission between the display device 200 and other external devices, such as receiving video and audio signals or command instructions from an external device.
Input/output interface 250 may include, but is not limited to, the following: any one or more of high definition multimedia interface HDMI interface 251, analog or data high definition component input interface 253, composite video input interface 252, USB input interface 254, RGB ports (not shown in the figures), etc.
In some other exemplary embodiments, the input/output interface 250 may also form a composite input/output interface with the above-mentioned plurality of interfaces.
The tuning demodulator 220 receives the broadcast television signals in a wired or wireless receiving manner, may perform modulation and demodulation processing such as amplification, frequency mixing, resonance, and the like, and demodulates the television audio and video signals carried in the television channel frequency selected by the user and the EPG data signals from a plurality of wireless or wired broadcast television signals.
Under the control of the controller 210, the tuner demodulator 220 responds to the television signal frequency selected by the user and the television signal carried on that frequency.
The tuner-demodulator 220 may receive signals in various ways according to the broadcasting system of the television signal, such as: terrestrial broadcast, cable broadcast, satellite broadcast, or internet broadcast signals, etc.; and according to different modulation types, the modulation mode can be digital modulation or analog modulation. Depending on the type of television signal received, both analog and digital signals are possible.
In other exemplary embodiments, the tuner/demodulator 220 may be in an external device, such as an external set-top box. In this way, the set-top box outputs television audio/video signals after modulation and demodulation, and the television audio/video signals are input into the display device 200 through the input/output interface 250.
The video processor 260-1 is configured to receive an external video signal and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to the standard codec protocol of the input signal, so as to obtain a signal that can be displayed or played directly on the display device 200.
Illustratively, the video processor 260-1 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module demultiplexes the input audio/video data stream; for example, if an MPEG-2 stream is input, the demultiplexing module separates it into a video signal and an audio signal.
The video decoding module processes the demultiplexed video signal, including decoding, scaling, and the like.
The image synthesis module uses a graphics generator to superimpose and mix the GUI signal, input by the user or generated internally, with the scaled video image, so as to generate an image signal for display.
The frame rate conversion module converts the frame rate of the input video, for example from 60 Hz to 120 Hz or 240 Hz, typically by frame interpolation.
The display formatting module converts the frame-rate-converted video output signal into a signal conforming to the display format, such as an RGB data signal.
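Two of the stages above, demultiplexing and frame-rate conversion, can be illustrated with a minimal sketch. This is a toy stand-in only: "frames" are plain numbers rather than pixel data, the stream is a list of tagged payloads rather than an MPEG-2 bitstream, and linear blending is just one of many possible interpolation schemes.

```python
def demultiplex(stream):
    """Split an (assumed) muxed stream of (kind, payload) pairs into
    separate video and audio payload lists."""
    video = [p for kind, p in stream if kind == "video"]
    audio = [p for kind, p in stream if kind == "audio"]
    return video, audio

def interpolate_frames(frames, factor=2):
    """Frame-rate conversion by inserting interpolated frames,
    e.g. 60 Hz -> 120 Hz with factor=2. A linear blend between
    neighboring frames stands in for real motion interpolation."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, factor):
            out.append(a + (b - a) * k / factor)  # blended frame
    out.append(frames[-1])
    return out
```

For example, `interpolate_frames([0, 10, 20], 2)` inserts one blended frame between each neighboring pair, roughly doubling the frame rate.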
The audio processor 260-2 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, amplification processing, and the like to obtain an audio signal that can be played in the speaker.
In other exemplary embodiments, video processor 260-1 may comprise one or more chips. The audio processor 260-2 may also comprise one or more chips.
And, in other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips or may be integrated together with the controller 210 in one or more chips.
The audio output 270 receives the sound signal output by the audio processor 260-2 under the control of the controller 210. It may include a speaker 272 and, in addition to the speaker 272 carried by the display device 200 itself, an external sound output terminal 274 that can output to the sound-generating device of an external device, such as an external sound interface or an earphone interface.
The power supply provides power supply support for the display device 200 from the power input from the external power source under the control of the controller 210. The power supply may include a built-in power supply circuit installed inside the display device 200, or may be a power supply interface installed outside the display device 200 to provide an external power supply in the display device 200.
A user input interface for receiving an input signal of a user and then transmitting the received user input signal to the controller 210. The user input signal may be a remote controller signal received through an infrared receiver, and various user control signals may be received through the network communication module.
For example, the user inputs a user command through the remote controller 100 or the mobile terminal 300; the user input interface forwards the input to the controller 210, and the display device 200 responds to the user input accordingly.
In some embodiments, a user may enter a user command on a Graphical User Interface (GUI) displayed on the display 280, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
The controller 210 controls the operation of the display apparatus 200 and responds to the user's operation through various software control programs stored in the memory 290.
As shown in fig. 2, the controller 210 includes a RAM 213, a ROM 214, a graphics processor 216, a CPU processor 212, and a communication interface 218, such as a first interface 218-1 through an nth interface 218-n, connected by a communication bus.
The ROM 214 stores instructions for various system boots. When the display device 200 is powered on upon receiving a power-on signal, the CPU processor 212 executes the system boot instructions in the ROM 214, copies the operating system stored in the memory 290 into the RAM 213, and begins running the operating system. After the operating system has started, the CPU processor 212 copies the various application programs in the memory 290 into the RAM 213 and then launches them.
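The boot flow just described, executing ROM boot instructions, copying the operating system from persistent memory into RAM, and then launching applications, can be sketched as follows. All names and data shapes here are illustrative stand-ins, not taken from the patent.

```python
class BootSequence:
    """Hypothetical sketch of the startup flow: ROM boot code copies
    the OS from persistent memory into RAM, starts it, then loads and
    launches the application programs."""

    def __init__(self, memory):
        self.memory = memory   # stands in for the memory 290
        self.ram = {}          # stands in for the RAM 213
        self.log = []

    def power_on(self):
        self.log.append("boot: execute ROM boot instructions")
        # Copy the operating system image into RAM and start it.
        self.ram["os"] = self.memory["os"]
        self.log.append("boot: operating system running")
        # Then copy each application into RAM and launch it.
        for app, image in self.memory["apps"].items():
            self.ram[app] = image
            self.log.append(f"launch: {app}")
        return self.log
```

The two-phase ordering (OS first, applications after) mirrors the description above; a real device would of course map binaries rather than copy dictionary entries.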
The graphics processor 216 generates various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an arithmetic unit, which performs operations on the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which generates the various objects produced by the arithmetic unit and displays the rendered result on the display 280.
The CPU processor 212 executes the operating system and application program instructions stored in the memory 290, and executes various application programs, data, and content according to the various interactive instructions received from outside, so as to finally display and play various audio and video content.
In some exemplary embodiments, the CPU processor 212 may include a plurality of processors. The plurality of processors may include one main processor and one or more sub-processors. The main processor performs some operations of the display apparatus 200 in a pre-power-up mode and/or operations of displaying a screen in a normal mode. The one or more sub-processors perform operations in a standby mode or the like.
The controller 210 may control the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
Wherein the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object include, for example: displaying the page, document, or image linked by the hyperlink, or launching the program corresponding to the icon. The user command for selecting the UI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch pad, etc.) connected to the display apparatus 200, or a voice command corresponding to a voice spoken by the user.
The memory 290 stores various software modules for driving the display device 200, including: a basic module, a detection module, a communication module, a display control module, a browser module, various service modules, and the like.
Wherein the basic module is a bottom-layer software module for signal communication among the various hardware components in the display device 200 and for sending processing and control signals to the upper-layer modules. The detection module is used for collecting various information from various sensors or user input interfaces, and for performing digital-to-analog conversion and analysis management.
For example: the voice recognition module comprises a voice analysis module and a voice instruction database module. The display control module is a module for controlling the display 280 to display image content, and may be used to play multimedia image content, UI interfaces, and other information. The communication module is used for control and data communication with external devices. The browser module is used for performing data communication with browsing servers. The service modules are used for providing various services, and include various application programs.
Meanwhile, the memory 290 is also used to store received external data and user data, images of the respective items in various user interfaces, visual effect maps, focus objects, and the like.
A block diagram of the configuration of the control apparatus 100 according to an exemplary embodiment is exemplarily shown in fig. 3. As shown in fig. 3, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory 190, and a power supply 180.
The control device 100 is configured to control the display device 200: it can receive a user's input operation instruction and convert the operation instruction into an instruction that the display device 200 can recognize and respond to, serving as an interaction intermediary between the user and the display device 200. For example: when the user operates the channel up/down keys on the control device 100, the display device 200 responds to the channel up/down operation.
In some embodiments, the control device 100 may be a smart device. Such as: the control apparatus 100 may install various applications that control the display apparatus 200 according to user demands.
In some embodiments, as shown in fig. 1, a mobile terminal 300 or other intelligent electronic device may perform a function similar to that of the control device 100 after installing an application that manipulates the display device 200. For example: by installing such an application, the user can use the various function keys or virtual buttons of the graphical user interface available on the mobile terminal 300 or other intelligent electronic device to implement the functions of the physical keys of the control device 100.
The controller 110 includes a processor 112, a RAM 113, a ROM 114, a communication interface 130, and a communication bus. The controller 110 is used to control the running of the control device 100, communication and coordination among its internal components, and external and internal data processing functions.
The communication interface 130 enables communication of control signals and data signals with the display apparatus 200 under the control of the controller 110. Such as: the received user input signal is transmitted to the display apparatus 200. The communication interface 130 may include at least one of a WiFi chip, a bluetooth module, an NFC module, and other near field communication modules.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a camera 142, a sensor 143, keys 144, and other input interfaces. Such as: the user can realize a user instruction input function through actions such as voice, touch, gesture, pressing, and the like, and the input interface converts the received analog signal into a digital signal and converts the digital signal into a corresponding instruction signal, and sends the instruction signal to the display device 200. It should be noted that the camera provided in the present application may be implemented as a camera 142 externally connected to the display device, and may also be implemented as an image collector 241 built in the display device.
The output interface includes an interface that transmits the received user instruction to the display apparatus 200. In some embodiments, the interface may be an infrared interface or a radio frequency interface. For example: when the infrared signal interface is used, the user input instruction needs to be converted into an infrared control signal according to an infrared control protocol and sent to the display device 200 through the infrared sending module. For another example: when the RF signal interface is used, the user input instruction needs to be converted into a digital signal, modulated according to the RF control signal modulation protocol, and then transmitted to the display device 200 through the RF transmitting terminal.
In some embodiments, the control device 100 includes at least one of a communication interface 130 and an output interface. The control device 100 is provided with a communication interface 130, such as a WiFi, Bluetooth, or NFC module, which may encode and transmit the user input command to the display device 200 through the WiFi protocol, the Bluetooth protocol, or the NFC protocol.
A memory 190 for storing various operation programs, data, and applications for driving and controlling the control device 100 under the control of the controller 110. The memory 190 may store various control signal commands input by a user.
And a power supply 180 for providing operational power support to the various elements of the control device 100 under the control of the controller 110. The power supply 180 may include a battery and associated control circuitry.
Fig. 4 is a diagram schematically illustrating a functional configuration of the display device 200 according to an exemplary embodiment. As shown in fig. 4, the memory 290 is used to store an operating system, an application program, contents, user data, and the like, and performs system operations for driving the display device 200 and various operations in response to a user under the control of the controller 210. The memory 290 may include volatile and/or nonvolatile memory.
The memory 290 is specifically configured to store an operating program for driving the controller 210 in the display device 200, and to store various application programs installed in the display device 200, various application programs downloaded by a user from an external device, various graphical user interfaces related to the applications, various objects related to the graphical user interfaces, user data information, and internal data of various supported applications. The memory 290 is used to store system software such as an OS kernel, middleware, and applications, and to store input video data and audio data, and other user data.
The memory 290 is specifically used for storing drivers and related data such as the audio/video processors 260-1 and 260-2, the display 280, the communication interface 230, the tuning demodulator 220, the input/output interface of the detector 240, and the like.
In some embodiments, memory 290 may store software and/or programs, software programs for representing an Operating System (OS) including, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. For example, the kernel may control or manage system resources, or functions implemented by other programs (e.g., the middleware, APIs, or applications), and the kernel may provide interfaces to allow the middleware and APIs, or applications, to access the controller to implement controlling or managing system resources.
The memory 290, for example, includes a broadcast receiving module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, an audio control module 2906, an external instruction recognition module 2907, a communication control module 2908, a light receiving module 2909, a power control module 2910, an operating system 2911, and other applications 2912, a browser module, and the like. The controller 210 performs functions such as: a broadcast television signal reception demodulation function, a television channel selection control function, a volume selection control function, an image control function, a display control function, an audio control function, an external instruction recognition function, a communication control function, an optical signal reception function, an electric power control function, a software control platform supporting various functions, a browser function, and the like.
Fig. 5 is a block diagram illustrating a configuration of a software system in the display apparatus 200 according to an exemplary embodiment.
As shown in fig. 5, the operating system 2911 includes executable operating software for handling various basic system services and for performing hardware-related tasks, and acts as an intermediary for data processing between application programs and hardware components. In some embodiments, portions of the operating system kernel may contain a series of software to manage the display device hardware resources and provide services to other programs or software code.
In other embodiments, portions of the operating system kernel may include one or more device drivers, which may be a set of software code in the operating system that assists in operating or controlling the devices or hardware associated with the display device. The drivers may contain code that operates the video, audio, and/or other multimedia components. Examples include a display screen, a camera, Flash, WiFi, and audio drivers.
The accessibility module 2911-1 is configured to modify or access the application program to achieve accessibility and operability of the application program for displaying content.
A communication module 2911-2 for connection to other peripherals via associated communication interfaces and a communication network.
The user interface module 2911-3 is configured to provide an object for displaying a user interface, so that each application program can access the object, and user operability can be achieved.
Control applications 2911-4 for controllable process management, including runtime applications and the like.
The event transmission system 2914 may be implemented within the operating system 2911 or within the application program 2912; in some embodiments, it is implemented partly within the operating system 2911 and partly within the application program 2912. It is configured to listen for various user input events, and to invoke handlers that perform one or more sets of predefined operations in response to the recognition of various types of events or sub-events.
The event monitoring module 2914-1 is configured to monitor an event or a sub-event input by the user input interface.
The event identification module 2914-2 is configured to hold the definitions of the various types of events for the various user input interfaces, identify various events or sub-events, and transmit them to the process for executing the corresponding one or more sets of handlers.
An event or sub-event refers to an input detected by one or more sensors in the display device 200, or an input from an external control device (e.g., the control device 100), such as: various sub-events of voice input, sub-events of gesture input through gesture recognition, and sub-events of remote control key command input from the control device. Illustratively, the one or more sub-events from the remote control include various forms, including but not limited to one or a combination of pressing the up/down/left/right keys, pressing the OK key, long key presses, and the like, as well as non-physical key operations such as move, hold, and release.
The interface layout manager 2913 directly or indirectly receives the user input events or sub-events monitored by the event transmission system 2914, and updates the layout of the user interface, including but not limited to the position of each control or sub-control in the interface, the size, position, and level of the container, and other operations related to the interface layout.
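The listen-identify-dispatch pattern described for the event transmission system can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; all class, method, and event-type names here are hypothetical.

```python
# Minimal sketch of the event transmission pattern: handlers are
# registered per event type, and dispatch invokes every handler
# registered for the identified event or sub-event.
from collections import defaultdict


class EventTransmissionSystem:
    def __init__(self):
        # event type -> list of handler callables
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        """Register a handler for one type of event or sub-event."""
        self._handlers[event_type].append(handler)

    def dispatch(self, event_type, payload=None):
        """Invoke every handler registered for the event type."""
        return [handler(payload) for handler in self._handlers[event_type]]


bus = EventTransmissionSystem()
bus.subscribe("key_ok", lambda e: "confirm selection")
bus.subscribe("voice", lambda e: f"recognized: {e}")

print(bus.dispatch("key_ok"))         # ['confirm selection']
print(bus.dispatch("voice", "play"))  # ['recognized: play']
```

An interface layout manager, in this sketch, would simply be one more subscriber that updates control positions when it receives an input event.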
As shown in fig. 6, the application layer 2912 contains various applications that may also be executed at the display device 200. The application may include, but is not limited to, one or more applications such as: live television applications, video-on-demand applications, media center applications, application centers, gaming applications, and the like.
The live television application program can provide live television through different signal sources. For example, a live television application may provide television signals using input from cable television, radio broadcasts, satellite services, or other types of live television services. And, the live television application may display video of the live television signal on the display device 200.
A video-on-demand application may provide video from different storage sources. Unlike live television applications, video on demand provides a video display from some storage source. For example, the video on demand may come from a server side of the cloud storage, from a local hard disk storage containing stored video programs.
The media center application program can provide various applications for playing multimedia content. For example, a media center may provide services distinct from live television or video on demand; through a media center application the user can access various images or audio.
The application center can provide and store various application programs. An application program may be a game, an application, or some other application that is associated with a computer system or other device but can be run on the smart television. The application center may obtain these applications from different sources and store them in local storage, from which they can be run on the display device 200.
The embodiments of the present application can be applied to various types of display devices (including but not limited to smart televisions, mobile terminals, tablet computers, set-top boxes, and the like). The following describes the display device and the video collection file generation method by taking a smart television and a mobile phone running the Mini-Home application program as examples.
FIG. 7A is a diagram illustrating an application interface of a display device according to an embodiment of the present application.
The figure shows the application UI interface presented on the display screen of a television. For example, the application UI interface includes 4 applications installed on the television: News Headlines, Cinema On Demand, Mini-Home, and Karaoke. By moving the focus on the display screen using a controller such as a remote control, different applications or other function buttons may be selected.
In some embodiments, the television display, while presenting the application UI interface, is also configured to present other interactive elements, which may include, for example, television home page controls, search controls, message button controls, mailbox controls, browser controls, favorites controls, signal bar controls, and the like.
To improve the convenience and appearance of the television UI, in some embodiments, the controller of the display device in the embodiments of the present application controls the television UI in response to manipulation of the interactive elements. For example, when a user clicks a search control through a controller such as a remote control, the search UI may be displayed on top of the other UIs; that is, the UI of the application component to which the interactive element is mapped can be enlarged, or run and displayed full screen.
In some embodiments, the interactive element may also be operated through a sensor, which may be, but is not limited to, an acoustic input sensor such as a microphone, which may detect a voice command containing an indication of the desired interactive element. For example, a user may identify a desired interactive element, such as a search control, using "Mini-Home" or any other suitable identifier, and may also describe a desired action to be performed on the desired interactive element. The controller may recognize the voice command and submit data characterizing the interaction to the UI or its processing component or engine.
FIG. 7B is a diagram illustrating a display device application interface according to another embodiment of the present application.
In some embodiments, a user may control the focus on the display screen via a remote control to select the Mini-Home application so that its icon is highlighted in the user interface of the display screen; then, by clicking the highlighted icon, the application mapped to the icon can be opened.
It should be noted that the UI icons and text shown in the embodiments of the present application are only used as examples to explain the technical solution of generating the video collection file with Mini-Home; the UI icons and text in the drawings may also be implemented as other content, and the drawings of the present application are not specifically limited.
Fig. 8 shows a novice guidance UI diagram of the Mini-Home application of the display device according to an embodiment of the present application.
The present application provides a display device, which may be implemented as, for example, a smart television, and comprises a camera, a display screen, and a controller. The display screen is used for displaying a user interface. The controller is configured to receive video clips captured by the camera when a target user is within the monitoring range of the display device, and to generate a video collection file based on the acquired video clips, wherein the controller controls the user interface to play the video collection file after a confirmation operation on the video collection file is received.
In some embodiments, the controller controls the user interface to display the novice guidance UI when the user starts the Mini-Home application for the first time, provided that the television is logged in to an account. In some embodiments, after the television switches accounts, when the user starts the Mini-Home application again, the controller will control the user interface to display the novice guidance UI once more. It will be appreciated that, on the television, the controller will control the user interface to display the novice guidance UI for each newly logged-in account.
Mini-Home includes the Cute Baby Vlog collection application, which can produce a video collection file about the target user. It should be noted that the Cute Baby Vlog is hereinafter referred to as "A Day of the Cute Baby", which will not be repeated below.
In "A Day of the Cute Baby", life segments of the target user's child are recorded through the camera configured on the television, so as to generate a video collection file about the child. When a user enables the "A Day of the Cute Baby" function in the Mini-Home application, the television can automatically record the user's family diary, that is, record on video the life of the user within the monitoring range of the television's camera.
Fig. 9 shows a start UI diagram of the "A Day of the Cute Baby" application of the display device according to an embodiment of the present application.
In the user interface shown in fig. 8, after the user starts the Cute Baby Vlog for the first time, the user interface jumps to the user interface shown in fig. 9.
In some embodiments, before the controller controls the camera to record a video clip, the controller further determines that the video collection switch of the display device and the camera are both in the on state.
The camera being in the on state means that the camera remains raised, i.e., the camera is activated; if the camera is lowered, it stops detecting and identifying whether a user appears within the monitoring range, so "A Day of the Cute Baby" temporarily stops acquiring video clips of the target user.
The video collection switch may be implemented as the "A Day of the Cute Baby" function switch, which may also be called the cute-baby detection switch. Before the display device controller controls the camera to record a video clip, it detects whether the "A Day of the Cute Baby" function switch is in the on state; if it is, and the camera is also in the on state, the controller controls the camera to record the video clip.
In some embodiments, when the "A Day of the Cute Baby" function is enabled, the camera mode in the television's settings is "keep raised". After a user clicks the "A Day of the Cute Baby" button, the controller needs to check the camera mode in the device settings: if the camera mode is "keep raised", the controller directly starts "A Day of the Cute Baby"; if the camera mode is "intelligent raise and lower", the user interface prompts that the camera mode needs to be switched to "keep raised".
Fig. 10 shows a NAS prompt diagram of "A Day of the Cute Baby" on the display device.
In some embodiments, if the television has no access to a NAS (Network Attached Storage), the "A Day of the Cute Baby" button in the Mini-Home login interface will be configured to be grayed out, meaning that the video collection switch is in the off state, and the user interface will display a prompt.
In some embodiments, the video collection files generated by the display device may be stored in a NAS, and the display device, a mobile terminal, or a home private cloud may play a video collection file by accessing the NAS. The video collection files generated by the display device and stored on the NAS can be browsed and played through the "A Day of the Cute Baby" user interface entered from Mini-Home on the television side, can be browsed and viewed through the family sharing directory in the home private cloud space, and can also be viewed on the television side directly from the NAS.
In some embodiments, the controller controls the video collection file stored in the NAS to be pushed to a display device, a mobile terminal, or a home private cloud, and the user can download and play the video collection file.
Fig. 11 shows a schematic diagram of a video collection file of a display device according to an embodiment of the present application.
In some embodiments, the user interface of the "A Day of the Cute Baby" application on the display device shows a plurality of generated video collection files related to the child. The user interface includes a plurality of video collection poster controls, below each of which is prompt text indicating the generation date. For example, the video collections may be implemented as a plurality of video collections of one day, i.e., the generation dates of the video collections are all the same day but their content differs; in other embodiments, the video collections may also be implemented as video collections of different dates, i.e., each video collection has a different generation date.
In some embodiments, the display device controller controls the camera to record video clips and generates a video collection file based on the acquired plurality of video clips. Specifically, the controller records video clips of no more than a first preset time length within a preset time period; after the preset time period, or after the first startup on the day following the preset time period, the controller generates a video collection file based on the acquired video clips.
For example, within a preset time period, such as 00:00-22:00 each day, when the controller detects that the video collection switch is in the on state and the camera is in the on state, i.e., the camera is raised, the television's controller automatically detects a child in front of the television, and upon detection automatically records a video clip of a time length no greater than the first preset time length, for example 6 s. After the preset time period, i.e., after 22:00 each day, if the television is still powered on, or after the television is started up the next day, the recorded video clips are automatically synthesized into a video collection file, which is stored in a designated NAS directory and pushed to the user through a notification; the user can click to view it, and can also view and download the target user's collection stored on the NAS disk through a mobile phone terminal.
In some embodiments, the display device controller generates the video collection file based on the acquired plurality of video clips after the first startup on the day following the preset time period. Specifically, the controller generates the video collection file a second preset time length after the first startup on the day following the preset time period.
For example, the target user detection service of the display device may be configured to include startup at power-on and timed automatic startup. If the target user detection service is configured to start at power-on, the task of synthesizing the target user's video clips into a video collection file can be scheduled to run after the second preset time length, for example 5 minutes; performing detection and synthesis after 5 minutes prevents the application from contending for system resources during startup, which could cause the system to freeze or slow down.
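The two timing rules above (recording only inside the preset window, and deferring synthesis by the second preset time length after power-on) can be sketched as follows. The window 00:00-22:00 and the 5-minute delay are the example values from the text; the function names are illustrative assumptions.

```python
from datetime import datetime, time, timedelta

# Example values from the text: the preset recording period
# and the second preset time length (startup delay).
RECORD_WINDOW = (time(0, 0), time(22, 0))
STARTUP_DELAY = timedelta(minutes=5)


def in_record_window(now: time) -> bool:
    """Video clips may only be recorded inside the preset time period."""
    start, end = RECORD_WINDOW
    return start <= now < end


def synthesis_start_time(boot_time: datetime) -> datetime:
    """The synthesis task runs STARTUP_DELAY after power-on so that it
    does not compete for system resources while the system is booting."""
    return boot_time + STARTUP_DELAY


print(in_record_window(time(21, 59)))  # True
print(in_record_window(time(22, 0)))   # False
print(synthesis_start_time(datetime(2020, 10, 23, 8, 0)))
```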
In some embodiments, background music may be added to the video collection file, and the camera records the video clips without recording audio data. After the preset time period, i.e., after 22:00 each day, if the television is still powered on, or after the television is started up the next day, the recorded video clips can be automatically synthesized into a video collection file with background music added; when recording the video clips, the camera only needs to record image data, not audio data.
In some embodiments, the controller generates a video collection file based on the acquired plurality of video clips, wherein the video collection file comprises the earliest video clip and the latest video clip within the preset time period.
For example, when generating the video collection file, the controller detects whether the NAS has valid video clips, where a valid video clip is one that was recorded normally. When the NAS has video clips to be synthesized and the NAS has sufficient space, the controller groups all the valid video clips by day, sorts the clips of each day in ascending time order, and selects a preset number of clips to synthesize, for example 5 clips.
In some embodiments, the video clip selection rule may be implemented as follows: if the number of video clips is less than or equal to the preset number, for example 5, all the video clips are synthesized; if the number of video clips is greater than the preset number, for example greater than 5, the earliest video clip is selected as the first clip of the video collection file, the latest video clip is selected as the last clip, and the other 3 clips are selected uniformly from the remaining clips.
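The selection rule above can be sketched as a short function. This is a hedged illustration, not the patent's code: the uniform-selection step for the middle clips is one reasonable reading of "uniformly selected from the remaining video clips".

```python
def select_clips(clips, target=5):
    """clips: one day's valid clips, already sorted in ascending time order.

    Returns all clips when there are at most `target`; otherwise returns
    `target` clips: the earliest first, the latest last, and target-2
    picked uniformly from the clips in between.
    """
    if len(clips) <= target:
        return list(clips)
    middle = clips[1:-1]          # everything between earliest and latest
    k = target - 2                # how many middle clips to keep
    step = len(middle) / k        # uniform stride over the middle clips
    chosen = [middle[int(i * step)] for i in range(k)]
    return [clips[0]] + chosen + [clips[-1]]


print(select_clips(list(range(3))))   # [0, 1, 2]
print(select_clips(list(range(10))))  # [0, 1, 3, 6, 9]
```

Note that the result always starts with the earliest clip and ends with the latest, matching the requirement that the collection contain the earliest and latest clips of the day.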
In some embodiments, the process by which the controller generates the video collection file based on the video clips may be implemented as follows:
First, the controller reads each frame of video stream data from a video clip and writes it into the video track of the target file; the first frame of the next video clip is written after the last frame of the previous video clip, forming the continuous video frame data of the collection.
Second, while synthesizing the video stream data, the controller accumulates the video duration of the generated video collection file.
Then, after the video stream data is synthesized, the controller reads each frame data of the audio file and writes the frame data into the audio track of the target file.
Finally, when the duration of the audio track data read so far exceeds the video duration, the controller stops reading audio data and the video synthesis is finished. After the video is synthesized, the video clips that were synthesized are deleted to release the occupied space. In some embodiments, the display device, or the display device's NAS, may store at most 100 video collection files; once 100 is exceeded, the oldest collection file is deleted.
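The four synthesis steps above can be sketched as follows, with frame I/O mocked as plain lists; a real implementation would drive a media muxer API instead. Names and the fixed per-frame duration are illustrative assumptions.

```python
def synthesize(clips, audio_frames, frame_duration=1.0):
    """Concatenate all clip frames into a video track, then write audio
    frames until the audio track covers the video duration."""
    video_track, video_duration = [], 0.0
    # Steps 1-2: write every frame of every clip in order, accumulating
    # the video duration; each clip is appended after the previous one.
    for clip in clips:
        for frame in clip:
            video_track.append(frame)
            video_duration += frame_duration
    # Steps 3-4: write audio frames into the audio track, stopping as
    # soon as the audio is at least as long as the video.
    audio_track, audio_duration = [], 0.0
    for frame in audio_frames:
        if audio_duration >= video_duration:
            break  # audio would outrun the video: stop reading audio
        audio_track.append(frame)
        audio_duration += frame_duration
    clips.clear()  # delete the synthesized clips to release space
    return video_track, audio_track


clips = [["v1", "v2"], ["v3"]]
video, audio = synthesize(clips, [f"a{i}" for i in range(10)])
print(len(video), len(audio), len(clips))  # 3 3 0
```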
Fig. 12 is a logic diagram illustrating the video clip recording and synthesizing process of "A Day of the Cute Baby" on the display device according to an embodiment of the present application.
In some embodiments, the controller receiving video clips captured by the camera when the target user is within the monitoring range of the display device includes: when a user is within the monitoring range of the display device, the controller controls the camera to perform face detection; based on the face detection, it is judged whether the user is younger than a preset age; if so, the user is judged to be the target user, and the controller receives the video clips collected by the camera; otherwise, the controller controls the camera to continue face detection.
In some embodiments, after the "A Day of the Cute Baby" application service is started, it first detects whether it was started at power-on. If it was, a delay task of a preset time is started, for example a 5-second delay task; if it was not, it detects whether the hardware includes a NAS.
if no NAS exists, the service is automatically terminated; if the NAS exists, detecting whether a video clip to be synthesized exists, and if so, synthesizing the video; if not, detecting whether a one-day function switch of the doll is turned on and whether the camera is set to keep rising, namely whether the video collection switch and the camera are in a starting state;
after synthesizing the video, storing the video collection file to an NAS; detecting whether a one-day function switch of the doll is turned on and whether the camera is set to keep rising; if the switch is off or the camera is not set to keep rising, the service is automatically terminated, namely suicide service is performed; if the video collection switch is turned on and the camera is set to keep rising, detecting whether the system time is within a preset time period, for example, whether the system time is between 0 and 22 points;
if the system time is within a preset time period, for example, the detection time is between 0 point and 22 points, detecting whether the camera is occupied; otherwise, starting the task at regular time after a preset time period, for example, starting the task at regular time after 20 minutes;
when the camera is detected to be occupied, starting a preset time period and then restarting a task at regular time; otherwise, starting to detect the human face;
identifying whether a child exists, namely a target user, and if the child is detected, starting to record a video; otherwise, restarting the task at regular time after starting the preset time period. In some embodiments, after the camera is turned on, the face detection is performed on the user within the monitoring range, and after the face is detected, whether the age is smaller than a preset age, for example 12 years old, is judged by using an algorithm through the acquired face information; if the age is identified to be less than 12 years old, judging that a child appears in the monitoring range before the television, stopping detection by the camera, starting recording a video clip, and storing the recorded video clip in a directory specified by the television;
and finally, the controller controls the camera to close detection, and restarts the task at regular time after starting the preset time.
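One pass of the service loop described above can be sketched as a pure decision function. This is a simplified illustration of the branching only; the function name `next_action`, its parameters, and its string results are assumptions for the sketch, not identifiers from the source.

```python
def next_action(has_nas, pending_clips, switch_on, camera_raised,
                hour, camera_busy, child_present):
    """One pass of the Baby's Day service loop: returns the action the
    service takes next, following the order of checks in the text."""
    if not has_nas:
        return "terminate"            # no NAS: service self-terminates
    if pending_clips:
        return "synthesize"           # clips waiting: synthesize first
    if not (switch_on and camera_raised):
        return "terminate"            # switch off or camera lowered
    if not (0 <= hour < 22):          # outside the 0:00-22:00 window
        return "schedule_restart"     # e.g. retry after 20 minutes
    if camera_busy:
        return "schedule_restart"     # camera occupied by another app
    return "record" if child_present else "schedule_restart"
```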
In some embodiments, starting the display device includes both AC (cold) start and STR (suspend-to-RAM) start; the task begins executing after a preset delay, for example 5 minutes after boot. This avoids system sluggishness or stuttering caused by all applications launching at boot time.
When the Baby's Day switch is turned off or the camera is set to "smart raise/lower", the Baby's Day service stops automatically and no timed restart of the task is needed; if the Baby's Day switch is on, the service starts after receiving the corresponding broadcast message.
A prerequisite for the Baby's Day application to detect the target user is that the system time is valid, for example within the detection window 00:00-22:00; validity is judged by checking whether the current system time falls within the preset time period.
In some embodiments, if the timer-started service runs later than 22:00, video synthesis is performed directly, and the timed restart interval may be changed to 2 hours after synthesis completes. As another example, once the camera of the display device is no longer occupied and its resources can be acquired, the Baby's Day detection duration may be set to 1 minute or another length.
In some embodiments, the video synthesis selection rule is to select 5 video clips in total. For example, if 5 or fewer clips were recorded that day, all of them are selected for synthesis; if more than 5 were recorded, the earliest and latest clips are selected, and the remaining 3 are sampled uniformly from the clips in between. If the display device is determined not to be connected to a NAS, the service terminates automatically and Baby's Day is not started at boot.
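The selection rule can be sketched as follows, assuming clips are ordered by recording time; `select_clips` is an illustrative name, and the even-sampling scheme for the middle clips is one reasonable reading of "selected uniformly", not the patent's exact algorithm.

```python
def select_clips(clips, total=5):
    """Select clips for synthesis: all of them when there are at most
    `total`; otherwise the earliest, the latest, and `total - 2`
    sampled evenly from the clips in between (clips are time-ordered)."""
    if len(clips) <= total:
        return list(clips)
    middle = clips[1:-1]
    k = total - 2
    step = len(middle) / k                       # even sampling stride
    sampled = [middle[int(i * step)] for i in range(k)]
    return [clips[0]] + sampled + [clips[-1]]
```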
In some embodiments, background music is added when synthesizing the video collection file, so the video is recorded without an audio stream. The video clip files are stored in the application directory, and the camera is closed after recording finishes.
During detection and recording, if the video collection switch is turned off or the camera becomes occupied, detection and recording stop, and the Baby's Day service terminates itself after scheduling the timed restart task.
Fig. 13 shows an operation UI diagram of a video collection file of a display device according to an embodiment of the present application.
In some embodiments, when Baby's Day has generated multiple video collections that day, for example 7 video collection files, the focus defaults to the first video collection poster control; when the user presses the OK button on the remote control, the television plays video collection 1 using the media center player.
In some embodiments, pressing the menu key in the user interface displays an interactive menu in the lower left corner, rendered on the topmost layer of the user interface, with options such as share to friends circle, delete the selected record, clear all records, and close this function.
Fig. 14 shows a schematic diagram of a recommended video collection file of a display device according to an embodiment of the present application.
After Baby's Day generates a video collection file, it is pushed to the television display or to the user interface of the mobile phone. In some embodiments, Baby's Day is configured to push only once per day.
Figs. 15A-15B show schematic diagrams of starting Baby's Day on the mobile phone according to an embodiment of the present application.
A mobile phone user starts the Baby's Day application of the Gather at Home app; the system loading interface is shown in Fig. 15A. When the television is off, the mobile phone user interface is as shown in Fig. 15B and includes a prompt text.
Fig. 15C shows a schematic operation diagram of Baby's Day on the mobile phone according to an embodiment of the present application.
In some embodiments, when no NAS is attached to the television, the mobile phone user interface, as shown in the figure, displays a prompt text, which may read, for example, "The television has no NAS attached; this function cannot be used."
Figs. 16A-16B are schematic diagrams illustrating the operation of Baby's Day on the mobile phone according to an embodiment of the application.
In some embodiments, when the mobile phone user opens the Baby's Day application of Gather at Home but the Baby's Day function on the television has not been enabled, the mobile phone user interface is as shown in Fig. 16A; when the user opens the application and the television's Baby's Day function is enabled but no video collection file has been generated yet, the mobile phone user interface is as shown in Fig. 16B.
Fig. 16C is a schematic diagram illustrating the operation of Baby's Day on the mobile phone according to an embodiment of the present application.
In some embodiments, when the mobile phone user opens the Baby's Day application of Gather at Home and the television's Baby's Day function is enabled and has generated a video collection file, the mobile phone user interface is as shown in the figure. The interface displays the generated video collection file, a prompt text, and a download control; the icon of the video collection file includes a thumbnail, and the prompt text shows the recording time of the file. After the download control receives the user's confirmation click, the mobile phone accesses the video file stored on the television's NAS, downloads it locally, and can then play it.
Fig. 17 shows the overall architecture of the Baby's Day application of a display device according to an embodiment of the present application.
In some embodiments, the Baby's Day application architecture includes: a Framework module, a device settings module, a Gather at Home module, a recording and synthesis service module, a NAS module, and an APP module.
The recording and synthesis service module includes a face detection module, a video recording module, and a video synthesis module. The face detection module detects whether a target user, such as a child, is present in the preview frame; the video recording module records videos when a child is detected; the video synthesis module synthesizes the recorded video clips into a video source to generate the child highlights, that is, the video collection file.
The NAS module stores files, holding the output of the recording and synthesis service module. For example, it saves the recorded video files and stores the synthesized video collection file in the relevant NAS directory.
The APP module includes a collection browsing interface module and a menu page module. The collection browsing interface module displays the video collection files stored in the NAS module; through the menu page module in the browsing interface, a video collection file can be shared with friends and family, deleted, or otherwise operated on.
The Framework module detects the occupation state of the camera and outputs the result to the recording and synthesis service module. The device settings module configures the camera and outputs the configuration to the recording and synthesis service module. The Gather at Home module controls the Baby's Day application switch and outputs its state to the recording and synthesis service module; it can also determine whether the face detection function for identifying the target user is enabled. The Gather at Home module is also the entry point of Baby's Day: child detection is permitted and video clips are captured only after the detection switch is turned on. In some embodiments, the video collection file may be played by launching the media center.
In some embodiments, after the Baby's Day switch is set to on, in addition to saving the switch state, a broadcast message indicating that the switch has been turned on is sent to the recording and synthesis service module to start face detection. Similarly, the same operations are performed when the Baby's Day switch is turned off; after receiving the switch-off broadcast message, the recording and synthesis service module cancels the timed restart task to avoid unnecessary starts.
In some embodiments, the face detection age threshold is 12 years, and 6 seconds of video are recorded after a target user, that is, a user under 12, is detected. Starting and stopping detection within a short time reduces the detection success rate, so detection may be configured to last 1 minute or another duration. After video recording completes, a 20-minute restart task (or another interval) is scheduled through the timing management component, and the service then terminates itself.
In some embodiments, the video clips to be synthesized are iterated over: their track data is extracted by the extraction component and written into the designated output file by the media synthesis component. The audio track data is processed with the same logic. Both the recorded video and the synthesized video format may be configured as MP4 or another format.
In some embodiments, the recorded video is checked for validity; the recorded video may be stored in a temporary folder, and the corresponding temporary file is deleted after the recording has been copied to the designated directory. In some embodiments, the video collection file is synthesized and pushed; for example, synthesis may be performed in the idle period 22:00-00:00, and the video is pushed to the user immediately after synthesis instead of waiting until the next day.
In some embodiments, the first video collection file is generated on June 2, the second on June 3, and the third on June 4. The controller automatically generates the video collection file and displays it on the user interface at a fixed time, for example 8:30 each morning.
In some embodiments, after the video collection file of the current day is generated, the controller actively deletes the video clips referenced by it, to conserve the television's storage resources.
In some embodiments, the video clips captured by the display device and the generated video collection files may be stored on a NAS (Network Attached Storage). For example, the NAS may be allowed to store at most 100 video collection files to conserve storage. In some embodiments, if NAS space is insufficient, the video collection files with earlier generation times may be deleted.
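The oldest-first pruning policy can be sketched as follows, assuming each file is tracked as a (generation_time, name) pair; `prune_collections` is an illustrative name, not from the source.

```python
def prune_collections(files, limit=100):
    """Keep at most `limit` collection files, deleting the oldest first.
    `files` is a list of (generation_time, name) tuples; returns the
    surviving files in chronological order."""
    ordered = sorted(files)               # oldest first by generation time
    if len(ordered) <= limit:
        return ordered
    return ordered[len(ordered) - limit:]  # drop the oldest extras
```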
In some embodiments, the camera is configured so that the controller stops acquiring video clips when the camera is invoked by another application.
When the camera is called by another application, the controller stops acquiring video clips; when the video collection function is enabled, the controller controls the camera to acquire video clips. While the television's video collection function is on, the controller can control the camera to automatically capture and store video clips. When the controller detects that the camera has been invoked by another application, acquisition of video clips stops or pauses; for example, when the camera is used for a video call, the video collection application pauses or stops capturing clips. If the television's video collection function is off, the controller does not control the camera to record video clips even when the camera is idle, and the controller stops the camera's monitoring activity.
In some embodiments, the controller limits the duration of each video clip to at most a second threshold, and fixes the duration of the first video collection file to a preset value.
For example, when the second threshold is 6 seconds, the clip recorded by the controller is at most 6 seconds long, measured from the moment the child is detected. If the child remains within the camera's capture range for more than 6 seconds, the controller records only the first 6 seconds, so the clip is exactly 6 seconds long; if the child stays within range for less than 6 seconds, that is, leaves the capture range within 6 seconds, the controller records only the portion during which the child was in range, so the clip is shorter than 6 seconds.
As another example, when the fixed value is 30 seconds, the controller checks the length of the generated first video collection file and removes content beyond 30 seconds, ensuring the first video collection file does not exceed the preset fixed duration.
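The two duration rules above amount to simple caps, which can be sketched as follows; the function names are illustrative and the thresholds are the example values from the text (6 seconds per clip, 30 seconds per collection).

```python
def clip_length(presence_seconds, threshold=6.0):
    """Recorded clip length: capped at the threshold when the child stays
    in range longer, otherwise equal to the time spent in range."""
    return min(presence_seconds, threshold)

def collection_length(total_seconds, fixed=30.0):
    """Collection file length after removing content beyond the fixed value."""
    return min(total_seconds, fixed)
```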
Based on the above technical solution and the accompanying description of how the display device produces a video collection file, the present application also provides a method for generating a video collection file.
Fig. 18 is a schematic flowchart of a video collection file generation method according to an embodiment of the present application.
In step 1801, a video clip is recorded when the target user is within the monitoring range;
in step 1802, a video collection file is generated based on the acquired video clips, where the video collection file is played in the user interface after a confirmation operation is received. The process and operation of generating the video collection file by this method have been described in detail in the technical solution in which the display device generates the video collection file and are not repeated here.
In some embodiments, before recording the video clip, the method further includes determining that the video collection switch and the camera of the display device are both on.
In some embodiments, recording a video clip and generating a video collection file based on the acquired clips specifically includes: recording, within a preset time period, video clips no longer than a first preset duration; and generating the video collection file based on the acquired clips after the preset time period ends, or after the first boot of the display device on the day following the preset time period.
In some embodiments, generating the video collection file after the first boot following the preset time period specifically includes: generating the video collection file based on the acquired clips a second preset duration after the first boot on the day following the preset time period.
In some embodiments, the video collection file is generated based on the acquired video clips, where it includes the earliest and latest video clips within the preset time period.
In some embodiments, when a user is within the monitoring range, face detection is performed; based on the face detection, it is determined whether the user is younger than a preset age, and if so, the user is determined to be a child, that is, a target user, and a video clip is recorded; otherwise, face detection continues. The processes and operations of these steps have been described in detail in the technical solution in which the display device generates the video collection file and are not repeated here.
The method achieves classification of the video clips through face detection and clip recording; further, generating the video collection file at a preset time enables automatic acquisition of the collection; further, the second preset duration avoids resource contention when the display device boots; further, the first preset duration controls the length of the video collection file, so that video clips are acquired automatically and the video collection file is generated intelligently.
To address the long generation time and poor adaptability of video file watermarks, the present application provides a display device and a method for generating a watermark for a video collection file.
The following describes the display device and the watermark generation method by taking a smart television and a mobile phone running the Gather at Home application as examples.
In some embodiments, the present application provides a display device, which may be implemented, for example, as a smart television, including: a camera; a display screen for displaying a user interface; and a controller configured to receive a video clip captured by the camera in response to a detected target user, and to generate, based on the video clips, a video collection file containing a first watermark and a second watermark.
The Gather at Home app includes a baby vlog collection sub-application that generates the video collection file. This sub-application is hereinafter referred to as Baby's Day and is not described separately below.
Baby's Day records clips of the target user's life through the camera configured on the television and generates a watermarked video collection file about the target user. The target user may be implemented, for example, as a child of suitable age as provided in some embodiments of the present application, also referred to as the baby, and is not described in detail again below.
When a user enables the Baby's Day function of the Gather at Home application, the television automatically records the user's family diary, that is, it records on video the user's activities within the monitoring range of the television camera.
In some embodiments, when the Baby's Day function is enabled, the camera mode in the television settings is "always raised". After the user clicks the "Baby's Day" button, the controller checks the camera mode in the device settings: if the mode is "always raised", Baby's Day is started directly; if the mode is "smart raise/lower", the user interface prompts that the mode needs to be switched to "always raised".
In some embodiments, if the television is not connected to a NAS (Network Attached Storage), the Baby's Day button in the Gather at Home login interface is grayed out, meaning the child detection switch is off, and the user interface displays a prompt message.
In some embodiments, the video collection file generated by the display device may be stored on a NAS, and the display device, a mobile terminal, or a home private cloud can play it by accessing the NAS. Video collection files stored on the NAS can be browsed and played through the Baby's Day user interface entered from Gather at Home on the television, browsed through the family sharing directory of the home private cloud space, or viewed directly on the television. In some embodiments, the controller pushes the video collection file stored on the NAS to the display device, a mobile terminal, or the home private cloud, and the user can download and play it.
In some embodiments, a Baby's Day user interface on which multiple video collection files have been generated is illustrated. The interface comprises multiple video collection poster controls, below each of which is a prompt text indicating the generation date. For example, the posters may represent multiple video collections of the same day, that is, collections all generated on the same date but with different content; in other embodiments, they may represent collections of different dates, each generated on a distinct date.
Fig. 19A shows a watermark diagram of video collection file playback according to an embodiment of the present application.
The display device includes a controller that, in response to a detected target user, receives video clips captured by the camera and generates, based on the clips, a video collection file containing a first watermark and a second watermark. When the video collection file is played on the user interface, the first watermark is no longer displayed after a first display duration, while the second watermark is updated with the system time at which the video clip was recorded, as shown in Fig. 19B, which shows a watermark diagram of video collection file playback in an embodiment of the present application.
For example, video collection 1 contains a first watermark and a second watermark, where the first display duration may be configured as 3 seconds. While video collection 1 plays on the display device's user interface, frames within the first 3 seconds contain both watermarks; after 3 seconds, the playback interface no longer displays the first watermark and shows only the second, which displays the system time at which the video clip was recorded, so the user knows the date and time at which the currently playing portion of collection file 1 occurred.
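The playback-time behaviour above can be sketched as a small lookup; this is an illustration only, assuming the first watermark is shown up to and including the first display duration (the boundary convention is an assumption), and `visible_watermarks` is a hypothetical name.

```python
def visible_watermarks(playback_seconds, first_display=3.0):
    """Watermarks shown at a given playback position: the first (cover)
    watermark disappears after the first display duration; the second
    (recording timestamp) watermark is always shown."""
    marks = ["second"]
    if playback_seconds <= first_display:
        marks.insert(0, "first")   # still within the cover display window
    return marks
```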
In some embodiments, the display device controller records video clips no longer than a first preset duration within a preset time period; after the preset time period, or after the first boot on the following day, the controller generates a video collection file based on the acquired clips. For example, during the preset time period, say 00:00-22:00 each day, when the controller detects that the child detection switch is on and the camera is enabled, that is, the camera is raised, the television's controller automatically detects a target user in front of the television and, on each detection, records a clip no longer than the first preset duration, for example 6 seconds. After the preset time period, that is, after 22:00, if the television is still on, or after it is booted the next day, the recorded clips are automatically synthesized into a video collection file, which is stored in the designated NAS directory and pushed to the user via notification; the user can click to view it, and can also view and download the video collection file stored on the NAS disk through the mobile phone.
In some embodiments, the controller's receiving of a video clip from the camera in response to the detected target user specifically includes: when the controller determines that the clip is the first of the day, adding a first watermark and a second watermark to the clip; otherwise, adding only the second watermark.
For example, the display device controller starts recording after recognizing a target user. When it determines that the currently recorded clip is the first of the day, it adds both watermarks: the first watermark may be implemented to include a cover picture and the system date; the second watermark may include timestamp information, that is, the system date and the system time at which the clip is being recorded, refreshed with the user interface for dynamic display.
When the controller determines that the currently recorded clip is not the first of the day, it adds only the second watermark, which may include timestamp information, that is, the system date and the system time at which the clip is being recorded, dynamically displayed as the user interface refreshes.
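The per-clip watermark choice can be sketched as a single function covering both the two-watermark and three-watermark embodiments; `watermarks_for_clip` and the string labels are illustrative names, not from the source.

```python
def watermarks_for_clip(is_first_of_day, with_logo=False):
    """Watermarks added while recording a clip: the first clip of the day
    gets the cover (first) watermark plus the timestamp (second) watermark;
    later clips get only the timestamp. The brand-LOGO (third) watermark
    is added in either case when that embodiment is enabled."""
    marks = ["first", "second"] if is_first_of_day else ["second"]
    if with_logo:
        marks.append("third")
    return marks
```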
Fig. 20A shows a watermark diagram of video collection file playback according to an embodiment of the present application.
In some embodiments, the video collection file generated by the display device of the present application further contains a third watermark. The controller's receiving of a video clip from the camera in response to the detected target user specifically includes: when the controller determines that the clip is the first of the day, adding the first, second, and third watermarks to the clip; otherwise, adding only the second and third watermarks.
For example, the display device controller starts recording after recognizing a target user. When it determines that the currently recorded clip is the first of the day, it adds the first, second, and third watermarks: the first watermark may be implemented to include a cover picture and the system date; the second watermark may include timestamp information, that is, the system date and the system time at which the clip is being recorded, refreshed with the user interface for dynamic display; the third watermark may be implemented to include brand LOGO information, as shown in Fig. 21, which shows a watermark diagram of video collection file playback according to an embodiment of the present application.
When the controller determines that the currently recorded clip is not the first of the day, it adds only the second and third watermarks: the second watermark may include timestamp information, that is, the system date and the system time at which the clip is being recorded, dynamically displayed as the user interface refreshes; the third watermark may be implemented to include brand LOGO information, as shown in Fig. 20B, which shows a watermark diagram of video collection file playback according to an embodiment of the present application.
Fig. 22 shows a schematic flowchart of video highlight file watermarking according to an embodiment of the present application.
In some embodiments, when the controller determines that the video segment is the first of the day, it adds a first watermark, a second watermark, and a third watermark to the video segment by performing the following steps:
in step 2201, when it is determined that the recording time is less than or equal to the first display time duration, a cover watermark including a first watermark, a second watermark, and a third watermark is superimposed on a preview screen, wherein the preview screen is updated with the system time.
The display device controller controls the camera to record the first video segment of the current day. Taking a first display time length of 2 seconds as an example, a cover watermark containing the first watermark, the second watermark, and the third watermark is created while the first video segment is being recorded; the controller then superimposes the cover watermark on the preview picture at the current recording time of the video clip to obtain preview data containing the cover watermark image, and the preview picture can be refreshed with the system time of the display device to produce a dynamically updating display effect.
For example, if the controller sends the update watermark message at an interval of 1 second, the controller of the display device updates the watermark picture every second, and the playing interface of the first video segment of the current day displays the first watermark, the second watermark and the third watermark within a first display time length from the beginning.
In step 2202, when it is determined that the recording time is longer than the first display time duration, a cover watermark including a second watermark and a third watermark is superimposed on a preview screen.
In some embodiments, the controller of the display device re-determines the currently recorded duration after each time the cover watermark is superimposed on the preview picture. When the recording time exceeds the first display time length, for example 2 seconds, it superimposes a cover watermark containing only the second watermark and the third watermark on the preview picture, so that during playback the video album enters the normal playing interface after the album cover has been displayed for the set time.
The normal playing interface does not include the first watermark, and the first watermark can be implemented as a cover picture and a system date, for example; the second watermark may be implemented, for example, to include timestamp information, i.e., a system date and a system time, i.e., a system time at which the video segment is currently recorded, and may be refreshed with the user interface for dynamic display; the third watermark may be implemented, for example, to include brand LOGO information.
In step 2203, the superimposed preview picture is synthesized into the video clip.
In some embodiments, the controller renders the created cover watermark onto the preview picture using OpenGL to generate preview data, then sends a message after 1 second to update the cover watermark. After acquiring the updated preview picture, the controller passes the preview frame data to a MediaCodec encoder for video encoding; once encoding is complete, the encoded data is output to a MediaMuxer for synthesis, yielding video segments containing the different watermarks, as shown in fig. 23, where fig. 23 shows a flow diagram of video segment recording according to an embodiment of the present application.
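The timing logic of steps 2201-2203 can be sketched in Python as follows; this is a minimal illustration, with the 2-second first display time length taken from the example above and the function and layer names hypothetical.

```python
# Sketch of the per-frame watermark selection in steps 2201-2203: while a
# first-of-day clip is being recorded, the cover watermark (first + second +
# third watermarks) is overlaid only while the elapsed recording time is at
# most the first display time length; afterwards only the timestamp and LOGO
# watermarks (second + third) remain on the preview picture.

FIRST_DISPLAY_SECONDS = 2  # first display time length from the example

def watermarks_for_frame(elapsed_seconds, first_of_day):
    """Return the watermark layers to superimpose on the current preview frame."""
    layers = ["timestamp", "logo"]       # second and third watermarks, always shown
    if first_of_day and elapsed_seconds <= FIRST_DISPLAY_SECONDS:
        layers.insert(0, "cover")        # first watermark: cover picture + date
    return layers
```

A clip that is not the first of the day never receives the cover layer, matching the non-first-of-day branch.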
In some embodiments, the length of the video segment recorded by the display device may be set to 6 seconds, that is, the video recording is finished after the recording time lasts for 6 seconds.
In some embodiments, the controller adding the second watermark and the third watermark to the video segment when it determines that the segment is not the first of the day specifically includes: superimposing a cover watermark containing the second watermark and the third watermark on the preview picture, and synthesizing the superimposed preview picture into the video clip.
For example, when the video clip is not the first of the day, the controller adds the second watermark and the third watermark, namely the timestamp information in the upper right corner of the playing interface of the video collection file and the brand LOGO information in the lower right corner, to the current video clip, without adding the first watermark, i.e., the cover picture.
In some embodiments, the first watermark includes a cover picture, a system date; the second watermark comprises a system time and a system date; the third watermark includes a brand LOGO.
When the video collection file is played, its playing interface can display two watermark effects: the first includes the cover picture, the date, the timestamp, and the proprietary brand LOGO of the social television, and is displayed only during the first 2 seconds of playback; the second includes the timestamp and the proprietary brand LOGO of the social television.
Fig. 24 shows a flowchart of creating a watermark according to an embodiment of the present application.
In some embodiments, the controller creates a canvas, uses a painting tool to paint different watermark pictures and text contents in a designated area, and then saves the pictures generated by the canvas.
For example, the lowest layer of the picture is the preview image of the currently recorded video clip at the current time. The controller draws the designated cover picture into the designated first watermark region, the social television brand LOGO picture into the designated third watermark region, and the system date and system time, namely the timestamp information, into the designated second watermark region, finally obtaining a composite watermark picture. The watermark picture is then drawn over the preview picture; through timed refreshing of the user interface, the effect that the first watermark is displayed only within a specific time period while the second and third watermarks are always displayed can be realized.
In some embodiments, the first watermark may also include system date information, i.e., the first watermark may be composed of a cover picture and a system date, as shown in fig. 21.
In some embodiments, the display device controller generates the video compilation file based on the acquired video segments a second preset time length after the first power-on of the day following the preset time period. For example, the baby detection service of the display device may be configured to start at power-on and to auto-start on a timer; if configured to start at power-on, the task of synthesizing the video collection file from the baby video clips may be scheduled a preset second time length, for example 5 minutes, after startup. Detecting and synthesizing after 5 minutes avoids contention for system resources right after startup, which could cause the system to lag.
In some embodiments, background music may be added to the video album file, and the camera records the video clips without recording audio data. After the preset time period, i.e., after 22:00 each day, if the television is still powered on, or after it is powered on the next day, the recorded video segments can be automatically synthesized into a video collection file with background music added; when recording the video segments, the camera only needs to record image data, not audio data.
In some embodiments, the controller generates a video highlight file based on the acquired plurality of video clips, wherein the video highlight file comprises the earliest and the latest video clip within the preset time period. For example, when generating the video album file, the controller detects whether the NAS contains valid video clips, where a valid video clip is one that was recorded normally. When the NAS has clips to synthesize and sufficient space, the controller groups all valid clips by day, sorts the clips of each day in ascending time order, and selects a preset number of them for synthesis, for example 5 clips.
In some embodiments, the video segment selection rule may be implemented as follows: if the number of video clips is less than or equal to the preset number, for example 5, all clips are synthesized; if it is greater than the preset number, the earliest clip is selected as the first constituent segment of the video highlight file (the first clip of the day carries the highlight cover from recording), the latest clip is selected as the last segment, and the remaining 3 clips are selected uniformly from the rest.
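A minimal sketch of this selection rule, assuming the clips are already sorted in ascending time order (the function name is hypothetical):

```python
def select_clips(clips, target=5):
    """Select clips for synthesis: all of them if there are at most `target`;
    otherwise the earliest, the latest, and the rest sampled uniformly from
    the middle. `clips` must be sorted in ascending time order."""
    if len(clips) <= target:
        return list(clips)
    middle = clips[1:-1]
    k = target - 2                            # clips still needed from the middle
    step = len(middle) / k
    picked = [middle[int(i * step)] for i in range(k)]
    return [clips[0]] + picked + [clips[-1]]
```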
In some embodiments, when the target user is within the monitoring range of the display device, the controller receiving a video clip captured by the camera includes: when a user is within the monitoring range, the controller controls the camera to perform face detection; based on the face detection, it determines whether the user is younger than a preset age, and if so, the user is judged to be a child, i.e., the target user, and the controller receives the video clip acquired by the camera; otherwise, the controller controls the camera to continue face detection.
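This age-based decision can be illustrated as follows; the threshold value is an assumption, since the source only says "a preset age", and the action names are illustrative:

```python
PRESET_AGE = 14  # hypothetical threshold; the source only says "a preset age"

def detection_action(face_detected, estimated_age=None):
    """Decide the controller's next step from one face-detection pass:
    receive a clip if a face younger than the preset age is found,
    otherwise keep the camera detecting."""
    if face_detected and estimated_age is not None and estimated_age < PRESET_AGE:
        return "receive_clip"
    return "continue_detection"
```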
In some embodiments, if the timer-started service runs later than 22:00, video composition is performed directly, and after completion the timer restart interval may be changed to 2 hours. For another example, once the camera of the display device is no longer occupied and resources can be acquired, the baby detection time for the day may be set to 1 minute or another duration.
In some embodiments, the recorded video files contain timestamp information, and the composition rule selects 5 video segments in total. For example, if 5 or fewer segments were recorded that day, all of them are selected for synthesis; if more than 5, the earliest and latest segments are selected, and the remaining 3 are chosen uniformly from the rest. When the display device is determined not to be connected to the NAS, the service terminates automatically and the baby's day service is not started.
In some embodiments, the video segments to be synthesized are processed in a loop: an extraction component extracts the video track data, which a media synthesis component then writes into the designated output file; the audio track data is processed with the same logic. The format of the recorded and synthesized videos can be configured as MP4 or another format, and the file name contains a timestamp so that data to be deleted can be located conveniently.
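Because the file name carries a timestamp, lookup and cleanup by date are straightforward. A sketch of one possible naming scheme follows; the exact format is not specified in the source, so the prefix and layout here are illustrative:

```python
from datetime import datetime

def compose_filename(prefix="highlight", ext="mp4", now=None):
    """Build an output file name embedding a timestamp so that files can be
    located (and deleted) by date. The naming scheme shown is illustrative."""
    now = now or datetime.now()
    return "{}_{}.{}".format(prefix, now.strftime("%Y%m%d_%H%M%S"), ext)
```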
In some embodiments, it is determined whether a recorded video is a valid file; the recorded video may be stored in a temporary folder, and the corresponding temporary file is deleted after the video has been copied to the designated directory. In some embodiments, the video album file is synthesized and pushed. For example, synthesis can be performed in the idle period of 22:00-00:00, and the video is pushed to the user immediately after synthesis instead of waiting until the next day.
In some embodiments, the first video album file is generated on June 2, the second on June 3, and the third on June 4. The controller automatically generates and displays the video highlight file on the user interface at a fixed time, which may be implemented, for example, as 8:30 every morning.
In some embodiments, the controller actively deletes the video segment referred to by the video highlight file after the video highlight file of the current day is generated, so as to optimize the storage resource of the television.
In some embodiments, the video clips collected by the display device, and the generated video album file may be stored in a NAS (Network Attached Storage). For example, it may be implemented to allow storage of up to 100 video album files on the NAS to optimize storage resources. In some embodiments, if the NAS space is not sufficient, the video album file with an earlier generation time may be deleted.
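The 100-file cap on the NAS can be enforced with a simple retention policy that deletes the oldest highlight files first; a sketch, with hypothetical names and the actual deletion left to the caller:

```python
def enforce_retention(dated_files, limit=100):
    """dated_files: iterable of (generation_date, path) pairs.
    Returns (to_delete, to_keep): at most `limit` files are kept, and the
    files with the earliest generation dates are deleted first."""
    ordered = sorted(dated_files)            # oldest first
    excess = max(len(ordered) - limit, 0)
    to_delete = [path for _, path in ordered[:excess]]
    to_keep = [path for _, path in ordered[excess:]]
    return to_delete, to_keep
```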
In some embodiments, the camera is configured to stop acquiring the video clip for the controller when invoked by another application.
When the camera is invoked by another application, the controller stops acquiring video clips; when the video collection function is on, the controller controls the camera to acquire video clips. While the television's video collection function is on, the controller can control the camera to automatically collect and store video clips, and when it detects that the camera has been invoked by another application, it stops or pauses the acquisition. For example, when the camera is used for a video call, the video highlights application pauses or stops capturing video clips. If the video collection function of the television is off, the controller does not control the camera to record video clips even when the camera is unoccupied, and the controller stops the camera's monitoring activity.
In some embodiments, the controller configures the length of time of the video segment to be less than or equal to a second threshold; and the controller configures the time length of the first video album file to a fixed value.
For example, when the second threshold is 6 seconds, the length of the video clip recorded by the controller, counted from detection of the target user, is at most 6 seconds. When the target user remains in the camera's video acquisition range for more than 6 seconds, the controller acquires only the first 6 seconds, so the clip length is 6 seconds; when the target user stays in the acquisition range for less than 6 seconds, that is, leaves within 6 seconds, the controller acquires only the portion during which the target user was in range, so the clip length is less than 6 seconds.
For another example, when the fixed value is 30 seconds, the controller checks the length of the generated first video highlight file and trims the content beyond 30 seconds, ensuring that the first video highlight file does not exceed the preset fixed time length.
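The two length constraints, a per-clip cap (the second threshold) and a fixed total length for the highlight file, reduce to simple clamping; a sketch using the 6-second and 30-second values from the examples:

```python
CLIP_MAX_SECONDS = 6       # second threshold from the example
HIGHLIGHT_SECONDS = 30     # fixed value from the example

def recorded_clip_length(presence_seconds):
    """A clip covers the target user's presence, capped at the second threshold."""
    return min(presence_seconds, CLIP_MAX_SECONDS)

def final_highlight_length(total_seconds):
    """The highlight file is trimmed so it never exceeds the fixed value."""
    return min(total_seconds, HIGHLIGHT_SECONDS)
```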
Based on the above technical solution and the accompanying drawings describing how the display device generates the video highlight file watermark, the present application also provides a method for generating a video highlight file watermark.
Fig. 25 is a flowchart illustrating a method for generating a watermark in a video highlight file according to an embodiment of the present application.
In step 2501, in response to the detected target user, receiving a video clip collected from a camera;
in step 2502, a video highlight file including a first watermark and a second watermark is generated based on the video segment, wherein the first watermark is not displayed after a first display time length when the video highlight file is played on the user interface, and the second watermark is displayed as a system time when the video segment is recorded. The process and operation of generating the video highlight file by the method have been described in detail in the technical scheme of realizing the video highlight file watermark generation by the display device, and are not described herein again.
In some embodiments, in response to a detected target user recording a video segment, specifically including determining that the video segment is the first of the day, adding a first watermark and a second watermark to the video segment; otherwise, adding a second watermark to the video segment. The process and operation of generating the video highlight file by the method have been described in detail in the technical scheme of realizing the video highlight file watermark generation by the display device, and are not described herein again.
In some embodiments, the video album file further comprises: the controller responds to the detected target user to receive the video segment collected by the camera, and specifically comprises the steps that when the controller judges that the video segment is the first day, the first watermark, the second watermark and the third watermark are added to the video segment; otherwise, the controller adds the second watermark and the third watermark to the video segment. The process and operation of generating the video highlight file by the method have been described in detail in the technical scheme of realizing the video highlight file watermark generation by the display device, and are not described herein again.
In some embodiments, when the video segment is determined to be the first of the day, adding a first watermark, a second watermark, and a third watermark to the video segment, specifically including when it is determined that the recording time is less than or equal to the first display time length, superimposing a cover watermark including the first watermark, the second watermark, and the third watermark on a preview picture, where the preview picture is updated along with the system time; when the recording time is judged to be longer than the first display time length, the cover watermark containing the second watermark and the third watermark is superposed on the preview picture; and synthesizing the superposed preview pictures into a video clip. The process and operation of generating the video highlight file by the method have been described in detail in the technical scheme of realizing the video highlight file watermark generation by the display device, and are not described herein again.
In some embodiments, when it is determined that the video clip is not the current day, adding a second watermark and a third watermark to the video clip, specifically, when it is determined that the video clip is not the current day, superimposing a cover watermark including the second watermark and the third watermark on a preview picture; and synthesizing the superposed preview pictures into a video clip. The process and operation of generating the video highlight file by the method have been described in detail in the technical scheme of realizing the video highlight file watermark generation by the display device, and are not described herein again.
In some embodiments, the first watermark comprises a cover picture, a system date; the second watermark comprises a system time and a system date; the third watermark includes a brand LOGO. The process and operation of generating the video highlight file by the method have been described in detail in the technical scheme of realizing the video highlight file watermark generation by the display device, and are not described herein again.
With the method, the transient display of the cover picture can be realized by constructing the first watermark and the first display time length; further, by constructing the second watermark, the recording time of the highlight file can be displayed; further, by constructing the third watermark, the brand LOGO of the highlight file can be displayed; further, by identifying the first video segment of the current day, the watermarks can be added to the video collection file immediately, improving the general adaptability of watermark addition.
To address the problems of user privacy violations and camera service invocation conflicts that arise when a display device in a home runs the target user detection service, the present application provides a display device and a camera control method.
The following explains the display device and the camera control method by taking a smart television and a mobile phone as examples, through the process of generating a video collection file with a home application program.
In some embodiments, the controller receives a first instruction input by a user, and determines whether the camera is in a lifting state, wherein the first instruction is used for starting a video highlight switch to start a target user detection service; when the camera is in a lifting state, the controller can control the camera to record a video clip containing a target user, and the video clip can be used for generating a video collection file played on the user interface;
It should be noted that, in some embodiments provided in the present application, the target user may be specifically implemented as a young child, also called a baby; the target user detection service may be implemented as a baby detection service; target user detection may be implemented as baby detection; the video clips recorded by the camera herein refer to clips containing the target user; the video collection file may be implemented as a baby video collection file; and the video collection switch may be embodied as a baby detection switch. These terms are not repeated below.
For example, before the controller controls the camera to record a video clip of the baby, it must determine that the video highlight switch of the display device is on and the camera is kept in the lifting state. The camera being on means it keeps the lifting state, i.e., it is started; if the camera is lowered, it stops detecting and identifying whether a user appears in the monitoring range, so the acquisition of the baby's daily video clips is paused, i.e., the target user detection service is stopped.
In some embodiments, the video collection switch may be implemented as a baby's day function switch. Before recording a video clip containing the target user or performing target user detection, the display device controller checks whether the baby's day function switch is on; if the switch is on and the camera is also on, the controller controls the camera to record the video clip or perform target user detection.
In some embodiments, when the camera is not in the keep-up state, the controller controls the user interface of the display device to display a first setting interface for changing the camera state to execute a first instruction for turning on the video highlight switch to start the target user detection service, as shown in fig. 26.
For example, when the baby's day function is enabled, the controller receives a first instruction from the user to turn on the video collection switch; the switch may be specifically implemented as a baby detection switch, used to detect the baby and obtain the related video collection file;
the controller detects whether the camera mode in the television is the keep-lifting state, which on some models may be implemented as raising at power-on;
when the camera is set to keep the lifting state, the controller turns on the video collection switch, i.e., the baby detection switch; the target user detection service starts running in the background, and the controller can control the camera to record video clips containing the target user and perform target user detection. Since the detection service is a background service, the user cannot perceive whether it is running, so whether the user permits the background to perform target user detection must be considered.
When the camera is set to be in a non-maintaining lifting state, the controller controls a user interface of the display device to display a first setting interface, and the first setting interface is used for changing the state of the camera to be in a maintaining lifting state, so that the controller executes a first instruction to start a video collection switch and start target user detection service.
It should be noted that the display device can turn on the video highlight switch only when the user agrees to change the camera setting to raising at power-on.
In some embodiments, the controller allows the background to run the target user detection service only when the video highlight switch of the display device is in an on state, and the target user detection service is set to be strongly related to the camera.
The first setting interface of the display device may be implemented to set camera parameters; the camera states include raising at power-on, intelligent lifting, and lifting on first use. For the user, setting the camera to raise at power-on ensures that it stays in the lifting state, so that the target user detection service can invoke the camera service in real time. In the camera's other states, the user must actively operate the camera before the controller can invoke the camera service; therefore, the background is allowed to run the target user detection service only when the camera is set to raise at power-on.
In some embodiments, after the user clicks the baby's day button, the controller needs to check the camera mode in the system settings: if the mode is keep lifting, the baby's day function is started directly; if the mode is intelligent lifting, the user interface displays the first setting interface to prompt that the camera mode must be switched to keep lifting.
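The mode check after the button press reduces to a small decision; a sketch with illustrative mode and action names:

```python
def on_highlight_switch_request(camera_mode):
    """Decide the controller's action when the user turns on the baby's day
    switch: start the detection service directly if the camera keeps lifting,
    otherwise show the first setting interface so the user can change the mode."""
    if camera_mode == "keep_lifting":
        return "start_detection_service"
    return "show_first_setting_interface"
```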
In some embodiments, the controller may control the camera to record a video clip containing the target user, the controller further configured to: receiving a second instruction input by a user, wherein the second instruction is used for closing the lifting-keeping state of the camera; and controlling the display equipment to close the video highlight switch to stop the target user detection service.
For example, when the video highlight switch of the display device is on, and the camera was in the intelligent lifting or lifting-on-first-use state when the switch was turned on, the user changes the camera state to raising at power-on, i.e., the keep-lifting state, through the popped-up first setting interface, and the target user detection service of the display device runs in the background. If, while using the display device, the user manually changes the camera parameter back to intelligent lifting, lifting on first use, or another option, the controller turns off the video collection switch to stop the target user detection service, protecting the user's privacy to the maximum extent.
In some embodiments, the controller controls the user interface to display a first setting interface, the first setting interface is used for changing the state of the camera to execute the first instruction, and the user configures the camera so that the controller of the display device can control the camera to record the video clip containing the target user and execute the target user detection. The controller can also receive a third instruction input by the user, wherein the third instruction is used for closing the video collection switch to stop the target user detection service; and controlling the camera to automatically restore to the state before the change in the first setting interface.
For example, the user enters the video collection file interface, which may be specifically implemented as a baby video collection file interface; the video collection switch of the display device is on, the camera has been changed from intelligent lifting to raising at power-on in the first setting interface so that it keeps the lifting state, and the target user detection service of the display device runs in the background. The user then sends a third instruction to turn off the video collection switch, i.e., the baby detection switch, and the controller controls the user interface to pop up a closing prompt interface. After the user confirms the closing, the controller turns off the video collection switch, i.e., the baby detection switch, and automatically restores the camera to its state before the change, i.e., from raising at power-on back to intelligent lifting; the controller does not arbitrarily modify the user's prior settings, as shown in fig. 27.
In some embodiments, if the camera of the display device has already been invoked by a second application while the display device is performing target user detection or recording a video segment containing the target user, the controller suspends the baby detection and/or the recording of the video segment containing the target user.
The target user detection service needs the camera, and multiple applications may conflict when invoking the camera; the target user detection service should therefore run in as many scenarios as possible without affecting the use of the display device's camera by other applications.
For example, when the target user detection service starts the camera, the controller detects whether the camera service has already been invoked by the second application, i.e., whether another application occupies the camera; if the second application is using the camera service, the controller does not start target user detection and/or record the video clip containing the target user, and when the camera becomes idle, the target user detection service invokes the camera again.
In some embodiments, when the controller detects and/or records a video clip including a target user, the controller is further configured to control the user interface to display a camera recording screen of a second application in response to a camera call instruction from the second application; the target user is paused to detect, and/or record, the video segment containing the target user.
For example, the controller sets the priority of the target user detection service's camera invocation to the lowest, so that other applications, for example the second application, can preempt the camera resource it holds. If the second application needs to invoke the camera service during baby detection, the controller suspends the camera's use for baby detection and/or recording of the video clip containing the target user.
In some embodiments, the controller configures the target user detection service to start on a timer, for example with a start interval of 20 minutes; when the target user detection service application exits, a new timed task is created so that the service restarts itself;
in some embodiments, the controller creates a timed task immediately after the target user detection service is started. For example, if the target user detection service exits normally after detection is completed, the original timed task is cancelled and reset; if the service exits abnormally because the second application calls the camera with a higher priority, the timed task created at startup is used, ensuring that the service can still be started on schedule.
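The timed-restart bookkeeping above can be sketched as follows, assuming the 20-minute interval given as an example. The `Scheduler` class and the minute-based clock are illustrative assumptions, not part of the original disclosure.

```python
# Minimal sketch: a restart task is registered at startup so that even
# an abnormal exit leaves a restart pending; a normal exit cancels and
# re-creates the task.

RESTART_INTERVAL_MIN = 20

class Scheduler:
    def __init__(self):
        self.pending = []          # scheduled restart times, in minutes

    def schedule(self, now):
        self.pending.append(now + RESTART_INTERVAL_MIN)

    def cancel_all(self):
        self.pending.clear()

class TimedService:
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def start(self, now):
        # Created immediately at startup: an abnormal exit (camera
        # preempted by a higher-priority app) still has a restart pending.
        self.scheduler.schedule(now)

    def exit_normally(self, now):
        # Normal exit: cancel the original timed task and reset it.
        self.scheduler.cancel_all()
        self.scheduler.schedule(now)
```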
In some embodiments, the display device controller generates the video highlight file based on the acquired video segments after the first power-on of the day following the preset time period; specifically, the controller generates the video highlight file after a second preset duration has elapsed following the first power-on of the day after the preset time period.
For example, the target user detection service of the display device may be configured to start at power-on and to auto-start on a timer. If the service is configured to start at power-on, the task of synthesizing the video highlight file from the recorded clips of the target user can be scheduled to begin a preset second duration, for example 5 minutes, after startup. Delaying detection and synthesis by 5 minutes avoids system lag caused by the many applications that consume system resources immediately after power-on.
In some embodiments, background music may be added to the video highlight file, and the camera records the video clips without recording audio data. After the preset time period, that is, after 22:00 each day, if the television is still powered on, or after the television is powered on the next day, the recorded video segments can be automatically synthesized into a video highlight file with background music added; when recording the video segments containing the target user, the camera only needs to record image data, not audio data.
In some embodiments, the controller generates a video highlight file based on the acquired plurality of video clips, wherein the video highlight file comprises the earliest video clip and the latest video clip within the preset time period.
For example, when generating the video highlight file, the controller detects whether the NAS contains valid video clips, where a valid video clip is one that was recorded normally. When the NAS holds clips to be synthesized and the NAS has sufficient space, the controller groups all valid video clips by day, sorts each day's clips in ascending time order, and selects a preset number of clips to synthesize, for example 5 clips.
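The selection step above, combined with the earlier requirement that the highlight file include the earliest and latest clip of the period, can be sketched as follows. The tuple layout and the 5-clip cap are taken from the examples in the text; everything else is an illustrative assumption.

```python
# Sketch: group valid clips by day, sort each day's clips by time, and
# pick a preset number that always includes the earliest and latest clip.

from collections import defaultdict

MAX_CLIPS = 5

def select_clips(clips):
    """clips: list of (day, timestamp) tuples for valid recordings."""
    by_day = defaultdict(list)
    for day, ts in clips:
        by_day[day].append(ts)

    selected = {}
    for day, times in by_day.items():
        times.sort()                       # ascending time order
        if len(times) <= MAX_CLIPS:
            selected[day] = times
        else:
            # Keep earliest and latest, fill the middle in order.
            middle = times[1:-1][:MAX_CLIPS - 2]
            selected[day] = [times[0]] + middle + [times[-1]]
    return selected
```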
In some embodiments, when the target user is within the monitoring range of the display device, the controller controls the camera to record a video clip containing the target user, which specifically includes: when a user is within the monitoring range of the display device, the controller controls the camera to perform face detection; based on the face detection, the controller judges whether the user is younger than a preset age; if so, the user is judged to be a child, and the controller controls the camera to record a video clip containing the target user; otherwise, the controller controls the camera to continue face detection.
In some embodiments, after the "one day of the child" application service is started, it first detects whether the service is configured to start at power-on; if the display device is so configured, a delay task of a preset duration is started, for example a 5-second delay task; if the display device is not configured to start the service at power-on, it detects whether the hardware includes a NAS;
if there is no NAS, the service terminates itself; if a NAS exists, it detects whether there are video clips to be synthesized, and if so, performs the video synthesis; if not, it detects whether the "one day of the child" function switch is turned on and whether the camera is set to stay raised, that is, whether the video highlight switch and the camera are both enabled;
after synthesizing the video, the video highlight file is stored to the NAS; it is then detected whether the "one day of the child" function switch is on and whether the camera is set to stay raised; if the switch is off or the camera is not set to stay raised, the service terminates itself; if the video highlight switch is on and the camera is set to stay raised, it detects whether the system time is within the preset time period, for example between 0:00 and 22:00;
if the system time is within the preset time period, for example between 0:00 and 22:00, it detects whether the camera is occupied; otherwise, a timed task is scheduled to start after a preset period, for example after 20 minutes;
when the camera is detected to be occupied, a timed restart task is scheduled after the preset period; otherwise, face detection starts;
it is then identified whether a child appears; if a child is detected, video recording starts; otherwise, the timed restart task is scheduled after the preset period. In some embodiments, after the camera is turned on, face detection is performed on users within the monitoring range; once a face is detected, an algorithm judges from the acquired face information whether the age is below a preset age, for example 12 years. If the age is identified as below 12, it is judged that a child has appeared in the monitoring range in front of the television; the camera stops detection, starts recording a video clip containing the target user, and stores the recorded clip in a directory specified by the television;
finally, the controller controls the camera to stop detection and schedules the timed restart task after the preset period.
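The branching in the steps above can be condensed into one decision function. All of the state keys below are stand-ins for the checks the disclosure names (NAS present, clips pending, switch on, camera raised, time window, camera occupied); the string return values are illustrative labels.

```python
# Compact sketch of the service flow: each call returns the action the
# "one day of the child" service takes for a given device state.

def run_service(env):
    """env: dict of booleans/values describing the device state."""
    if not env["has_nas"]:
        return "terminate"
    if env["clips_pending"]:
        return "synthesize"
    if not (env["switch_on"] and env["camera_raised"]):
        return "terminate"
    if not (0 <= env["hour"] < 22):       # preset window 0:00-22:00
        return "schedule_restart"
    if env["camera_occupied"]:
        return "schedule_restart"
    return "start_face_detection"
```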
In some embodiments, starting the display device specifically includes AC power-on and STR (suspend-to-RAM) resume; task execution begins after a preset period, for example after 5 minutes. This avoids the system sluggishness and freezing caused by all applications starting at boot.
When the "one day of the child" switch is turned off, or the camera is set to intelligent lifting instead of staying raised, the "one day of the child" service terminates itself and the task does not need to be restarted on a timer; if the switch is on, the service is started after the corresponding broadcast message is received.
A precondition for the "one day of the child" application to detect a child is that the time is accurate, for example a detection window of 00:00-22:00; whether the time is valid is judged by checking whether the current system time falls within the preset time period.
In some embodiments, if the timer-started service runs later than 22:00, video synthesis is performed directly, and the timed restart can be changed to 2 hours after completion. As another example, once the camera of the display device is no longer occupied and resources can be acquired, the detection duration of the "one day of the child" service can be set to 1 minute or another length.
In some embodiments, after the "one day of the child" switch is set to on, besides saving the switch state, a broadcast message indicating that the switch is on is sent to the recording-and-synthesis service module to start face detection; similarly, the same operation is performed when the switch is turned off, and upon receiving the switch-off broadcast message the recording-and-synthesis service module cancels the timed restart task to avoid unnecessary startups.
In some embodiments, the age limit for face detection is 12 years; after a child, that is, a user under 12, is detected, a 6-second video is recorded. Starting and ending detection within a short window lowers the detection success rate, so the detection duration can be set to last 1 minute or another length; after recording finishes, a 20-minute restart task (or another interval) is registered with the timing management component, and the service then terminates itself.
In some embodiments, the service loops over the video segments to be synthesized, extracts the video track data of each through an extraction component, and writes it into the designated output file through a media synthesis component; the audio track data is processed with the same logic, and both the recorded and synthesized video formats can be configured as MP4 or other formats.
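The loop above can be sketched abstractly as follows. The `open_extractor` callback and `ListMuxer` class are stand-ins for a platform media API (for example an extractor/muxer pair such as Android's MediaExtractor and MediaMuxer), not real bindings; the disclosure only specifies the extract-then-write loop.

```python
# Abstract sketch: for each clip, read track samples via an extractor
# and append them to a single output via a muxer.

def synthesize(clips, open_extractor, muxer):
    """clips: ordered list of clip sources; open_extractor yields samples."""
    for path in clips:
        for sample in open_extractor(path):   # track samples of one clip
            muxer.write(sample)               # append to the highlight file
    muxer.finish()
    return muxer.output

class ListMuxer:
    """Toy muxer that collects samples into a list."""
    def __init__(self):
        self.output, self.done = [], False
    def write(self, s):
        self.output.append(s)
    def finish(self):
        self.done = True
```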
In some embodiments, it is determined whether each recorded video is a valid file; the recorded video may be stored in a temporary folder, and the corresponding temporary file is deleted after the video is copied to the designated directory. In some embodiments, the video highlight file is synthesized and pushed. For example, the synthesis can run in the idle period of 22:00-00:00, and the video is pushed to the user immediately after synthesis rather than waiting until the next day.
In some embodiments, the first video highlight file is generated on June 2nd, the second on June 3rd, and the third on June 4th. The controller automatically generates and displays the video highlight file on the user interface at a regular time, for example at 8:30 each morning.
In some embodiments, after the current day's video highlight file is generated, the controller actively deletes the video segments referenced by it, to conserve the television's storage resources.
In some embodiments, the video clips collected by the display device and the generated video highlight files may be stored on a NAS (Network Attached Storage). For example, storage may be capped at 100 video highlight files on the NAS to conserve storage resources. In some embodiments, if NAS space is insufficient, the earliest-generated video highlight files may be deleted.
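The storage policy above can be sketched as follows. The 100-file cap follows the example in the text; the list-of-tuples store is an illustrative assumption.

```python
# Sketch: cap the number of highlight files on the NAS and evict the
# earliest-generated file when the cap is exceeded.

MAX_FILES = 100

def add_highlight(store, name, created):
    """store: list of (created, name) pairs, mutated in place."""
    store.append((created, name))
    store.sort()                       # oldest first by creation time
    while len(store) > MAX_FILES:
        store.pop(0)                   # delete the earliest-generated file
```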
In some embodiments, the camera is configured to stop acquiring video clips for the controller when it is invoked by another application.
When the camera is called by another application, the controller stops acquiring video clips; when the video highlight function is turned on, the controller begins controlling the camera to acquire clips. While the television's video highlight function is on, the controller can control the camera to automatically collect and store video clips; when the controller detects that the camera has been called by another application, acquisition is stopped or paused. For example, when the camera is used for a video call, the video highlight application pauses or stops capturing clips. If the video highlight function is off, the controller does not control the camera to record a video clip containing the target user even when the camera is unoccupied, and the controller stops the camera's monitoring activity.
In some embodiments, the controller configures the duration of each video segment to be no more than a second threshold, and configures the duration of the first video highlight file to a fixed value.
For example, when the second threshold is 6 seconds, the length of the video segment recorded by the controller, counted from when the child is detected, is at most 6 seconds. When the child remains in the camera's capture range for more than 6 seconds, the controller acquires only the first 6 seconds, so the segment length is 6 seconds; when the child remains in the capture range for less than 6 seconds, that is, leaves within 6 seconds, the controller acquires only the portion during which the child is in range, and the segment is shorter than 6 seconds.
As another example, when the fixed value is 30 seconds, the controller checks the length of the generated first video highlight file and removes the content beyond 30 seconds, ensuring that the first video highlight file does not exceed the preset fixed duration.
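The two duration limits above can be sketched together. The 6-second clip threshold and 30-second highlight cap are the example values from the text; the function names are illustrative.

```python
# Sketch of the two duration limits: each clip is clamped to the second
# threshold, and the final highlight file is trimmed to the fixed value.

CLIP_MAX_S = 6
HIGHLIGHT_MAX_S = 30

def clip_length(presence_seconds):
    """Length actually recorded for one appearance of the child."""
    return min(presence_seconds, CLIP_MAX_S)

def trim_highlight(clip_lengths):
    """Total highlight duration after trimming to the fixed value."""
    return min(sum(clip_lengths), HIGHLIGHT_MAX_S)
```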
Based on the technical solution and the accompanying drawings describing how the display device produces the video highlight file, the present application further provides a camera control method.
Fig. 28 is a flowchart illustrating a method for controlling a camera of a display device according to an embodiment of the present application.
In step 2801, a first instruction input by the user is received and it is determined whether the camera is in the raised state, where the first instruction turns on the video highlight switch to start the target user detection service;
in step 2802, while the camera is in the raised state, the camera is controlled to record a video clip containing the target user, where the clip may be used to generate a video highlight file played on the user interface; otherwise, the user interface is controlled to display a first setting interface for changing the camera state so that the first instruction can be executed. The camera-control process and operation have been described in detail above in the technical solution for generating the video highlight file, and are not repeated here.
In some embodiments, while the camera is controlled to record a video clip containing the target user: a second instruction input by the user is received, where the second instruction closes the camera's raised state, and the video highlight switch is turned off to stop the target user detection service. The camera-control process and operation have been described in detail above in the technical solution for generating the video highlight file, and are not repeated here.
In some embodiments, while performing target user detection and/or recording a video segment containing the target user: a third instruction input by the user is received, where the third instruction turns off the video highlight switch to stop the target user detection service, and the camera is controlled to automatically restore the state it had before the change made in the first setting interface. The camera-control process and operation have been described in detail above in the technical solution for generating the video highlight file, and are not repeated here.
In some embodiments, while performing target user detection or recording a video segment containing the target user: if the camera is called by the second application, the target user detection and/or the recording of the video segment containing the target user is suspended. The camera-control process and operation have been described in detail above in the technical solution for generating the video highlight file, and are not repeated here.
In some embodiments, while performing target user detection and/or recording a video segment containing the target user: in response to a camera call instruction from a second application, the user interface is controlled to display the camera recording screen of the second application, and the target user detection and/or the recording of the video segment containing the target user is suspended. The camera-control process and operation have been described in detail above in the technical solution for generating the video highlight file, and are not repeated here.
The method has the following advantages: constructing the first instruction and the raised state prevents the target user detection service from intruding on user privacy; further, constructing the first setting interface enables automatic setting of the camera to the raised state; further, constructing the second and third instructions allows the target user detection service to be closed; furthermore, configuring the camera priority of the target user detection service resolves camera-call conflicts when the display device runs the service.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as "data block", "controller", "engine", "unit", "component", or "system". Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python, a conventional programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP, a dynamic programming language such as Python, Ruby, or Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that more features are required than are expressly recited in the claims. Indeed, an embodiment may have fewer than all of the features of a single embodiment disclosed above.
The entire contents of each patent, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, and documents, are hereby incorporated by reference into this application, except for any application history document that is inconsistent with or conflicts with the present disclosure, and any document that limits the broadest scope of the claims now or later appended to this application. It is noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in this application and the material incorporated by reference, the descriptions, definitions, and/or use of terms in this application shall control.

Claims (10)

1. A display device, comprising:
a display screen for displaying a user interface;
a controller configured to:
receive a video clip captured by a camera when the target user is within the monitoring range of the display device;
and generate a video highlight file based on the acquired video clips, wherein, after a confirmation operation on the video highlight file is received, the controller controls the user interface to play the video highlight file.
2. The display device of claim 1, wherein before the controller receives the video clip captured by the camera, the controller further determines that the video highlight switch and the camera of the display device are in an on state.
3. The display device of claim 1, wherein the controller receiving video clips captured by the camera and generating a video highlight file based on the plurality of acquired video clips specifically comprises:
the controller records, within a preset time period, video clips each of a length not exceeding a first preset duration;
and after the preset time period, or after the first power-on of the day following the preset time period, the controller generates the video highlight file based on the acquired video clips.
4. The display device according to claim 3, wherein the controller generating the video highlight file based on the plurality of acquired video clips after the first power-on of the day following the preset time period specifically comprises:
after a second preset duration following the first power-on of the day after the preset time period, the controller generates the video highlight file based on the acquired video clips.
5. The display device according to claim 3, wherein the controller generates the video highlight file based on the plurality of acquired video clips, and the video highlight file includes the earliest video clip and the latest video clip within the preset time period.
6. The display device of claim 1, wherein the controller receiving a video clip captured by the camera when the target user is within the monitoring range of the display device specifically comprises:
when a user is within the monitoring range of the display device, the controller controls the camera to perform face detection;
based on the face detection, judging whether the user is younger than a preset age; if so, the user is judged to be the target user, and the controller receives the video clip captured by the camera; otherwise, the controller controls the camera to continue face detection.
7. The display device of claim 1, wherein the video highlight file is stored in a NAS, and the video highlight file is playable by a display device, a mobile terminal, or a home private cloud through access to the NAS.
8. The display device of claim 7, wherein the controller controls the video highlight file stored in the NAS to be pushed to a display device, a mobile terminal, or a home private cloud, where the video highlight file can be downloaded and played by a user.
9. The display device of claim 1, wherein background music can be added to the video highlight file, and the camera records the video clips without recording audio data.
10. A method for generating a video highlight file, the method comprising:
recording a video clip when a target user is in a monitoring range;
and generating a video highlight file based on the acquired video clips, wherein the video highlight file is played in a user interface after a confirmation operation is received.
CN202011148295.9A 2020-07-06 2020-10-23 Display device and video collection file generation method Pending CN112351323A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/098617 WO2022007568A1 (en) 2020-07-06 2021-06-07 Display device, smart terminal, and video highlight generation method
CN202180046688.5A CN116391358A (en) 2020-07-06 2021-06-07 Display equipment, intelligent terminal and video gathering generation method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010850122 2020-08-21
CN2020108501225 2020-08-21

Publications (1)

Publication Number Publication Date
CN112351323A true CN112351323A (en) 2021-02-09

Family

ID=74360137

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202011149949.XA Pending CN114079829A (en) 2020-07-06 2020-10-23 Display device and generation method of video collection file watermark
CN202011148295.9A Pending CN112351323A (en) 2020-07-06 2020-10-23 Display device and video collection file generation method
CN202011148312.9A Pending CN114079812A (en) 2020-07-06 2020-10-23 Display equipment and camera control method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011149949.XA Pending CN114079829A (en) 2020-07-06 2020-10-23 Display device and generation method of video collection file watermark

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011148312.9A Pending CN114079812A (en) 2020-07-06 2020-10-23 Display equipment and camera control method

Country Status (1)

Country Link
CN (3) CN114079829A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860922A (en) * 2021-03-10 2021-05-28 北京晓数聚数字科技有限公司 Video collection automatic generation method based on data intelligence and machine vision
CN113038151A (en) * 2021-02-25 2021-06-25 北京达佳互联信息技术有限公司 Video editing method and video editing device
CN113542791A (en) * 2021-07-08 2021-10-22 山东云缦智能科技有限公司 Personalized video collection generation method
WO2022007568A1 (en) * 2020-07-06 2022-01-13 海信视像科技股份有限公司 Display device, smart terminal, and video highlight generation method

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075814A (en) * 2009-11-19 2011-05-25 康佳集团股份有限公司 Television system for recording growth process of baby
JP5760438B2 (en) * 2010-12-28 2015-08-12 富士通株式会社 Digital watermark embedding apparatus, digital watermark embedding method, digital watermark embedding computer program, and digital watermark detection apparatus
US9473809B2 (en) * 2011-11-29 2016-10-18 At&T Intellectual Property I, L.P. Method and apparatus for providing personalized content
CN102905073B (en) * 2012-09-28 2014-12-24 歌尔声学股份有限公司 Control device of television camera
CN103677848B (en) * 2013-12-27 2018-11-16 厦门雅迅网络股份有限公司 A kind of camera control method based on Android
CN103793246B (en) * 2014-01-22 2017-09-05 深圳Tcl新技术有限公司 Coordinate the method and system of camera resource
CN104135670B (en) * 2014-07-22 2018-01-19 乐视网信息技术(北京)股份有限公司 A kind of video broadcasting method and device
CN104301777B (en) * 2014-10-24 2017-10-03 杭州自拍秀科技有限公司 The automatic system and method for obtaining image and pushing to correspondence intelligent terminal immediately
CN104867203A (en) * 2015-06-10 2015-08-26 深圳市凯立德科技股份有限公司 Method for storing video files and automobile data recorder
CN105872854A (en) * 2015-12-14 2016-08-17 乐视网信息技术(北京)股份有限公司 Watermark showing method and device
CN105979188A (en) * 2016-05-31 2016-09-28 北京疯景科技有限公司 Video recording method and video recording device
CN106534940B (en) * 2016-10-14 2020-12-11 腾讯科技(北京)有限公司 Display method and device of live broadcast entry preview
US10218911B2 (en) * 2017-03-22 2019-02-26 Htc Corporation Mobile device, operating method of mobile device, and non-transitory computer readable storage medium
CN108429879B (en) * 2018-02-13 2020-12-25 Oppo广东移动通信有限公司 Electronic apparatus, camera control method, camera control apparatus, and computer-readable storage medium
CN108924290B (en) * 2018-06-12 2020-01-14 Oppo广东移动通信有限公司 Camera control method and device, mobile terminal and computer readable medium
CN110505390B (en) * 2019-09-24 2021-02-05 深圳创维-Rgb电子有限公司 Television, camera calling method thereof, control device and readable storage medium
CN111405237A (en) * 2019-12-11 2020-07-10 杭州海康威视系统技术有限公司 Cloud storage system providing preview function and preview method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CNMO宅秘: "《https://baijiahao.baidu.com/s?id=1664476332813999618&wfr=spider&for=pc》", 20 April 2020 *
MIN: "《https://jingyan.baidu.com/article/08b6a5911fe82d54a9092228.html》", 3 November 2019 *
无: "《news.ikanchai.com/2019/0821/304806.shtml》", 21 August 2019 *


Also Published As

Publication number Publication date
CN114079812A (en) 2022-02-22
CN114079829A (en) 2022-02-22

Similar Documents

Publication Publication Date Title
CN111510781A (en) Display device standby control method and display device
CN112351323A (en) Display device and video collection file generation method
CN112333509B (en) Media asset recommendation method, recommended media asset playing method and display equipment
CN111787379B (en) Interactive method for generating video collection file, display device and intelligent terminal
WO2021031623A1 (en) Display apparatus, file sharing method, and server
CN111836109A (en) Display device, server and method for automatically updating column frame
CN111277884A (en) Video playing method and device
CN112165642B (en) Display device
CN111031375B (en) Method for skipping detailed page of boot animation and display equipment
CN111970549B (en) Menu display method and display device
CN113259741A (en) Demonstration method and display device for classical viewpoint of episode
CN112543359B (en) Display device and method for automatically configuring video parameters
CN113438539A (en) Digital television program recording method and display equipment
CN111787376A (en) Display device, server and video recommendation method
CN112203154A (en) Display device
CN111954059A (en) Screen saver display method and display device
CN111372133A (en) Method for reserving upgrading and display device
CN112506859B (en) Method for maintaining hard disk data and display device
CN113495711A (en) Display apparatus and display method
CN112272320B (en) Display device and duplicate name detection method thereof
CN112333520B (en) Program recommendation method, display device and server
CN112040299B (en) Display device, server and live broadcast display method
CN113542878B (en) Wake-up method based on face recognition and gesture detection and display device
CN113259733B (en) Display device
CN112261463A (en) Display device and program recommendation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210209