CN116391358A - Display equipment, intelligent terminal and video gathering generation method - Google Patents


Publication number
CN116391358A
Authority
CN
China
Prior art keywords
video
controller
recording
camera
display device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180046688.5A
Other languages
Chinese (zh)
Inventor
鲍姗娟
于文钦
杨鲁明
宁静
孙娜
王学磊
王之奎
丁佳一
刘兆磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010640381.5A external-priority patent/CN111787379B/en
Priority claimed from CN202011148295.9A external-priority patent/CN112351323A/en
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Publication of CN116391358A publication Critical patent/CN116391358A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]

Abstract

A display device, an intelligent terminal, and a video highlight generation method are provided. The display device includes: a camera; a display for displaying a user interface; and a controller configured to: receive video clips captured by the camera while a target user is within the monitoring range of the display device; and generate a video highlight file based on the captured video clips, wherein the controller controls the user interface to play the video highlight file after a confirmation operation is received.

Description

Display device, intelligent terminal, and video highlight generation method
The present application claims priority to: Chinese patent application No. 20201149949.X, entitled "A Method for Generating a Watermark for a Video Highlight File", filed on October 23, 2020; Chinese patent application No. 202011148295.9, entitled "A Display Device and a Method for Generating Video Highlight Files", filed on October 23, 2020; Chinese patent application No. 202011148312.9, entitled "A Display Device and a Camera Control Method", filed on October 23, 2020; Chinese patent application No. 202010640381.5, entitled "An Interaction Method for Generating Video Highlight Files, a Display Device, and an Intelligent Terminal", filed on July 6, 2020; and Chinese patent application No. 202010850122.5, entitled "A Display Device", filed on August 21, 2020; the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of display devices, and in particular to a display device, an intelligent terminal, and a video highlight generation method.
Background
A video highlight file is a video file produced by stitching different videos, or video clips that share similar content or common elements, into a single video file. In some implementations, a user must capture video clips with a professional recording tool, watch and screen each video, and then manually cut and splice the clips into the highlight file. However, when a user wants to make a highlight file of a target subject in the home, such as a young child, the user cannot operate a recording tool for long periods; moreover, the number of video clips is large and the scenes are complex, so the user may fail to capture the right clips, editing is slow, and splicing omissions and errors occur.
Disclosure of Invention
Some embodiments of the present application provide a display device, including: a camera; a display for displaying a user interface; and a controller configured to: receive video clips captured by the camera while a target user is within the monitoring range of the display device; and generate a video highlight file based on the captured video clips, wherein the controller controls the user interface to play the video highlight file after a confirmation operation is received.
Some embodiments of the present application provide a method for generating a video highlight file, the method including: recording video clips while a target user is within a monitoring range; and generating a video highlight file based on the captured video clips, wherein the video highlight file is played in a user interface after a confirmation operation is received.
Some embodiments of the present application provide an intelligent terminal, including:
a display configured to present an end user interface that displays a video recording initiation portal, the portal being used to initiate a video recording function configured in a controller of a display device, the function enabling moving images to be recorded by video and a video highlight file to be generated within a preset recording duration;
a processor coupled to the controller in the display device, the processor having a designated application configured therein, the processor configured to:
receive a video recording function call instruction generated when the designated application is triggered;
in response to the video recording function call instruction, present in the display an end user interface displaying the video recording initiation portal;
receive a video recording start instruction generated by triggering the video recording initiation portal, and send the video recording start instruction to the controller, wherein the video recording start instruction instructs the controller to call a camera in the display device to collect moving-image materials of a moving target at a preset acquisition frequency within a preset recording duration, and to splice and synthesize a specified number of the moving-image materials into a video highlight file that records the moving target within the preset recording duration.
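The splicing step above selects a specified number of the day's moving-image materials and joins them into one file. A minimal sketch of the controller-side selection logic follows; the `Material` type, the chronological ordering, and the keep-at-most-`count` policy are illustrative assumptions, not details taken from the claims (the actual joining of video streams would be done by an encoder and is omitted):

```python
from dataclasses import dataclass

@dataclass
class Material:
    """One moving-image material captured by the camera (hypothetical type)."""
    path: str
    captured_at: float  # capture time, epoch seconds

def select_materials(materials, count):
    """Order the day's materials chronologically and keep at most `count`
    of them for splicing into a single highlight file."""
    ordered = sorted(materials, key=lambda m: m.captured_at)
    return ordered[:count]

mats = [Material("b.mp4", 200.0), Material("a.mp4", 100.0), Material("c.mp4", 300.0)]
print([m.path for m in select_materials(mats, 2)])  # → ['a.mp4', 'b.mp4']
```

In practice the selected paths would then be handed to a video tool for concatenation; the sketch only covers the ordering and counting that the instruction describes.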
Drawings
FIG. 1 is a schematic diagram of an operational scenario between a display device and a control apparatus according to one or more embodiments of the present application;
FIG. 2 is a block diagram of a hardware configuration of a display device 200 in accordance with one or more embodiments of the present application;
FIG. 3 is a hardware configuration block diagram of the control device 100 according to one or more embodiments of the present application;
FIG. 4 is a schematic diagram of a software configuration in a display device 200 according to one or more embodiments of the present application;
FIG. 5 is a schematic diagram of an icon control interface display for an application in a display device 200 in accordance with one or more embodiments of the present application;
FIG. 6 is a schematic diagram of a user interface according to one or more embodiments of the present application;
FIG. 7 is an interface schematic diagram of a functional effect presentation interface according to one or more embodiments of the present application;
FIG. 8 is an interface diagram of a video recording function start page according to one or more embodiments of the present application;
FIG. 9 is a schematic representation of a playback watermark in accordance with one or more embodiments of the present application;
FIG. 10 is a schematic diagram of an interactive interface for performing sharing to a parent circle in accordance with one or more embodiments of the present application;
FIG. 11 is an interface diagram showing push notifications in accordance with one or more embodiments of the present application;
FIG. 12 is a schematic illustration of a display device video highlight file according to one or more embodiments of the present application;
FIG. 13 is an interface schematic diagram of an end user interface in accordance with one or more embodiments of the present application;
FIG. 14 is an interface schematic diagram of a video list interface in accordance with one or more embodiments of the present application;
FIG. 15 is a watermark schematic of video highlight file playback in accordance with one or more embodiments of the present application;
FIG. 16 is a watermark schematic of video highlight file playback in accordance with one or more embodiments of the present application;
FIG. 17 is a watermark schematic of video highlight file playback in accordance with one or more embodiments of the present application;
FIG. 18 is a watermark schematic of video highlight file playback in accordance with one or more embodiments of the present application;
FIG. 19 is a watermark schematic of video highlight file playback in accordance with one or more embodiments of the present application.
Detailed Description
To make the purposes, embodiments, and advantages of the present application clearer, the exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described exemplary embodiments are only some, not all, of the embodiments of the present application.
Based on the exemplary embodiments described herein, all other embodiments obtained by one of ordinary skill in the art without inventive effort fall within the scope of the appended claims. Furthermore, while the disclosure is presented in terms of one or more exemplary embodiments, it should be appreciated that each aspect of the disclosure may separately constitute a complete embodiment. It should be noted that the brief descriptions of terms in the present application are only intended to aid understanding of the embodiments described below and are not intended to limit those embodiments. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to one or more embodiments of the present application. As shown in Fig. 1, a user may operate the display device 200 through the mobile terminal 300 or the control apparatus 100. The control apparatus 100 may be a remote control; communication between the remote control and the display device includes infrared protocol communication, Bluetooth protocol communication, and other wireless or wired means of controlling the display device 200. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, and the like. In some embodiments, mobile terminals, tablet computers, notebook computers, and other smart devices may also be used to control the display device 200.
In some embodiments, the mobile terminal 300 may install a software application associated with the display device 200 and establish a connection through a network communication protocol, achieving one-to-one control operation and data communication. Audio/video content displayed on the mobile terminal 300 may also be transmitted to the display device 200. The display device 200 may likewise perform data communication with the server 400 through various communication modes, for example via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The display device 200 may be a liquid crystal display, an OLED display, or a projection display device. In addition to the broadcast-receiving television function, the display device 200 may provide smart network television functions with computer support.
Fig. 3 exemplarily shows a configuration block diagram of the control apparatus 100 according to an exemplary embodiment. As shown in Fig. 3, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive a user's input operation instruction, convert the operation instruction into an instruction that the display device 200 can recognize and respond to, and thus mediate the interaction between the user and the display device 200. The communication interface 130 is configured to communicate with the outside and includes at least one of a WiFi chip, a Bluetooth module, an NFC module, or an alternative module. The user input/output interface 140 includes at least one of a microphone, a touch pad, a sensor, keys, or an alternative module.
Fig. 2 shows a hardware configuration block diagram of the display device 200 according to an exemplary embodiment. The display device 200 shown in Fig. 2 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface 280. The controller includes a central processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first to nth input/output interfaces. The display 260 may be at least one of a liquid crystal display, an OLED display, a touch display, or a projection display, and may also be a projection device with a projection screen. The modem 210 receives broadcast television signals in a wired or wireless manner and demodulates audio/video signals, such as EPG data signals, from a plurality of wireless or wired broadcast television signals. The detector 230 is used to collect signals from the external environment or from interaction with the outside. The controller 250 and the modem 210 may be located in separate devices; that is, the modem 210 may be located in a device external to the main device in which the controller 250 is located, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in the memory. The controller 250 controls the overall operation of the display device 200. The user may input a user command through a graphical user interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the GUI. Alternatively, the user may input a user command via a specific sound or gesture, and the user input interface recognizes the sound or gesture through a sensor to receive the command.
In some embodiments, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user; it converts between the internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is the graphical user interface (GUI), a user interface related to computer operations that is displayed in a graphical manner. It may consist of interface elements such as icons, windows, and controls displayed on the screen of an electronic device, where a control may include at least one of visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
Fig. 4 is a schematic diagram of the software configuration in a display device 200 according to one or more embodiments of the present application. As shown in Fig. 4, the system is divided into four layers, from top to bottom: an application layer, an application framework layer, an Android runtime and system library layer (system runtime layer), and a kernel layer. The kernel layer contains at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, a camera driver, a WiFi driver, a USB driver, an HDMI driver, sensor drivers (e.g., fingerprint sensor, temperature sensor, pressure sensor), a power supply driver, and the like.
Fig. 5 is a schematic diagram of an icon control interface of applications in the display device 200 according to one or more embodiments of the present application. As shown in Fig. 5, the application layer includes at least one application whose icon control can be displayed in the display, for example: a live television application icon control, a video-on-demand application icon control, a media center application icon control, an application center icon control, a game application icon control, and the like. A live television application can provide live television from different signal sources. A video-on-demand application can provide video from different storage sources; unlike live television, video-on-demand plays video from storage. The media center application may provide various applications for playing multimedia content. An application center may be provided to store various applications.
<Video Highlights>
In some embodiments, the display device uses a camera to record scenes of daily use, for example recording a pet's daily life on video. Because a display device in the related art requires the user to operate it in real time to call the camera and capture images before a video file can be obtained, a video recording function can be configured in the display device so that, even when the user is away from home during the day, the display device can still call the camera to record the pet's notable moments; this also enriches the usage scenarios of a display device equipped with a camera. When the video recording function in the display device is enabled, the display device calls the camera to collect moving-image materials of the pet in real time. This capture process requires no real-time user operation; the display device performs it automatically as long as it is powered on and networked. Finally, the display device can generate a video highlight file from the many moving-image materials collected by the camera over a day and push it to the user, so that a user who has been away from home during the day does not miss the pet's notable moments.
The video recording function is suitable not only for recording a pet's daily highlights but also for recording other moving targets, such as the user's baby at home. When the display device enables the video recording function, the camera collects moving-image materials whenever a moving target such as a pet or baby appears in the camera's shooting area. The moving-image materials collected over one day are then assembled into a video highlight file, which is pushed to the user for viewing the next time the display device is turned on. From the video highlight file, the user can learn how the pet or baby spent the day at home, which provides a good user experience.
Some embodiments of the present application provide a display device 200 that includes a display 275, a camera 232, and a controller 250. The display is configured to present a user interface displaying a video recording initiation portal for initiating a function that records a specific target on video and generates a video highlight file over a preset recording duration, where the specific target includes a moving target. The camera is configured to acquire image materials of the specific target at a preset acquisition frequency; the specific target is the subject whose images are to be recorded within the preset recording duration to generate the video highlight file. The controller, connected to both the camera and the display, starts the video recording function configured in the display device, calls the camera to collect image materials of the specific target, and finally generates the video highlight file. To make interaction with the display device faster and more convenient, an interaction method for generating a video highlight file is provided that instructs the user how to start the video recording function and how to operate interactively on the video highlight file.
To realize the video recording function, a target application is configured in the display device, specifically in the controller, and is displayed on the system home page interface of the display device. The target application provides the portal for enabling the video recording function. To cause a user interface with the video recording initiation portal to be presented in the display, the controller is further configured to perform the following steps. Step 011: in response to a device start instruction generated when the user turns on the display device, display the system home page interface in the display, with the target application shown on it. Step 012: in response to a call-up instruction generated when the target application is triggered, present in the display a user interface in which the video recording initiation portal is displayed. When the user turns on the display device, a device start instruction is generated. The controller calls up the system home page interface according to the instruction and displays it in the display, with the target application shown. The user triggers the target application via the remote control or by voice, generating a call-up instruction, and the controller calls up the user interface accordingly and displays it in the display. At this point, the content on the display switches from the system home page interface to the user interface, which is the home page of the target application.
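Steps 011 and 012 above describe two screen transitions driven by instructions. A minimal sketch of that flow follows; the class and screen names are illustrative assumptions, not identifiers from the patent:

```python
class Display:
    """Hypothetical stand-in for the display: tracks the current screen."""
    def __init__(self):
        self.current_screen = None

    def show(self, screen):
        self.current_screen = screen

class Controller:
    """Sketch of steps 011/012: device start shows the system home page;
    triggering the target application shows the user interface."""
    def __init__(self, display):
        self.display = display

    def on_device_start(self):
        # Step 011: device start instruction -> system home page
        # (with the target application icon on it)
        self.display.show("system_home")

    def on_target_app_triggered(self):
        # Step 012: call-up instruction -> user interface with the
        # video recording initiation portal
        self.display.show("user_interface")
```

The point of the sketch is only the ordering: the user interface containing the recording portal is reachable from the system home page via the target application, never directly from power-on.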
When the user interface is called up, the video recording initiation portal is displayed in it, and the user can start the video recording function of the display device by clicking the portal via the remote control or by voice, that is, begin collecting image materials of the specific target and generating a video highlight file.
FIG. 6 is a schematic diagram of a user interface according to one or more embodiments of the present application. Referring to Fig. 6, if the target application is "application A", a video recording initiation portal is displayed in the user interface shown after "application A" is triggered; for example, triggering the "video highlight sub-application" displayed in the upper right corner of the user interface starts the video recording function of the display device. In some embodiments, a video recording initiation portal may also be added to the system home page interface in the form of a shortcut, so that after the display device is turned on, the user can quickly start the video recording function from the portal on the system home page interface.
When the user triggers the video recording initiation portal on the user interface or the system home page interface, a video recording start instruction can be generated so that the controller starts the video recording function of the display device, that is, calls the camera to begin collecting images. Specifically, when the triggered portal generates a video recording start instruction, the controller is further configured to perform the following steps. Step 021: in response to the start instruction generated when the video recording initiation portal is triggered, display a functional effect presentation interface in the display, on which a confirmation initiation portal is shown. Step 022: in response to a confirmation instruction generated by triggering the confirmation initiation portal, cancel the display of the functional effect presentation interface in the display and generate the video recording start instruction. To help the user clearly understand the effect of the video recording function when using it for the first time, the functional effect presentation interface may be displayed in the display when the video recording initiation portal is triggered. Its content includes a profile of the function and a presentation of the effects that can be achieved.
FIG. 7 is an interface schematic diagram of a functional effect presentation interface according to one or more embodiments of the present application. The pages shown in Fig. 7(a) and Fig. 7(b) are exemplary pages of the functional effect presentation interface; from their content, the user can gain some understanding of the video recording function. When the user triggers the video recording initiation portal on the user interface or system home page interface via the remote control, a start instruction is generated. The controller calls up the functional effect presentation interface shown in Fig. 7 according to the start instruction and displays it in the display; at this point, the content in the display switches from the user interface to the functional effect presentation interface. To allow the user to start the video recording function quickly, a confirmation initiation portal, such as the "on" control shown in Fig. 7(b), may be displayed on the last page of the functional effect presentation interface. The user triggers the confirmation initiation portal via the remote control, generating a confirmation instruction that indicates the user wants to start the video recording function. The controller then cancels the display of the functional effect presentation interface according to the confirmation instruction and generates a video recording start instruction to start the video recording function of the display device.
In some embodiments, if the user starts the video recording function of the display device a second or subsequent time (i.e., the user has started it once before and later disabled it), the functional effect presentation interface need not be displayed again; instead, a video recording function start interface may be displayed directly when the user triggers the video recording initiation portal on the user interface. FIG. 8 is an interface diagram of a video recording function start page according to one or more embodiments of the present application. Referring to Fig. 8, the user triggers the video recording initiation portal in the user interface to generate a start instruction, and the controller calls up the video recording function start interface according to the instruction and displays it in the display. A confirmation initiation portal, such as the "open" control shown in Fig. 8, is displayed on this interface. The user triggers the confirmation initiation portal, generating a confirmation instruction. The controller then cancels the display of the video recording function start interface according to the confirmation instruction and generates a video recording start instruction to start the video recording function of the display device.
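The first-use versus repeat-use branching described above reduces to a single decision. A one-function sketch, with interface names that are illustrative assumptions rather than identifiers from the patent:

```python
def interface_for_start_entry(first_activation: bool) -> str:
    """Return which page to show when the video recording initiation
    portal is triggered (names are hypothetical).

    First use: show the functional effect presentation interface so the
    user learns what the feature does (Fig. 7). Subsequent uses: go
    straight to the function start page with its confirmation control
    (Fig. 8)."""
    if first_activation:
        return "functional_effect_presentation"
    return "function_start_page"

print(interface_for_start_entry(True))   # → functional_effect_presentation
print(interface_for_start_entry(False))  # → function_start_page
```

Either branch ends the same way: triggering the confirmation portal dismisses the shown interface and generates the video recording start instruction.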
After the video recording function of the display device is started, an image material list interface is presented in the display; that is, after responding to the confirmation instruction, the controller calls up the image material list interface, which is used to display image materials and video highlight files, and shows it in the display. In some embodiments, if the user starts the video recording function for the first time, the camera has not yet captured any image material and no video highlight file has been generated, so no content is displayed in the image material list interface. In some embodiments, if the user starts the video recording function a second or subsequent time, the camera has already collected image materials and a video highlight file may have been generated; in this case, the historically collected image materials and the historically generated video highlight files are displayed in the image material list interface.
In some embodiments, when performing the interaction method for generating a video highlight file, the controller is configured to perform the following step. S11: receive a video recording start instruction generated when the video recording initiation portal is triggered, the instruction directing the camera to acquire image materials of a specific target at a preset acquisition frequency within a preset recording duration. The user triggers the portal on the user interface or the system home page interface to generate the instruction, and the controller starts the video recording function of the display device accordingly, that is, calls the camera to begin collecting image materials of the specific target, where the specific target includes a moving target and the image materials include moving pictures, moving videos, and the like. For example, the specific target may be a pet or a baby. Since the user is usually away from the pet or baby during the daytime, the display device only needs to record their daily life during that period, so a preset recording duration can be set. The preset recording duration is the time period during which the controller automatically calls the camera to collect image materials of the specific target. In some embodiments, the preset recording duration may be set to 6:00-18:00; that is, the controller calls the camera to collect moving-image materials during the period 6:00-18:00. When the camera collects image materials of the specific target, if the target holds one posture and does not move within the camera's shooting range, the materials collected by the camera are identical.
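The preset recording duration amounts to a time-window check the controller applies before capturing. A minimal sketch using the 6:00-18:00 window from the description (the function name and inclusive endpoints are assumptions):

```python
from datetime import datetime, time

# Preset recording duration from the description: 6:00-18:00
RECORDING_START = time(6, 0)
RECORDING_END = time(18, 0)

def within_recording_window(now: datetime) -> bool:
    """True when the controller should call the camera to collect
    moving-image materials (endpoints treated as inclusive)."""
    return RECORDING_START <= now.time() <= RECORDING_END

print(within_recording_window(datetime(2020, 10, 23, 12, 0)))  # → True
print(within_recording_window(datetime(2020, 10, 23, 20, 0)))  # → False
```

A real implementation would evaluate this check each time the capture timer fires, so material collected outside the window is never requested in the first place.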
In order to avoid excessive storage space occupation caused by long-term acquisition of the same materials by the camera, in some embodiments, the camera may be controlled to acquire image materials according to a preset acquisition frequency.
<Determination of Camera State>
In some embodiments, while the camera captures image materials of a specific target at the preset capture frequency, the controller executes: S111, receiving a camera call instruction generated when the controller responds to the video recording start instruction. After receiving the video recording start instruction, the controller can invoke the camera to collect images. Therefore, after receiving the video recording start instruction, the controller generates a camera call instruction and sends it to the camera, and the camera starts and begins to collect image materials according to that instruction. However, the camera may currently be called by another application program, in which case the controller cannot invoke it to collect image materials. Therefore, the controller must first determine whether the camera is available, and send the camera call instruction only when the camera can be called.
In some embodiments, the controller generates the camera call instruction before sending it to the camera; that is, when responding to the video recording start instruction, the controller performs: Step 1111, responding to the video recording start instruction and acquiring the current running state of the camera. Step 1112, if the current running state of the camera is the not-called state, generating a camera call instruction, where the camera call instruction is used to invoke the camera to collect image materials. The controller obtains the current running state of the camera according to the video recording start instruction in order to judge whether the camera can be called. When the camera is judged to be in the not-called state, it can be invoked to collect moving image materials of the moving target; the controller then generates a camera call instruction and sends it to the camera to start the camera and invoke it to collect image materials. After the video recording function is started, the controller keeps invoking the camera to collect image materials continuously as long as the camera is not occupied by another application program. When the controller detects that the camera has been called by another application, it stops or suspends the acquisition of video clips. For example, when the camera is used for a video call, the video highlight application pauses or stops capturing video clips. If the video highlight function of the display device is in the off state, the controller does not control the camera to record video clips even when the camera is unoccupied, and the controller stops the camera's monitoring activity.
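Steps 1111-1112 can be sketched as follows; this is a minimal illustration under assumed names (`Camera`, `in_use`, the instruction dictionary), not the actual implementation of the display device.

```python
from dataclasses import dataclass


@dataclass
class Camera:
    # 'in_use' models the "called by another application" state from the
    # text; the field name is an illustrative assumption.
    in_use: bool = False


def try_generate_call_instruction(camera: Camera):
    """Step 1111: inspect the camera's running state.
    Step 1112: generate a camera call instruction only when the camera
    is in the not-called state; otherwise return None."""
    if camera.in_use:
        return None  # occupied by another application: do not call it
    return {"action": "start_capture"}
```

A controller loop would retry this check until the camera becomes free, then send the returned instruction.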
S112, responding to a camera calling instruction, and collecting image materials according to a preset collection frequency from the initial moment of the preset recording duration.
S113, ending the collection of image materials when the end moment of the preset recording duration is reached. After the camera receives the camera call instruction sent by the controller, it executes the instruction every day to collect image materials. Therefore, the camera starts collecting image materials at the initial moment of the preset recording duration. Every day, the camera collects image materials at the preset acquisition frequency from the initial moment of the preset recording duration until the end moment of the preset recording duration is reached. For example, the camera starts acquiring moving image materials of the moving target at 6:00 every day and ends acquisition at 18:00. When the end moment of the preset recording duration is reached, the controller generates a stop-call instruction and sends it to the camera to control the camera to end the acquisition of image materials.
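The daily recording window of steps S112-S113 reduces to a simple time check; a minimal sketch, assuming the 6:00-18:00 example window and a function name of our own choosing:

```python
from datetime import time


def within_recording_window(now: time,
                            start: time = time(6, 0),
                            end: time = time(18, 0)) -> bool:
    """True when 'now' falls inside the preset recording duration:
    collection runs from the initial moment (6:00 in the example) until
    the end moment (18:00), outside of which acquisition is stopped."""
    return start <= now <= end
```

The controller would issue the camera call instruction when this turns true and the stop-call instruction when it turns false.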
In some embodiments, when the specific target is within the monitoring range of the display device, the controller receives the video clips collected by the camera, which specifically includes: when the specific target is within the monitoring range of the display device, the controller controls the camera to perform face detection; based on the face detection, it is judged whether the detected person is younger than a preset age; if the specific target is younger than the preset age, the controller receives the video clips collected by the camera; otherwise, the controller controls the camera to continue face detection.
In some embodiments, after application service A is started, it first detects whether the display device is set to start at boot; if so, a delay task of a preset time is started, for example a 5-second delay task. If the display device is not set to start at boot, it detects whether a NAS exists among the hardware devices; if no NAS exists, service A terminates itself. If a NAS exists, it detects whether video clips to be synthesized exist; if so, it synthesizes the video; if not, it judges whether the video highlight function switch is on and whether the camera is set to remain raised (that is, whether the camera is in the started state).
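The startup decision flow of service A can be sketched as a chain of checks; the function name, parameters, and returned action strings are illustrative assumptions, not the actual service implementation.

```python
def service_a_startup(boot_started: bool, has_nas: bool,
                      pending_clips: bool, switch_on: bool,
                      camera_up: bool) -> str:
    """Decision order from the text: boot-start check first, then NAS
    presence, then pending clips, then the highlight switch and the
    camera's raised (started) state."""
    if boot_started:
        return "delay_task_5s"   # e.g. start a 5-second delay task
    if not has_nas:
        return "terminate"       # service A terminates itself
    if pending_clips:
        return "synthesize"      # clips to be synthesized exist
    return "monitor" if (switch_on and camera_up) else "idle"
```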
<Acquisition of Moving Images>
After the camera is called, it does not collect images at all times; instead, it begins to collect image materials only after a moving target appears within its shooting range, which prevents the camera from collecting useless image materials and wasting resources. In some embodiments, to accurately determine when the camera can capture moving image materials containing the specific target, the controller is further configured, when the camera captures moving image materials of the specific target at the preset capture frequency from the initial moment of the preset recording duration, to perform the following steps. Step 1121, acquiring the preview picture collected by the camera from the initial moment of the preset recording duration. Step 1122, performing image recognition on the preview picture and determining whether a moving target exists in the preview picture. Step 1123, if a moving target exists in the preview picture, invoking the camera to acquire moving image materials of the moving target at the preset acquisition frequency. After the camera is called, it acquires a preview picture in real time from the initial moment of the preset recording duration; since the preview picture is not recorded as video, no storage space is occupied. The controller performs image recognition on the preview picture acquired by the camera in real time and determines, according to the recognition result, whether a moving target appears in the shooting area of the camera. Only when a moving target appears in the shooting area is the camera invoked to collect moving image materials of the moving target.
After judging that the moving object exists in the preview picture, the controller can send a control instruction to the camera, and at the moment, the camera receives the control instruction, and can acquire moving image materials of the moving object according to the preset acquisition frequency.
In some embodiments, to avoid the camera continuously collecting a moving target whose motion state does not change within a short time, the collection process may be executed only once each time the moving target appears in the shooting area of the camera; that is, the camera, when collecting moving image materials of the moving target at the preset collection frequency, is further configured to: when a moving target exists in the preview picture, invoke the camera to execute the collection process of the moving image material of the moving target once. When the controller determines through image recognition that a moving target has appeared in the camera's shooting area, it invokes the camera to collect only one section of moving image material of the moving target; if the moving target leaves the shooting area and later enters it again, the controller invokes the camera again to collect another section of moving image material. For example, when a moving target enters the shooting area of the camera at 9:00, the camera is invoked to collect a first section of moving image material, and the moving target leaves the shooting area at 9:10; after a period of time, the moving target enters the shooting area again at 10:23, the camera is invoked again to acquire a second section of moving image material, and the moving target leaves the shooting area at 10:40. It can be seen that between 9:00 and 9:10 the camera collects only one section of moving image material, and between 10:23 and 10:40 the camera likewise collects only one section. Subsequent acquisitions are similar and will not be described in detail.
In some embodiments, the preset acquisition frequency includes a preset acquisition duration and a preset acquisition interval duration. The camera, when collecting moving image materials of the moving target at the preset acquisition frequency, is further configured to: Step 11231, collecting a first section of moving image material of the moving target for the preset acquisition duration, starting from the moment a moving target exists in the preview picture. Step 11232, after the preset acquisition interval duration has elapsed, collecting a second section of moving image material of the moving target for the preset acquisition duration. Collection of the first section of moving image material starts from the moment a moving target exists in the preview picture, that is, after the control instruction sent by the controller is received, and the duration of the first section is the preset acquisition duration. To avoid the camera repeatedly collecting a moving target whose motion state does not change within a short time, the camera must be controlled to wait for a period of time after completing the collection of one section of moving image material before collecting the next section. After acquiring the first section of moving image material, the camera waits for the preset acquisition interval duration and then acquires a second section of moving image material of the moving target, the duration of which is again the preset acquisition duration.
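The cadence of steps 11231-11232 can be sketched on a simulated timeline; the function name and the example durations (10-second segments, 60-second interval) are assumptions for illustration.

```python
def schedule_captures(detections, capture_s=10, interval_s=60):
    """Given sorted timestamps (in seconds) at which a moving target is
    seen in the preview picture, return the timestamps at which a new
    segment is actually captured: after each segment of 'capture_s'
    seconds the camera waits 'interval_s' seconds before it may
    capture again."""
    captures, next_allowed = [], 0
    for t in detections:
        if t >= next_allowed:
            captures.append(t)
            next_allowed = t + capture_s + interval_s
    return captures
```

With the defaults, detections at t = 0, 5, 30, 80 yield captures only at t = 0 and t = 80: the sightings at 5 and 30 fall inside the 10 s segment plus 60 s interval that began at t = 0.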
If the moving target leaves the shooting area of the camera, the controller again performs image recognition on the captured picture of the camera. If no moving target is found in the shooting area, the camera is controlled to stop the acquisition of moving images. That is, while the camera collects moving image materials of the moving target, image recognition is performed on the captured picture in real time, and once no motion appears in the picture, a stop control instruction is immediately sent to the camera so that the camera stops capturing. Within the preset recording duration, the moving target may appear in the shooting area many times and in various motion states, so the camera may acquire multiple sections of moving image materials at the preset acquisition frequency.
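The start-on-motion / stop-on-no-motion gating can be modeled as a small state machine over per-frame motion flags; names and the (start, stop) index representation are illustrative assumptions.

```python
def track_capture(frames_motion):
    """Capture starts when motion appears in the preview frame and
    stops as soon as motion no longer appears; returns half-open
    (start, stop) index pairs, one per captured segment."""
    segments, start = [], None
    for i, moving in enumerate(frames_motion):
        if moving and start is None:
            start = i            # moving target entered the picture
        elif not moving and start is not None:
            segments.append((start, i))  # stop instruction sent
            start = None
    if start is not None:        # still capturing at end of stream
        segments.append((start, len(frames_motion)))
    return segments
```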
And S12, when the ending time of the preset recording duration is reached, acquiring a plurality of moving image materials of the moving object acquired by the camera within the preset recording duration.
After the video recording function is started, the camera remains in the image material collection state as long as it is not occupied by another application program. The camera stops collecting moving image materials of the moving target at the end moment of the preset recording duration; for example, when 18:00 arrives, the controller generates a stop control instruction and sends it to the camera to make the camera stop collecting moving image materials of the moving target. At this point the camera may have acquired multiple sections of moving image materials within the preset recording duration, and the controller can acquire these materials to synthesize a video highlight file.
S13, splicing and synthesizing a specified number of moving image materials to generate a video highlight file recording the moving target within the preset recording duration. To ensure that the synthesized video highlight file meets the requirements of a short video, when synthesizing multiple moving image materials into a video highlight file, the synthesis must use a specified number of materials. Each moving image material records the motion of the moving target, so the video highlight file records the motion of the moving target within the preset recording duration. When viewing the video highlight file, the user can watch highlights of the daily life of a moving target such as a pet or a baby during the preset recording duration. In some embodiments, the controller generates the video highlight file based on the acquired plurality of video clips, where the video highlight file includes the earliest video clip and the latest video clip within the preset time period. For example, when generating the video highlight file, the controller detects whether valid video clips exist in the memory; a valid video clip is one that was recorded normally. When video clips to be synthesized exist in the memory and the storage capacity is sufficient, the controller classifies all valid video clips by day, arranges the clips of each day in ascending order, and selects a preset number of video clips from those to be synthesized. Notably, in this application, "video clip" and "image material" refer to substantially the same thing.
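The track-writing synthesis detailed in the next paragraph (video frames of each clip appended back-to-back while the total duration is accumulated, then audio frames written until the audio track covers the video) can be sketched as follows; frames are modeled as opaque tokens of fixed duration, and all names are illustrative assumptions.

```python
def mux_highlight(video_clips, audio_frames, frame_s=1.0):
    """Write every frame of every clip to the video track, so the first
    frame of the next clip follows the last frame of the previous one,
    accumulating the video duration; then write audio frames until the
    audio track is at least as long as the video, at which point
    reading stops."""
    video_track, video_s = [], 0.0
    for clip in video_clips:
        for frame in clip:
            video_track.append(frame)
            video_s += frame_s          # accumulated highlight duration
    audio_track, audio_s = [], 0.0
    for frame in audio_frames:
        if audio_s >= video_s:          # audio longer than video: stop
            break
        audio_track.append(frame)
        audio_s += frame_s
    return video_track, audio_track
```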
In some embodiments, the process by which the controller generates the video highlight file based on the video clips may be implemented as follows: the controller reads each frame of video stream data from a video clip and writes it into the video track of the target file; the first frame of the next video is written immediately after the last frame of the previous video, forming the continuous frame data of the video highlight. While synthesizing the video stream data, the controller accumulates the video duration of the generated video highlight file. After the video stream data is synthesized, the controller reads each frame of data of the audio file and writes it into the audio track of the target file. If the duration of the audio track data read so far exceeds the video duration, reading of audio data stops and video synthesis is complete; the video clips to be synthesized are deleted after synthesis to release the occupied space.
<Number of Synthesized Files>
In some embodiments, in order to quickly and accurately generate a video highlight file meeting the short video requirement, the controller is further configured to perform the following steps when performing stitching and synthesizing a specified number of moving image materials to generate a video highlight file recording a moving object within a preset recording duration: s131, acquiring the total material quantity of the moving image materials acquired by the camera within a preset recording time. And S132, if the total material quantity is smaller than or equal to the specified quantity, splicing and synthesizing the moving image materials with the total material quantity according to the time sequence. S133, if the total number of materials is greater than the specified number, selecting the specified number of target moving image materials in the moving image material set. S134, splicing and synthesizing the target moving image materials with the specified quantity according to the time sequence, and generating a video highlight file for recording the moving target in the preset recording duration. And each time the camera acquires a section of moving image material, the moving image material is sent to the controller, and the controller can count the total material quantity of the moving image material acquired by the camera within a preset recording duration.
In some embodiments, the total duration of the synthesized video highlight file is about 30 s, which meets the duration requirement of a short video; that is, at most 6 sections of moving image materials are synthesized, and if the total number of materials exceeds 6, 6 suitable sections must be selected from them for synthesis. The total number of moving image materials collected by the camera is compared with the specified number; if the total number is less than or equal to the specified number, the duration requirement for synthesizing a short video is met, and all the moving image materials are directly spliced and synthesized in time order.
If the total number of materials is greater than the specified number, then after synthesizing all the materials into a video highlight file its total duration would exceed that of a short video, making it a file that does not meet the requirements. Therefore, to ensure smooth synthesis of the video highlight file, the controller selects a specified number of target moving image materials from the set of moving image materials corresponding to the total number. The specified number of target moving image materials are then spliced and synthesized in time order to generate a video highlight file recording the moving target within the preset recording duration. For example, if the specified number is set to 6 and the total number of moving image materials is 5, the 5 sections can be directly spliced and synthesized to obtain the video highlight file; if the total number is 10, suitable target materials are selected from the 10 sections, and the 6 target moving image materials are then spliced and synthesized to obtain the video highlight file.
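The dispatch of steps S131-S134 can be sketched as follows. When the total exceeds the specified number, an evenly spaced pick over the time-ordered list is used here as a simple stand-in for the period-based selection the text describes later; names and the approach are illustrative assumptions.

```python
def choose_materials(materials, specified=6):
    """S131-S132: with at most 'specified' segments, splice them all in
    time order. S133-S134: with more, pick 'specified' segments spread
    evenly across the list, keeping the earliest and the latest."""
    materials = sorted(materials)        # time order by start-capture time
    if len(materials) <= specified:
        return materials
    step = (len(materials) - 1) / (specified - 1)
    return [materials[round(i * step)] for i in range(specified)]
```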
In some embodiments, when the total number of materials exceeds the specified number and a specified number of target moving image materials are selected from the set corresponding to the total number, the time span between the selected target materials should be as large as possible. For example, if the camera acquired moving image materials in the front, middle, and rear portions of the preset recording duration, then when selecting the target materials it must be ensured that materials from the front, middle, and rear portions are all selected, so that the life of the moving target recorded in the synthesized video highlight file is richer.
For this purpose, the controller, when executing step 133, that is, executing selecting a specified number of target moving image materials among the moving image material sets if the total material number is greater than the specified number, is further configured to execute the steps of: step 1331, acquiring the start acquisition time of each piece of moving image material. Step 1332, dividing the start acquisition time corresponding to each section of moving image material into a plurality of material acquisition time periods according to the time period of the start acquisition time, wherein each material acquisition time period comprises a plurality of moving image materials. Step 1333, selecting a target number of moving image materials in each material acquisition period as target moving image materials, wherein the sum of the target numbers selected in each material acquisition period is equal to the designated number.
When the camera collects the moving image materials, the time of each section of moving image materials can be marked by adopting the collection starting time, so that the time period of each section of moving image materials when collected, namely the time period of the front section, the middle section or the rear section of the preset recording duration, can be determined. For example, if the preset recording duration is 6:00-18:00, the former period may refer to the morning period, the middle period may refer to the midday period, and the latter period may refer to the afternoon period. According to different settings of the preset recording duration, the division of the belonging time period can be different, and the number of the division sections of the belonging time period can also be determined according to actual conditions. After each section of moving image material is marked out of the belonging time period, the moving image material can be divided into a plurality of material acquisition time periods according to the starting acquisition time, for example, the material acquisition time periods are an morning time period, a noon time period and an afternoon time period. Each material acquisition period includes at least one section of moving image material.
To ensure that the moving image materials used to synthesize the video highlight file span a large time range, a certain number of moving image materials can be selected within each material acquisition period. For example, n1 moving image materials are selected in the morning period, n2 in the midday period, and n3 in the afternoon period, where it must be ensured that n1 + n2 + n3 = the specified number. Finally, the selected n1 materials from the morning period, n2 materials from the midday period, and n3 materials from the afternoon period are spliced and synthesized in the time order of their start acquisition times to obtain the video highlight file.
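Steps 1331-1333 can be sketched by bucketing segments by the period of their start-capture hour and taking a share from each non-empty bucket so the shares sum to the specified number. The period boundaries, quota scheme, and representation of a material as its start hour are all illustrative assumptions.

```python
def select_by_period(start_hours, specified=6):
    """Bucket by morning / noon / afternoon period of the start
    acquisition time, then pick n1 + n2 + n3 = 'specified' materials
    spread across the non-empty buckets."""
    buckets = {"morning": [], "noon": [], "afternoon": []}
    for h in sorted(start_hours):
        if h < 11:
            buckets["morning"].append(h)
        elif h < 14:
            buckets["noon"].append(h)
        else:
            buckets["afternoon"].append(h)
    nonempty = [b for b in buckets.values() if b]
    base, extra = divmod(specified, len(nonempty))
    picked = []
    for i, b in enumerate(nonempty):
        n = min(len(b), base + (1 if i < extra else 0))
        step = max(1, len(b) // n)       # spread picks within the period
        picked.extend(b[::step][:n])
    return sorted(picked)                # time order for splicing
```

Note this sketch may return fewer than `specified` materials when a bucket is smaller than its quota; a real implementation would redistribute the shortfall.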
In some embodiments, the video clip selection rule may be implemented as follows: if the number of video clips is less than or equal to a preset number, for example 5 or fewer, all the video clips are synthesized; if the number of video clips is greater than the preset number, for example greater than 5, the earliest video clip is selected as the first segment of the video highlight file, the latest video clip is selected as the last segment, and the remaining 3 clips are selected uniformly from the clips in between.
<Watermarking>
In some embodiments, to improve the viewing effect of the video highlight file, background music may be superimposed automatically and a start-up watermark added when the video highlight file is synthesized. To this end, the controller, when splicing and synthesizing the specified number of moving image materials to generate the video highlight file recording the moving target within the preset recording duration, is further configured to: S141, splicing and synthesizing the specified number of moving image materials to obtain a video file recording the moving target within the preset recording duration. S142, acquiring a preset audio file and a start-up watermark, where the start-up watermark identifies information about the product to which the display device belongs. S143, superimposing the audio file and the start-up watermark on the video file to generate the video highlight file recording the moving target within the preset recording duration. The specified number of moving image materials, which may be pictures or videos, are first spliced and synthesized to obtain the video file. A plurality of audio files are preset in the controller, and one of them is randomly selected and synthesized with the video file. Meanwhile, the start-up watermark is acquired and added to the picture of the video file; to avoid blocking the video content, the start-up watermark can be added at the lower right corner of the picture, and the actual position can be set according to user preference. The start-up watermark identifies the information of the product to which the display device belongs.
Fig. 9 is a schematic diagram of a start-up watermark in accordance with one or more embodiments of the present application. The start-up watermark may include a title watermark and a product watermark. Referring to fig. 9 (a), the title watermark "My Family Diary, April 30, 2020" is shown at the start of playback of the video file and disappears after several seconds. Referring to fig. 9 (b), the product watermark "XX social television" together with its slogan is added at the lower right corner of the picture of the video file, and the product watermark is displayed on the picture of the video file at all times. The audio file, the start-up watermark, and the video file are superimposed to generate the video highlight file recording the moving target within the preset recording duration. To provide the user with more operations, the moving image materials collected by the camera can also be stored and displayed on the display, and the user can delete and edit the displayed moving image materials.
In some embodiments, to facilitate user operations on the plurality of moving image materials, the controller is further configured to: when the camera starts collecting moving image materials of the moving target in response to the video recording start instruction, acquire each section of moving image material collected by the camera; generate an image material list interface in the user interface, and display each section of moving image material in the image material list interface in time order. The user interface is provided with a video recording start entry, and after the user triggers it, the image material list interface is presented on the display. The image material list interface is used for displaying moving image materials and video highlight files, and each section of moving image material collected by the camera is displayed in it. In some embodiments, the materials in the image material list interface are displayed sequentially in the chronological order of the start acquisition time of each section. The user may operate on the moving image materials displayed in the image material list interface to implement different functions, such as sharing to a friend circle, deleting a selected recording, clearing the recordings, and turning off the video recording function. The upper right corner of the image material list interface presents an operation prompt item, for example prompting the user to press a menu key to turn off the function or to share and delete videos; by acting according to the operation indicated by the prompt item, the user can use the interactive functions.
In some embodiments, to facilitate the user performing corresponding operations on the moving image materials displayed in the image material list interface, the controller is further configured to perform the following steps. Step 151, receiving an interactive function call instruction generated when the user performs the corresponding operation according to the operation prompt item. Step 152, responding to the interactive function call instruction and presenting a function interaction floating layer in the image material list interface, where the function interaction floating layer includes a plurality of function controls. Step 153, responding to a target control instruction generated when a target function control is triggered, and executing the target operation corresponding to the target function control. The image material list interface does not display the interactive function entry when content is displayed normally. If the user wants to use an interactive function, the interactive function entry must first be called up; that is, the user presses the menu key on the remote controller according to the content of the operation prompt item, after which the interactive function entry is called up. When the user performs the corresponding operation according to the operation prompt item, an interactive function call instruction is generated, and the controller calls up the function interaction floating layer as the interactive function entry according to that instruction and displays it on the display. The function interaction floating layer covers the bottom of the image material list interface as a floating layer, and while it is called up and displayed, the image material list interface remains displayed.
In some embodiments, fig. 15 is a watermark schematic diagram of video highlight file playback according to one or more embodiments of the present application. Referring to fig. 15, the controller, in response to a detected target user, receives the video clips collected by the camera and generates a video highlight file containing a first watermark and a second watermark based on the video clips. The first watermark is no longer displayed after a first display duration from the start of playback of the video highlight file in the user interface, while the second watermark is updated during playback with the system time at which the video clip was recorded, as shown in fig. 16, which is a watermark schematic diagram of video highlight file playback according to one or more embodiments of the present application. For example, video highlight 1 contains a first watermark and a second watermark, and the first display duration is 3 seconds; when video highlight 1 is played on the user interface of the display device, the playback picture includes both watermarks within the first 3 seconds; after 3 seconds, the playback interface of video highlight 1 no longer displays the first watermark and displays only the second watermark, which shows the system time at which the video clip was recorded, for example the system time at which the currently playing picture of video highlight 1 was recorded, so that the user can know the date and time at which the current playback position of highlight file 1 occurred.
In some embodiments, the controller, in response to detecting the target user, receives the video clip collected by the camera; specifically, when the controller determines that the video clip is the first one of the day, it adds a first watermark and a second watermark to the clip; otherwise, it adds only a second watermark. For example, when the controller of the display device recognizes the target user, starts recording, and determines that the currently recorded video clip is the first of the current day, it adds a first watermark and a second watermark to that clip. The first watermark may be implemented to include, for example, a cover picture and the system date; the second watermark may be implemented to include timestamp information, i.e., the system date and system time of the currently recorded video clip, and may be refreshed with the user interface so as to be displayed dynamically. When the controller determines that the currently recorded video clip is not the first of the day, it adds only the second watermark, with the same content and dynamic refresh behavior.
Fig. 17 is a watermark schematic of video highlight file playback according to one or more embodiments of the present application. In some embodiments, the video highlight file generated by the display device further includes a third watermark. The controller, in response to detecting the target user, receives the video clip collected by the camera; specifically, when the controller determines that the video clip is the first of the day, it adds a first watermark, a second watermark, and a third watermark to the clip; otherwise, it adds only a second watermark and a third watermark. For example, when the controller of the display device recognizes the target user, starts recording, and determines that the currently recorded video clip is the first of the current day, it adds all three watermarks to that clip. The first watermark may include, for example, a cover picture and the system date; the second watermark may include timestamp information, i.e., the system date and system time of the currently recorded clip, refreshed with the user interface so as to be displayed dynamically; the third watermark may include, for example, brand LOGO information, as shown in fig. 19, which is a watermark schematic of video highlight file playback according to one or more embodiments of the present application.
When the controller determines that the currently recorded video clip is not the first of the day, it adds only the second watermark and the third watermark. The second watermark may include timestamp information, i.e., the system date and system time of the currently recorded clip, refreshed with the user interface so as to be displayed dynamically; the third watermark may include, for example, brand LOGO information, as shown in fig. 18, which is a watermark schematic of video highlight file playback according to one or more embodiments of the present application.
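The first-clip-of-the-day watermark decision described above can be summarized in a short Python sketch; the function name and watermark labels are illustrative assumptions, not names from the application:

```python
from datetime import datetime, date

def select_watermarks(clip_start: datetime, last_clip_date: date) -> list:
    """Decide which watermarks to stamp on a newly recorded clip.

    Per the scheme above: the first clip of a given day carries the cover
    watermark (first), the timestamp watermark (second) and the brand LOGO
    watermark (third); later clips of the same day omit the cover watermark.
    """
    watermarks = ["timestamp", "logo"]       # second and third watermarks
    if last_clip_date != clip_start.date():  # no clip recorded yet today?
        watermarks.insert(0, "cover")        # first watermark: cover + date
    return watermarks
```

Here `last_clip_date` stands in for whatever record the controller keeps of the most recent recording day.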
< sharing >
The function interaction floating layer includes a plurality of function controls, for example a "share to parent circle" control, a "delete selected record" control, an "empty all records" control, and a "close this function" control. If the user wants to apply an interactive function to a piece of moving image material, the corresponding function control is triggered via the remote controller. The "share to parent circle" control performs the sharing operation; the "delete selected record" control deletes a selected moving image material or video highlight file; the "empty all records" control clears all content displayed in the image material list interface; and the "close this function" control turns off the video recording function of the display device.
FIG. 10 is a schematic diagram of an interactive interface for performing the share-to-parent-circle function in accordance with one or more embodiments of the present application. Referring to fig. 10 (a), if the user wants to share a moving image material or video highlight file to the parent circle, the user selects the file to be shared and then, via the remote controller, selects the "share to parent circle" control as the target function control. When the user clicks the "share to parent circle" target function control again with the remote controller, the display interface jumps to the sharing interface shown in fig. 10 (b), in which the selected moving image material or video highlight file can be shared to the parent circle. Similarly, to delete a moving image material or video highlight file, the user selects the file to be deleted and then selects the "delete selected record" control as the target function control. When the user clicks the "delete selected record" control again with the remote controller, the display interface jumps to a deletion interface, in which the selected moving image material or video highlight file can be deleted. The user performs the deletion by triggering the delete control on the deletion interface, and a "deleted" prompt is then displayed.
In some embodiments, if the user wants to empty all the moving image materials collected by the camera and all the generated video highlight files, the "empty all records" control can be selected as the target function control via the remote controller. When the user clicks the "empty all records" target function control again with the remote controller, the display interface jumps to an emptying interface, in which all the moving image materials and video highlight files displayed in the image material list interface can be emptied. The user performs the emptying operation by triggering the empty control on the emptying interface, and a prompt indicating that all housekeeping records have been emptied is then displayed.
In some embodiments, if the user wants to turn off the video recording function of the display device, the "close this function" control may be selected as the target function control via the remote controller. When the user clicks the "close this function" target function control again with the remote controller, the display interface jumps to a closing interface, in which the video recording function of the display device can be turned off. The user performs the closing operation by triggering the close control on the closing interface, and a prompt indicating that the function is closed is then displayed. After the video recording function is turned off, the user can still click a previously shared link to view videos shared earlier. If the user turns the function on again, the user can continue to view the moving image materials previously collected by the camera and the video highlight files previously generated.
In some embodiments, to facilitate the user viewing each moving image material in the image material list interface, a time watermark may be added to each moving image material when it is displayed in the image material list interface. Accordingly, when displaying each piece of moving image material in the image material list interface in chronological order, the controller is further configured to perform the following steps. Step 161: acquire the start acquisition time of each piece of moving image material. Step 162: generate a time watermark including the start acquisition time, the time watermarks corresponding one-to-one to the moving image materials. Step 163: display each piece of moving image material and its corresponding time watermark in the image material list interface in chronological order of start acquisition time, the time watermark being displayed at the bottom or top of the corresponding moving image material.
To identify each piece of moving image material, it may be marked with a time watermark. The time watermark is generated from the start acquisition time of each piece of moving image material, and each piece has its own corresponding time watermark. Each piece of moving image material is displayed in the image material list interface in chronological order of start acquisition time, and likewise each time watermark is displayed in the same order, so that the time watermarks correspond one-to-one to the moving image materials. The chronological order may run from nearest to farthest from the current time, so that the most recently acquired moving image material is displayed at the front of the image material list interface. The display position of the time watermark may be chosen according to actual use; for example, it may be displayed at the bottom, top, left, or right of the corresponding moving image material.
In some embodiments, because both the moving image materials collected by the camera and the video highlight files synthesized by the controller are displayed in the image material list interface, a generation time watermark may be added to each video highlight file to identify it. To this end, when adding a generation time watermark to a video highlight file, the controller is further configured to perform the following steps. Step 171: acquire the generation time of each video highlight file. Step 172: create a generation time watermark including the generation time. Step 173: display each video highlight file and its corresponding generation time watermark in the image material list interface in chronological order of generation time, the generation time watermark being displayed at the bottom or top of the corresponding video highlight file. Step 174: delete the moving image materials from which the video highlight file was generated. To identify each video highlight file, it may be marked with a generation time watermark. The generation time watermark is generated from the generation time of each video highlight file, and each file has its own corresponding generation time watermark.
Each video highlight file is displayed in the image material list interface in chronological order of generation time, and likewise each generation time watermark is displayed in the same order, so that the generation time watermarks correspond one-to-one to the video highlight files. The chronological order may run from nearest to farthest from the current time, so that the most recently generated video highlight file is displayed at the front of the image material list interface, and video highlight files are displayed before the moving image materials.
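The list ordering described above (highlight files first, then raw materials, each group newest-first) can be sketched as follows; the entry representation as `(name, timestamp)` pairs is an assumption for illustration:

```python
from datetime import datetime

def order_material_list(materials, highlights):
    """Order entries for the image material list interface.

    Each entry is a hypothetical (name, timestamp) pair: for a highlight
    file the timestamp is its generation time, for a moving image material
    its start acquisition time. Highlight files are shown before materials,
    and within each group entries nearest the current time come first,
    matching the near-to-far ordering described above.
    """
    def newest_first(entries):
        return sorted(entries, key=lambda e: e[1], reverse=True)

    return newest_first(highlights) + newest_first(materials)
```

The timestamp attached to each entry is also what the corresponding time watermark or generation time watermark would display.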
In some embodiments, background music may be added to a video highlight file, and the camera does not record audio data while recording the video clips. After the preset time period, i.e., after 22:00 each day, if the display device is still powered on, or when the display device is powered on the next day, the recorded video clips are automatically synthesized into a video highlight file with background music added. When recording the video clips, the camera only needs to record image data and does not record audio data.
< message push >
To remind the user to promptly view the video highlight files automatically collected and synthesized by the camera while the user was away from home, the video highlight file synthesized on the current day is pushed to the user.
To this end, when pushing a video highlight file to the user, the controller is further configured to perform the following steps: generate a push notification according to the video highlight file, the push notification including push content and an operation prompt; and when the display device is powered on, display the push notification on the system home interface presented on the display. Because the video highlight file is generated from moving image materials of the moving object collected within the preset recording duration of the day, the user may not have powered on the display device during that duration. The video highlight file may therefore be pushed to the user when the user first powers on the display device after the file is generated. At that first startup, a push notification is generated according to the video highlight file and displayed on the system home interface presented on the display.
FIG. 11 is an interface diagram showing a push notification in accordance with one or more embodiments of the present application. Referring to fig. 11, the push notification includes push content, such as "Your lovely pet's exclusive Vlog has been generated, take a look soon", and an operation prompt, such as "Press the menu key to view details". The user can view the pushed video highlight file by pressing the menu key on the remote controller. In some embodiments, the video highlight file is pushed only once, at the first startup of the display device after the file is generated. For example, if the preset recording duration is 6:00-18:00, then when 18:00 is reached the controller instructs the camera to stop collecting moving image materials, and splices the specified number of moving image materials into the video highlight file. If, after 18:00, the user first powers on the display at 19:00, the video highlight file is presented as a push notification on the system home interface, prompting the user to view the recorded highlight moments of the pet's or baby's day.
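The push-only-once behavior can be modeled with a small queue that is drained at the first startup after generation; this is a minimal sketch with assumed class and method names:

```python
class HighlightPusher:
    """Push each generated highlight file exactly once, at the first
    startup of the display device after its generation."""

    def __init__(self):
        self._pending = []  # highlight files generated but not yet pushed

    def on_highlight_generated(self, name):
        # Called when the controller finishes splicing a highlight file.
        self._pending.append(name)

    def on_device_startup(self):
        """Return the push notifications to show on the system home
        interface; draining the queue ensures each file is pushed only
        on the first startup after it was generated."""
        notifications = ["Vlog '%s' has been generated" % n for n in self._pending]
        self._pending.clear()
        return notifications
```

A second startup with no newly generated files produces no notifications, matching the pushed-only-once rule.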
It can be seen that the display device provided in the embodiments of the present application presents a user interface displaying a video recording initiation portal for starting the video recording function of the display device. The controller receives a video recording initiation instruction generated when the portal is triggered, and instructs the camera to collect moving image materials of the moving target at a preset collection frequency within the preset recording duration; when the end of the preset recording duration is reached, the specified number of moving image materials are spliced into a video highlight file recording the moving object over that duration. Thus, after the user starts the video recording function, the display device requires no further real-time operation: the controller actively invokes the camera to collect moving image materials of the moving object, synthesizes the video highlight file once collection is complete, and the user can learn from that file how the moving object behaved during the preset recording duration, providing a good user experience.
< video highlight File >
In some embodiments, fig. 12 is a schematic diagram of a display device video highlight file according to one or more embodiments of the present application. Referring to fig. 12, the user interface includes a plurality of video highlight poster controls, beneath each of which is a reminder text indicating the generation date. For example, the video highlights may be implemented as multiple highlights from a single day, i.e., all generated on the same day but with different content; in other embodiments, the video highlights may be implemented as highlights from different dates, i.e., each generated on a different day.
In some embodiments, the display device controller controls the camera to record video clips and generates a video highlight file based on the collected clips. Specifically, the controller records video clips no longer than a first preset time length within a preset time period; after the preset time period, or after the first power-on of the following day, the controller generates a video highlight file from the collected clips. For example, within the preset time period, such as 00:00-22:00 each day, when the controller detects that the video highlight switch is on and the camera is on, i.e., in the raised state, the controller of the display device automatically detects a lovely baby in front of the display device and, on each detection, automatically records a video clip no longer than the first preset time length, such as 6 s. After the preset time period, i.e., after 22:00 each day, if the display device is still powered on, or when it is powered on the next day, the recorded video clips are automatically synthesized into a video highlight file and stored in a designated NAS (Network Attached Storage) directory. The video highlight file is then pushed to the user via a notification, which the user can click to view; the user can also view the target-user highlights stored on the NAS through a mobile phone terminal.
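The recording window and synthesis trigger above can be expressed as two small predicates; the function names and the exact condition set are illustrative assumptions drawn from the example values in the text (00:00-22:00 window, 6 s clip cap):

```python
from datetime import datetime, time

RECORD_WINDOW = (time(0, 0), time(22, 0))  # preset time period, 00:00-22:00
MAX_CLIP_SECONDS = 6                       # first preset time length

def may_record(now: datetime, switch_on: bool, camera_raised: bool) -> bool:
    """A clip may be recorded only inside the preset time period while
    the video highlight switch is on and the camera is raised."""
    start, end = RECORD_WINDOW
    return switch_on and camera_raised and start <= now.time() < end

def should_synthesize(now: datetime, powered_on: bool) -> bool:
    """Synthesis runs after the window closes if the device is still
    powered on (the next-day first-boot path is handled separately)."""
    return powered_on and now.time() >= RECORD_WINDOW[1]
```

Each recorded clip would additionally be truncated to `MAX_CLIP_SECONDS` before being saved to the NAS directory.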
In some embodiments, the display device controller generates the video highlight file based on the collected video clips after the first power-on of the day following the preset time period; specifically, the file is generated after a second preset time length has elapsed since that first power-on. For example, the target user detection service of the display device may be configured for power-on start and timed automatic start. If configured to start at power-on, the detection and synthesis task for the target user's video clips may be scheduled to begin after the second preset time length, such as 5 minutes, so that detection and synthesis do not preempt system resources during startup and cause the system to lag or slow down.
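The delayed start can be reduced to a simple scheduling computation; a minimal sketch, assuming the 5-minute example value from the text:

```python
from datetime import datetime, timedelta

SYNTHESIS_DELAY = timedelta(minutes=5)  # second preset time length

def synthesis_start_time(first_boot: datetime) -> datetime:
    """Schedule the highlight-synthesis task a fixed delay after the
    first boot of the day, so that detection and synthesis do not
    compete with system services started at boot."""
    return first_boot + SYNTHESIS_DELAY
```

In a real service this timestamp would feed a timer or job scheduler rather than being polled directly.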
In some embodiments, if the television display device is not connected to a NAS, a specific button in the login interface of application A is grayed out, indicating that the video highlight switch is off, and the user interface displays a prompt message. In some embodiments, the video highlight files generated by the display device may be stored on the NAS, and the display device, the mobile terminal, or the home private cloud may play them by accessing the NAS. Video highlight files generated by the display device and stored on the NAS can be browsed and played through application A, browsed and viewed through the family sharing directory in the home private cloud space, or viewed through the mobile phone terminal application. In some embodiments, the controller pushes the video highlight files stored on the NAS to the display device, the mobile terminal, or the home private cloud, and the user can download and play them.
< protocol software architecture >
In some embodiments, the architecture of application A includes: a camera detection module, a complete machine setting module, a switch module, a recording and synthesis service module, a NAS module, and an application program module. The recording and synthesis service module comprises a face detection module, a video recording module, and a video synthesis module. The face detection module detects whether a target user, for example a lovely baby, is present in the preview frame; the video recording module records video when a lovely baby is detected; the video synthesis module splices the synthesizable video clip sources to generate a lovely-baby highlight, i.e., a video highlight. The NAS module stores files, saving the content output by the recording and synthesis service module; for example, it saves the recorded video files and stores the synthesized video highlight files in the relevant NAS directory. The application program module comprises a highlight browsing interface module and a menu module: the highlight browsing interface module displays the video highlight files stored in the NAS module, and through the menu module in the highlight browsing interface the video highlight files can be shared to the parent circle, deleted, or otherwise operated on.
The camera detection module detects the occupancy state of the camera and outputs the result to the recording and synthesis service module; the complete machine setting module configures the camera and outputs the settings to the recording and synthesis service module; the switch module controls the application A switch, outputs its state to the recording and synthesis service module, and determines whether to start the face detection function for target user identification. The switch module also serves as an entry to application A: for example, when the lovely baby detection switch is turned on, lovely baby detection is allowed and lovely baby video clips are recorded. In some embodiments, the video highlight file may be played by activating the media center.
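The wiring of the recording and synthesis service modules described above can be sketched as a minimal Python skeleton; all class and method names here are hypothetical and stand in for the face detection, video recording, video synthesis, and NAS modules:

```python
class RecordingSynthesisService:
    """Sketch of the recording and synthesis service: detect the target
    user in preview frames, record a clip on each detection, save clips
    to the NAS module, and later splice them into a highlight file."""

    def __init__(self, face_detector, recorder, synthesizer, nas):
        self.face_detector = face_detector  # face detection module
        self.recorder = recorder            # video recording module
        self.synthesizer = synthesizer      # video synthesis module
        self.nas = nas                      # NAS module (file storage)

    def on_preview_frame(self, frame):
        # Record only when the target user (e.g. a lovely baby) appears.
        if self.face_detector.detect(frame):
            clip = self.recorder.record_clip()
            self.nas.save(clip)

    def synthesize(self):
        # Splice the saved synthesizable clips into one highlight file
        # and store it back to the NAS directory.
        highlight = self.synthesizer.combine(self.nas.list_clips())
        self.nas.save(highlight)
```

Each constructor argument is a pluggable module, mirroring the module boundaries in the architecture description.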
< Intelligent terminal scheme >
In some embodiments, the process of starting the video recording function of the display device is implemented by the controller in the display device; in other embodiments, it may be implemented by an intelligent terminal connected to the display device. On the intelligent terminal side, starting the video recording function of the display device, browsing and editing moving image materials, and viewing and sharing video highlight files are all supported.
To enable the intelligent terminal to start the video recording function of the display device, some embodiments of the present application provide an intelligent terminal 300, comprising: a display 320 and a processor 310 connected to each other. The display 320 is configured to present an end user interface displaying a video recording initiation portal for starting the video recording function, which records moving images by video recording and generates a video highlight file within the preset recording duration; the video recording function is configured in the controller of the display device. The processor 310 is coupled to the controller 250 in the display device and is configured with a designated application for calling up the end user interface on the display 320. The processor receives the video recording initiation instruction generated when the user operates the intelligent terminal and sends it to the controller in the display device, so that the controller starts the video recording function of the display device, i.e., actively invokes the camera to collect moving image materials of the moving object and synthesize a video highlight file.
In some embodiments, when performing the interactive method of generating a video highlight file, the processor is configured to perform the following steps. S21: receive a video recording function call instruction generated when the designated application is triggered. To start the video recording function of the display device from the intelligent terminal side, a designated application must be configured in the processor of the intelligent terminal and displayed on the terminal home page. The designated application provides the entry for starting the video recording function. The user manually triggers the designated application to generate a video recording function call instruction, according to which the processor calls up the entry for starting the video recording function. S22: in response to the video recording function call instruction, display an end user interface containing the video recording initiation portal on the display. Upon receiving the instruction, the processor presents the end user interface on the display 320 of the intelligent terminal and shows the video recording initiation portal in it. In some embodiments, fig. 13 is an interface schematic diagram of an end user interface in accordance with one or more embodiments of the present application. After the user triggers the designated application, the end user interface is displayed and a video recording initiation portal, such as "day of lovely pet", is presented. The user clicks "day of lovely pet" to bring up the video recording function initiation interface, on which an "open" control is displayed; triggering the "open" control starts the video recording function.
S23: receive the video recording initiation instruction generated by triggering the video recording initiation portal, and send it to the controller. The instruction instructs the controller to invoke the camera in the display device to collect moving image materials of the moving object at the preset collection frequency within the preset recording duration, and to splice the specified number of moving image materials into a video highlight file recording the moving object over the preset recording duration.
The user triggers the video recording initiation portal displayed on the end user interface, i.e., the "open" control, to generate the video recording initiation instruction. The processor then sends this instruction to the controller in the display device; upon receiving it, the controller starts the video recording function of the display device, i.e., performs the related steps of invoking the camera to collect moving image materials of the moving object at the preset collection frequency within the preset recording duration and splicing the specified number of moving image materials into a video highlight file. The controller's steps for performing the video recording function are described in the display device portion of the foregoing embodiments and are not repeated here.
In some embodiments, after starting the video recording function of the display device, the intelligent terminal side presents a video list interface on the display of the intelligent terminal. When presenting the video list interface, the processor is further configured to perform the following steps. Step 241: in response to the video recording initiation instruction, present a video list interface on the display. Step 242: acquire the moving image materials and video highlight files stored in the controller, and display them in the video list interface. After receiving the video recording initiation instruction generated when the user triggers the video recording initiation portal, the processor sends the instruction to the controller to start the video recording function of the display device, and switches the display content of the intelligent terminal accordingly: the end user interface shown on the display of the intelligent terminal is switched to the video list interface. The video list interface is similar to the image material list interface on the display device and is used for displaying moving image materials and video highlight files. The moving image materials collected by the camera and the synthesized video highlight files are stored in the controller; the controller displays them on the image material list interface and simultaneously sends them to the processor of the intelligent terminal, which displays them on the video list interface. That is, the video list interface is consistent with what the image material list interface displays.
In some embodiments, if the user starts the video recording function of the display device for the first time, the camera has not yet collected any moving image material and no video highlight file has been generated, so no content is displayed in the video list interface. FIG. 14 is an interface schematic diagram of a video list interface in accordance with one or more embodiments of the present application. When the user starts the video recording function for the first time, the content displayed in the video list interface is as shown in fig. 14 (a): the prompt "pet not found yet" is displayed. In some embodiments, if the user starts the video recording function not for the first time but for the second or a subsequent time, the camera has previously collected moving image materials and video highlight files may have been generated; in that case, the historically collected moving image materials and the historically generated video highlight files are displayed in the video list interface, as shown in fig. 14 (b). The display order of the moving image materials and video highlight files in the video list interface may be the same as in the image material list interface and is not repeated here.
In some embodiments, the user may perform personalized operations, such as sharing and downloading, on the moving image materials and video highlight files displayed in the video list interface. To this end, when the user performs a personalized operation based on the video list interface, the processor is further configured to perform the following steps. Step 251: display personalized operation controls at the bottom of each moving image material and video highlight file shown in the video list interface. Step 252: in response to a personalized operation instruction generated by triggering a target personalized operation control, execute the target personalized operation corresponding to that control. In some embodiments, the personalized operation controls include a "share" control and a "download" control: the "share" control shares a given moving image material or video highlight file to another platform, and the "download" control downloads it to local storage. The processor executes the corresponding personalized operation according to the personalized operation instruction, for example sharing to another platform or downloading locally.
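The control dispatch in steps 251 and 252 can be sketched as a simple mapping from control identifiers to operations. This is a hedged illustration: the control names and return strings are placeholders, not the patent's actual API.

```python
def make_controls():
    """Hypothetical personalized operation controls shown under each list item."""
    return {
        # "share" sends the item to another platform (placeholder behavior).
        "share": lambda item: f"shared {item} to another platform",
        # "download" saves the item to local storage (placeholder behavior).
        "download": lambda item: f"downloaded {item} to local storage",
    }

def on_control_triggered(controls, control_id, item):
    # Step 252: execute the target personalized operation bound to the
    # triggered target personalized operation control.
    return controls[control_id](item)
```

Triggering the "share" control on a highlight file would then run only the sharing operation, leaving the other controls untouched.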
In some embodiments, to allow family members to perform personalized operations on the moving image materials and video highlights displayed in the video list interface through their own intelligent terminals, the intelligent terminal may grant other intelligent terminals the right to operate the video list interface. When the intelligent terminal authorizes other intelligent terminals to perform personalized operations on the video list interface, the processor is further configured to perform the following steps. Step 261: receive a device sharing instruction generated when the device sharing control is triggered, where the device sharing instruction is used to establish an association relationship with an opposite-end intelligent terminal. Step 262: send the device sharing instruction to at least one opposite-end intelligent terminal and establish an association relationship between the local-end intelligent terminal and the at least one opposite-end intelligent terminal, where the device sharing instruction instructs the opposite-end intelligent terminal to perform personalized operations on the video highlight file according to the association relationship. The terminal user interface provides a settings entry; when the user triggers it, a terminal settings interface is presented on the display of the intelligent terminal, and a device sharing control is configured in that interface. Triggering the device sharing control in the terminal settings interface generates the device sharing instruction. The processor of the local-end intelligent terminal sends the device sharing instruction to each opposite-end intelligent terminal that needs to perform personalized operations on the video list interface.
After receiving the device sharing instruction, the opposite-end intelligent terminal establishes an association relationship with the local-end intelligent terminal, which allows the opposite-end intelligent terminal to operate the video list interface.
The opposite-end user operates the opposite-end intelligent terminal through the same steps, and sees the same pages on the corresponding display, as the local-end user operating the local-end intelligent terminal; the only difference is that the opposite-end user cannot open or close the video recording function of the display device. If the local-end user has not started the video recording function of the display device, no content is displayed in the video list interface. In that case, if the opposite-end user triggers the video recording start entry on the terminal user interface of the opposite-end intelligent terminal, a prompt is displayed on the terminal user interface, for example "the administrator can view only by opening the function", where the administrator refers to the local-end user. After the local-end user starts the video recording function of the display device from the local-end intelligent terminal, the opposite-end user can perform personalized operations on the moving image materials and video highlight files on the opposite-end intelligent terminal.
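The association relationship of steps 261-262, together with the administrator-only restriction described above, can be sketched as follows. All names here are illustrative assumptions; the real terminals would exchange the device sharing instruction over a network rather than via direct object references.

```python
class Terminal:
    """Hypothetical intelligent terminal; is_admin marks the local-end terminal."""

    def __init__(self, is_admin=False):
        self.is_admin = is_admin
        self.peers = set()        # terminals associated via device sharing
        self.recording_on = False

    def share_with(self, peer):
        # Steps 261-262: the device sharing instruction establishes a mutual
        # association relationship between local-end and opposite-end terminals.
        self.peers.add(peer)
        peer.peers.add(self)

    def toggle_recording(self):
        # Only the local-end "administrator" may open or close the video
        # recording function; opposite-end terminals only see a prompt.
        if not self.is_admin:
            return "the administrator can view only by opening the function"
        self.recording_on = not self.recording_on
        return "ok"
```

With this sketch, a peer terminal granted access can browse and operate the shared list, but attempting to start recording from the peer side yields only the prompt string.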
As can be seen, some embodiments of the present application provide an intelligent terminal in which the display is configured to present a terminal user interface displaying a video recording start entry for starting the video recording function of the display device. The processor is coupled to the controller in the display device and is configured to: receive a video recording function call instruction generated when a designated application program is triggered, and present a terminal user interface displaying the video recording start entry on the display; and receive a video recording start instruction generated by triggering the video recording start entry and send it to the controller, where the video recording start instruction instructs the controller to call the camera in the display device to collect moving image materials of a moving object within a preset recording duration according to a preset acquisition frequency, and to splice and synthesize a specified number of moving image materials into a video highlight file recording the moving object within the preset recording duration. With the intelligent terminal provided by these embodiments, the user starts the video recording function of the display device through the intelligent terminal without having to operate the display device in real time; the display device collects moving image materials of the moving object and synthesizes a video highlight file, from which the user can learn how the moving object behaved during the preset recording duration, providing a good user experience.
Some embodiments of the present application further provide an interaction method for generating a video highlight file, applied to a display device, the method comprising: S11, receiving a video recording start instruction generated when the video recording start entry is triggered, where the video recording start instruction instructs the camera to acquire moving image materials of a moving object according to a preset acquisition frequency within a preset recording duration, the video recording start entry is used to start the function of recording moving images by video recording within the preset recording duration and generating a video highlight file, and the moving object is the target whose moving images are to be recorded within the preset recording duration to generate the video highlight file; S12, when the end time of the preset recording duration is reached, acquiring the plurality of moving image materials of the moving object collected by the camera within the preset recording duration; and S13, splicing and synthesizing a specified number of the moving image materials to generate a video highlight file recording the moving object within the preset recording duration.
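Steps S11-S13 on the display device side can be sketched schematically. This is a minimal sketch under stated assumptions: `camera_capture` is a placeholder for real camera acquisition, and list slicing stands in for the actual splicing and encoding of video materials.

```python
def record_highlight(camera_capture, duration_s, frequency_hz, clip_count):
    """Hypothetical S11-S13 flow: capture at a preset frequency, then splice."""
    # S11: acquire moving image materials of the moving object according to
    # the preset acquisition frequency within the preset recording duration.
    materials = [camera_capture(t / frequency_hz)
                 for t in range(int(duration_s * frequency_hz))]
    # S12: at the end of the preset recording duration, the collected
    # materials are available as a whole.
    # S13: splice and synthesize a specified number of the materials into
    # the video highlight file (here: take the first clip_count items).
    return {"highlight": materials[:clip_count]}
```

For example, a 2-second recording at 3 captures per second yields six materials, of which a specified number (say four) are spliced into the highlight.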
Some embodiments of the present application further provide an interaction method for generating a video highlight file, applied to an intelligent terminal, the method comprising: S21, receiving a video recording function call instruction generated when the designated application program is triggered; S22, in response to the video recording function call instruction, presenting on the display a terminal user interface displaying a video recording start entry, where the video recording start entry is used to start a video recording function for recording moving images by video recording and generating a video highlight file within a preset recording duration, the video recording function being configured in a controller of the display device; and S23, receiving a video recording start instruction generated by triggering the video recording start entry and sending it to the controller, where the video recording start instruction instructs the controller to call the camera in the display device to collect moving image materials of a moving object within the preset recording duration according to a preset acquisition frequency, and to splice and synthesize a specified number of the moving image materials into a video highlight file recording the moving object within the preset recording duration.
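The terminal-side steps S21-S23 reduce to showing the start entry and forwarding the start instruction. The sketch below is illustrative only; the message fields and the `ui`/`controller` interfaces are assumptions, not the patent's actual protocol.

```python
def terminal_flow(ui, controller):
    """Hypothetical S21-S23 flow on the intelligent terminal."""
    # S21/S22: the designated application was triggered, so present the
    # terminal user interface with the video recording start entry.
    ui.show("video recording start entry")
    # S23: forward the video recording start instruction to the display
    # device's controller, which then drives the camera at the preset
    # frequency and later splices the materials into the highlight file.
    controller.send({"cmd": "start_recording",
                     "frequency": "preset",
                     "duration": "preset"})
```

Note that the terminal itself neither captures nor splices anything; it only hands the instruction to the controller, which is why the user need not operate the display device in real time.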
The foregoing description has been presented, for purposes of explanation, in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the underlying principles and their practical application, thereby enabling others skilled in the art to best utilize the embodiments, with various modifications, as suited to the particular use contemplated.

Claims (15)

  1. A display device, comprising:
    a display for displaying a user interface;
    a controller configured to:
    when the target user is within the monitoring range of the display device, receiving a video clip acquired from a camera;
    and generating a video highlight file based on the acquired video clips, wherein the controller controls the user interface to play the video highlight file after receiving the confirmation operation.
  2. The display device of claim 1, wherein the controller is further configured to determine that a video highlight switch and a camera of the display device are in an on state before the controller receives the video clip acquired from the camera.
  3. The display device of claim 1, the controller configured to:
    the controller records video clips each having a duration not longer than a first preset duration within a preset time period;
    after the preset time period ends, or after the first power-on on the day following the preset time period, the controller generates a video highlight file based on the acquired plurality of video clips.
  4. A display device as claimed in claim 3, the controller being configured to:
    and a second preset duration after the first power-on on the day following the preset time period, the controller generates a video highlight file based on the acquired plurality of video clips.
  5. A display device as claimed in claim 3, the controller being configured to:
    the controller generates a video highlight file based on the acquired plurality of video clips, wherein the video highlight file comprises the earliest video clip and the latest video clip in the preset time period.
  6. The display device of claim 1, the controller configured to:
    when a user is within the monitoring range of the display device, the controller controls the camera to perform face detection;
    determining, based on the face detection, whether the user is younger than a preset age; if the user is younger than the preset age, determining the user to be the target user, and the controller receiving the video clip acquired from the camera; otherwise, the controller controls the camera to continue face detection.
  7. The display device of claim 1, further comprising a network attached memory,
    the controller is configured to:
    and storing the video highlight file in a network attached memory.
  8. The display device of claim 1, the controller configured to:
    and adding background music to the video highlight file, wherein the video clips are recorded by the camera without recording audio data.
  9. The display device of claim 1, the controller configured to:
    acquiring a preview picture acquired by the camera from an initial moment of presetting a recording duration;
    performing image recognition on the preview picture, and judging whether the moving target exists in the preview picture;
    and if the moving target exists in the preview picture, calling the camera to acquire moving image materials of the moving target according to a preset acquisition frequency.
  10. An intelligent terminal, comprising:
    a display configured to present an end user interface displaying a video recording initiation portal for initiating a video recording function for enabling recording of moving images by video recording and generating a video highlight file within a preset recording duration, the video recording function being configured in a controller of a display device;
    A processor coupled to a controller in the display device, the processor having a designated application configured therein, the processor configured to:
    receiving a video recording function calling instruction generated when the appointed application program is triggered;
    responsive to the video recording function call instruction, presenting an end user interface within the display displaying a video recording initiation portal;
    receiving a video recording starting instruction generated by triggering the video recording starting inlet, sending the video recording starting instruction to a controller, wherein the video recording starting instruction is used for instructing the controller to call a camera in a display device to collect moving image materials of a moving target within a preset recording duration according to a preset collection frequency, and splicing and synthesizing a specified number of the moving image materials to generate a video highlight file for recording the moving target within the preset recording duration.
  11. The intelligent terminal of claim 10, the controller further configured to:
    responsive to the video recording initiation instruction, presenting a video list interface in the display;
    and acquiring the moving image materials and the video highlight files stored in the controller, and displaying the moving image materials and the video highlight files in the video list interface.
  12. The intelligent terminal of claim 11, the controller further configured to:
    displaying personalized operation controls at the bottom of each moving image material and video highlight file displayed in the video list interface;
    and responding to a personalized operation instruction generated by triggering a target personalized operation control, and executing target personalized operation corresponding to the target personalized operation control.
  13. The intelligent terminal of claim 10, wherein the terminal user interface has a device sharing control presented therein; and, the controller is further configured to:
    receiving an equipment sharing instruction generated when the equipment sharing control is triggered, wherein the equipment sharing instruction is used for establishing an association relation with an opposite-end intelligent terminal;
    and sending the equipment sharing instruction to at least one opposite-end intelligent terminal, and establishing an association relation between the local-end intelligent terminal and the at least one opposite-end intelligent terminal, wherein the equipment sharing instruction is used for indicating the opposite-end intelligent terminal to execute personalized operation on the video highlight file according to the association relation.
  14. A method for generating a video highlight file comprises the following steps:
    recording video clips when a target user is in a monitoring range;
    and generating a video highlight file based on the acquired video clips, wherein the video highlight file is played in a user interface after a confirmation operation is received.
  15. The method of claim 14, comprising:
    receiving a video recording start instruction generated when the video recording start entry is triggered, wherein the video recording start instruction instructs a camera to acquire moving image materials of a moving object according to a preset acquisition frequency within a preset recording duration, the video recording start entry is used to start a function of recording moving images by video recording within the preset recording duration and generating a video highlight file, and the moving object is a target whose moving images are to be recorded by video recording within the preset recording duration to generate the video highlight file;
    when the ending time of the preset recording duration is reached, acquiring a plurality of moving image materials of the moving object acquired by the camera in the preset recording duration;
    and splicing and synthesizing the specified number of the moving image materials to generate a video highlight file for recording the moving object within a preset recording duration.
CN202180046688.5A 2020-07-06 2021-06-07 Display equipment, intelligent terminal and video gathering generation method Pending CN116391358A (en)

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
CN202010640381.5A CN111787379B (en) 2020-07-06 2020-07-06 Interactive method for generating video collection file, display device and intelligent terminal
CN2020106403815 2020-07-06
CN2020108501225 2020-08-21
CN202010850122 2020-08-21
CN202011148295.9A CN112351323A (en) 2020-08-21 2020-10-23 Display device and video collection file generation method
CN2020111483129 2020-10-23
CN202011148312.9A CN114079812A (en) 2020-08-21 2020-10-23 Display equipment and camera control method
CN202011149949.XA CN114079829A (en) 2020-08-21 2020-10-23 Display device and generation method of video collection file watermark
CN2020111482959 2020-10-23
CN202011149949X 2020-10-23
PCT/CN2021/098617 WO2022007568A1 (en) 2020-07-06 2021-06-07 Display device, smart terminal, and video highlight generation method

Publications (1)

Publication Number Publication Date
CN116391358A true CN116391358A (en) 2023-07-04

Family

ID=79552253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180046688.5A Pending CN116391358A (en) 2020-07-06 2021-06-07 Display equipment, intelligent terminal and video gathering generation method

Country Status (2)

Country Link
CN (1) CN116391358A (en)
WO (1) WO2022007568A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8943140B1 (en) * 2014-03-26 2015-01-27 Ankit Dilip Kothari Assign photographers on an event invite and automate requesting, uploading, and sharing of photos and videos for an event
CN105120191A (en) * 2015-07-31 2015-12-02 小米科技有限责任公司 Video recording method and device
CN110557565B (en) * 2019-08-30 2022-06-17 维沃移动通信有限公司 Video processing method and mobile terminal
CN110602546A (en) * 2019-09-06 2019-12-20 Oppo广东移动通信有限公司 Video generation method, terminal and computer-readable storage medium
CN111163274B (en) * 2020-01-21 2022-04-22 海信视像科技股份有限公司 Video recording method and display equipment
CN111787379B (en) * 2020-07-06 2022-06-14 海信视像科技股份有限公司 Interactive method for generating video collection file, display device and intelligent terminal
CN114079812A (en) * 2020-08-21 2022-02-22 海信视像科技股份有限公司 Display equipment and camera control method

Also Published As

Publication number Publication date
WO2022007568A1 (en) 2022-01-13

Similar Documents

Publication Publication Date Title
CN111787379B (en) Interactive method for generating video collection file, display device and intelligent terminal
JP5337257B2 (en) TV tutorial widget
CN111464844A (en) Screen projection display method and display equipment
CN111327931B (en) Viewing history display method and display device
CN112351323A (en) Display device and video collection file generation method
WO2022007545A1 (en) Video collection generation method and display device
CN111836109A (en) Display device, server and method for automatically updating column frame
CN113938731A (en) Screen recording method and display device
WO2021169168A1 (en) Video file preview method and display device
CN114095776A (en) Screen recording method and electronic equipment
CN112506859B (en) Method for maintaining hard disk data and display device
CN116391358A (en) Display equipment, intelligent terminal and video gathering generation method
CN102881303A (en) Play method and play device of multimedia file
CN114116622A (en) Display device and file display method
CN111314414B (en) Data transmission method, device and system
CN114915810A (en) Media asset pushing method and intelligent terminal
WO2023130965A1 (en) Display device, and audio and video data playing method
US20230017626A1 (en) Display apparatus
CN115086771B (en) Video recommendation media asset display method, display equipment and server
CN113573115B (en) Method for determining search characters and display device
CN114915818B (en) Media resource pushing method and intelligent terminal
WO2023093108A1 (en) Display device and file presentation method
CN116939279A (en) Video file storage method and display device
CN116801031A (en) Program recording method and display equipment
CN117812210A (en) Display equipment and program recording method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination