WO2022007545A1 - Video collection generation method and display device - Google Patents

Video collection generation method and display device

Info

Publication number
WO2022007545A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
collection
controller
camera
file
Application number
PCT/CN2021/097699
Other languages
English (en)
Chinese (zh)
Inventor
王光强
王学磊
Original Assignee
聚好看科技股份有限公司
Priority claimed from CN202010640381.5A (CN111787379B)
Priority claimed from CN202010710550.8A (CN113973216A)
Application filed by 聚好看科技股份有限公司
Publication of WO2022007545A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41: Structure of client; Structure of client peripherals
    • H04N 21/422: Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]

Definitions

  • the present application relates to the technical field of smart devices and video detection, and in particular, to a method for generating a video collection and a display device.
  • A video highlight is a compilation of different videos or video clips with similar content into one video. For example, a video highlight can be obtained by splicing activity videos from different time periods.
  • In the related art, users are required to use professional photography tools to obtain video clips, watch and filter each video, and then manually cut and stitch them into a video collection.
  • Some embodiments of the present application provide a display device, including: a camera; a microphone; a display screen for displaying a user interface; and a first controller configured to: in response to a received trigger signal, control the camera to acquire video clips; perform image recognition on the video clips within a preset time period to obtain a first video set, which is a set of video clips containing the same elements; and splice the video clips in the first video set into a first video collection file.
  • Some embodiments of the present application provide a method for generating a video collection, the method including: in response to a received trigger signal, controlling the camera to acquire video clips; performing image recognition on the video clips within a preset time period to obtain a first video set, where the first video set is a set of video clips containing the same elements; and splicing the video clips in the first video set into a first video highlight file.
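The recognize/group/splice pipeline above can be sketched in a few lines of Python. The clip labels and the `splice` stand-in are illustrative assumptions, not the claimed implementation; in the claims, "element" is whatever image recognition identifies in a clip, and splicing produces one highlight file.

```python
from collections import defaultdict

def build_video_sets(clips):
    """Group recognized clips into video sets.

    `clips` is a list of (clip_id, element) pairs, where `element` is a
    label assumed to come from image recognition (e.g. "pet").  Each
    video set collects the clips containing the same element.
    """
    sets = defaultdict(list)
    for clip_id, element in clips:
        sets[element].append(clip_id)
    return dict(sets)

def splice(video_set):
    """Placeholder for concatenating clips into one highlight file;
    here it simply returns the clip ids in capture order."""
    return list(video_set)

# Clips recognized within the preset time period (illustrative data).
clips = [("c1", "pet"), ("c2", "baby"), ("c3", "pet")]
video_sets = build_video_sets(clips)
first_highlight = splice(video_sets["pet"])
```

A real device would store file handles rather than ids and hand the set to a media muxer, but the grouping step is the same.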
  • Some embodiments of the present application provide a display device, including: a camera; a microphone; a display screen for displaying a user interface; and a first controller configured to: in response to a received trigger signal, control the camera to acquire video clips containing a shooting target within a preset time period to obtain a first video collection; and splice the video clips in the first video collection to generate a first video highlight.
  • Some embodiments of the present application provide a method for generating a video highlight, the method comprising: in response to a received trigger signal, controlling the camera to acquire video clips containing a shooting target within a preset time period to obtain a first video collection; and splicing the video clips in the first video collection to generate a first video highlight.
  • FIG. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus in some embodiments;
  • FIG. 2 is a structural block diagram of a display device in some embodiments;
  • FIG. 3 is a schematic interface diagram of a user interface in some embodiments;
  • FIG. 4 is a schematic interface diagram of a function effect display interface in some embodiments;
  • FIG. 5 is a schematic interface diagram of a video recording function startup page in some embodiments;
  • FIG. 6 is a first schematic diagram of an image material list interface in some embodiments;
  • FIG. 7 is a second schematic diagram of an image material list interface in some embodiments;
  • FIG. 8 is a schematic diagram of a TV application interface in some embodiments;
  • FIG. 9 is a schematic diagram of an interface for selecting a video highlight application subprogram in some embodiments;
  • FIG. 10 is a schematic diagram of an interface after a video collection is generated in some embodiments;
  • FIG. 11 is another schematic diagram of an interface after a video collection is generated in some embodiments;
  • FIG. 12 is a schematic diagram of an interface for playing a first video collection in some embodiments;
  • FIG. 13 is a schematic flowchart of a method for generating a video collection in some embodiments;
  • FIG. 15 is a schematic diagram of displaying a watermark in some embodiments;
  • FIG. 16 is a schematic diagram of an interactive interface for sharing to a circle of relatives and friends in some embodiments;
  • FIG. 17 is a schematic diagram of an interactive interface for deleting a selected record in some embodiments;
  • FIG. 18 is a schematic diagram of an interactive interface for clearing all records in some embodiments;
  • FIG. 19 is a schematic diagram of an interactive interface for closing this function in some embodiments;
  • FIG. 20 is a schematic diagram of an interface for displaying a video highlight file in an image material list interface in some embodiments;
  • FIG. 21 is a schematic diagram of an interface for displaying push notifications in some embodiments;
  • FIG. 22 is a schematic interface diagram of a user interface in some embodiments;
  • FIG. 23 is a schematic interface diagram of a video list interface in some embodiments;
  • FIG. 24 is a schematic interface diagram of a setting interface in some embodiments;
  • FIG. 25 is a schematic interface diagram of an operation interface in some embodiments.
  • The term "gesture" used in this application refers to a user behavior by which the user expresses an intended thought, action, purpose, or result through an action such as a change of hand shape or a hand movement.
  • FIG. 1 exemplarily shows a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment.
  • a user may operate the display apparatus 200 through the mobile terminal 300 and the control apparatus 100 .
  • The control apparatus 100 may be a remote control that controls the display device 200 through infrared protocol communication, Bluetooth protocol communication, other short-distance communication methods, or wired methods.
  • the user can control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, or the like.
  • The user can control functions of the display device 200 by inputting corresponding control commands through keys on the remote control, such as the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, and power on/off key.
  • mobile terminals, tablet computers, computers, notebook computers, and other smart devices may also be used to control the display device 200 .
  • the display device 200 is controlled using an application running on the smart device.
  • the app can be configured to provide users with various controls in an intuitive user interface (UI) on the screen associated with the smart device.
  • The mobile terminal 300 may have a software application installed that matches the display device 200 and establishes communication through a network communication protocol, achieving one-to-one control operation and data communication.
  • a control command protocol can be established between the mobile terminal 300 and the display device 200
  • the remote control keyboard can be synchronized to the mobile terminal 300
  • the function of controlling the display device 200 can be realized by controlling the user interface on the mobile terminal 300.
  • the audio and video content displayed on the mobile terminal 300 may also be transmitted to the display device 200 to implement a synchronous display function.
  • the display device 200 also performs data communication with the server 400 through various communication methods.
  • the display device 200 may be allowed to communicate via local area network (LAN), wireless local area network (WLAN), and other networks.
  • the server 400 may provide various contents and interactions to the display device 200 .
  • The display device 200 interacts by sending and receiving information and electronic program guide (EPG) data, receiving software updates, or accessing a remotely stored digital media library.
  • The server 400 may be a single server, multiple server groups, or one or more types of servers.
  • Other network service contents such as video-on-demand and advertising services are provided through the server 400 .
  • the display device 200 may be a liquid crystal display, an OLED display, or a projection display device.
  • the specific display device type, size and resolution are not limited. Those skilled in the art can understand that the display device 200 can make some changes in performance and configuration as required.
  • the display device 200 may additionally provide an intelligent network television function that provides a computer-supported function in addition to the function of broadcasting and receiving television. Examples include Internet TV, Smart TV, Internet Protocol TV (IPTV), and the like.
  • Some embodiments of the present application may be applied to various types of display devices (including but not limited to: smart TVs, set-top boxes, and other devices).
  • the technical solution will be described below with regard to the relevant UI for generating video highlights on the TV.
  • One usage scenario in which the display device uses the camera is, for example, recording the daily life of pets or people by video recording.
  • When an existing display device records video of a pet, the user usually needs to operate the display device in real time to call the camera for image acquisition before a video file can be obtained. Therefore, so that the display device can still call the camera to record the pet's life while the user is away from home during the day, a video recording function can be configured in the display device, which enriches the usage scenarios in which the display device uses the camera.
  • the display device can call the camera in real time to collect moving image material of the pet.
  • the process of invoking the camera to collect images by the display device in real time does not require real-time operation by the user. As long as the display device is powered on and connected to the Internet, the display device can automatically realize the process.
  • the display device can also generate a video highlight file based on multiple moving image materials collected by the camera in one day, and push it to the user, so as to prevent the user from missing the wonderful life of pets after leaving home during the day.
  • the video recording function can also record other sports targets, such as the user's baby at home.
  • When the video recording function is activated on the display device, as long as a moving object such as a pet or baby appears in the shooting area of the camera, the camera collects moving image materials, and finally a video collection file is generated from the multiple moving image materials collected in one day and pushed to the user for viewing the next time the user turns on the display device. The user can learn about the daily life of the pet or baby at home from the video collection file, which provides a good experience.
  • FIG. 2 exemplarily shows a structural block diagram of a display device according to some embodiments.
  • A display device 200 includes a display 275 and a controller 250, and may also include a camera 232.
  • The display is configured to present a user interface displaying a video recording start entry for enabling the function of recording moving images and generating a video highlight file within a preset recording duration; the camera is configured to collect images within the preset recording duration.
  • the camera 232 may be built-in to the display device, or may be configured by the user later.
  • the controller is respectively connected with the camera and the display, and the controller is used to start the video recording function configured in the display device, and call the camera to collect the moving image material of the moving target, and finally generate a video collection file.
  • an interactive method for generating a video collection file is provided, which can instruct the user how to start the video recording function and how to perform an interactive operation on the video collection file.
  • a target application program is configured in the display device, specifically, the target application program is configured in the controller, and the target application program is displayed on the system home page interface of the display device.
  • the target application is used to provide the entry to start the video recording function.
  • the controller is further configured to:
  • Step 011A In response to the device startup instruction generated when the user starts the display device, a system home page interface is presented in the display, and the target application program is displayed on the system home page interface.
  • Step 012A In response to the activating instruction generated when the target application is triggered, present a user interface on the display, and the user interface displays a video recording startup entry.
  • the controller can call up the system home page interface according to the device startup instruction, and display it on the display, and the target application program is displayed on the system home page interface.
  • The user triggers the target application through the remote control or by voice to generate a call-up instruction, and the controller calls up the user interface according to the instruction and displays it on the display. At this time, the content shown on the display switches from the system home page interface to the user interface.
  • the user interface is the main page of the target application.
  • the video recording startup entry is displayed in the user interface.
  • FIG. 3 exemplarily shows an interface diagram of a user interface according to some embodiments.
  • The target application is "Xiaojuan at home".
  • The user interface displayed after triggering "Xiaojuan at home" shows a video recording start entry, such as "A Day of My Pet" displayed in the upper right corner of the user interface. Triggering "A Day of My Pet" turns on the video recording function of the display device.
  • A video recording start entry can also be added to the system home page interface in the form of a shortcut, so that after turning on the display device, the user can quickly enable the video recording function through the entry on the system home page interface.
  • the user triggers the video recording start entry on the user interface or the system home page interface to generate a video recording start instruction, so that the controller enables the video recording function of the display device, that is, the camera is called to start capturing images.
  • To generate the video recording start instruction when the video recording start entry is triggered, the controller is further configured to:
  • Step 021A In response to the start-up instruction generated when the video recording start-up entry is triggered, a function effect display interface is presented on the display, and a confirmation start entry is displayed on the function effect display interface.
  • Step 022A In response to the confirmation instruction generated when the confirmation start entry is triggered, cancel the display of the function effect display interface on the display and generate a video recording start instruction.
  • a function effect display interface can be displayed on the display when the video recording startup entry is triggered.
  • the content displayed on the function effect display interface includes the function introduction and the effect display that can be obtained.
  • FIG. 4 exemplarily shows a schematic interface diagram of a function effect display interface according to some embodiments.
  • the pages shown in (a) and (b) of FIG. 4 are exemplary display interfaces of the function effect display interface, and the user can have a certain understanding of the video recording function according to the content displayed on the function effect display interface.
  • the user triggers the video recording start entry on the user interface or the system homepage interface through the remote control, and a start command will be generated.
  • the controller can call up the function effect display interface shown in FIG. 4 according to the start instruction and display it on the display. At this time, the content displayed on the display is switched from the display user interface to the function effect display interface.
  • a confirmation start entry may be displayed on the last interface of the function effect display interface, such as the "enable" control shown in (b) of FIG. 4 .
  • the user triggers the confirmation start entry through the remote control, and a confirmation instruction will be generated, indicating that the user wants to start the video recording function.
  • the controller cancels the display of the function effect display interface on the display according to the confirmation instruction, and generates a video recording start instruction to enable the video recording function of the display device.
  • If the user activates the video recording function of the display device for the second or a subsequent time (that is, the user has already activated it once and then turned it off), then when the user triggers the video recording start entry on the user interface, the function effect display interface need not be shown again; instead, the video recording function start interface is displayed directly.
  • FIG. 5 exemplarily shows a schematic interface diagram of a video recording function startup page according to some embodiments.
  • The user triggers the video recording start entry in the user interface shown in FIG. 3 to generate a start instruction, and the controller calls up the video recording function startup interface according to the instruction and displays it on the display. At this time, the content shown on the display switches from the user interface shown in FIG. 3 to the video recording function startup interface shown in FIG. 5.
  • a confirmation startup entry is displayed on the video recording function startup interface, such as the “ON” control shown in FIG. 5 .
  • When the user triggers the confirmation startup entry, a confirmation instruction is generated.
  • the controller cancels the display of the video recording function startup interface on the display according to the confirmation instruction, and generates a video recording startup instruction to enable the video recording function of the display device.
  • an image material list interface is displayed on the display, that is, the controller calls up the image material list interface and displays it on the display in response to the confirmation instruction.
  • The image material list interface is used to display moving image materials and video highlight files.
  • the content displayed on the display is switched from displaying the function effect display interface shown in FIG. 4 (or the video recording function startup interface shown in FIG. 5 ) to the image material list interface.
  • When the user starts the video recording function of the display device for the first time, the camera has not yet captured any moving image material and no video collection file has been generated; therefore, nothing is displayed in the image material list interface.
  • FIG. 6 exemplarily shows a first interface diagram of an image material list interface according to some embodiments.
  • the content displayed in the image material list interface is as shown in (a) of FIG. 6 .
  • the prompt content "No pet has been found” is displayed in the image material list interface, and the operation prompt item "Press the menu button to close the function or share and delete the video" is displayed in the upper right corner of the image material list interface.
  • If the menu key is triggered at this time, only the close function is available.
  • the user triggers the menu key configured on the remote control to call up the floating layer, as shown in (b) in Figure 6, "Close this function”.
  • the user can then trigger the confirm button on the remote control to execute "close this function” to close the video recording function.
  • the user starts the video recording function of the display device for the second or subsequent times instead of starting it for the first time, it means that the camera has collected moving image material, and may also have generated a video highlight file.
  • In this case, the image material list interface displays the moving image material collected by the camera and the video collection files historically generated by the controller.
  • FIG. 7 exemplarily shows a second interface diagram of an image material list interface according to some embodiments.
  • the content displayed in the image material list interface is as shown in FIG. 7 .
  • the upper right corner of the image material list interface presents the operation prompt item "Press the menu key to close the function or share and delete the video".
  • After the above operations, the video recording function of the display device is turned on. That is, the display device can actively call the camera in real time to collect moving image material of the moving target without real-time operation by the user, and after completing the video recording for the day, generate a video collection file from the multiple moving image materials.
  • The file is a record of the daily life of the moving target.
  • the controller of the display device when executing the interactive method for generating a video highlight file, is configured to perform the following steps:
  • S11A Receive a video recording start instruction generated when a video recording start entry is triggered, where the video recording start instruction is used to instruct the camera to collect moving image material of a moving target according to a preset collection frequency within a preset recording duration.
  • the user triggers the video recording start entry on the user interface or the system home page interface, and the video recording start command can be generated.
  • The moving image material includes moving pictures or moving videos.
  • the preset recording duration is the time period during which the controller automatically calls the camera to collect moving image materials.
  • the preset recording duration may be set to 6:00-18:00, that is, the controller invokes the camera to capture the moving image material within the time period of 6:00-18:00. In other embodiments, the preset recording duration may also be set to other time periods according to the actual application situation of the user.
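The window check implied above (invoke the camera only inside the preset recording duration) is straightforward to sketch. The 6:00-18:00 bounds come from the example in the text; a real controller would read them from settings rather than constants.

```python
from datetime import time

# Assumed window from the example: collection runs 6:00-18:00 daily.
RECORDING_START = time(6, 0)
RECORDING_END = time(18, 0)

def within_recording_duration(now):
    """Return True if the controller may invoke the camera at `now`,
    i.e. `now` falls inside the preset recording duration."""
    return RECORDING_START <= now <= RECORDING_END
```

The controller would evaluate this each time it considers issuing a camera calling instruction, and issue the stop-calling instruction once the check first fails after the end time.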
  • When the camera collects moving image material of the moving target, if the moving target holds one posture and does not move within the shooting range of the camera, the moving image materials collected by the camera have the same content. To prevent the camera from collecting identical material for a long time and occupying too much storage space, in some embodiments the camera may be controlled to collect the moving image material of the moving target according to a preset collection frequency.
  • the camera is further configured to perform the following steps when performing acquisition of moving image material of a moving target according to a preset acquisition frequency:
  • After the controller receives the video recording start instruction, it can call the camera for image acquisition. To this end, the controller generates a camera calling instruction and sends it to the camera, and the camera starts and begins collecting moving image material of the moving target according to the instruction.
  • If the camera is occupied, the controller cannot call it to collect the moving image material of the moving target. Therefore, the controller first needs to determine whether the camera can be called, and sends the camera calling instruction to the camera only if it can.
  • The controller is further configured to perform the following steps before sending the camera calling instruction to the camera, that is, when generating the camera calling instruction in response to the video recording start instruction:
  • Step 1111A Acquire the current operating state of the camera in response to the video recording start instruction.
  • Step 1112A If the current operating state of the camera is "not called", generate a camera calling instruction, where the camera calling instruction is used to call the camera to collect moving image materials.
  • The controller obtains the current operating state of the camera according to the video recording start instruction to determine whether the camera can be called. When it determines that the camera is not being called, the camera can be called to collect moving image material of the moving target.
  • the controller generates a camera call instruction and sends it to the camera to start the camera and call the camera to collect moving image materials.
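The occupancy check in steps 1111A-1112A can be sketched as a small guard function. The state string and the instruction dictionary are illustrative assumptions; the text only specifies that the controller checks whether the camera is currently being called before generating the instruction.

```python
def generate_camera_call_instruction(camera_state):
    """Return a camera calling instruction only when the camera is free.

    `camera_state` is a hypothetical state label; the instruction is
    modeled as a dict purely for illustration.
    """
    if camera_state == "not_called":
        return {"type": "camera_call", "task": "collect_moving_image_material"}
    return None  # camera occupied by another application; do not call it
```

On a real device the state would come from the camera service (e.g. whether another application holds the device), not from a string passed by the caller.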
  • After the video recording function is activated, as long as the camera is not occupied by another application, the controller calls the camera to continuously collect moving image material of the moving target.
  • a moving object refers to a non-static object in the image captured by the camera.
  • The moving target may be, for example, a pet, a person, or another movable object.
  • a moving image refers to an image containing a moving object.
  • S112A In response to the camera calling instruction, from the initial moment of the preset recording duration, collect the moving image material of the moving target according to the preset collection frequency.
  • After the camera receives the camera calling instruction sent by the controller, it executes the instruction every day to collect moving image material of the moving target. The moment at which the camera starts collecting image material is therefore the initial moment of the preset recording duration.
  • the camera collects the moving image material of the moving target according to the preset collection frequency every day from the initial moment of the preset recording duration, until the end time of the preset recording duration is reached. For example, the camera starts to collect moving image material of the moving target at 6:00 every day, and ends the collection at 18:00.
  • When the end time of the preset recording duration is reached, the controller generates a camera stop-calling instruction and sends it to the camera to end the collection of moving image material of the moving target.
  • After the camera is called, it does not collect images continuously; instead, it starts to collect moving image material only after the moving target appears within its shooting range, so as to prevent the camera from collecting useless image material and wasting resources.
  • When collecting the moving image material of the moving target according to the preset collection frequency from the initial moment of the preset recording duration, the camera is further configured to perform the following steps:
  • Step 1121A Acquire a preview image captured by the camera from the initial moment of the preset recording duration.
  • Step 1122A Perform image recognition on the preview image to determine whether there is a moving object in the preview image.
  • Step 1123A If there is a moving object in the preview image, call the camera to collect the moving image material of the moving object according to the preset collection frequency.
  • After the camera is called, from the initial moment of the preset recording duration, the camera captures preview images in real time. Since preview images are not recorded as video, they do not occupy storage space.
  • the controller performs image recognition on the preview image collected by the camera in real time, and determines whether the moving target appears in the shooting area of the camera according to the image recognition result. Only when the moving target appears in the shooting area of the camera, will the camera be called to collect the moving image material of the moving target.
  • the controller can send a control command to the camera.
  • the camera receives the control command and can collect the moving image material of the moving target according to the preset collection frequency.
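The preview-image recognition in steps 1121A-1123A can be approximated by a toy frame-differencing check. This is only an assumption for illustration: the text does not specify the recognition method, and on-device image recognition would be far more sophisticated than pixel comparison.

```python
def has_moving_object(prev_frame, frame, min_changed=3):
    """Report motion when at least `min_changed` pixels differ between
    two consecutive preview frames. Frames are modeled as flat
    grayscale lists; both the representation and the threshold are
    illustrative assumptions, not the patented recognition scheme."""
    changed = sum(1 for a, b in zip(prev_frame, frame) if a != b)
    return changed >= min_changed

# The controller polls preview frames and would send the control
# command to start collection only when motion is detected.
idle = [0] * 10
pet_enters = [0, 9, 9, 9, 0, 0, 0, 0, 0, 0]
start_collection = has_moving_object(idle, pet_enters)
```

Only when `start_collection` is true would the controller issue the command that makes the camera collect moving image material at the preset collection frequency.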
  • The collection process may be performed only once each time the moving object appears in the shooting area of the camera. That is, when executing the step of calling the camera to collect the moving image material of the moving object according to the preset collection frequency, the controller is further configured to: when there is a moving object in the preview image, call the camera to perform one collection of the moving image material of the moving object.
  • If the controller determines through image recognition that there is a moving target in the camera's shooting area, it calls the camera to collect only one piece of moving image material of the moving target. If the moving target exits the shooting area and then enters it again, the controller calls the camera again to collect another piece of moving image material of the moving target.
  • For example, when the moving target first appears, the camera is called to collect the first piece of moving image material, and the moving target exits the camera's shooting area at 9:10; when the moving target enters the shooting area again at 10:23, the camera is called again to collect the second piece of moving image material, and the moving target exits the shooting area at 10:40. Between the first appearance and 9:10, the camera collects only one piece of moving image material, and between 10:23 and 10:40 it likewise collects only one piece.
  • the acquisition process of the camera is analogous and will not be repeated here.
  • the preset collection frequency includes a preset collection duration and a preset collection interval; then, the camera is further configured to:
  • Step 11231A From the moment when it is determined that there is a moving object in the preview screen, collect the first piece of moving image material of the moving object according to the preset collection duration.
  • Step 11232A After the preset collection interval has elapsed, collect the second segment of moving image material of the moving target according to the preset collection time.
  • From the moment it is determined that there is a moving object in the preview screen, the first piece of moving image material of the moving object starts to be collected, and the duration of the first piece of moving image material is the preset collection duration.
  • In order to avoid the camera repeatedly collecting a moving target whose motion state has not changed within a short time, the camera is controlled to wait for a period of time after completing the collection of one piece of moving image material before collecting the next piece. That is, after collecting the first piece of moving image material, the camera waits for the preset collection interval and then collects the second piece of moving image material of the moving target; the duration of the second piece is also the preset collection duration.
  • "First" and "second" only distinguish two adjacent collections by the camera, and do not limit the number of pieces of moving image material.
  • the preset collection duration may be set to 5s, and the preset collection interval duration may be set to 2s. That is to say, when the moving target is located in the shooting area of the camera, the moving image material collected by the camera is 5s long, and the interval between two adjacent collections is 2s.
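With the example values above (5 s collection duration, 2 s interval), the capture timeline while the target stays in frame can be sketched as follows. This is a simplified model in seconds; the function shape and its behavior of dropping a partial segment at the window edge are assumptions:

```python
def collection_schedule(presence_start, presence_end, collect_duration=5, interval=2):
    """Return (start, end) times of the moving-image segments the camera would
    collect while the target remains in the shooting area, using the preset
    collection duration and interval (5 s and 2 s here, as in the example)."""
    segments = []
    t = presence_start
    while t + collect_duration <= presence_end:
        segments.append((t, t + collect_duration))
        t += collect_duration + interval  # wait the preset interval before the next capture
    return segments
```

For a target present from second 0 to second 20, this yields segments (0, 5), (7, 12), and (14, 19): each 5 s long, with 2 s gaps between adjacent collections.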
  • the controller performs image recognition on the image captured by the camera again. If it is recognized that there is no moving object in the shooting area of the camera, the camera is controlled to stop collecting moving images.
  • the moving target may appear in the shooting area of the camera many times and exhibit various motion states there, so the camera may collect multiple pieces of moving image material according to the preset collection frequency.
  • After the display device starts the video recording function, as long as the camera is not occupied by other applications, the camera will always be in the state of collecting image materials.
  • the moment when the camera stops capturing the moving image material of the moving target is the end moment of the preset recording duration. For example, when the current time reaches 18:00, the controller generates a stop control instruction and sends it to the camera to control the camera to stop capturing the moving image material of the moving target.
  • the camera may collect several moving image materials, and the controller may acquire several moving image materials to synthesize the video highlight file.
  • the specified number may be set to 6 segments, and the 6 segments of moving image materials are synthesized in the order of acquisition time. For example, if the duration of each moving image material can be 5s, the total duration of the video highlight file synthesized from 6 segments of the moving image material is 30s.
  • Each moving image material includes the movement of the moving object, and the video collection file records the movement of the moving object within the preset recording duration.
  • users can watch the wonderful life of a moving target, such as a pet or a baby, within a preset recording time.
  • Within the preset recording duration, the camera may collect a large number of pieces of moving image material, or only a few. Therefore, to ensure that the controller can generate a short video meeting the duration requirement when generating the video highlight file, it can first be judged whether the total number of moving image materials collected by the camera within the current preset recording duration meets the requirement of the specified number.
  • the controller, when splicing and synthesizing a specified number of moving image materials to generate a video collection file that records the moving object within the preset recording duration, is further configured to perform the following steps:
  • S131A Acquire the total number of moving image materials collected by the camera within a preset recording time period.
  • S134A splicing and synthesizing a specified number of target moving image materials in a chronological order to generate a video collection file that records the moving target within a preset recording duration.
  • the controller can count the total number of moving image materials collected by the camera.
  • the total duration of the synthesized video collection file is about 30s, which meets the duration requirement of short videos; that is, at most 6 moving image materials are required for synthesis. If the total number of materials exceeds 6, then 6 suitable moving image materials are selected from the total for synthesis.
  • the controller may select a specified number of target moving image materials in the moving image material set corresponding to the total number of materials.
  • the specified number of target moving image materials are then spliced and synthesized in time sequence to generate a video highlight file that records the moving target within the preset recording duration.
  • For example, if the specified number is set to 6 segments and the total number of moving image materials is 5 segments, the 5 segments can be directly spliced and synthesized to obtain the video collection file; if the total number of moving image materials is 10 segments, suitable target moving image materials need to be selected from the 10 segments, and the 6 selected target moving image materials are then spliced and synthesized to obtain the video highlight file.
  • When selecting, the time span between the selected target moving image materials should be as large as possible. For example, if the camera captures moving image materials in the first, middle, and last time periods of the preset recording duration, then when selecting the target moving image materials it is necessary to ensure that materials from each of the first, middle, and last time periods are selected, so that the synthesized video highlight file records a richer range of the moving object's activity.
  • When the controller performs the step of selecting a specified number of target moving image materials from the moving image material set if the total number of materials is greater than the specified number, it is further configured to perform the following steps:
  • Step 1331A Acquire the start time of acquisition of each piece of moving image material.
  • Step 1332A Divide the moving image materials into multiple material collection periods according to the start acquisition time corresponding to each piece of moving image material; each material collection period includes multiple pieces of moving image material.
  • Step 1333A Select a target number of moving image materials in each material collection period, respectively, as the target moving image material, and the sum of the respectively selected target numbers in each material collection period is equal to the specified number.
  • When the camera is collecting moving image material, it marks each piece of moving image material with its start acquisition time. Therefore, the time period in which each piece of moving image material was collected can be determined, that is, the first, middle, or last time period of the preset recording duration.
  • the former period may refer to the morning period
  • the middle period may refer to the noon period
  • the latter period may refer to the afternoon period.
  • For different preset recording durations, the division of the time periods will also differ, and the number of divisions may likewise be determined according to the actual situation.
  • After each piece of moving image material is marked with the time period to which it belongs, the materials can be divided into multiple material collection periods according to their start acquisition times; for example, the material collection periods are the morning period, the noon period, and the afternoon period. Each material collection period will include at least one piece of moving image material.
  • For example, n1 moving image materials are selected in the morning period, n2 in the noon period, and n3 in the afternoon period, where it is necessary to ensure that n1 + n2 + n3 equals the specified number. The selected materials are then spliced and synthesized in chronological order of their start acquisition times to obtain the video highlight file.
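The period-based selection can be sketched as follows. The period boundaries (morning before 12:00, noon 12:00-14:00, afternoon after 14:00) and the round-robin picking policy are assumptions for illustration; the sketch only guarantees the stated property that n1 + n2 + n3 equals the specified number:

```python
def select_target_materials(start_times, specified=6):
    """Pick `specified` materials spread across morning, noon, and afternoon
    collection periods, then return them in chronological order.

    `start_times` are start acquisition times in hours. If there are no more
    materials than the specified number, all of them are used directly.
    """
    if len(start_times) <= specified:
        return sorted(start_times)
    periods = {"morning": [], "noon": [], "afternoon": []}
    for t in sorted(start_times):
        if t < 12:
            periods["morning"].append(t)
        elif t < 14:
            periods["noon"].append(t)
        else:
            periods["afternoon"].append(t)
    chosen = []
    # Round-robin over the periods so every period contributes materials
    # and the per-period counts sum to the specified number.
    while len(chosen) < specified:
        for bucket in periods.values():
            if bucket and len(chosen) < specified:
                chosen.append(bucket.pop(0))
    return sorted(chosen)
```

With 10 materials spread over the day and a specified number of 6, the selection takes materials from all three periods rather than only the earliest six, matching the goal of a wide time span between selected materials.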
  • background music may be automatically superimposed when synthesizing the video collection file, and a start-up watermark may be added to the video collection file.
  • In order to improve the viewing effect of the video highlight file, the controller is further configured to:
  • S142A Acquire a preset audio file and a start-up watermark, where the start-up watermark is used to identify product information of the display device.
  • the start-up watermark can be added in the lower right corner of the screen, and the actual addition position can be set according to the user's preference.
  • the start-up watermark is used to identify the product information of the display device.
  • FIGS. 8-12 show schematic diagrams of operation interfaces of video highlights in a TV according to some embodiments of the present application.
  • FIG. 8 shows a schematic diagram of a possible TV application interface in some embodiments of the present application.
  • this function may be performed by a separate application, or may be a function integrated in a system application.
  • the figure shows the application UI interface of the TV displayed on the display screen.
  • the application UI interface includes 4 applications installed on the TV, namely news headlines, theater on-demand, video highlights, K-song, and so on. Different applications or other function buttons can be selected by moving the focus on the display using a controller such as a remote control.
  • the TV display screen is configured to display other interactive elements while displaying the application UI interface; the interactive elements may include, for example, TV homepage controls, search controls, message button controls, mailbox controls, browsing controls, favorites controls, signal bar controls, etc.
  • the first controller of the display device in some embodiments of the present application controls the UI of the TV in response to the operation of the interactive element. For example, when a user clicks a search control through a controller such as a remote control, the search UI can be displayed on top of other UIs, that is, the UI of the application component that controls the mapping of interactive elements can be enlarged, or run and displayed in full screen.
  • the interactive element may also be operable by a sensor, which may be, but is not limited to, an acoustic input sensor, such as a microphone, that can detect voice commands that include an indication of the desired interactive element.
  • the user may identify the desired interactive element using "video highlights" or any other suitable identification, such as a search control, and may also describe the desired action to be performed in relation to the desired interactive element.
  • the first controller may recognize the voice command and submit data characterizing the interaction to the UI or its processing component or engine.
  • FIG. 9 shows a schematic diagram of an interface for selecting a slave program of a video collection application according to some embodiments of the present application.
  • the user can control the focus on the display screen through the remote control to select the Video Highlights application so that its icon is highlighted on the display screen; then, by clicking the highlighted icon, the application mapped to that icon can be opened.
  • FIG. 10 shows a schematic interface diagram of how to browse the video collection after the video collection is generated according to some embodiments of the present application.
  • the first controller of the display device provided by the present application generates multiple video highlights according to different elements, as shown in FIG. 10 .
  • the first video collection file is generated with pet elements
  • the second video collection file is generated with highlights as elements
  • the third video collection file is generated with smiles as elements
  • the fourth video collection file is generated with other content as elements.
  • the video collection icon includes a schematic diagram and an editing area defined to display the type of elements of the video collection, or the time it was generated, or other information.
  • when the focus moves to a video collection, the first controller will highlight it and gray out the other video collections.
  • the element types of the video collection in the figure are only examples.
  • This application describes the technical solutions and display devices for generating video collection files with the theme of pets, wonderful moments, and smiles.
  • video collections of other types are summarized into the fourth video collection file.
  • the subject type of the first video highlight file may also be "father", "stranger", or "designated person", which the first controller of the display device configures by identifying the video clips with an image recognition algorithm.
  • FIG. 11 shows a schematic diagram of an interface after a video collection is generated according to another embodiment of the present application.
  • the first controller of the display device selects part of the videos uniformly according to the time sequence of the acquired video clips to generate a video highlight file.
  • the generation date of the first video collection file is June 2
  • the generation date of the second video collection file is June 3
  • the generation date of the third video collection file is June 4.
  • the first controller automatically generates video highlight files at fixed time intervals, and displays them on the UI, for example, the fixed time can be implemented as 8:30 am every day.
  • the element type of the first video collection file may be preset, or by default, the constituent video segments of the first video collection file are moving target videos randomly obtained by a TV camera.
  • the video clips such as the active person and the active pet captured by the camera may constitute the first video highlight file.
  • the display device is configured to display the most recent 30 video collections on its display screen, and when the generated video collection files exceed 30, the first controller deletes the video collection files with an earlier generation time.
  • the first controller actively deletes the video clips involved in the video highlight file, so as to optimize the storage resources of the TV.
  • the video clips collected by the display device and the generated video highlight files may be stored in a NAS (Network Attached Storage). For example, it can be implemented to allow up to 30 video highlight files to be stored on the NAS to optimize storage resources. In some embodiments, if the NAS space is insufficient, the video collection files with earlier generation times can be deleted.
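The retention policy described above (keep the most recent collections and delete the earliest when the cap is exceeded) might be sketched like this. The 30-file cap follows the text; the function shape and data layout are assumptions:

```python
def prune_collections(files, max_files=30):
    """Keep the most recent `max_files` video highlight files.

    `files` is a list of (generation_time, name) pairs. Files generated
    earliest are deleted first, as described for the display device when
    more than 30 collections exist.
    """
    ordered = sorted(files, key=lambda f: f[0])
    deleted = ordered[:-max_files] if len(ordered) > max_files else []
    kept = ordered[len(deleted):]
    return kept, deleted
```

For 32 stored collections, the two with the earliest generation times are returned in `deleted`, leaving exactly 30 in `kept`.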
  • FIG. 13 shows a schematic flowchart of a method for generating a video highlight in some embodiments of the present application.
  • step 701 in response to the received trigger signal, the camera is controlled to acquire video clips.
  • the display device includes a camera, a microphone, a display screen, and a first controller.
  • the camera is a TV image collector, which can be used to collect external environmental scenes;
  • the display screen is used to display a user interface;
  • the first controller responds to the trigger signal received by the display device, and controls the camera to obtain video clips within its monitoring range.
  • the trigger signal includes a signal of movement of an object within a detection range monitored by the camera or a sound signal within a detection range monitored by the microphone.
  • the first controller controls the camera to capture a preset moving target and record and obtain corresponding video clips.
  • video recording is started to obtain video clips.
  • the movement of objects and sounds within the detection range are monitored by a camera and a microphone, and the first controller adjusts the camera so that the shooting target is located at the center of the video clip.
  • the first controller will adjust the steering and angle of the camera to achieve automatic follow focus on the shooting target, so that the shooting target is always dynamically in the center of the video clip during the recording stage, thereby improving the image quality of the video clip.
  • the microphone is a sound collector of a TV, and is used to receive the user's voice, control commands, or ambient sound.
  • the first controller adjusts the steering and angle of the camera according to the sound source localization algorithm, so that the shooting target enters the field of view of the camera, and further uses the camera to accurately capture the shooting target through the above method.
  • different users are first determined by collecting the voiceprint features of the users.
  • each person's vocal cavity and voice frequency are different, which leads to the formation of each person's unique voiceprint.
  • the voiceprint feature can be extracted from the audio collected by the microphone, for example through Mel Frequency Cepstral Coefficients (MFCC), short-term energy, short-term average amplitude, short-term average zero-crossing rate, formants, Linear Prediction Cepstral Coefficients (LPCC), and other methods.
  • sound source localization technology is used to determine the speaker's position in the space. Specifically, the position of the speaker corresponding to the audio can be determined through the time delay of the audio collected by the multiple sound collection modules.
  • the first controller adjusts the camera so that the speaker corresponding to the audio is located in the center of the camera's shooting screen, and the adjustment includes adjusting the camera's shooting angle and/or adjusting the camera's focal length. Based on the located position, the bearing and distance of the speaker corresponding to the audio relative to the camera can be determined.
  • the adjustment can be to adjust the shooting angle of the camera, so that the adjusted camera is aimed at the speaker corresponding to the audio; it can also be to adjust the focal length of the camera, so as to ensure the proportion of the speaker's portrait in the collected image and ensure that a viewer can accurately identify the speaker from the image; it is also possible to adjust the shooting angle and focal length at the same time. This is determined according to the actual situation, that is, according to the determined distance and orientation, whether the shooting angle and focal length need to be adjusted.
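The delay-based localization can be illustrated with a toy two-microphone sketch: cross-correlate the channels to estimate the sample delay, then convert it to a bearing. The sampling rate, microphone spacing, and speed of sound below are assumed example values, and the brute-force correlation stands in for whatever localization algorithm the device actually uses:

```python
import math

def estimate_delay(sig_a, sig_b, max_lag):
    """Estimate how many samples sig_b lags sig_a via cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a * sig_b[i + lag]
                    for i, a in enumerate(sig_a)
                    if 0 <= i + lag < len(sig_b))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def speaker_bearing(delay_samples, fs=16000, mic_distance=0.1, speed=343.0):
    """Convert an inter-microphone delay into a bearing (degrees) relative
    to the broadside of a two-mic array; fs, spacing, and sound speed are
    assumed example values."""
    x = delay_samples * speed / (fs * mic_distance)
    x = max(-1.0, min(1.0, x))  # clamp against measurement noise
    return math.degrees(math.asin(x))
```

A zero delay corresponds to a speaker directly in front of the array (bearing 0°); larger delays map to larger off-axis angles, from which the controller could decide how far to steer the camera.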
  • the shooting target may also be pets or other elements.
  • the camera is configured so that when it is called by another application, the first controller stops acquiring video clips.
  • when the video highlight function is disabled, the first controller stops acquiring video clips; when the video highlight function is enabled, the first controller starts to control the camera to acquire video clips.
  • when the video collection function of the TV is turned on, the first controller will control the camera to automatically capture video clips and store them.
  • when the first controller detects and determines that the camera is called by another application, it stops acquiring video clips, or suspends acquiring video clips. For example, when the camera is used for a video call, the Video Highlights application pauses or stops capturing video clips. If the video highlight function of the TV is turned off, even if the camera is not occupied, the first controller will not control the camera to record video clips, and the first controller stops the camera from performing monitoring activities.
  • step 702 image recognition is performed on the video clips within a preset time period to obtain a first video set, where the first video set is a video clip set containing the same elements.
  • the first controller performs image recognition on multiple video clips obtained by the camera based on an image recognition algorithm, and takes all the video clips containing the pet cat as the first video set.
  • the pets contained therein are also called elements, that is, the first video set is a set of video clips containing the same element of pet cats.
  • from the identified video clips, the first controller can generate a plurality of video sets, corresponding to different video collection files.
  • the first video collection file of the pet type corresponds to the first video collection
  • the second video collection file of the highlight type corresponds to the second video collection
  • the third video collection file of the smile type corresponds to the third video collection
  • the fourth video collection file of other types corresponds to the fourth video collection.
  • the first video collection is a collection of video clips containing the same elements, which may be implemented as highlights, or pets, or smiles, or people, or others.
  • the first controller will identify video clips containing people and construct a first video set, wherein the first video set can be implemented as a video set of the same person, or as a video set of different people.
  • For example, it can be determined whether the key frames in the video clip contain the movement of a character or object, or whether the key action contains a preset motion trajectory, such as a flip, or a preset stance; or, by detecting whether the voice contained in the video clip matches a preset voice model, for example a model set to "great" or "really amazing", it is determined that the elements of the video clip are wonderful moments.
  • video clips whose element is pets can be identified and classified by detecting whether there are active pets in the clips; clips whose element is people, by detecting the presence of active characters; clips whose element is smiles, for example, by detecting whether there is a smiling face in the clips. Video clips that do not contain the smile, person, or highlight elements are categorized into the other video collection.
  • the first controller splicing the video clips in the first video set into a first video collection file includes: the first controller scores the importance of the video clips in the first video set to obtain a first video sequence; the first video sequence is sorted from high to low by importance score, and the first preset-splicing-number of video clips are set as the target videos; the target videos in the first video sequence are spliced to generate the first video collection file.
  • the first controller evaluates the importance of the video clips in the first video set through the recognition algorithm to obtain importance scores, sorts the video clips from high to low according to each clip's importance score to obtain the first video sequence, and sets the first preset-splicing-number of video clips as the target videos.
  • the image set of the same recognition target may also be divided into different sets according to time periods.
  • the same identification target within a predetermined time period may generate highlights corresponding to different time periods.
  • For example, the first controller is configured to score the importance of the 8 video clips by identifying the length of the cheering sound in each clip: the longer the cheering sound, the higher the importance score. The first preset-splicing-number of video clips in the first video sequence are then determined as the target videos, and the determined target videos are spliced to generate the first video highlight file.
  • the first controller inputs the video clips into the trained neural network model to analyze and obtain the video dimension features of the clips, and then inputs the video dimension features into the binary classification algorithm model to obtain the highlight and non-highlight probability distributions of the video clips, so as to obtain the first video sequence.
  • the neural network model can be implemented as an I3D model, which has a model pre-trained on Kinetics, the largest 400-class dataset in the action recognition field.
  • the Kinetics 400-class dataset has about 250,000 video clips, which can fully and effectively train the neural network and give the network a certain generalization ability.
  • the output of the binary classification algorithm model is 2-dimensional data, namely the probability distribution over highlight and non-highlight for the video clip.
  • For example, if the output two-dimensional data is [0.8, 0.2], then 0.8 means that the probability that the corresponding video clip is a highlight moment is 0.8, and 0.2 means that the probability that it is a non-highlight moment is 0.2.
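Given such 2-dimensional outputs, sorting the clips by their highlight probability and keeping the first preset-splicing-number of them yields the target videos. A minimal sketch, where the function name, data layout, and default splice count are assumptions:

```python
def rank_highlights(clip_probs, splice_count=3):
    """Select target videos from binary-classifier outputs.

    `clip_probs` is a list of (clip_id, [p_highlight, p_non_highlight])
    pairs, the 2-dimensional outputs described above (e.g. [0.8, 0.2]).
    Returns clip ids sorted by highlight probability, truncated to the
    preset splice count.
    """
    ranked = sorted(clip_probs, key=lambda cp: cp[1][0], reverse=True)
    return [clip_id for clip_id, _ in ranked[:splice_count]]
```

The returned ids are the target videos of the first video sequence, which would then be spliced into the first video highlight file.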
  • In other embodiments, the first controller may be implemented to perform the importance scoring by identifying the picture quality feature value of the video segment: the higher the picture quality feature value, the higher the importance score, and the first video sequence sorted from high to low by importance score is obtained.
  • the picture quality feature value is implemented as a weighted summation of the chrominance, luminance, and saturation of the images in the video segment.
  • the chromaticity is the sum of the background chromaticity of the image and the foreground chromaticity
  • the background chromaticity refers to the chromaticity of the background area of the image
  • the foreground chromaticity refers to the chromaticity of the foreground area of the image.
  • the size of the weights can be preset according to the actual situation; for example, the weights of chroma, brightness, and saturation are set to 0.5, 0.3, and 0.2 respectively. The chroma, brightness, and saturation of the image are then weighted and summed to obtain the picture quality feature value of the video clip.
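The weighted summation with the example weights 0.5/0.3/0.2 amounts to the following; the function name and the normalized input ranges are assumptions for illustration:

```python
def picture_quality_score(chroma, brightness, saturation,
                          weights=(0.5, 0.3, 0.2)):
    """Weighted sum of a frame's chroma, brightness, and saturation, using
    the example weights 0.5/0.3/0.2 from the text. `chroma` is the sum of
    the background and foreground chromaticity of the image."""
    w_c, w_b, w_s = weights
    return w_c * chroma + w_b * brightness + w_s * saturation
```

For example, with chroma 0.6, brightness 0.5, and saturation 0.4, the picture quality feature value is 0.5·0.6 + 0.3·0.5 + 0.2·0.4 = 0.53; clips are then ranked by this value.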
  • In other embodiments, the first controller may be implemented to identify the people, pets, or smiles in the video clips and score importance by the number of frames in which they appear: the higher the frame count, the higher the importance score, yielding the first video sequence. The first preset-splicing-number of video clips are determined as the target videos, and the determined target videos are spliced to generate the first video highlight file.
  • the generation of the first video set and the subsequent processing steps by the first controller may be performed at fixed time intervals, so that the television can obtain enough video clip material; it can also be implemented as triggering the generation of the first video collection in real time after receiving the user's instruction.
  • the first controller processes the video clips to generate the first video collection within a preset time period after the user turns the TV on for the first time each day; as another example, the first controller may be configured to generate the first video collection at a fixed time every day; as yet another example, the first controller may be configured to generate the first video collection at a preset time point every day when the TV is turned on.
  • the first controller performing image recognition on the video clips within a preset time period to obtain the first video set includes: the first controller receives an input bright-screen instruction; when the time the bright-screen instruction is received is earlier than the preset time point, it continues to control the camera to obtain video clips; when the time the bright-screen instruction is received is later than the preset time point, it performs image recognition on the video clips within the preset time period to obtain a first video set, and splices the video clips in the first video set into a first video collection file.
  • the preset time point is configured as 20:00
  • the preset time period is configured as 6:00-22:00.
  • For example, when the user turns on the TV and the first controller receives the input bright-screen command at a time earlier than the preset time point 20:00, the first controller continues to control the camera of the display device to obtain video clips. When the user turns on the TV with the remote control at 20:10 in the evening and the first controller receives the input bright-screen instruction, since the time the bright-screen instruction is received is later than the preset time point 20:00, the first controller performs image recognition on the video clips within the preset time period 6:00-22:00 to obtain a first video set, and splices the video clips in the first video set into a first video highlight file.
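The timing decision in this example reduces to a comparison against the preset time point. A schematic sketch using the 20:00 time point and 6:00-22:00 window from the text; the function and its string return values are illustrative assumptions, not the device's API:

```python
def on_bright_screen(received_hour, preset_point=20, window=(6, 22)):
    """Decide what the first controller does when a bright-screen command
    arrives: before the preset time point (20:00 here) it keeps collecting;
    at or after it, it runs image recognition over the preset window
    (6:00-22:00) and splices the first video set into a highlight file."""
    if received_hour < preset_point:
        return "continue_collecting"
    return f"generate_highlight_for_{window[0]:02d}:00-{window[1]:02d}:00"
```

A bright-screen command at 19:00 keeps the camera collecting, while one at 20:30 triggers generation of the highlight file over the 6:00-22:00 window.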
  • In some embodiments, the first controller is further configured to: after detecting a generated first video highlight file, control the display to display a video highlight interface, where the video highlight interface is provided with a control for jumping to the playback interface of the first video highlight file; when no generated first video highlight file is detected, the display is not controlled to display the video highlight interface.
  • For example, when the first controller detects video collection files produced on the previous day, the first controller will display the video collection UI, where the video collection UI includes a plurality of video collection files with different element types, or a plurality of video collection files generated on different dates.
  • the video collection UI is further provided with a control for jumping to the playback interface of the first video collection file. For example, by clicking the highlighted icon of the first video collection file, the user can enter the playback UI of the first video collection file.
• the first controller performing image recognition on the video clips within a preset time period to obtain the first video set includes: at a preset time point, the first controller performs image recognition on the video clips within the preset time period to obtain the first video set, wherein the preset time point is located after the preset time period.
  • the preset time point is configured as 20:00
  • the preset time period is configured as 8:00-17:00.
• at the preset time point 20:00, the first controller obtains a first video set by performing image recognition on the video clips acquired in the preset time period 8:00-17:00, and splices the video clips in the first video set into a first video highlight file.
• the user configures the display device to generate a first video collection file after the display device is turned on. In this case, the first controller splicing the video clips in the first video collection into a first video collection file includes: the first controller receives an input bright-screen instruction and, in response to the bright-screen instruction, splices the video clips in the first video collection into a first video collection file.
  • the preset time period is configured as 8:00-17:00.
• the first controller receives the input bright-screen instruction and, in response, performs image recognition on the video clips within the preset time period 8:00-17:00 to obtain a first video collection, and splices the video clips in the first video collection into a first video collection file.
• when the user's power-on time is earlier than the preset time period, the first controller performs image recognition on the video clips in the previous day's preset time period 8:00-17:00 to obtain a first video set, and splices the video clips in the first video set into a first video collection file; when the user's power-on time falls within the preset time period, the first controller performs image recognition on the video clips collected from 8:00 up to the current power-on time to obtain a first video set, and splices the video clips in the first video set into a first video collection file; when the user's power-on time is later than the preset time period, the first controller performs image recognition on the video clips in the current day's preset time period 8:00-17:00 to obtain a first video set, and splices the video clips in the first video set into a first video highlight file.
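The three power-on branches above amount to a window-selection rule. The following sketch assumes the 8:00-17:00 example period, with a day offset of -1 denoting the previous day; the function and tuple layout are illustrative assumptions:

```python
from datetime import time

PERIOD_START = time(8, 0)   # example preset period start
PERIOD_END = time(17, 0)    # example preset period end

def recognition_window(boot: time):
    """Return (day_offset, start, end): which day's clips to run image
    recognition over, and the window, per the three branches above."""
    if boot < PERIOD_START:
        # Powered on before the period: use the previous day's full period.
        return (-1, PERIOD_START, PERIOD_END)
    if boot <= PERIOD_END:
        # Powered on inside the period: use clips up to the power-on time.
        return (0, PERIOD_START, boot)
    # Powered on after the period: use the current day's full period.
    return (0, PERIOD_START, PERIOD_END)
```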
• in step 703, the video clips in the first video collection are spliced into a first video collection file.
• the first controller sorts the first video sequence in the first video set by importance score from high to low, sets the preset splicing quantity of video clips as the target videos, and splices the target videos to obtain the first video collection file.
  • the first video collection file consists of 5 video clips
  • the first video sequence includes 8 video clips
  • the 5 video clips constituting the final first video collection are the target videos.
• the first controller sorts the first video sequence by importance score from high to low, and sets the first preset-splicing-quantity of video segments as the target videos.
• the preset splicing quantity is configured as the number of video clips constituting the first video collection. For example, the preset splicing quantity is set to 5 and the first video sequence includes 8 video clips sorted by importance score from high to low; the first controller selects the first 5 video clips and splices them to generate the first video collection file.
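A minimal sketch of the importance-score selection; the clip representation (dicts with a hypothetical "score" key) is an assumption:

```python
def select_target_videos(clips, splice_count=5):
    """Sort clips by importance score, high to low, and keep the first
    `splice_count` clips as the target videos for splicing."""
    ranked = sorted(clips, key=lambda c: c["score"], reverse=True)
    return ranked[:splice_count]
```

With 8 clips and a splice count of 5, the 3 lowest-scoring clips are dropped, matching the example above.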
  • the first controller deletes the video segment.
• after the first video sequence is generated, if the preset splicing quantity is determined, in order to save the storage resources of the TV, the first controller deletes the video clips other than the target videos, so that the first video set and the first video sequence contain only the target videos; alternatively, after the first video collection file is generated, in order to save the storage resources of the TV, the first controller deletes all video clips in the first video collection.
• the first controller deletes the video segments: after the first video collection file is generated, in order to save the storage resources of the TV, the first controller deletes all video clips in the first video collection.
  • the first controller configures different background music for the first video highlight file.
  • the first controller configures different background music for the first video collection according to different elements, that is, different theme types and different types of video clip materials. For example, in the first video collection file about pets in FIG. 10 , its background music is configured as light music; for the second video collection file about highlight moments, its background music is configured as rock music and so on.
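The element-to-music configuration can be sketched as a lookup table. The file names and the default entry below are illustrative assumptions, not values from the application:

```python
# Hypothetical mapping from collection element (theme type) to music track.
BACKGROUND_MUSIC = {
    "pets": "light_music.mp3",
    "highlight_moments": "rock_music.mp3",
}
DEFAULT_MUSIC = "default_theme.mp3"  # assumed fallback track

def pick_background_music(element: str) -> str:
    """Return the background music configured for a collection element."""
    return BACKGROUND_MUSIC.get(element, DEFAULT_MUSIC)
```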
  • the first controller pushes the first video collection file to display on the display screen, and deletes the video clips in the first video collection.
  • the first controller pushes the generated first video collection to the user on the TV side.
• the pushed video collection file that the user receives every day is not necessarily the last generated one; a video collection file may also be pushed at random from the generated and saved video collection files.
  • the first controller will delete its corresponding video segment.
  • FIG. 12 shows a schematic diagram of an interface for playing the first video collection in some embodiments of the present application.
  • the element of the first video collection is pets.
• the play area of the first video highlight file is relatively large; for example, the play area occupies at least two-thirds of the display UI so that the content can be displayed more clearly, and corresponding application components can be defined in the UI of the TV.
  • the playback area of the first video collection file may further define a timeline component, and the user can control the playback progress of the first video collection file by operating the timeline component.
  • the present application also provides a display device and a method for generating a video collection.
  • a display device and a method for generating a video collection.
  • FIG. 14 shows a method for generating a video collection according to another embodiment of the present application.
  • step 801 in response to the received trigger signal, the camera is controlled to acquire a video clip containing a shooting target within a preset time period to obtain a first video set.
  • the first controller controls the camera to acquire a video clip containing a shooting target within a preset time period to obtain a first video set.
  • the shooting target can be preset, such as pets, people, or smiles.
• the detection period can be set according to the actual situation, which is not specifically limited in this application; the preset period can be implemented as, for example, 0:00-22:00. During the preset period, whenever the camera is not occupied, the monitoring action is carried out continuously.
  • the shooting target is defined as a pet cat
• when the first controller recognizes through the camera that the video picture contains the pet cat, it performs video recording to obtain different video clips from multiple time periods.
  • the first controller configures the time length of the video segment to be less than or equal to a second threshold; and the first controller configures the time length of the first video highlight file to a fixed value.
• once the pet cat is detected, the length of the video clip recorded by the first controller is less than or equal to 6 seconds. When the pet cat stays within the video capture range of the camera for longer than 6 seconds, the first controller obtains only the first 6 seconds of video, so the clip length is 6 seconds; when the pet cat leaves the capture range within 6 seconds, the first controller obtains only the video of the pet cat while it is within range, so the clip length is less than 6 seconds.
• when the fixed value is 30 seconds, the first controller performs length detection on the generated first video collection file and removes the content exceeding 30 seconds, ensuring that the first video highlight file does not exceed the preset fixed length of time.
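One way to enforce the fixed total length is to keep clips in order and truncate the clip that crosses the limit. This is a sketch of the described behavior under that assumption, not the application's actual trimming algorithm:

```python
def trim_to_fixed_length(clip_lengths, fixed_total=30):
    """Keep clip lengths (seconds, in order) while the running total stays
    within `fixed_total`; truncate the clip that crosses the limit and
    drop the rest, so the highlight never exceeds the fixed length."""
    kept, total = [], 0
    for length in clip_lengths:
        if total >= fixed_total:
            break
        kept.append(min(length, fixed_total - total))
        total += kept[-1]
    return kept
```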
  • the first controller adds a temporal watermark to the video segment.
  • the first video collection file finally generated also contains the corresponding time watermark, and the user can know the occurrence time of the video content when viewing the video collection file.
• in step 802, the video clips in the first video collection are spliced to generate a first video collection.
  • the first controller determines that some or all of them are target videos, and splices the target videos to generate a first video highlight.
• when the number of video clips in the first video set is greater than a first threshold, the first controller approximately uniformly selects a first-threshold number of video clips according to their shooting order as the target videos for splicing; otherwise, the first controller splices all the video clips according to the shooting sequence.
• when the first threshold is 5 and the TV has been turned on for 2 minutes the next day, the first controller selects 5 video clips from the first video collection for splicing to generate a first video collection file.
• the 5 video clips are selected uniformly according to their shooting times, so that the time span of the shooting is as large as possible.
• when the first video set contains only 3 video clips, which does not exceed the first threshold, the first controller directly splices all 3 video clips.
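The near-uniform selection over shooting order can be sketched with evenly spaced indices that always include the first and last clip; the rounding choice below is an assumption:

```python
def pick_for_splicing(clips, first_threshold=5):
    """If more clips than the threshold exist, sample them approximately
    uniformly across the shooting order (first and last clips always
    included, maximizing the time span); otherwise return them all."""
    n = len(clips)
    if n <= first_threshold:
        return list(clips)
    step = (n - 1) / (first_threshold - 1)  # even spacing over [0, n-1]
    return [clips[round(i * step)] for i in range(first_threshold)]
```

For 8 clips and a threshold of 5, indices 0, 2, 4, 5, 7 are chosen, spanning the full shooting range; with only 3 clips, all 3 are spliced directly.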
  • the first controller splices the video segments in the first video collection at fixed time intervals to generate the first video collection.
• the present application further provides a display device, comprising: a camera; a microphone; a display screen for displaying a user interface; and a first controller configured to: in response to a received trigger signal, control the camera to acquire video clips; perform image recognition on the video clips within a preset time period to obtain a first video set, where the first video set is a set of video clips containing the same elements; and splice the video clips in the first video set into a first video collection file.
  • the specific operation method and steps of the display device have been described in detail in the corresponding video collection generation method above, and will not be repeated here.
• the present application further provides another display device, comprising: a camera; a microphone; a display screen for displaying a user interface; and a first controller configured to: in response to receiving a trigger signal, control the camera to obtain video clips containing shooting targets within a preset time period to obtain a first video collection; and splice the video clips in the first video collection to generate a first video collection.
• the beneficial effects of some embodiments of the present application are as follows. By constructing a first video set, classification of video clips can be realized; by constructing the first video sequence, screening of video clips that meet the requirements can be realized; by constructing target videos, secondary screening of video clips can be realized; by deleting video clips after obtaining the video highlight file, optimization of TV storage resources can be achieved; by constructing preset time points and preset time periods, automatic acquisition of video highlight files can be realized; by controlling the camera to dynamically track the shooting target, effective acquisition of video clips can be achieved; and by constructing the first threshold and the second threshold, the length of the video collection file can be controlled. The embodiments thereby achieve automatic acquisition of video clips, increase editing speed, reduce the error rate of video splicing, and intelligently generate video collection files.
  • a watermark can be added to the recorded video file, and the watermark can be preset or set according to the user's choice.
  • FIG. 15 exemplarily shows a schematic diagram of displaying a watermark enabled according to some embodiments.
• the start-up watermark can include a title watermark and a product watermark. See (a) in Figure 15: when the video file starts to play, the title watermark "My Family Diary, April 30, 2020" is displayed, and it disappears a few seconds later. See (b) in Figure 15: the product watermark "XX Social TV, see you now, get together on one screen" is added in the lower right corner of the video file screen and remains displayed for the entire duration of the video.
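The two watermark lifetimes can be sketched as a visibility rule over playback time. The few-second title duration is an assumed value; the application only states that the title watermark disappears after a few seconds:

```python
def visible_watermarks(t_seconds: float, title_duration: float = 5.0):
    """Return which start-up watermarks are visible at playback time t:
    the product watermark for the whole video, the title watermark only
    during the first few seconds."""
    marks = ["product"]
    if t_seconds < title_duration:
        marks.append("title")
    return marks
```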
  • a video highlight file that records the moving target within the preset recording duration can be generated.
  • the moving image material collected by the camera can be stored and displayed on the display. Users can perform functions such as deletion and editing according to the displayed moving image material.
• the controller is further configured to: when the camera starts to collect the moving image material of the moving object in response to the video recording start instruction, acquire each piece of moving image material captured by the camera; generate an image material list interface in the user interface, and display each piece of moving image material in the image material list interface in chronological order.
  • the user interface displays a video recording startup entry, and after the user triggers the video recording startup entry, an image material list interface can be presented on the display.
  • the image material list interface is used to display moving image material and video collection files. Each piece of moving image material collected by the camera will be displayed in the image material list interface.
  • the list interface may also only display the spliced video collection, or only display the moving image material. It is also possible to display video highlights in one state and moving image material in another state.
  • the image material list interface is shown in FIG. 7 , and the image material list interface is displayed in sequence according to the chronological order of the start acquisition time of each piece of moving image material.
  • the user can operate the moving image material displayed in the image material list interface to realize different functions, such as sharing to the circle of friends, deleting the selected record, clearing the record and closing the video recording function, etc.
  • the upper right corner of the image material list interface presents the operation prompt item "Press the menu button to disable the function or share and delete the video", and the user can perform the operation indicated by the operation prompt item to realize the above-mentioned interactive function.
• in order to facilitate the user performing corresponding operations on the moving image material displayed in the image material list interface, the controller is further configured to perform the following steps:
  • Step 151A Receive an interactive function invocation instruction generated when the user performs a corresponding operation according to the operation prompt item.
  • Step 152A In response to the interactive function invocation instruction, a functional interactive floating layer is presented in the image material list interface, and the functional interactive floating layer includes a plurality of function controls.
  • Step 153A Execute the target operation corresponding to the target function control in response to the target control instruction generated when the target function control is triggered.
• by default, the interactive function entry is not displayed. If the user wants to use an interactive function, the user must first call up the interactive function entrance, that is, trigger the menu key on the remote control according to the content of the operation prompt item.
• an interactive function invocation instruction is generated, and according to this instruction the controller calls out the functional interactive floating layer as the interactive function entrance and displays it on the display.
  • the function interaction floating layer is covered at the bottom of the image material list interface in the form of a floating layer. When the function interaction floating layer is displayed, the image material list interface does not disappear.
  • the function interaction floating layer includes multiple function controls, for example, including the "Share to friends and relatives” control, the "Delete selected record” control, the “Clear all records” control, the “Close this function” control and so on. If the user wants to perform a certain interactive function on the moving image material, he can trigger the corresponding function control through the remote control.
  • the "Share to Friends and Family” control is used to perform the sharing operation
  • the "Delete Selected Record” control is used to perform the operation of deleting a moving image material or video collection file
  • the "Clear All Records” control is used to perform the clearing of the image material list interface.
  • the "Turn this off” control is used to perform operations that turn off the video recording function of the display device.
  • FIG. 16 exemplarily shows a schematic diagram of an interactive interface for executing the function of sharing to a circle of friends and relatives according to some embodiments.
• if the user wants to share a certain moving image material or video collection file to the circle of friends, after selecting the file to be shared, the user selects the "Share to circle of relatives and friends" control as the target function control through the remote control.
• in the sharing interface, the user can share a certain moving image material or video collection file to the circle of friends.
  • FIG. 17 exemplarily shows a schematic diagram of an interactive interface for performing a function of deleting a selected record according to some embodiments.
• if the user wants to delete a certain moving image material or video collection file, after selecting the file to be deleted, the user selects the "Delete selected record" control as the target function control through the remote control.
  • the delete interface is shown in Figure 17(b).
  • the user can execute the delete operation by triggering the "Delete” control based on the delete interface, and the prompt "Deleted" is displayed in the delete interface.
  • FIG. 18 exemplarily shows a schematic diagram of an interactive interface for performing the function of clearing all records according to some embodiments.
• if the user wants to clear all the moving image materials collected by the camera and the generated video collection files, the user can select the "Clear All Records" control as the target function control through the remote control.
  • FIG. 19 exemplarily shows a schematic diagram of an interactive interface for performing an operation of closing this function according to some embodiments.
• if the user wants to turn off the video recording function of the display device, the user can select the "Turn off this function" control as the target function control through the remote control.
  • the video recording function of the display device can be turned off.
  • the user can perform the closing operation by triggering the "close” control based on the closing interface, and the prompt "closed” is displayed in the closing interface.
• the user can also click the sharing link to view the videos previously shared. If the user enables this function again, the user can continue to view the moving image materials previously collected by the camera and the generated video highlight files.
  • a time watermark may be added to each moving image material when the moving image material is displayed in the image material list interface.
• the controller is further configured to perform the following steps when displaying each piece of moving image material in the image material list interface in chronological order:
  • Step 161A Acquire the start acquisition time of each piece of moving image material.
  • Step 162A Generate a time watermark including the start time of acquisition, and the time watermark is in one-to-one correspondence with the moving image material.
  • Step 163A Display each piece of moving image material and the corresponding time watermark in the image material list interface according to the time sequence of the start of collection, and display the time watermark at the bottom or top of the corresponding moving image material.
  • a time watermark can be used to mark it.
  • the time watermark is a watermark generated according to the start time of each piece of moving image material, and each piece of moving image material has an independent corresponding time watermark.
  • Each piece of moving image material is displayed in the image material list interface in the chronological order of the starting time of collection.
  • each time watermark is also displayed in the image material list interface in the chronological order of the starting time of collection, so that the time watermark is consistent with the moving image material.
  • the time sequence may be in the order from the nearest to the farthest from the current time, so that the newly collected moving image material is displayed in the front row of the image material list interface.
  • the display position of the time watermark can be determined according to the actual usage, for example, it can be displayed on the bottom, top, left or right of the corresponding moving image material. Taking the image material list interface shown in FIG. 7 as an example, the time watermark is displayed at the bottom of the corresponding moving image material, such as "9:28 on March 23" and the like.
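Steps 161A-163A above (nearest-first ordering plus a per-material time watermark) can be sketched as follows; the material representation and the watermark format string are assumptions:

```python
from datetime import datetime

def list_with_time_watermarks(materials):
    """Sort moving image materials from nearest to farthest start
    acquisition time and pair each with its time watermark string."""
    ordered = sorted(materials, key=lambda m: m["start"], reverse=True)
    return [(m["name"], m["start"].strftime("%H:%M on %m-%d"))
            for m in ordered]
```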
  • a time watermark can also be added to the video collection file.
• when adding a time watermark to the video highlight file, the controller is further configured to perform the following steps:
  • Step 171A Obtain the generation time of each video collection file.
• Step 172A Create a generation time watermark including the generation time.
  • Step 173A Display each video collection file and the corresponding generation time watermark in the image material list interface according to the time sequence of generation time, and display the generation time watermark at the bottom or top of the corresponding video collection file.
  • Step 174A Delete the moving image material used to generate the video collection file.
• in order to identify each video collection file, it can be marked with a generation time watermark.
  • the generation time watermark is a watermark generated according to the generation time of each video collection file, and each video collection file has an independent corresponding generation time watermark.
  • Each video collection file is displayed in the image material list interface in the chronological order of the generation time.
• each generation time watermark is also displayed in the image material list interface in the chronological order of the generation time, so that the generation time watermarks and the video collection files are in one-to-one correspondence.
  • the time sequence may be in order from the current time to the farthest, so that the newly generated video collection file is displayed in the front row of the image material list interface, and the video collection file is displayed before the moving image material.
  • FIG. 20 exemplarily shows a schematic diagram of an interface for displaying a video collection file in an image material list interface according to some embodiments.
  • the display position of the generated time watermark may be determined according to the actual usage, for example, it may be displayed at the bottom, top, left or right of the corresponding video collection file. Taking the image material list interface shown in FIG. 20 as an example, the generation time watermark is displayed at the bottom of the corresponding video collection file, such as "19:30 on March 23" and so on.
• since the controller synthesizes a video collection file every day based on the moving image material collected by the camera that day, the controller stores each moving image material and video collection file. To avoid occupying too much storage space of the display device, the controller may delete the moving image material that has already been synthesized into the video highlight file.
  • the rest of the moving image materials for which no video highlight file is generated can be saved in the controller, and displayed together with the video highlight file in the image material list interface, so that the user can interact with the moving image material or the video highlight file.
• the video highlight files and the moving image materials for which no video highlight file has been generated are stored in the home cloud storage NAS; if the display device is not equipped with a home cloud storage NAS, they are stored locally on the display device. Video collection files and moving image materials without video collection files can be saved for 30 days; after 30 days, they are deleted to prevent files from occupying excessive storage space in the display device.
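The 30-day retention rule can be sketched as a cutoff comparison; the record layout (dicts with "name" and "saved" keys) is an assumption:

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 30  # retention period stated above

def expired_files(files, now: datetime):
    """Return the names of files saved more than RETENTION_DAYS before
    `now`; these are the ones the device deletes."""
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [f["name"] for f in files if f["saved"] < cutoff]
```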
• both the moving image materials collected by the camera and the video collection files synthesized by the controller are displayed in the image material list interface. Distinguishing the video collection files from the moving image materials allows the user to quickly select the target video highlight file or the target moving image material to operate on.
  • the distinction may take the form of adding a video record mark to the video highlight file.
• the controller is further configured to perform the following steps: create a video record mark, where the video record mark is used to identify the video highlight file generated from the video record; and add the video record mark to the thumbnail of the video highlight file displayed in the image material list interface.
  • a video record mark may be displayed on the thumbnail of the video collection file when the video collection file is displayed in the image material list interface.
• the video record mark can be created when the video highlight file is generated, or it can be stored in the controller in advance; that is, when the video highlight file is generated, the controller calls the video record mark, and when the video highlight file is displayed in the image material list interface, the video record mark is synchronously displayed on its thumbnail.
• a video record mark may be added to a corner of the thumbnail of the video highlight file, and the mark content may be "Vlog", such as the marks on the first and second thumbnails shown in FIG. 20.
  • the video collection file displayed in the image material list interface can be clearly distinguished from the moving image material, which is convenient for users to view and select.
  • the video collection file synthesized on the day needs to be pushed to the user.
• the controller is further configured to perform the following steps: generate a push notification according to the video highlight file, where the push notification includes push content and an operation prompt; and when the display device is started, display the push notification on the system home page interface presented on the display.
  • the video collection file is generated from moving image material collected from a moving target within the preset recording duration of the day, the user may not activate the display device within the preset recording duration. Therefore, when the user starts the display device for the first time after the video highlight file is generated, the video highlight file can be pushed to the user. That is, when the user starts the display device for the first time, a push notification is generated according to the video highlight file and displayed on the system home page interface presented on the display.
  • FIG. 21 exemplarily shows a schematic diagram of an interface for displaying push notifications according to some embodiments.
  • the push notification includes push content and operation prompts, such as "your cute pet exclusive Vlog has been generated, come and have a look", and operation prompts such as "press the menu button to view details”. Users can view the pushed video collection files by triggering the menu button on the remote control.
• the video collection file is pushed only once, when the user starts the display device for the first time after the video collection file is generated. For example, if the preset recording duration is 6:00-18:00, then at 18:00 the controller controls the camera to stop collecting moving image materials and, at the same time, splices and synthesizes the specified number of moving image materials to obtain a video collection file. If the user then starts the display for the first time at 19:00, the video collection file is displayed on the system home page interface in the form of a push notification, prompting the user to view the recorded wonderful moments of the pet's or baby's day.
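The push-once behavior can be sketched as a pending flag cleared on the first start-up after generation; the class and method names below are illustrative, not from the application:

```python
class HighlightPusher:
    """Push a newly generated highlight file exactly once, on the first
    display start-up after its generation."""

    def __init__(self):
        self._pending = None  # name of an un-pushed highlight file, if any

    def on_highlight_generated(self, name: str):
        """Record the newly synthesized highlight file for pushing."""
        self._pending = name

    def on_display_started(self):
        """Return the file to push (once), or None on later start-ups."""
        pushed, self._pending = self._pending, None
        return pushed
```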
  • the display is configured to present a user interface displaying a video recording start entry, and the video recording start entry is used to start a video recording function of the display device.
• the controller receives the video recording start instruction generated when the video recording start entry is triggered, so as to instruct the camera to collect the moving image material of the moving target according to the preset collection frequency within the preset recording duration; when the end time of the preset recording duration is reached, the specified number of moving image materials are spliced and synthesized to generate a video collection file that records the moving target within the preset recording duration.
  • the controller can actively call the camera in real time to collect the moving image material of the moving target without the need for the user to operate in real time, and synthesize the video collection after the image collection is completed.
  • the user can know the movement of the moving target within the preset recording time according to the video collection file, which will bring a good experience to the user.
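The capture-then-splice flow of the preceding bullets can be sketched as follows. This is a minimal Python sketch under assumed parameters (function names, the clip representation, and the "most recent clips" selection are illustrative, not the patent's implementation):

```python
from typing import Dict, List

def plan_capture_times(start_hour: int, end_hour: int, freq_minutes: int) -> List[int]:
    """Minutes-of-day at which the camera would be triggered within the
    preset recording duration, e.g. 6:00-18:00 at one capture every 30 min."""
    return list(range(start_hour * 60, end_hour * 60, freq_minutes))

def splice_collection(materials: List[str], specified_number: int) -> Dict:
    """Stand-in for splicing and synthesis at the end of the recording
    duration: keep the specified number (assumed >= 1) of the most recent
    materials, in chronological order, as one collection descriptor."""
    return {"type": "video_collection", "clips": materials[-specified_number:]}
```

With a 6:00-18:00 recording duration and a 30-minute collection frequency, `plan_capture_times(6, 18, 30)` yields 24 capture points; `splice_collection` then reduces the collected materials to the specified number for synthesis.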
• the implementation process of starting the video recording function of the display device provided by the above embodiments is realized by the controller in the display device; in other embodiments, the device that starts the video recording function of the display device may also be an intelligent terminal connected to the display device. The smart terminal side supports activating the video recording function of the display device, browsing and editing moving image materials, and viewing and sharing video collection files.
  • the device for capturing images may be a first terminal device, and the first terminal device may share the captured video with the second terminal device.
• an embodiment of the invention provides an intelligent terminal whose processor, when executing an interactive method for generating a video collection file, is configured to perform the following steps:
  • S21A Receive a video recording function activating instruction generated when a specified application program is triggered.
• in order to start the video recording function of the display device on the intelligent terminal side, a designated application program needs to be configured in the processor of the intelligent terminal, and the designated application program is displayed on the terminal homepage. The designated application program provides the entry for starting the video recording function.
  • the user manually triggers the designated application program to generate a video recording function activating instruction, and the processor can call up the entry for starting the video recording function according to the video recording function activating instruction.
  • the smart terminal is the second terminal device.
  • the application in the second terminal device may send a call-up instruction to the first terminal device.
• after receiving the instruction to activate the video recording function, the processor presents the terminal user interface on the display 320 of the second intelligent terminal, and displays the video recording start entry in the terminal user interface.
  • FIG. 22 exemplarily shows a schematic interface diagram of a user interface of a second smart terminal according to some embodiments.
• the terminal user interface is displayed on the display, and the video recording start entry is presented in the terminal user interface, as shown in FIG. 22.
• the page displays the "Enable" control, and the user triggers the "Enable" control to turn on the video recording function.
• S23A Receive the video recording start instruction generated by triggering the video recording start entry, and send the video recording start instruction to the controller, where the video recording start instruction is used to instruct the controller to call the camera in the display device to collect the moving image material of the moving target according to the preset collection frequency within the preset recording duration, and to splice and synthesize a specified number of moving image materials to generate a video collection file that records the moving target within the preset recording duration.
• the user can generate a video recording start instruction by triggering the video recording start entry, that is, the "Enable" control, displayed on the terminal user interface.
  • the processor sends the video recording start instruction to the controller in the display device (the first terminal device).
• the controller can start the video recording function of the display device, that is, execute the related steps of calling the camera of the display device to collect the moving image material of the moving target according to the preset collection frequency, and splicing and synthesizing the specified number of moving image materials to generate a video collection file that records the moving target within the preset recording duration.
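The instruction passed from the terminal's processor to the display device's controller can be sketched as a small message protocol. This is a hypothetical sketch: the JSON field names, command string, and `Controller` class are illustrative assumptions, not the disclosed protocol:

```python
import json

def build_start_instruction(recording_minutes: int, collection_freq_minutes: int) -> str:
    """Serialize the video recording start instruction the smart terminal
    sends to the display device's controller (field names are assumed)."""
    return json.dumps({
        "cmd": "START_VIDEO_RECORDING",
        "recording_minutes": recording_minutes,
        "collection_freq_minutes": collection_freq_minutes,
    })

class Controller:
    """Toy stand-in for the display device's controller."""
    def __init__(self) -> None:
        self.recording = False
        self.params = {}

    def handle(self, raw: str) -> bool:
        # Parse the instruction and, if recognized, start recording with
        # the carried parameters; unknown commands are ignored.
        msg = json.loads(raw)
        if msg.get("cmd") == "START_VIDEO_RECORDING":
            self.recording = True
            self.params = {k: v for k, v in msg.items() if k != "cmd"}
            return True
        return False
```

Sending `build_start_instruction(720, 30)` (a 12-hour recording duration, one capture per 30 minutes) puts the controller into the recording state with those parameters.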
  • the relevant steps for the controller to perform the video recording function may refer to the content of the display device-based part of the foregoing embodiment, which will not be repeated here.
  • the display of the smart terminal will present a video list interface.
  • the processor is further configured to perform the following steps:
• Step 241A In response to the video recording start instruction, present a video list interface on the display.
  • Step 242A Acquire the moving image material and the video collection file stored in the controller, and display the moving image material and the video collection file in the video list interface.
• after the processor receives the video recording start command generated when the user triggers the video recording start entry, it sends the command to the controller to start the video recording function of the display device, and switches the display content of the smart terminal according to the command; that is, the terminal user interface displayed on the display of the second intelligent terminal is switched to the video list interface.
  • the video list interface is similar to the image material list interface in the display device, and both are used to display moving image materials and video collection files.
• the moving image materials collected by the camera of the display device and the synthesized video collection files are stored in the controller; the controller displays them on the image material list interface and sends them to the second intelligent terminal, whose processor displays them on the video list interface. That is to say, the content displayed on the video list interface is the same as that displayed on the image material list interface.
• if the user activates the video recording function of the display device for the first time, the camera has not yet captured any moving image material, and thus no video collection file has been generated. Therefore, nothing is displayed in the video list interface.
  • FIG. 23 exemplarily shows an interface diagram of a video list interface according to some embodiments.
  • the content displayed in the video list interface is shown in (a) of FIG. 23 .
  • the prompt content "No pet has been found" is displayed in the video list interface.
• if the user starts the video recording function of the display device for the second or a subsequent time rather than the first time, the camera has already collected moving image material and may also have generated a video collection file.
• in the video list interface, the moving image materials collected by the camera and the video collection files historically generated by the controller will be displayed, as shown in the video list interface in (b) of FIG. 23.
  • the display order of moving image materials and video collection files in the video list interface can be the same as that in the image material list interface, and will not be repeated here.
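The video list interface model described above, including the empty-state prompt and the display ordering, can be sketched as follows. The ordering here (collection files first, each group newest-first) is an assumption for illustration; the patent only states that the order mirrors the image material list interface:

```python
from typing import Dict, List

def build_video_list(materials: List[Dict], collections: List[Dict]) -> Dict:
    """Assemble the video list interface model: when nothing has been
    collected yet, show the prompt; otherwise list collection files, then
    moving image materials, each sorted newest-first by timestamp `ts`."""
    if not materials and not collections:
        return {"empty": True, "prompt": "No pet has been found"}
    def newest_first(items: List[Dict]) -> List[Dict]:
        return sorted(items, key=lambda it: it["ts"], reverse=True)
    return {"empty": False,
            "items": newest_first(collections) + newest_first(materials)}
```

A first-time activation yields the empty-state prompt; later activations yield the combined, ordered list.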
  • the processor is further configured to perform the following steps:
  • Step 251A Display personalized operation controls at the bottom of each moving image material and video collection file displayed in the video list interface.
• Step 252A In response to the personalized operation instruction generated by triggering the target personalized operation control, execute the target personalized operation corresponding to the target personalized operation control.
  • personalized operation controls are displayed at the bottom of each moving image material and each video highlight file.
• the personalized operation controls include a "Share" control and a "Download" control, as shown in (b) of FIG. 23.
  • the "Share” control can perform the operation of sharing a certain moving image material or video collection file to other platforms, and the "Download” control is used to perform the operation of downloading a certain moving image material or video collection file to the local.
  • a personalized operation instruction can be generated, and the processor can perform a corresponding personalized operation according to the personalized operation instruction, such as sharing to other platforms, or downloading locally.
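The dispatch from a triggered personalized operation control to its operation can be sketched as a lookup table. The handlers below are placeholders, not real sharing or download logic:

```python
def share(item: str) -> str:
    # Placeholder for sharing a material or collection file to another platform.
    return f"shared:{item}"

def download(item: str) -> str:
    # Placeholder for downloading a material or collection file locally.
    return f"downloaded:{item}"

# Map each personalized operation control to its operation.
PERSONALIZED_OPS = {"Share": share, "Download": download}

def handle_personalized_operation(control: str, item: str) -> str:
    """Execute the target personalized operation corresponding to the
    triggered control; unknown controls are rejected."""
    if control not in PERSONALIZED_OPS:
        raise ValueError(f"unknown personalized operation: {control}")
    return PERSONALIZED_OPS[control](item)
```

Triggering the "Share" control on a clip routes to the share handler, and "Download" to the download handler, mirroring the per-item controls shown in (b) of FIG. 23.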
• FIG. 24 exemplarily shows a schematic interface diagram of the setting interface of the second smart terminal according to some embodiments. Since the second smart terminal can turn the video recording function of the display device on and off, a quick start entry can be configured in the terminal setting interface of the second smart terminal. Referring to FIG. 24, a "Settings" entry is displayed in the user interface of the second intelligent terminal; the user triggers the "Settings" entry, and the terminal setting interface is presented on the display of the second intelligent terminal. The user triggers the quick start entry in the terminal setting interface to enable the video recording function, and triggers the quick start entry again to disable it.
  • the local intelligent terminal may authorize other intelligent terminals to operate the video list interface.
  • the processor is further configured to perform the following steps:
  • Step 261A Receive a device sharing instruction generated when the device sharing control is triggered, where the device sharing instruction is used to establish an association relationship with the second intelligent terminal.
• Step 262A Send the device sharing instruction to at least one second intelligent terminal to establish an association relationship between the local intelligent terminal and the at least one second intelligent terminal, where the device sharing instruction is used to instruct the second intelligent terminal to perform personalized operations on the video collection file according to the association relationship.
  • the user interface of the second intelligent terminal displays a "settings" entry, and the user triggers the "settings” entry, and the terminal setting interface is displayed on the display of the second intelligent terminal.
• the second intelligent terminal setting interface is configured with device sharing controls, such as the "Device Sharing" control shown in FIG. 24.
• when the device sharing control is triggered, the device sharing instruction is generated.
  • the processor of the local intelligent terminal sends the device sharing instruction to the second intelligent terminal that needs to perform personalized operation on the video list interface.
• after receiving the device sharing instruction, the second intelligent terminal can establish an association relationship with the local (first) intelligent terminal, so that the second intelligent terminal can operate the video list interface.
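The association relationship established by the device sharing instruction can be sketched as a simple registry kept by the local terminal. The class and method names are illustrative assumptions:

```python
class SharingRegistry:
    """Tracks which second intelligent terminals the local (administrator)
    terminal has authorized via a device sharing instruction."""
    def __init__(self, local_id: str) -> None:
        self.local_id = local_id
        self.associated = set()

    def send_sharing_instruction(self, second_id: str) -> None:
        # Establish the association relationship with a second terminal.
        self.associated.add(second_id)

    def may_operate_video_list(self, device_id: str) -> bool:
        # The local terminal and any associated second terminal may
        # operate the video list interface.
        return device_id == self.local_id or device_id in self.associated
```

Before the sharing instruction is sent, a second terminal cannot operate the video list interface; after it is sent, the association grants access.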
  • FIG. 25 exemplarily shows a schematic diagram of an interface for presenting an operation interface in a smart terminal according to some embodiments.
  • an operation interface is presented on the display of the second intelligent terminal, as shown in (a) of FIG. 25 .
  • the second user can perform personalized operations on the moving image material and the video collection file in the video list interface, for example, perform viewing, sharing or downloading operations on the moving image material and the video collection file.
• the operation steps of the second user operating the second intelligent terminal, and the pages presented on the corresponding display, are the same as those of the local user operating the local intelligent terminal. The only difference is that the second user cannot turn the video recording function of the display device on or off through the second intelligent terminal.
• if the local user has not enabled the video recording function of the display device, nothing is displayed in the video list interface.
• if the second user triggers the video recording start entry on the terminal user interface of the second smart terminal, a prompt will be displayed on the terminal user interface, as shown in (b) of FIG. 25: "This function can be viewed only if the administrator has enabled this function", where the administrator refers to the local user. That is, after the local user enables the video recording function of the display device based on the local intelligent terminal, the corresponding second user can perform personalized operations on the moving image materials and video collection files on the peer intelligent terminal.
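The permission model described above, in which only the administrator (local user) may toggle recording and a second user may only view, share, or download once the function is enabled, can be sketched as:

```python
def allowed_actions(is_admin: bool, recording_enabled: bool) -> set:
    """Illustrative sketch of the described permission model; the action
    names are assumptions, not the patent's terminology."""
    if is_admin:
        # The local user may always toggle recording and operate the list.
        return {"toggle_recording", "view", "share", "download"}
    if recording_enabled:
        # A shared second user may operate the list, but never toggle.
        return {"view", "share", "download"}
    # Otherwise the second user only sees the "administrator must enable
    # this function" prompt.
    return set()
```

The empty set for a second user before enablement corresponds to the prompt shown in (b) of FIG. 25.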
  • the display is configured to present a terminal user interface displaying a video recording start entry, and the video recording start entry is used to start the video recording function of the display device.
• the processor is connected to the controller in the display device, and the processor is configured to: receive a video recording function activating instruction generated when a specified application is triggered, and present a terminal user interface with a video recording start entry on the display; and receive the video recording start command generated by triggering the video recording start entry and send it to the controller, where the video recording start command is used to instruct the controller to call the camera in the display device to collect the moving image material of the moving target according to the preset collection frequency within the preset recording duration.
• the user enables the video recording function of the display device through the smart terminal, and the display device can collect the moving image material of the moving target and synthesize the video collection file without the user operating the display device in real time. The user can learn the movement of the moving target within the preset recording duration from the video collection file, which brings a good experience to the user.
  • the present application also provides an interactive method for generating a video highlight file, which is applied to a display device, and the method includes:
• S11A Receive a video recording start instruction generated when the video recording start entry is triggered, where the video recording start instruction is used to instruct the camera to collect moving image material of a moving target according to a preset collection frequency within a preset recording duration; the video recording start entry is used to start the function of recording moving images within the preset recording duration and generating a video collection file, and the moving target refers to the target whose moving images are to be recorded within the preset recording duration to generate the video collection file;
  • the present application also provides an interactive method for generating a video collection file, which is applied to an intelligent terminal, and the method includes:
• S22A In response to the video recording function activating instruction, present on the display a terminal user interface that displays a video recording start entry, where the video recording start entry is used to start the video recording function of recording moving images within a preset recording duration and generating a video collection file; the video recording function is configured in the controller of the display device;
• S23A Receive a video recording start command generated by triggering the video recording start entry, and send the video recording start command to a controller, where the video recording start command is used to instruct the controller to call the camera in the display device to collect the moving image material of the moving target according to the preset collection frequency within the preset recording duration, and to splice and synthesize a specified number of the moving image materials to generate a video collection file that records the moving target within the preset recording duration.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to the technical field of smart devices and video detection, and in particular to a video collection generation method and a display device. To a certain extent, it can solve the problems of being unable to automatically obtain a video clip, slow editing speed, error-prone grouping, and being unable to intelligently generate a video highlight. The display device comprises: a camera; a microphone; a display screen used to display a user interface; and a first controller configured to: in response to a received trigger signal, control the camera to obtain a video clip; perform image recognition on the video clips within a preset period of time to obtain a first video set, the first video set being a set of video clips containing the same element; and group the video clips in the first video set into a first video collection file.
PCT/CN2021/097699 2020-07-06 2021-06-01 Procédé de génération de collecte de vidéo et dispositif d'affichage WO2022007545A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010640381.5A CN111787379B (zh) 2020-07-06 2020-07-06 一种生成视频集锦文件的交互方法及显示设备、智能终端
CN202010640381.5 2020-07-06
CN202010710550.8 2020-07-22
CN202010710550.8A CN113973216A (zh) 2020-07-22 2020-07-22 一种视频集锦的生成方法及显示设备

Publications (1)

Publication Number Publication Date
WO2022007545A1 true WO2022007545A1 (fr) 2022-01-13

Family

ID=79552263

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/097699 WO2022007545A1 (fr) 2020-07-06 2021-06-01 Procédé de génération de collecte de vidéo et dispositif d'affichage

Country Status (1)

Country Link
WO (1) WO2022007545A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134659A (zh) * 2022-06-15 2022-09-30 阿里巴巴云计算(北京)有限公司 视频编辑和配置方法、装置、浏览器、电子设备和存储介质
CN115379125A (zh) * 2022-10-27 2022-11-22 北京德风新征程科技有限公司 交互信息发送方法、装置、服务器和介质
WO2023160241A1 (fr) * 2022-02-28 2023-08-31 荣耀终端有限公司 Procédé de traitement vidéo et dispositif associé

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090103887A1 (en) * 2007-10-22 2009-04-23 Samsung Electronics Co., Ltd. Video tagging method and video apparatus using the same
CN102427507A (zh) * 2011-09-30 2012-04-25 北京航空航天大学 一种基于事件模型的足球视频集锦自动合成方法
CN104038705A (zh) * 2014-05-30 2014-09-10 无锡天脉聚源传媒科技有限公司 视频制作方法和装置
CN108288475A (zh) * 2018-02-12 2018-07-17 成都睿码科技有限责任公司 一种基于深度学习的体育视频集锦剪辑方法
CN108521559A (zh) * 2018-04-16 2018-09-11 苏州竺星信息科技有限公司 一种运动视频集锦自动生成方法及系统
CN109121021A (zh) * 2018-09-28 2019-01-01 北京周同科技有限公司 一种视频集锦的生成方法、装置、电子设备及存储介质
CN111787379A (zh) * 2020-07-06 2020-10-16 海信视像科技股份有限公司 一种生成视频集锦文件的交互方法及显示设备、智能终端

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090103887A1 (en) * 2007-10-22 2009-04-23 Samsung Electronics Co., Ltd. Video tagging method and video apparatus using the same
CN102427507A (zh) * 2011-09-30 2012-04-25 北京航空航天大学 一种基于事件模型的足球视频集锦自动合成方法
CN104038705A (zh) * 2014-05-30 2014-09-10 无锡天脉聚源传媒科技有限公司 视频制作方法和装置
CN108288475A (zh) * 2018-02-12 2018-07-17 成都睿码科技有限责任公司 一种基于深度学习的体育视频集锦剪辑方法
CN108521559A (zh) * 2018-04-16 2018-09-11 苏州竺星信息科技有限公司 一种运动视频集锦自动生成方法及系统
CN109121021A (zh) * 2018-09-28 2019-01-01 北京周同科技有限公司 一种视频集锦的生成方法、装置、电子设备及存储介质
CN111787379A (zh) * 2020-07-06 2020-10-16 海信视像科技股份有限公司 一种生成视频集锦文件的交互方法及显示设备、智能终端

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160241A1 (fr) * 2022-02-28 2023-08-31 荣耀终端有限公司 Procédé de traitement vidéo et dispositif associé
CN115134659A (zh) * 2022-06-15 2022-09-30 阿里巴巴云计算(北京)有限公司 视频编辑和配置方法、装置、浏览器、电子设备和存储介质
CN115379125A (zh) * 2022-10-27 2022-11-22 北京德风新征程科技有限公司 交互信息发送方法、装置、服务器和介质
CN115379125B (zh) * 2022-10-27 2023-01-17 北京德风新征程科技有限公司 交互信息发送方法、装置、服务器和介质

Similar Documents

Publication Publication Date Title
WO2022007545A1 (fr) Procédé de génération de collecte de vidéo et dispositif d'affichage
US11038939B1 (en) Analyzing video, performing actions, sending to person mentioned
KR102290419B1 (ko) 디지털 컨텐츠의 시각적 내용 분석을 통해 포토 스토리를 생성하는 방법 및 장치
CN113475092B (zh) 一种视频处理方法和移动设备
CN111464844A (zh) 一种投屏显示方法及显示设备
CN111787379B (zh) 一种生成视频集锦文件的交互方法及显示设备、智能终端
CN107396177A (zh) 视频播放方法、装置及存储介质
JP2013502637A (ja) メタデータのタグ付けシステム、イメージ検索方法、デバイス及びそれに適用されるジェスチャーのタグ付け方法
CN112333509B (zh) 一种媒资推荐方法、推荐媒资的播放方法及显示设备
TWI747031B (zh) 視頻播放方法、裝置和多媒體資料播放方法
US20220223181A1 (en) Method for synthesizing videos and electronic device therefor
US20220272406A1 (en) Method for displaying interactive interface, method for generating interactive interface, and electronic device thereof
CN113938731A (zh) 一种录屏方法及显示设备
CN111836109A (zh) 显示设备、服务器及自动更新栏目框的方法
WO2023173850A1 (fr) Procédé de traitement vidéo, dispositif électronique et support lisible
CN111818378B (zh) 显示设备及人物识别展示的方法
CN114079812A (zh) 一种显示设备及摄像头的控制方法
US20230018502A1 (en) Display apparatus and method for person recognition and presentation
WO2021169168A1 (fr) Procédé de prévisualisation de fichier vidéo et dispositif d'affichage
WO2023160241A1 (fr) Procédé de traitement vidéo et dispositif associé
WO2022012271A1 (fr) Dispositif d'affichage et serveur
WO2023001152A1 (fr) Procédé de recommandation de clip vidéo, dispositif électronique et serveur
WO2022007568A1 (fr) Dispositif d'affichage, terminal intelligent et procédé de génération de moment-phare de vidéo
CN115250357A (zh) 终端设备、视频处理方法和电子设备
CN113938634A (zh) 一种多路视频通话处理方法及显示设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21837624

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/04/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21837624

Country of ref document: EP

Kind code of ref document: A1