CN116684665A - Method, device, terminal equipment and storage medium for editing highlight of doll machine - Google Patents


Info

Publication number
CN116684665A
CN116684665A
Authority
CN
China
Prior art keywords
video
prize
highlight
time
clipped
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310770780.7A
Other languages
Chinese (zh)
Other versions
CN116684665B (en)
Inventor
曾琪贺
曾嘉宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Xingyun Kaiwu Technology Co ltd
Original Assignee
Guangdong Xingyun Kaiwu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Xingyun Kaiwu Technology Co ltd filed Critical Guangdong Xingyun Kaiwu Technology Co ltd
Priority to CN202310770780.7A priority Critical patent/CN116684665B/en
Publication of CN116684665A publication Critical patent/CN116684665A/en
Application granted granted Critical
Publication of CN116684665B publication Critical patent/CN116684665B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234345Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8352Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a method, a device, a terminal device and a storage medium for editing highlight clips of a doll machine. The method comprises the following steps: acquiring the video shot by the doll machine during each start-up, obtaining a plurality of initial videos; taking each initial video that has corresponding prize-winning information as a video to be clipped, wherein the prize-winning information includes the prize-winning time; for each video to be clipped, marking the video frame at the moment corresponding to the prize-winning time, obtaining a marked video frame; and backtracking through the video to be clipped with the marked video frame as the reference, extracting the marked video frame and a number of video frames preceding it, obtaining the corresponding highlight video clip. By implementing the invention, the shot video can be automatically marked and traced back according to the prize-winning information of the doll machine, so that highlight video clips are obtained quickly, improving both the efficiency of clipping the doll machine's highlight video clips and the user's consumption experience.

Description

Method, device, terminal equipment and storage medium for editing highlight of doll machine
Technical Field
The invention relates to the field of doll machine prize-winning video editing, and in particular to a method, a device, terminal equipment and a storage medium for editing doll machine highlight clips.
Background
A traditional doll machine is a standalone device and generally has no networking function. With the development of technology, some doll machines can now be networked, so that all their start-up parameters can be obtained at any time and video can be shot in real time. Today most merchants shoot video through the doll machine and then manually cut out the clip of the moment a prize is won from the complete recorded video, in order to meet users' demand for the highlight video clip of the doll machine dispensing a prize. This process requires a great deal of time and effort, greatly reduces the merchant's video clipping efficiency, and degrades the user's consumption experience. How to improve the efficiency with which merchants clip doll machine highlight video clips is therefore a technical problem to be solved urgently.
Disclosure of Invention
The invention provides a method, a device, a terminal device and a storage medium for editing highlight clips of a doll machine, which can improve the efficiency with which a merchant clips the doll machine's highlight video clips and improve the user's consumption experience.
The invention provides a method for editing highlight clips of a doll machine, which comprises the following steps: acquiring the video shot by the doll machine during each start-up, obtaining a plurality of initial videos;
taking each initial video that has corresponding prize-winning information as a video to be clipped; wherein the prize-winning information includes: the prize-winning time;
for each video to be clipped, marking the video frame at the moment corresponding to the prize-winning time in the video to be clipped, obtaining a marked video frame;
and backtracking through the video to be clipped with the marked video frame as the reference, extracting the marked video frame and a number of video frames preceding it, obtaining the corresponding highlight video clip.
Further, the prize-winning information further includes: the number of prizes won and the prize type;
after obtaining the highlight video clip, the method further comprises:
generating a prize-winning information tag according to the prize type and the number of prizes won;
and adding the prize-winning information tag to the highlight video clip.
Further, the method further comprises the following steps:
acquiring the start-up time and shut-down time of the doll machine for each start-up;
determining the start-up period of the doll machine according to the start-up time and the shut-down time;
and generating a time-period tag according to the start-up period of the doll machine, and adding the time-period tag to the corresponding initial video.
Further, the method further comprises the following steps:
grouping the initial videos according to their time-period tags to generate a plurality of video groups; wherein the initial videos in each video group are ordered by the time of their corresponding doll machine start-up periods, and the interval between the start-up periods corresponding to adjacent initial videos is within a preset time interval;
deleting, in each video group, the initial videos that are not videos to be clipped;
adding the same user identifier to each remaining video to be clipped in the same video group;
and merging the highlight clips corresponding to the videos to be clipped that have the same user identifier.
On the basis of the method embodiments above, the invention correspondingly provides device embodiments.
the invention provides a device for editing a highlight of a baby machine, which comprises: the system comprises a video acquisition module, a video module to be clipped, a video frame marking module and a video backtracking module;
the video acquisition module is used for acquiring videos shot by the baby machine every time when the baby machine is started, and obtaining a plurality of initial videos;
the video module to be clipped is used for taking the initial video with the corresponding prize-winning information as the video to be clipped; wherein the prize information includes: the time of the prize;
the video frame marking module is used for marking the video frames at the corresponding moments in the video to be clipped according to the prize-winning time for each video to be clipped to obtain marked video frames;
the video backtracking module is used for backtracking the video to be clipped by taking the marked video frames as the reference, and extracting a plurality of video frames before the marked video frames and the marked video frames to obtain the corresponding highlight video clips.
Further, the prize-winning information further includes: the number of prizes won and the prize type;
after obtaining the highlight video clip, the device further performs:
generating a prize-winning information tag according to the prize type and the number of prizes won;
and adding the prize-winning information tag to the highlight video clip.
Further, the device further comprises: a start-up period module and a tag adding module;
the start-up period module is used for acquiring the start-up time and shut-down time of the doll machine for each start-up, and determining the start-up period of the doll machine according to the start-up time and the shut-down time;
the tag adding module is used for generating a time-period tag according to the start-up period of the doll machine and adding the time-period tag to the corresponding initial video.
Further, the device further comprises: a video ordering module, a video screening module, an identifier adding module and a video merging module;
the video ordering module is used for grouping the initial videos according to their time-period tags to generate a plurality of video groups; wherein the initial videos in each video group are ordered by the time of their corresponding doll machine start-up periods, and the interval between the start-up periods corresponding to adjacent initial videos is within a preset time interval;
the video screening module is used for deleting, in each video group, the initial videos that are not videos to be clipped;
the identifier adding module is used for adding the same user identifier to each remaining video to be clipped in the same video group;
and the video merging module is used for merging the highlight clips corresponding to the videos to be clipped that have the same user identifier.
On the basis of the method embodiments, the invention correspondingly provides a terminal equipment embodiment.
The invention provides a terminal device comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the method for editing doll machine highlight clips according to any one of the above.
On the basis of the method embodiments, the invention correspondingly provides a storage medium embodiment.
The invention provides a storage medium comprising a stored computer program, wherein, when running, the computer program controls the terminal device on which the storage medium is located to execute the method for editing doll machine highlight clips described above.
The embodiment of the invention has the following beneficial effects:
the invention provides a method for editing a highlight of a baby machine; after obtaining the prize-giving information and the shot video when the baby machine is started, determining whether the shot video has a highlight according to the prize-giving information of the baby machine, marking the corresponding position of the shot video according to the prize-giving time when the shot video has the highlight, and then backtracking according to the mark to obtain the video fragment in a first preset time period before the prize is taken, namely obtaining the highlight video fragment; when the video is not going out, video backtracking is not needed. By implementing the invention, the shot video can be automatically marked and traced back according to the prize-giving information of the baby machine, so that the highlight video clips can be quickly obtained, and the clipping efficiency of the highlight video clips of the baby machine and the consumption experience of users are improved.
Drawings
Fig. 1 is a schematic flow chart of a method for editing a highlight of a doll machine according to an embodiment of the present invention;
FIG. 2 is a flowchart of a video information management method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for editing a highlight of a doll machine according to an embodiment of the present invention.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort fall within the scope of the invention.
As shown in fig. 1, an embodiment provides a method for editing doll machine highlight clips, comprising:
step S101, acquiring the video shot by the doll machine during each start-up, obtaining a plurality of initial videos;
step S102, taking each initial video that has corresponding prize-winning information as a video to be clipped; wherein the prize-winning information includes: the prize-winning time;
step S103, for each video to be clipped, marking the video frame at the moment corresponding to the prize-winning time in the video to be clipped, obtaining a marked video frame;
and step S104, backtracking through the video to be clipped with the marked video frame as the reference, extracting the marked video frame and a number of video frames preceding it, obtaining the corresponding highlight video clip.
For step S101, during each start-up of the doll machine, the camera mounted on the doll machine simultaneously shoots video and caches the shot video on a memory card in the doll machine.
For step S102, if a user grabs a prize on the doll machine, the prize-winning information is stored together with the video after shooting ends, with a one-to-one correspondence between them, and the initial video with corresponding prize-winning information is taken as the video to be clipped.
For step S103, for each video to be clipped on the memory card, the video frame at the corresponding moment in the video to be clipped is marked according to the prize-winning time in the corresponding prize-winning information, obtaining a marked video frame; the marking method is not specifically limited.
In a preferred embodiment, the prize-winning information further includes: the number of prizes won and the prize type;
after obtaining the highlight video clip, the method further comprises: generating a prize-winning information tag according to the prize type and the number of prizes won; and adding the prize-winning information tag to the highlight video clip.
Specifically, the prize-winning information further comprises the number of prizes won and the prize type. The number of prizes won refers to the number of prizes the user grabs during each use of the doll machine, and the prize type is the category of the prize grabbed. Each prize type in the doll machine corresponds to a number, and when a prize in the doll machine drops into the pick-up port, the number and type of prizes won are read through a sensor preset in the pick-up port. The sensor may be an infrared sensor; the type and number of sensors are not specifically limited.
After the corresponding highlight video clip is obtained, a prize-winning information tag is generated according to the prize type and the number of prizes won corresponding to the original video to be clipped, and is added as the tag of that highlight video clip.
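As a minimal sketch of the tagging step described above (the tag format and metadata field names are illustrative assumptions, since the patent does not fix a data model):

```python
# Hypothetical sketch: build a prize-winning information tag from the
# prize type and the number of prizes won, then attach it to a clip's
# metadata. The "type x count" format and dict layout are assumptions.

def make_prize_tag(prize_type: str, prize_count: int) -> str:
    """Combine prize type and count into a single tag string."""
    return f"{prize_type}x{prize_count}"

def attach_tag(clip_meta: dict, tag: str) -> dict:
    """Append a tag to the clip's metadata, creating the list if needed."""
    clip_meta.setdefault("tags", []).append(tag)
    return clip_meta
```

A later retrieval step could then filter clips on these tags.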
For step S104, the marked video frame, i.e. the moment in the video at which the pick-up port of the doll machine detects the prize, is taken as the reference. The shot video only needs to be traced back from the marked video frame, extracting the video frames that belong to the highlight and deleting those that do not, to obtain the corresponding highlight clip.
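The marking and backtracking of steps S103 and S104 can be sketched as follows (the frame-list representation, frame rate and backtrack window are illustrative assumptions):

```python
# Hypothetical sketch of steps S103-S104: map the prize-winning time to a
# frame index (the marked frame), then backtrack a fixed window of frames
# before it to form the highlight clip. All parameter names are assumptions.

def mark_frame_index(prize_time_s: float, fps: float) -> int:
    """Map the prize-winning timestamp to a frame index in the video."""
    return int(prize_time_s * fps)

def extract_highlight(frames: list, prize_time_s: float, fps: float,
                      backtrack_s: float = 15.0) -> list:
    """Return the marked frame plus the frames in the window before it,
    discarding everything outside the highlight."""
    marked = mark_frame_index(prize_time_s, fps)
    start = max(0, marked - int(backtrack_s * fps))
    return frames[start:marked + 1]
```

The window length corresponds to the "first preset time period before the prize" mentioned in the summary.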
In an alternative embodiment, backtracking of the video may be triggered by a scheduled task: it may be performed after the doll machine has been idle for a certain period, when the local cache is full, or when it is determined that the user has changed.
In an alternative embodiment, for a doll machine that requires scanning a code to place an order, whether operations belong to the same user can be determined according to the user information provided when the user orders and pays.
In a preferred embodiment, the start-up time and shut-down time of each start-up of the doll machine are obtained;
the start-up period of the doll machine is determined according to the start-up time and the shut-down time;
and a time-period tag is generated according to the start-up period of the doll machine and added to the corresponding initial video.
Specifically, the start-up time and corresponding shut-down time of each start-up of the doll machine are recorded to determine each start-up period; a corresponding time-period tag is then generated for each start-up period and added to the corresponding initial video.
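A small sketch of this time-period tag (the label format is an assumption; the patent only requires that the tag identify the start-up period):

```python
# Hypothetical sketch: derive a time-period tag for an initial video from
# the power-on and power-off timestamps of one doll machine start-up.
from datetime import datetime

def period_label(start: datetime, stop: datetime) -> str:
    """Format a start-up period as a single tag string (format assumed)."""
    fmt = "%Y-%m-%d %H:%M:%S"
    return f"{start.strftime(fmt)}~{stop.strftime(fmt)}"
```

The tag is attached to the initial video's metadata and later used for grouping.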
In a preferred embodiment, the initial videos are grouped according to their time-period tags to generate a plurality of video groups; wherein the initial videos in each video group are ordered by the time of their corresponding doll machine start-up periods, and the interval between the start-up periods corresponding to adjacent initial videos is within a preset time interval;
in each video group, the initial videos that are not videos to be clipped are deleted;
the same user identifier is added to each remaining video to be clipped in the same video group;
and the highlight clips corresponding to the videos to be clipped that have the same user identifier are merged.
Specifically, whether two highlight clips belong to the same player is determined by whether the interval between the two start-ups is smaller than a certain duration; for example, if the interval is smaller than 30 seconds, the two highlight clips are judged to come from the same player's operation. The specific interval may be set according to the actual situation and is not specifically limited.
After the plurality of original videos shot during start-ups of the doll machine are acquired, the videos are grouped according to their time-period tags, the videos within each group are ordered by shooting time, and the interval between the start-up periods corresponding to adjacent videos is within a preset time interval; the preset time interval includes, but is not limited to, 30 seconds.
After grouping, each video to be clipped in each video group is traced back, and only the video segments related to the prize are kept. The order of the highlight video clips is kept unchanged, the clips are merged to obtain the prize-winning video of the same user, and the prize-winning video is actively uploaded to a server. When uploading, the device number, device start-up time, order number and the like corresponding to each clip in each group of videos are also included, and the detection information of the doll machine, including the prize-winning time, number of prizes won and prize type from the prize-winning information, is reported at the same time; this is not specifically limited. It is understood that the server here refers to a combination of several back-end computers, which may include OSS servers and other business servers. Referring to fig. 2, OSS is mainly used to store and manage video data.
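The grouping-and-merging flow above can be sketched as follows (the video record layout, the use of epoch seconds, and the 30-second threshold as a default are assumptions consistent with the example in the text):

```python
# Hedged sketch: group videos whose start-up periods are close together
# (same player), drop those without a prize, and merge the remaining
# highlight clips in order. Field names are illustrative assumptions.

def group_by_interval(videos, max_gap_s=30):
    """videos: dicts with 'start'/'stop' start-up period bounds (epoch s).
    Consecutive videos whose gap is within max_gap_s share a group."""
    groups = []
    for v in sorted(videos, key=lambda v: v["start"]):
        if groups and v["start"] - groups[-1][-1]["stop"] <= max_gap_s:
            groups[-1].append(v)
        else:
            groups.append([v])
    return groups

def merge_group(group, user_id):
    """Keep only prize-winning videos, tag them with one user identifier,
    and concatenate their highlight clips in shooting order."""
    clips = [v["highlight"] for v in group if v.get("won_prize")]
    return {"user_id": user_id, "video": sum(clips, [])}
```

A real implementation would concatenate encoded video segments rather than frame lists, but the grouping logic is the same.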
In another alternative embodiment, the doll machine may upload each highlight video clip to the server, and the server then adds tags to the video clips according to the circumstances; through the device number and start-up time, or the order number, the server can find the corresponding player and merchant information, the commodity information reported by the merchant, the device type and the like. The tags include user tags, merchant tags, prize-winning clips, multiple prizes in one clip, and categories; consecutive clips belonging to the same user may be stitched into one clip.
Illustratively, tagging here means attaching the corresponding tags according to the information of the video. For example, when a video clip is reported, it carries information about whether a prize was won, and the video can then be tagged as a highlight. Some videos can also undergo secondary analysis in the server; for example, the prize type corresponding to the video (doll, toy, blind box, etc.) can be identified through image recognition. The prizes recorded by the merchant can also be looked up in the server according to the device that produced the video, and those prizes may in turn correspond to tags of specific cartoon characters. For example, if the prize inside the doll machine is a Mickey Mouse, the video is given a Mickey Mouse tag. The corresponding merchant can likewise be found through the device number of the video, and the merchant's tags attached. The final purpose of the tags is to facilitate retrieval and classification. It should be noted that the doll machine highlight video is judged to be the doll machine's prize-winning video clip.
In an alternative embodiment, the server may splice the acquired highlight videos or highlight video clips and deliver the spliced video to a live broadcast or video platform.
Specifically, delivery can be performed according to a configuration strategy, in which videos with a preset tag are screened out, spliced into a video stream in acquisition order, and pushed to a live channel for playing. Of course, some of the highlight clips may also be delivered to each merchant's own short-video accounts. In the configuration strategy, an administrator can select videos of specific merchants or specific device types to deliver, and can also select specific tags, such as videos tagged with multiple prizes won at once. The video stream is encoded according to the requirements of the third-party platform.
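The screening step of this configuration strategy can be sketched as follows (the clip record layout and the `acquired_at` ordering key are illustrative assumptions; actual splicing and stream encoding are out of scope):

```python
# Hypothetical sketch: filter clips by a preset tag (e.g. a merchant tag
# or a multiple-prizes tag) and keep acquisition order, producing the
# splice list that would be encoded and pushed to the live channel.

def select_for_stream(clips, required_tag):
    """Return the clips carrying required_tag, sorted by acquisition time."""
    chosen = [c for c in clips if required_tag in c.get("tags", [])]
    return sorted(chosen, key=lambda c: c["acquired_at"])
```

The returned list is the order in which clips would be concatenated into the output stream.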
In an alternative embodiment, after the server obtains a video clip pointing to a certain user ID, it pushes a notification of the highlight to the user; the pushing method may be an SMS push or a WeChat official account push, and is not specifically limited.
If the user follows the link and chooses to obtain the video, the server sends the spliced prize-winning video to the user terminal, and the prize-winning video can then be shared as a link in social software.
If the user chooses cloud storage, the server stores the corresponding prize-winning video in a preset area.
In an alternative embodiment, on the start page of the doll machine, the user may obtain the current video after each start by actively clicking a button. This operation is self-triggered by the user, that is, the user actively clicks to obtain the relevant video clip; after the user obtains the clip, it is also uploaded to the server at the same time.
Of course, if the local cache is small, there are two possible processing modes. In the first, the corresponding local cache is deleted after the highlight video clip has been uploaded to the server; because the server distinguishes whether clips belong to the same user based on information such as the order, even if one clip of a highlight sequence is uploaded early for the user to obtain, the server can still determine the order of the clips when the remaining clips are reported, and thus splice the complete sequence. In the second, after a video clip is uploaded, the server judges from the device number and the start time whether it is identical to an already-received clip, and rejects it if so; with this scheme the terminal device only needs to clear its cache periodically or in FIFO order.
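The second mode, rejecting a re-uploaded clip whose device number and start time match one already received, can be sketched as below. The class and field names are hypothetical.

```python
class UploadDeduplicator:
    """Reject a clip already uploaded for the same device and start time,
    so the terminal can simply clear its cache periodically (or FIFO)
    without coordinating deletions with the server."""

    def __init__(self) -> None:
        self._seen: set[tuple[str, int]] = set()

    def accept(self, device_id: str, start_time: int) -> bool:
        key = (device_id, start_time)
        if key in self._seen:
            return False          # identical clip: do not receive it again
        self._seen.add(key)
        return True

dedup = UploadDeduplicator()
print(dedup.accept("machine-001", 1700000000))  # first upload  -> True
print(dedup.accept("machine-001", 1700000000))  # duplicate     -> False
```

Keying on (device number, start time) is enough here because a given machine can only record one session starting at a given moment.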
In an alternative embodiment, as shown in FIG. 2, an OSS server is used for the convenience of managing the information related to prize-winning videos (OSS here refers to Alibaba Cloud's Object Storage Service).
The IOT service obtains temporary access credentials from the STS service on the server side (Alibaba Cloud STS, the Security Token Service, is a temporary access-permission management service provided by Alibaba Cloud). The temporary credentials are then sent to the communication box, while the IOT service also caches them. The communication box constructs a temporarily valid OSS API request using the STS credentials and uploads the video file. After the upload succeeds, the box reports the URL of the OSS file to the IOT service, which can then obtain the video from that OSS address.
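The credential flow above can be sketched roughly as follows. Note that `issue_sts_credentials` and `upload_highlight` are stand-in functions for the cloud provider's real STS and OSS SDK calls, and every value (keys, URL, TTL) is a placeholder, not an actual API.

```python
import time

def issue_sts_credentials(ttl_seconds: int = 900) -> dict:
    """The IOT service asks the STS service for short-lived credentials."""
    return {"access_key": "TEMP_AK", "secret": "TEMP_SK",
            "token": "TEMP_TOKEN", "expires_at": time.time() + ttl_seconds}

def upload_highlight(video_bytes: bytes, creds: dict) -> str:
    """The communication box builds a temporarily valid OSS request with
    the STS credentials, uploads the file, and returns the object URL."""
    if time.time() >= creds["expires_at"]:
        raise PermissionError("STS credentials expired; refresh via the IOT service")
    # ... a real implementation would sign and send a PUT request here ...
    return "https://bucket.example-oss.com/highlights/clip.mp4"

creds = issue_sts_credentials()            # IOT service caches these credentials
url = upload_highlight(b"\x00fake-video", creds)
print(url)                                 # the box reports this URL back to the IOT service
```

The point of the indirection is that the box never holds long-lived cloud keys: it only receives credentials that expire on their own, and the IOT service only ever learns the resulting object URL.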
On the basis of the above method embodiments, the invention correspondingly provides device embodiments.
As shown in FIG. 3, an embodiment of the present invention provides a device for editing a highlight of a doll machine, comprising: a video acquisition module, a video-to-be-clipped module, a video frame marking module and a video backtracking module;
the video acquisition module is used for acquiring the video shot by the doll machine each time the doll machine is started, obtaining a plurality of initial videos;
the video-to-be-clipped module is used for taking each initial video that has corresponding prize-winning information as a video to be clipped; wherein the prize-winning information includes: the prize-winning time;
the video frame marking module is used for marking, for each video to be clipped, the video frame at the moment corresponding to the prize-winning time, obtaining a marked video frame;
and the video backtracking module is used for backtracking through the video to be clipped with the marked video frame as the reference, and extracting the marked video frame together with a number of video frames preceding it, obtaining the corresponding highlight video clip.
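The marking-and-backtracking step performed by these modules can be illustrated with a small sketch. The frame rate, the lookback window and the data layout are assumptions for illustration, not values from the patent.

```python
def extract_highlight(frames: list, prize_time: float,
                      fps: int = 25, lookback_seconds: int = 10) -> list:
    """Mark the frame at the prize-winning moment, then trace back and keep
    that frame plus the frames preceding it (the grab and drop of the prize)."""
    marked_index = min(int(prize_time * fps), len(frames) - 1)  # frame at prize time
    start = max(0, marked_index - lookback_seconds * fps)       # backtracking window
    return frames[start:marked_index + 1]                       # includes the marked frame

frames = list(range(1000))                 # stand-in for decoded video frames
clip = extract_highlight(frames, prize_time=30.0)
print(len(clip), clip[0], clip[-1])        # 251 frames: indices 500..750
```

Clamping `marked_index` and `start` keeps the extraction safe when the prize-winning time falls near the beginning or end of the recording.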
In a preferred embodiment, the prize-winning information further includes: the number of prizes and the type of prize;
after obtaining the highlight video clip, the device is further configured for:
generating a prize-winning information tag according to the prize type and the prize number;
and adding the prize-winning information tag to the highlight video clip.
In a preferred embodiment, the device further comprises: a starting period module and a label adding module;
the starting period module is used for acquiring the starting time and the closing time of each start of the doll machine, and determining the starting period of the doll machine according to the starting time and the closing time;
and the label adding module is used for generating a time period label according to the starting period of the doll machine and adding the time period label to the corresponding initial video.
In a preferred embodiment, the device further comprises: a video ordering module, a video screening module, an identification adding module and a video merging module;
the video ordering module is used for grouping the initial videos according to their time period labels to generate a plurality of video groups; wherein the initial videos in each video group are sorted by the time of their corresponding doll-machine starting periods, and the starting periods corresponding to adjacent initial videos are within a preset time interval of each other;
the video screening module is used for deleting, from each video group, the initial videos that do not belong to the videos to be clipped;
the identification adding module is used for adding the same user identification to each video to be clipped remaining in the same video group;
and the video merging module is used for merging the highlight clips corresponding to the videos to be clipped that have the same user identification.
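The grouping behaviour of the video ordering module, i.e. runs of clips whose start times fall within a preset interval of each other, might be sketched as follows. The 120-second gap and the field names are assumptions.

```python
def group_by_session(clips: list[dict], max_gap: int = 120) -> list[list[dict]]:
    """Sort clips by start time and split them into groups wherever the gap
    between adjacent start times exceeds max_gap; each group is treated as
    one user's session and can be given a single user identification."""
    clips = sorted(clips, key=lambda c: c["start"])
    groups: list[list[dict]] = []
    current: list[dict] = []
    for clip in clips:
        if current and clip["start"] - current[-1]["start"] > max_gap:
            groups.append(current)   # gap too large: close the current session
            current = []
        current.append(clip)
    if current:
        groups.append(current)
    return groups

clips = [{"id": "a", "start": 0}, {"id": "b", "start": 60}, {"id": "c", "start": 400}]
print([[c["id"] for c in g] for g in group_by_session(clips)])  # [['a', 'b'], ['c']]
```

Here clips a and b land in one group (60 s apart), while c starts a new group (340 s after b); the highlight clips of each group can then be merged into one reel per user.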
It should be noted that the above-described apparatus embodiments are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the device embodiments provided by the invention, the connection relations between modules indicate that they have communication connections, which may be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art can understand and implement the invention without undue burden.
It will be clearly understood by those skilled in the art that, for convenience and brevity, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
On the basis of the above method embodiments, the invention correspondingly provides a terminal device embodiment.
Another embodiment of the present invention provides a terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor; when the processor executes the computer program, the method for editing the highlight of the doll machine according to any embodiment of the invention is implemented.
Illustratively, in this embodiment the computer program may be partitioned into one or more modules, which are stored in the memory and executed by the processor to carry out the invention. The one or more modules may be a series of computer program instruction segments capable of performing a specific function, the instruction segments describing the execution of the computer program in the device;
the terminal equipment can be computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The device may include, but is not limited to, a processor, a memory;
the processor may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, which is a control center of the device, and which connects various parts of the entire device using various interfaces and lines;
the memory may be used to store the computer program and/or modules, and the processor may implement various functions of the device by running or executing the computer program and/or modules stored in the memory, and invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; in addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, memory, plug-in hard disk, smart Media Card (SMC), secure Digital (SD) Card, flash Card (Flash Card), at least one disk storage device, flash memory device, or other volatile solid-state storage device.
On the basis of the above method embodiments, the invention correspondingly provides a storage medium embodiment.
Another embodiment of the present invention provides a storage medium, where the storage medium includes a stored computer program, where the computer program, when executed, controls a device in which the storage medium is located to perform the method for editing the highlight clips of the doll machine according to any one of the embodiments of the present invention.
In this embodiment, the storage medium is a computer-readable storage medium, and the computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, and so on. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
By implementing the embodiments of the invention, both the efficiency with which merchants edit highlight video clips of the doll machine and the consumption experience of users can be improved.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, and such changes and modifications are also deemed to fall within the scope of the invention as set forth in the claims.

Claims (10)

1. A method of editing a highlight of a doll machine, comprising:
acquiring the video shot by a doll machine each time the doll machine is started, obtaining a plurality of initial videos;
taking each initial video that has corresponding prize-winning information as a video to be clipped; wherein the prize-winning information includes: the prize-winning time;
for each video to be clipped, marking the video frame at the moment corresponding to the prize-winning time in the video to be clipped, obtaining a marked video frame;
and backtracking through the video to be clipped with the marked video frame as the reference, and extracting the marked video frame together with a number of video frames preceding it, obtaining the corresponding highlight video clip.
2. The method of editing a highlight of a doll machine according to claim 1, wherein the prize-winning information further comprises: the number of prizes and the type of prize;
after obtaining the highlight video clip, the method further comprises:
generating a prize-winning information tag according to the prize type and the prize number;
and adding the prize-winning information tag to the highlight video clip.
3. The method of editing a highlight of a doll machine according to claim 2, further comprising:
acquiring the starting time and the closing time of each start of the doll machine;
determining the starting period of the doll machine according to the starting time and the closing time;
and generating a time period label according to the starting period of the doll machine, and adding the time period label to the corresponding initial video.
4. The method of editing a highlight of a doll machine according to claim 3, further comprising:
grouping the initial videos according to their time period labels to generate a plurality of video groups; wherein the initial videos in each video group are sorted by the time of their corresponding doll-machine starting periods, and the starting periods corresponding to adjacent initial videos are within a preset time interval of each other;
deleting, from each video group, the initial videos that do not belong to the videos to be clipped;
adding the same user identification to each video to be clipped remaining in the same video group;
and merging the highlight clips corresponding to the videos to be clipped that have the same user identification.
5. A device for editing a highlight of a doll machine, comprising: a video acquisition module, a video-to-be-clipped module, a video frame marking module and a video backtracking module;
the video acquisition module is used for acquiring the video shot by the doll machine each time the doll machine is started, obtaining a plurality of initial videos;
the video-to-be-clipped module is used for taking each initial video that has corresponding prize-winning information as a video to be clipped; wherein the prize-winning information includes: the prize-winning time;
the video frame marking module is used for marking, for each video to be clipped, the video frame at the moment corresponding to the prize-winning time, obtaining a marked video frame;
and the video backtracking module is used for backtracking through the video to be clipped with the marked video frame as the reference, and extracting the marked video frame together with a number of video frames preceding it, obtaining the corresponding highlight video clip.
6. The device for editing a highlight of a doll machine according to claim 5, wherein the prize-winning information further comprises: the number of prizes and the type of prize;
after obtaining the highlight video clip, the device is further configured for:
generating a prize-winning information tag according to the prize type and the prize number;
and adding the prize-winning information tag to the highlight video clip.
7. The device for editing a highlight of a doll machine according to claim 6, further comprising: a starting period module and a label adding module;
the starting period module is used for acquiring the starting time and the closing time of each start of the doll machine, and determining the starting period of the doll machine according to the starting time and the closing time;
and the label adding module is used for generating a time period label according to the starting period of the doll machine and adding the time period label to the corresponding initial video.
8. The device for editing a highlight of a doll machine according to claim 7, further comprising: a video ordering module, a video screening module, an identification adding module and a video merging module;
the video ordering module is used for grouping the initial videos according to their time period labels to generate a plurality of video groups; wherein the initial videos in each video group are sorted by the time of their corresponding doll-machine starting periods, and the starting periods corresponding to adjacent initial videos are within a preset time interval of each other;
the video screening module is used for deleting, from each video group, the initial videos that do not belong to the videos to be clipped;
the identification adding module is used for adding the same user identification to each video to be clipped remaining in the same video group;
and the video merging module is used for merging the highlight clips corresponding to the videos to be clipped that have the same user identification.
9. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the method of editing a highlight of a doll machine according to any one of claims 1 to 4 when executing the computer program.
10. A storage medium comprising a stored computer program, wherein the computer program, when run, controls a terminal device in which the storage medium resides to perform the method of editing a highlight of a doll machine as claimed in any one of claims 1 to 4.
CN202310770780.7A 2023-06-27 2023-06-27 Method, device, terminal equipment and storage medium for editing highlight of doll machine Active CN116684665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310770780.7A CN116684665B (en) 2023-06-27 2023-06-27 Method, device, terminal equipment and storage medium for editing highlight of doll machine

Publications (2)

Publication Number Publication Date
CN116684665A true CN116684665A (en) 2023-09-01
CN116684665B CN116684665B (en) 2024-03-12

Family

ID=87789077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310770780.7A Active CN116684665B (en) 2023-06-27 2023-06-27 Method, device, terminal equipment and storage medium for editing highlight of doll machine

Country Status (1)

Country Link
CN (1) CN116684665B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105009599A (en) * 2012-12-31 2015-10-28 谷歌公司 Automatic identification of a notable moment
US20170169108A1 (en) * 2015-12-10 2017-06-15 Le Holdings (Beijing) Co., Ltd. Bright spot prompting method and device based on search key
US20180301169A1 (en) * 2015-02-24 2018-10-18 Plaay, Llc System and method for generating a highlight reel of a sporting event
CN111182328A (en) * 2020-02-12 2020-05-19 北京达佳互联信息技术有限公司 Video editing method, device, server, terminal and storage medium
CN111311814A (en) * 2020-02-19 2020-06-19 北京顶喜乐科技有限公司 Online ball grabbing machine based on RFID
CN114205534A (en) * 2020-09-02 2022-03-18 华为技术有限公司 Video editing method and device
US20230079785A1 (en) * 2021-03-25 2023-03-16 Tencent Technology (Shenzhen) Company Limited Video clipping method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN116684665B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN106484858B (en) hot content pushing method and device
US20200183977A1 (en) Providing relevant cover frame in response to a video search query
US10951928B2 (en) Execution of cases based on barcodes in video feeds
US20220172476A1 (en) Video similarity detection method, apparatus, and device
US10929460B2 (en) Method and apparatus for storing resource and electronic device
CN107797717B (en) Push method, display method, client device and data processing system
CN108924109B (en) Data transmission method and device and processing equipment
CN112131224B (en) Application installation package processing and installation source determining method, device and traceability system
CN110719332A (en) Data transmission method, device, system, computer equipment and storage medium
CN108966316B (en) Method, device and equipment for displaying multimedia resources and predicting connection waiting duration
CN114629929B (en) Log recording method, device and system
CN115086752B (en) Recording method, system and storage medium for browser page content
CN109710827B (en) Picture attribute management method and device, picture server and business processing terminal
CN113515997A (en) Video data processing method and device and readable storage medium
CN107295075B (en) Cross-terminal application state migration method based on session maintenance
US20210279372A1 (en) Fabric detecting and recording method and apparatus
CN116684665B (en) Method, device, terminal equipment and storage medium for editing highlight of doll machine
CN103731634A (en) Media monitoring method and media monitoring system
US20210241019A1 (en) Machine learning photographic metadata
CN110750749A (en) Community maintenance method, electronic device and computer-readable storage medium
CN116193174A (en) Media resource processing method and system
CN116132625A (en) Supervision method and device for transaction flow
CN111510746A (en) Media resource delivery method and device, storage medium and electronic device
CN114328392A (en) Advertising media material management system, method, equipment and medium
CN113190750A (en) Anonymous content matching and pushing method based on friend circle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant