CN113821653A - Video management method and device and cloud server - Google Patents

Video management method and device and cloud server

Info

Publication number
CN113821653A
Authority
CN
China
Prior art keywords
video
video image
target
cover
determined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011313485.1A
Other languages
Chinese (zh)
Inventor
吴启琦
应晓磊
吴建元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Zhongtuo Internet Information Technology Co Ltd
Original Assignee
Suzhou Zhongtuo Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhongtuo Internet Information Technology Co Ltd
Priority to CN202011313485.1A
Publication of CN113821653A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/483: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/43: Querying
    • G06F16/438: Presentation of query results
    • G06F16/44: Browsing; Visualisation therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiment of the application discloses a video management method, a video management device and a cloud server. The method comprises the following steps: receiving an interaction instruction from a client; determining at least one target video in a video database according to video key features; determining the target video cover poster corresponding to the video key features as the dynamic video cover poster of the target video; and generating an interaction response according to the dynamic video cover poster, the interaction information result and the play address of each target video. The method provides different video cover posters for the same video according to different video key features, and dynamically uses the video cover poster corresponding to the received video key features as the dynamic video cover poster of the target video, so that the video cover poster of each recommendation result contains an element corresponding to the video key features. The video cover poster display accuracy of the video recommendation technology is thereby improved, the operation threshold for the user is lowered, and user stickiness is improved.

Description

Video management method and device and cloud server
Technical Field
The disclosure relates to the technical field of video management, and in particular to a video management method, a video management device and a cloud server.
Background
With the development of internet applications, many applications can recommend content according to user requirements, for example the recommendation of short videos.
In practical applications, a short video often contains a plurality of different elements (corresponding to video key features), for example different themes, different uploaders (UP owners), and the like. The video cover poster specified by the uploader, or the default video cover poster chosen by the server, usually cannot contain all of these different elements at the same time. As a result, when the recommendation server feeds back a short video to the user according to a video key feature input by the user, the video cover poster corresponding to that short video may not contain the element corresponding to the video key feature. When the user then wants to obtain related similar videos, the user has to search manually, follow the uploader and favorite the video to achieve this purpose, which requires excessive operations from the user and is not conducive to enhancing user stickiness.
Disclosure of Invention
In order to overcome at least the above disadvantages in the prior art, the present disclosure is directed to a video management method, an apparatus and a cloud server.
In a first aspect, the present disclosure provides a video management method, including:
receiving an interaction instruction from a client, wherein the interaction instruction carries video key features;
determining at least one target video in a video database according to the video key features, wherein the matching degree of the video image features of the target video and the video key features meets interaction conditions;
when the target video comprises a plurality of video cover posters corresponding to different video image features to be determined, determining the target video cover poster corresponding to the video key features as a dynamic video cover poster of the target video;
generating an interactive response according to the dynamic video cover posters, the interactive information results and the playing addresses of the target videos;
and sending the interactive response to the client, so that the client displays the dynamic video cover posters and the interactive information results of all the target videos, and after a target video is selected, likes and favorites the target video based on the playing address and automatically follows the uploading user of the target video.
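The five operations of the first aspect amount to a request/response pipeline on the cloud server (they reappear as steps S110-S150 in the detailed description below). The following minimal Python sketch shows one possible way to organize that pipeline; the VideoRecord structure, the match_degree function, the 0.8 threshold and all other names are illustrative assumptions rather than elements defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class VideoRecord:
    """Assumed shape of one entry in the video database."""
    video_id: str
    play_url: str
    uploader_id: str
    interaction_info: dict                        # e.g. like/favorite counts (assumed)
    image_features: Set[str] = field(default_factory=set)
    cover_posters: Dict[str, str] = field(default_factory=dict)  # feature -> poster URL
    default_cover: str = ""

def match_degree(video_features: Set[str], key_feature: str) -> float:
    # Placeholder: the disclosure only requires that the matching degree between the
    # video image features and the video key feature satisfies an interaction condition.
    return 1.0 if key_feature in video_features else 0.0

def handle_interaction(key_feature: str, video_db: List[VideoRecord],
                       threshold: float = 0.8) -> dict:
    """Select target videos, pick each one's dynamic cover poster for this key
    feature, and build the interaction response sent back to the client."""
    results = []
    for video in video_db:
        if match_degree(video.image_features, key_feature) < threshold:
            continue                              # interaction condition not met
        # Prefer the cover poster tied to the requested key feature, if one exists.
        cover = video.cover_posters.get(key_feature, video.default_cover)
        results.append({
            "video_id": video.video_id,
            "dynamic_cover_poster": cover,
            "interaction_info": video.interaction_info,
            "play_url": video.play_url,           # lets the client like/favorite the video
            "uploader_id": video.uploader_id,     # lets the client follow the uploader
        })
    return {"results": results}                   # the interaction response
```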
In a second aspect, the present disclosure provides a video recommendation apparatus, including:
the receiving module is used for receiving an interaction instruction from a client, wherein the interaction instruction carries video key features;
the video searching module is used for determining at least one target video in a video database according to the video key features, and the matching degree of the video image features of the target video and the video key features meets interaction conditions;
the cover determining module is used for determining the target video cover poster corresponding to the video key feature as a dynamic video cover poster of the target video when the target video comprises a plurality of video cover posters corresponding to different video image features to be determined;
the response construction module is used for generating interactive response according to the dynamic video cover posters, the interactive information results and the playing addresses of all the target videos;
and the sending module is used for sending the interactive response to the client, so that the client displays the dynamic video cover posters and the interactive information results of all the target videos, likes and favorites a target video based on the playing address after the target video is selected, and automatically follows the uploading user of the target video.
In a third aspect, an embodiment of the present disclosure further provides a cloud server, where the cloud server includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is configured to be communicatively connected to at least one client, the machine-readable storage medium is configured to store a program, an instruction, or code, and the processor is configured to execute the program, the instruction, or the code in the machine-readable storage medium to perform the video management method in the first aspect or any possible design of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium, in which instructions are stored, and when executed, cause a computer to perform the video management method in the first aspect or any one of the possible designs of the first aspect.
Based on any one of the above aspects, the present disclosure provides a new video management method and apparatus. The method first receives an interaction instruction from a client and determines at least one target video in a video database according to the video key features; when the target video includes a plurality of video cover posters corresponding to different video image features to be determined, the target video cover poster corresponding to the video key features is determined as the dynamic video cover poster of the target video; an interaction response is then generated according to the dynamic video cover poster, the interaction information result and the playing address of each target video and sent to the client, so that the client displays the dynamic video cover poster and the interaction information result of each target video, and after a target video is selected, likes and favorites the target video based on the playing address and automatically follows the uploading user of the target video. The method and the device thus improve the display accuracy of video cover posters in video recommendation, lower the operation threshold for the user, and further improve the result conversion rate of video search and the user's stickiness to the target video.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present disclosure and should therefore not be considered as limiting the scope; for those skilled in the art, other related drawings may be obtained from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of a video management system according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a video management method according to an embodiment of the present disclosure;
fig. 3 is a functional block diagram of a video management apparatus according to an embodiment of the disclosure;
fig. 4 is a block diagram schematically illustrating a structure of a cloud server for implementing the video management method according to the embodiment of the present disclosure.
Detailed Description
The present disclosure is described in detail below with reference to the drawings, and the specific operation methods in the method embodiments can also be applied to the device embodiments or the system embodiments.
Fig. 1 is an interaction diagram of a video management system 10 according to an embodiment of the present disclosure. The video management system 10 may include a cloud server 100 and a client 200 communicatively connected to the cloud server 100. The video management system 10 shown in fig. 1 is only one possible example, and in other possible embodiments, the video management system 10 may include only a portion of the components shown in fig. 1 or may include other components.
In this embodiment, the client 200 may comprise a mobile device, a tablet computer, a laptop computer, or the like, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a control device of a smart electrical device, a smart monitoring device, a smart television, a smart camera, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality eye mask, an augmented reality helmet, augmented reality glasses, an augmented reality eye mask, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include various virtual reality products and the like.
In this embodiment, the cloud server 100 and the client 200 in the video management system 10 may cooperatively perform the video management method described in the following method embodiment, and the specific steps performed by the cloud server 100 and the client 200 may refer to the detailed description of the following method embodiment.
To solve the technical problem described in the foregoing background, fig. 2 is a flowchart illustrating a video management method according to an embodiment of the present disclosure. The video management method provided by this embodiment may be executed by the cloud server 100 shown in fig. 1, and is described in detail below.
Step S110, receiving an interaction instruction from a client, wherein the interaction instruction carries video key features;
Step S120, determining at least one target video in a video database according to the video key features, wherein the matching degree of the video image features of the target video and the video key features meets interaction conditions;
Step S130, when the target video comprises a plurality of video cover posters corresponding to different video image features to be determined, determining the target video cover poster corresponding to the video key features as a dynamic video cover poster of the target video;
Step S140, generating an interactive response according to the dynamic video cover posters, the interactive information results and the playing addresses of the target videos;
Step S150, sending the interactive response to the client, so that the client displays the dynamic video cover posters and the interactive information results of all the target videos, and after a target video is selected, likes and favorites the target video based on the playing address and automatically follows the uploading user of the target video.
In a possible embodiment, before step S110 the method further includes:
Step S010, constructing a frame database, wherein the frame database comprises video image features to be determined and identification images corresponding to the video image features to be determined;
Step S020, acquiring candidate video image features of each video in the video database;
Step S030, based on the frame database, performing to-be-determined video image feature marking on the candidate video image features of each video;
and step S040, determining the video cover poster corresponding to each video image feature to be determined of each video according to the video image features to be determined corresponding to the candidate video image features of each video.
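Steps S010-S040 describe an offline pre-processing pass over the video database. The hedged Python sketch below shows one way to wire the four steps together; the frame database shape and the three injected callables are assumptions used for illustration, and concrete possibilities for those callables are sketched further below.

```python
from typing import Callable, Dict, List

# Assumed shape of the frame database built in step S010:
# {video image feature to be determined: identification image}.
FrameDB = Dict[str, object]

def build_cover_index(
    videos: List[object],                                  # objects assumed to expose .video_id
    frame_db: FrameDB,
    get_candidates: Callable[[object], list],              # step S020: candidate video image features
    mark_features: Callable[[list, FrameDB], dict],        # step S030: {feature: [candidate frames]}
    choose_cover: Callable[[str, list, FrameDB], object],  # step S040: one cover poster per feature
) -> Dict[str, dict]:
    """Return {video_id: {feature: cover poster}} for the whole video database."""
    cover_index: Dict[str, dict] = {}
    for video in videos:
        candidates = get_candidates(video)
        marked = mark_features(candidates, frame_db)
        cover_index[video.video_id] = {
            feature: choose_cover(feature, frames, frame_db)
            for feature, frames in marked.items()
        }
    return cover_index
```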
In one possible embodiment, step S020 further includes:
Step S021: analyzing and processing a video to obtain all video frames of the video;
Step S022: screening all video frames of the video based on the interaction instruction of the user to obtain the candidate video image features of the video.
In one possible embodiment, step S022 further comprises:
Step S0221, performing first screening on all video frames of the video according to a preset quantity selection condition, and performing second screening on the first screening result according to a preset video image feature condition to obtain the candidate video image features of the video;
or executing step S0222, screening all video frames of the video according to a preset quantity selection condition to obtain the candidate video image features of the video;
or executing step S0223, screening all video frames of the video according to a preset video image feature condition to obtain the candidate video image features of the video.
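Steps S021-S022 and the three alternative screening strategies of steps S0221-S0223 can be expressed as two small helpers: one that decodes every frame and one that applies a quantity-based pass, an image-feature pass, or both in sequence. The sketch below is only illustrative; the uniform sampling rule and the Laplacian-variance sharpness score standing in for the preset video image feature condition are assumptions, since the disclosure leaves both conditions unspecified.

```python
from typing import Optional
import cv2  # OpenCV, an assumed choice for frame decoding and scoring

def all_frames(video_path: str) -> list:
    """Step S021: decode every frame of the video."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()
    return frames

def screen_frames(frames: list, max_count: Optional[int] = None,
                  min_sharpness: Optional[float] = None) -> list:
    """Steps S0221-S0223: first screening by quantity, second screening by an
    image-feature condition; either pass may also be used on its own."""
    selected = frames
    if max_count is not None and len(selected) > max_count:
        step = len(selected) // max_count          # assumed quantity selection: uniform sampling
        selected = selected[::step][:max_count]
    if min_sharpness is not None:
        def sharpness(frame) -> float:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            return cv2.Laplacian(gray, cv2.CV_64F).var()   # variance of Laplacian as a proxy
        selected = [f for f in selected if sharpness(f) >= min_sharpness]
    return selected
```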
In one possible embodiment, step S030 further includes:
Step S031, using the trained recognition model to perform image consistency recognition on the candidate video image features and the identification images, and obtaining an image consistency recognition result;
and step S032, according to the image consistency recognition result, marking the video image feature to be determined corresponding to the identification image whose image consistency with the candidate video image feature meets the marking condition as the video image feature to be determined corresponding to that candidate video image feature.
In one possible embodiment, step S031 further comprises:
Step S0311, acquiring a trained face recognition system as the trained recognition model;
Step S0312, using the trained face recognition system to perform face consistency recognition on the candidate video image features and the identification images in the frame database one by one;
and step S0313, taking the face consistency recognition result as the image consistency recognition result.
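Steps S031-S032 and S0311-S0313 reduce to comparing each candidate frame against each identification image with a trained face recognition model and keeping the pairs whose consistency clears the marking condition. The sketch below uses the open-source face_recognition library as one possible trained face recognition system; that library choice, the 0.6 distance threshold and the data shapes are assumptions, not requirements of the disclosure.

```python
import face_recognition  # assumed stand-in for the trained face recognition system

def mark_candidate_features(candidate_frames: list, frame_db: dict,
                            max_distance: float = 0.6) -> dict:
    """Label each candidate frame with the video image features to be determined
    whose identification image it matches; frame_db maps feature -> identification
    image (an RGB numpy array)."""
    # One face encoding per identification image in the frame database.
    id_encodings = {}
    for feature, id_image in frame_db.items():
        encodings = face_recognition.face_encodings(id_image)
        if encodings:
            id_encodings[feature] = encodings[0]

    marked = {feature: [] for feature in id_encodings}
    for frame in candidate_frames:
        for face_encoding in face_recognition.face_encodings(frame):
            for feature, id_encoding in id_encodings.items():
                distance = face_recognition.face_distance([id_encoding], face_encoding)[0]
                if distance <= max_distance:       # consistency meets the marking condition
                    marked[feature].append((frame, float(1.0 - distance)))
    return marked                                  # feature -> [(frame, consistency score), ...]
```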
In one possible embodiment, step S040 further includes:
Step S041, sorting the video image features to be determined corresponding to the candidate video image features of the video, so as to obtain, for each video image feature to be determined, the plurality of candidate video image features corresponding to it;
and step S042, selecting one of the candidate video image features corresponding to the video image feature to be determined as the video cover poster corresponding to the video image feature to be determined, according to the image consistency between the identification image corresponding to the video image feature to be determined and each candidate video image feature corresponding to it, and according to the interaction instruction of the user.
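Steps S041-S042 then pick, for each video image feature to be determined, a single cover poster from the candidate frames marked with that feature, weighing the image consistency against the user's interaction instruction. The weighted ranking below (with an assumed 0.7/0.3 split and an optional caller-supplied scoring function for the interaction instruction) is only one plausible instantiation of that selection rule.

```python
from typing import Callable, Optional

def choose_cover_posters(marked: dict,
                         interaction_score: Optional[Callable] = None,
                         consistency_weight: float = 0.7,
                         interaction_weight: float = 0.3) -> dict:
    """One cover poster per video image feature to be determined. `marked` is the
    {feature: [(frame, consistency), ...]} mapping produced during marking;
    `interaction_score(frame)` is assumed to return a 0..1 relevance to the
    user's interaction instruction."""
    covers = {}
    for feature, candidates in marked.items():
        if not candidates:
            continue
        # Step S041: sort the candidate frames for this feature by consistency.
        ranked = sorted(candidates, key=lambda item: item[1], reverse=True)

        def score(item):
            frame, consistency = item
            extra = interaction_score(frame) if interaction_score else 0.0
            return consistency_weight * consistency + interaction_weight * extra

        # Step S042: the best-scoring candidate becomes the video cover poster.
        covers[feature] = max(ranked, key=score)[0]
    return covers
```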
Fig. 3 is a schematic diagram of the functional modules of a video management apparatus 300 according to an embodiment of the present disclosure. In this embodiment, the video management apparatus 300 may be divided into functional modules according to the method embodiment executed by the cloud server 100, that is, the following functional modules of the video management apparatus 300 may be used to execute the method embodiments executed by the cloud server 100. The video management apparatus 300 may include a receiving module 310, a video search module 320, a cover determining module 330, a response construction module 340, and a sending module 350, and the functions of these functional modules are described in detail below.
The receiving module 310 may be configured to perform the step S110 described above, that is, to receive an interaction instruction from a client, where the interaction instruction carries a video key feature.
The video search module 320 may be configured to execute the above step S120, that is, to determine at least one target video in a video database according to the video key features, where a matching degree between the video image features of the target video and the video key features satisfies an interaction condition.
The cover determining module 330 may be configured to execute the step S130, that is, when the target video includes video cover posters corresponding to a plurality of different video image features to be determined, determine the target video cover poster corresponding to the video key feature as a dynamic video cover poster of the target video.
The response construction module 340 may be configured to execute the step S140, namely, to generate an interactive response according to the dynamic video cover posters, the interactive information result, and the playing addresses of the target videos.
The sending module 350 may be configured to execute the above step S150, that is, to send the interaction response to the client, so that the client displays the dynamic video cover posters and the interaction information results of each target video, and after a target video is selected, likes and favorites the target video based on the play address and automatically follows the uploading user of the target video.
It should be noted that the division of the modules of the above apparatus is only a logical division, and in actual implementation the modules may be wholly or partially integrated into one physical entity or physically separated. These modules may all be implemented in the form of software called by a processing element, or all implemented in hardware, or some modules may be implemented in the form of software called by a processing element and the others in hardware. For example, the receiving module 310 may be a separately arranged processing element, or may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code that a processing element of the apparatus calls to execute the functions of the receiving module 310. The other modules are implemented similarly. In addition, all or part of the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method, or each of the above modules, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor that can call program code. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 4 shows a hardware structure diagram of the cloud server 100 for implementing the video management apparatus provided by the embodiment of the present disclosure. As shown in fig. 4, the cloud server 100 may include a processor 110, a machine-readable storage medium 120, a bus 130, and a transceiver 140.
In a specific implementation process, the at least one processor 110 executes computer-executable instructions stored in the machine-readable storage medium 120 (for example, included in the video management apparatus 300 shown in fig. 3), so that the processor 110 may perform the video management method according to the above method embodiment, where the processor 110, the machine-readable storage medium 120, and the transceiver 140 are connected through the bus 130, and the processor 110 may be configured to control transceiving actions of the transceiver 140, so as to perform data transceiving with the aforementioned client 200.
For a specific implementation process of the processor 110, reference may be made to the above-mentioned method embodiments executed by the cloud server 100, and implementation principles and technical effects are similar, which are not described herein again.
In the embodiment shown in fig. 4, it should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present disclosure may be executed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
The machine-readable storage medium 120 may comprise high-speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.
The bus 130 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 130 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bus is shown in the figures of the present application, but this does not mean that there is only one bus or only one type of bus.
In addition, the embodiment of the present disclosure further provides a readable storage medium, in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the above video management method is implemented.
The readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (10)

1. A video management method, comprising:
receiving an interaction instruction from a client, wherein the interaction instruction carries video key features;
determining at least one target video in a video database according to the video key features, wherein the matching degree of the video image features of the target video and the video key features meets interaction conditions;
when the target video comprises a plurality of video cover posters corresponding to different video image characteristics to be determined, determining the target video cover posters corresponding to the video key characteristics as dynamic video cover posters of the target video;
generating an interactive response according to the dynamic video cover posters, the interactive information results and the playing addresses of the target videos;
and sending the interactive response to the client, so that the client displays the dynamic video cover posters and the interactive information results of all the target videos, and after a target video is selected, likes and favorites the target video based on the playing address and automatically follows the uploading user of the target video.
2. The video management method according to claim 1, further comprising, before the step of receiving the interactive command from the client:
constructing a frame database, wherein the frame database comprises video image features to be determined and identification images corresponding to the video image features to be determined;
acquiring candidate video image features of each video in the video database;
based on the frame database, carrying out to-be-determined video image feature marking on the candidate video image features of each video;
and determining the video cover posters corresponding to the video image features to be determined of each video according to the video image features to be determined corresponding to the candidate video image features of each video.
3. The video management method according to claim 2, wherein the step of obtaining candidate video image features of each video in the video database comprises:
analyzing and processing a video to obtain all video frames of the video;
and screening all video frames of the video based on the interaction instruction of the user to obtain candidate video image features of the video.
4. The video management method according to claim 3, wherein the step of filtering all video frames of the video based on the user's interactive instruction comprises:
performing first screening on all video frames of the video according to a preset quantity selection condition, and performing second screening on the first screening result according to a preset video image feature condition to obtain candidate video image features of the video;
or screening all video frames of the video according to a preset quantity selection condition to obtain candidate video image features of the video;
or screening all video frames of the video according to a preset video image feature condition to obtain candidate video image features of the video.
5. The video management method according to claim 2, wherein the step of labeling the candidate video image features of each video with the to-be-determined video image features based on the frame database comprises:
using the trained recognition model to perform image consistency recognition on the candidate video image features and the identification images to obtain an image consistency recognition result;
and according to the image consistency recognition result, marking the video image feature to be determined corresponding to the identification image whose image consistency with the candidate video image feature meets the marking condition as the video image feature to be determined corresponding to that candidate video image feature.
6. The video management method according to claim 5, wherein the step of performing image consistency recognition on the candidate video image features and the identification images by using the trained recognition model comprises:
acquiring a trained face recognition system as the trained recognition model;
using the trained face recognition system to perform face consistency recognition on the candidate video image features and the identification images in the frame database one by one;
and taking the face consistency recognition result as the image consistency recognition result.
7. The video management method according to claim 5, wherein the step of determining the video cover poster corresponding to the video image feature to be determined of each video according to the video image feature to be determined corresponding to the candidate video image feature of each video comprises:
sorting the video image features to be determined corresponding to the candidate video image features of the video to obtain a plurality of candidate video image features corresponding to the video image features to be determined;
and selecting one of the candidate video image features corresponding to the video image feature to be determined as the video cover poster corresponding to the video image feature to be determined according to the image consistency between the identification image corresponding to the video image feature to be determined and the candidate video image features corresponding to the video image feature to be determined and the interaction instruction of the user.
8. A video recommendation apparatus, comprising:
the receiving module is used for receiving an interaction instruction from a client, wherein the interaction instruction carries video key features;
the video searching module is used for determining at least one target video in a video database according to the video key features, and the matching degree of the video image features of the target video and the video key features meets interaction conditions;
the cover determining module is used for determining the target video cover poster corresponding to the video key feature as a dynamic video cover poster of the target video when the target video comprises a plurality of video cover posters corresponding to different video image features to be determined;
the response construction module is used for generating interactive response according to the dynamic video cover posters, the interactive information results and the playing addresses of all the target videos;
and the sending module is used for sending the interactive response to the client, so that the client displays the dynamic video cover posters and the interactive information results of all the target videos, likes and favorites a target video based on the playing address after the target video is selected, and automatically follows the uploading user of the target video.
9. A computer readable storage medium storing instructions/executable code which, when executed by a processor of an electronic device, causes the electronic device to implement the method of any one of claims 1-7.
10. A cloud server, characterized in that the cloud server comprises a processor, a machine-readable storage medium, and a network interface, the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is used for being connected with at least one client in a communication manner, the machine-readable storage medium is used for storing programs, instructions, or codes, and the processor is used for executing the programs, instructions, or codes in the machine-readable storage medium to execute the video management method according to any one of claims 1 to 7.
CN202011313485.1A (filed 2020-11-20, priority date 2020-11-20): Video management method and device and cloud server, published as CN113821653A (en); status: Withdrawn.

Priority Applications (1)

Application Number: CN202011313485.1A; Priority Date: 2020-11-20; Filing Date: 2020-11-20; Title: Video management method and device and cloud server (published as CN113821653A)

Applications Claiming Priority (1)

Application Number: CN202011313485.1A; Priority Date: 2020-11-20; Filing Date: 2020-11-20; Title: Video management method and device and cloud server (published as CN113821653A)

Publications (1)

Publication Number: CN113821653A; Publication Date: 2021-12-21

Family

ID=78924954

Family Applications (1)

Application Number: CN202011313485.1A; Publication: CN113821653A (en); Status: Withdrawn

Country Status (1)

Country: CN (1); Link: CN113821653A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 2021-12-21)