CN115701093A - Video shooting information acquisition method and video shooting and processing indication method - Google Patents

Info

Publication number
CN115701093A
CN115701093A (application number CN202110801309.0A)
Authority
CN
China
Prior art keywords
field
shooting
information
mirror
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110801309.0A
Other languages
Chinese (zh)
Inventor
申子宜
Current Assignee
Shanghai Hode Information Technology Co Ltd
Original Assignee
Shanghai Hode Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Hode Information Technology Co Ltd filed Critical Shanghai Hode Information Technology Co Ltd
Priority to CN202110801309.0A priority Critical patent/CN115701093A/en
Priority to PCT/CN2022/098711 priority patent/WO2023284469A1/en
Publication of CN115701093A publication Critical patent/CN115701093A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer

Abstract

The embodiment of the application provides a video shooting information acquisition method, which comprises the following steps: determining a target video for analysis, wherein the target video corresponds to a time axis representing video progress; analyzing the target video to obtain a plurality of pieces of shooting information; and marking the shooting information on the time axis, wherein each piece of shooting information is distributed at its corresponding position on the time axis. The method deconstructs the shooting- and clipping-related information of a target video (a high-quality video) and distributes the deconstructed shooting information along a time axis according to its position in the target video. Based on the shooting information distributed on the time axis, a client can be guided, while shooting or editing a video, through the various pieces of shooting information to be used as time advances, such as scene arrangement, character arrangement and shooting technique.

Description

Video shooting information acquisition method and video shooting and processing indication method
Technical Field
The embodiments of the application relate to the field of computer technology, and in particular to a video shooting information acquisition method and system, a computer device, a computer-readable storage medium, and a video shooting and processing indication method.
Background
With the lowering of the threshold for video shooting, more and more users are becoming creators who shoot and produce videos. High-quality video requires many professional and complex shooting techniques. However, users who have not studied professional shooting techniques find it difficult to apply these techniques correctly; as a result they cannot shoot satisfactory videos, and their efficiency is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video shooting information acquisition method, a video shooting information acquisition system, a computer device, a computer-readable storage medium, and a video shooting and processing indication method, so as to solve the following problem: for users who have not studied professional shooting techniques, it is difficult to apply shooting means correctly, which leads to unsatisfactory videos and low efficiency.
One aspect of the embodiments of the present application provides a method for acquiring video shooting information, where the method includes:
determining a target video for analysis, wherein the target video corresponds to a time axis representing video progress;
analyzing the target video to acquire a plurality of shooting information; and
marking the shooting information on the time axis, wherein each piece of shooting information is distributed at its corresponding position on the time axis.
Optionally, the plurality of shooting information includes a plurality of shooting parameters;
the analyzing the target video to obtain a plurality of shooting information includes:
performing field segmentation on the target video to obtain a plurality of fields;
performing mirror segmentation on the plurality of fields to obtain a plurality of mirrors, each field comprising one or more mirrors;
analyzing each mirror of the plurality of mirrors to obtain shooting parameters of each mirror; and
obtaining field information of each field according to the shooting parameters of each mirror in each field;
wherein the field information of each field includes the shooting parameters of the respective mirrors in the field, and the position distribution of the shooting parameters of the respective mirrors in the field on the time axis corresponds to the position distribution of the respective mirrors in the field.
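The field information structure described above — per-mirror shooting parameters whose timeline positions match the positions of the mirrors themselves — can be sketched as a minimal data model. This is illustrative only; all class and field names are assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class Mirror:
    """One mirror (shot): a video segment with a start and an end on the time axis."""
    start: float   # seconds on the video time axis
    end: float
    params: dict   # shooting parameters, e.g. {"scene": "indoor", "movement": "pan"}

@dataclass
class FieldInfo:
    """One field (video bridge segment), composed of one or more mirrors."""
    mirrors: list

    def timeline(self):
        # Each mirror's shooting parameters carry that mirror's own position on
        # the time axis, matching the correspondence the method requires.
        return [{"start": m.start, "end": m.end, **m.params} for m in self.mirrors]

fi = FieldInfo(mirrors=[Mirror(0.0, 4.5, {"scene": "indoor", "movement": "pan"}),
                        Mirror(4.5, 9.0, {"scene": "outdoor", "movement": "push"})])
```

Reading `fi.timeline()` then yields the shooting parameters in temporal order, each entry anchored to its mirror's interval on the time axis.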
Optionally, the plurality of shooting information includes a plurality of shooting parameters;
the analyzing the target video to obtain a plurality of shooting information includes:
performing field segmentation on the target video to obtain a plurality of fields;
performing mirror segmentation on the plurality of fields respectively to obtain a plurality of mirrors, each field comprising one or more mirrors;
analyzing the plurality of fields to obtain a topic for each field;
analyzing each mirror of the plurality of mirrors to obtain shooting parameters of each mirror; and
obtaining field information of each field according to the theme of each field and the shooting parameters of each mirror in each field;
wherein the field information of each field includes a subject of the field and photographing parameters of the respective mirrors within the field, and a position distribution of the photographing parameters of the respective mirrors within the field on the time axis corresponds to a position distribution of the respective mirrors within the field.
Optionally, the shooting parameters of each mirror include one or more of the following: scene type (shot size), shooting angle, character information, mirror type, camera-movement operation, and scene.
Optionally, the method further includes: generating a plurality of split-mirror scripts for each field according to the shooting parameters of the mirrors in that field;
wherein the position distribution of the split-mirror scripts of each field on the time axis corresponds to the position distribution of the mirrors in the field.
An aspect of the embodiments of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the video capture information acquisition method as described above when executing the computer program.
An aspect of the embodiments of the present application further provides a computer-readable storage medium, in which a computer program is stored, where the computer program is executable by at least one processor, so that when the computer program is executed by the at least one processor, the steps of the video shooting information acquiring method are implemented.
An aspect of an embodiment of the present application further provides a video capturing and processing instruction method, including:
receiving request information of a client;
acquiring target field information according to the request information, wherein the target field information comprises a plurality of shooting information marked on the same time axis, and the position of each shooting information on the time axis represents the time sequence of each shooting information; and
and returning the target field information to the client to indicate the client to shoot or process videos according to the shooting information and the marked positions of the shooting information on the time axis.
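The request/response flow above — receive a request, look up target field information, and return the timeline-annotated shooting information to the client — might be sketched as follows. The `handle_request` function and the dictionary-based lookup are hypothetical stand-ins for a real service, not the patent's implementation:

```python
def handle_request(request, field_db):
    """Look up target field information matching the client's request and return
    shooting info marked on one time axis, positions encoding temporal order."""
    topic = request.get("topic")
    # field_db maps a topic to field information: a list of shooting-info
    # entries, each carrying its position "t" on the shared time axis.
    info = field_db.get(topic)
    if info is None:
        return {"status": "not_found"}
    return {"status": "ok", "target_field_info": info}

db = {"cafe date": [{"t": 0.0, "scene": "coffee shop", "movement": "push"},
                    {"t": 5.0, "scene": "close-up", "movement": "static"}]}
resp = handle_request({"topic": "cafe date"}, db)
```

The client would then step through `target_field_info` in order of `t` while shooting or editing, as the indication method describes.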
Optionally, the target field information is field information of a target field among a plurality of fields; the method further comprises acquiring the target field information in advance by:
performing field segmentation on the target video to obtain the target field;
performing mirror segmentation on the target field to obtain one or more mirrors;
analyzing each mirror to obtain shooting parameters of each mirror; and
obtaining the target field information according to the shooting parameters of each mirror;
wherein the target field information includes a shooting parameter of each mirror, and a position distribution of the shooting parameter of each mirror on the time axis has a corresponding relationship with a position distribution of each mirror in the target field.
Optionally, the target field information is field information of a target field among a plurality of fields; the method further comprises acquiring the target field information in advance by:
performing field segmentation on the target video to obtain the target field;
performing mirror segmentation on the target field to obtain one or more mirrors;
analyzing the target field to obtain a theme of the target field;
analyzing each mirror to obtain shooting parameters of each mirror; and
obtaining the target field information according to the theme and the shooting parameters of each mirror;
the target field information comprises the subject and shooting parameters of each mirror, and the position distribution of the shooting parameters of each mirror on the time axis has a corresponding relation with the position distribution of each mirror in the target field.
The video shooting information acquisition method, system, computer device and computer-readable storage medium provided by the embodiments of the application deconstruct the shooting- and clipping-related information of a target video (a high-quality video) and distribute the deconstructed shooting information along a time axis according to its position in the target video. Based on the shooting information distributed on the time axis, a client can be guided, while shooting or editing a video, through the various pieces of shooting information to be used as time advances, such as scene arrangement, character arrangement and shooting technique. In this way the client can be instructed to shoot or edit a video similar to the high-quality video, and shooting or editing efficiency is improved.
Drawings
Fig. 1 schematically shows an application environment diagram of a video capture information acquisition method according to an embodiment of the present application;
fig. 2 schematically shows a flowchart of a video capture information acquisition method according to a first embodiment of the present application;
FIG. 3 is a flowchart illustrating sub-steps of step S202 in FIG. 2;
FIG. 4 is a flowchart illustrating sub-steps of step S202 in FIG. 2;
fig. 5 is a flowchart schematically illustrating additional steps of a video capture information acquisition method according to a first embodiment of the present application;
fig. 6 schematically illustrates a specific operation example of a video shooting information acquisition method according to a first embodiment of the present application;
fig. 7 schematically shows a flow chart of a video capture and processing indication method according to a second embodiment of the present application;
FIG. 8 is a diagram schematically illustrating a specific operation example based on the second embodiment of the present application;
fig. 9 schematically shows a block diagram of a video shooting information acquisition system according to a third embodiment of the present application;
fig. 10 schematically shows a block diagram of a video shooting information acquisition system according to a fourth embodiment of the present application;
fig. 11 schematically shows a hardware architecture diagram of a computer device according to a fifth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the descriptions relating to "first", "second", etc. in the embodiments of the present application are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with each other, provided that the combination can be realized by a person skilled in the art; where a combination is contradictory or cannot be realized, it should be considered not to exist and falls outside the protection scope of the present application.
The inventors have recognized that shooting high-quality video involves information such as scene layout, shot distribution and camera-movement distribution, and normally requires professional film-making study and video creation backed by rich shooting experience. Specifically:
the high-quality video comprises the following aspects:
(1) Video quality: recording the video in a professional shooting scene with professional shooting equipment, so that it looks good;
(2) Shooting content: shooting the video based on a specially designed script;
(3) Shooting technique: professional shooting techniques such as picture layout, composition, and scene-type distribution.
In summary, shooting a high-quality video requires a number of elements: not only demanding software and hardware, but also a video creator with professional shooting skills, which implicitly raises the threshold of high-quality video creation.
In view of the above, the present application aims to use video understanding technology to guide users in shooting or editing high-quality video during the creation process, for example: screening high-quality videos by intelligent means and quantitatively analyzing, with artificial-intelligence algorithms, the shooting skills that make those videos high quality. By entering relevant requirement keywords, a user can obtain the shooting information of high-quality videos of a similar type, such as the distribution, over the time axis and under the desired story line, of elements like scene switching, camera-movement operation, shooting angle and lens switching. Therefore, by providing this shooting information, users can learn professional shooting techniques and create videos quickly and accordingly, which lowers the threshold of video creation. Specifically:
first, a high quality video is analyzed.
The high-quality video may include: video with high definition and excellent image quality shot with professional video-shooting hardware; and video whose quality is improved by professional shooting means (layout design: field design, mirror distribution, distribution of the various scene types, camera-movement operation) and by in-camera or post-production image stabilization.
Secondly, within each field, artificial-intelligence algorithms detect the scene, scene type, camera movement and persons, and, along the time direction, count the time distribution of each shooting element as points (for scene, person and scene type) or as segments (for camera movement).
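The point-versus-segment bookkeeping described above can be sketched as follows, assuming detections arrive as plain dictionaries (an illustrative format, not one the patent specifies):

```python
def time_distribution(detections):
    """Split raw detections into point events (scene, person, scene type) and
    segment events (camera movement), as the time-direction counting describes."""
    points, segments = [], []
    for d in detections:
        if "end" in d:                 # a camera move spans an interval
            segments.append((d["label"], d["start"], d["end"]))
        else:                          # scene/person/scene-type occur at a point
            points.append((d["label"], d["t"]))
    return points, segments

pts, segs = time_distribution([
    {"label": "close-up", "t": 2.0},
    {"label": "pan", "start": 3.0, "end": 6.5},
    {"label": "police", "t": 4.2},
])
```

A real pipeline would feed this from the detection models; here the distinction is carried simply by whether a detection has an `end` key.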
Thirdly, the user can choose a personalized shooting topic and search for high-quality videos that match it. Using the shooting elements of a matched high-quality video as guidance, the user shoots or records a video so as to obtain a high-quality video of a similar type. The user may also fine-tune the shooting elements and shoot or record based on the fine-tuned result.
Fourthly, after shooting a video, the user can look up the shooting skills (such as scene types and camera-movement modes) of a matching high-quality video and edit the video according to them, so that those skills are applied to and embodied in the video, which effectively improves video quality and lowers the creation threshold.
The present application provides various embodiments for introducing video capture information acquisition schemes and video capture or processing indication schemes, with particular reference to the following.
In the description of the present application, it should be understood that the numerical references before the steps do not identify the order of performing the steps, but merely serve to facilitate the description of the present application and to distinguish each step, and therefore should not be construed as limiting the present application.
The following are the term explanations of the present application:
indications, including controls, guidance, and/or prompts.
AI (Artificial Intelligence) label: video content and shooting means detected through artificial intelligence. Shooting parameters of the video are acquired in this way, for example the shooting technique and related statistical-distribution information, such as the content of each shot, scene scheduling, shooting mode, scene type, scene, editing, sound, picture, rhythm, performance, camera position, etc.
Mirror (shot): a video segment with a start time and an end time. A feature film is typically composed of 400-600 shots.
Field: corresponds to a video bridge segment (a scene or sequence), which may be made up of one or more mirrors.
Scene-type (shot-size) identification: different distances between the camera and the subject change how large the subject appears in the recorded frame. From near to far, the scene types are: big close-up (above the shoulders), close-up (above the chest), medium shot (above the knees), full shot (the whole body and part of the surroundings), and long shot (the environment in which the subject is located).
Camera movement (mirror operation): the motion of the shooting device during shooting, such as pushing, pulling, panning, tracking, and holding the lens still.
Storyboard (split-mirror) script: in image media such as film, animation, drama, advertising and music video, before actual shooting, the composition of the images is laid out in the form of a story chart; the continuous picture is decomposed in units of single camera moves, and the camera-movement mode, duration, dialogue, special effects, etc. are annotated. In this way the required shooting content is briefly recorded before shooting, and each shot serves as a reminder during shooting.
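A storyboard entry as defined above — one camera move annotated with its duration, dialogue and special effects — might be modeled like this; the field names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class StoryboardEntry:
    """One split-mirror (storyboard) entry, decomposed per camera move."""
    movement: str        # push / pull / pan / track / static
    duration: float      # seconds
    dialogue: str = ""   # optional dialogue for this move
    effects: str = ""    # optional special effects annotation

board = [StoryboardEntry("push", 3.0, dialogue="Hello"),
         StoryboardEntry("static", 2.5, effects="fade out")]
total = sum(e.duration for e in board)
```

Summing the durations gives the planned length of the sequence, which is how such a script briefly records the required shooting content before the shoot.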
Fig. 1 schematically shows an environment application diagram according to an embodiment of the application. As shown in fig. 1:
the computer device 10000 can be connected to the client 30000 through the network 20000.
The computer device 10000 can provide services such as supplying shooting information to control or prompt the shooting actions of the client 30000.
Computer device 10000 can be located in a data center, such as a single site, or distributed across different geographic locations (e.g., at multiple sites). Computer device 10000 can provide services via one or more networks 20000. The network 20000 includes various network devices such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, proxy devices, and/or the like. Network 20000 may comprise physical links, such as coaxial cable links, twisted pair cable links, fiber optic links, combinations thereof, and the like. The network 20000 may include wireless links such as cellular links, satellite links, Wi-Fi links, etc.
Computer device 10000 can be implemented by one or more computing nodes. One or more compute nodes may include virtualized compute instances. The virtualized computing instance may include an emulation of a virtual machine, such as a computer system, operating system, server, and the like. The computing node may load a virtual machine by the computing node based on the virtual image and/or other data defining the particular software (e.g., operating system, dedicated application, server) used for emulation. As the demand for different types of processing services changes, different virtual machines may be loaded and/or terminated on one or more compute nodes. A hypervisor may be implemented to manage the use of different virtual machines on the same compute node.
Client 30000 may be configured to access content and services of computer device 10000. Client 30000 can include any type of electronic device that supports camera functionality, such as a mobile device, tablet device, camera, and so forth.
The client 30000 can output shooting information (e.g., skill quantization information) and the like to the user.
The following description will be made by way of various embodiments. The scheme may be implemented by the computer device 10000.
Example one
Fig. 2 schematically shows a flowchart of a video capture information acquisition method according to a first embodiment of the present application.
As shown in fig. 2, the video photographing information acquiring method may include steps S200 to S204, in which:
step S200, determining a target video for analysis, wherein the target video corresponds to a time axis representing video progress.
The target video may be a video manuscript in any of various video formats, such as the AVI (Audio Video Interleave) format, H.264/AVC (Advanced Video Coding), H.265/HEVC (High Efficiency Video Coding), and the like.
The target video is preferably a high-quality video, such as a video with high definition and excellent image quality shot with professional video-shooting hardware, or a video whose quality has been improved by professional shooting means (layout design: field design, mirror distribution, distribution of the various scene types, camera-movement operation) or by post-production image stabilization.
Step S202, analyzing the target video to acquire a plurality of shooting information.
The plurality of pieces of shooting information may correspond to the various shooting elements involved in video shooting or editing. That is, from the distribution of these shooting elements in the target video (e.g., total occurrence time, duration), shooting information such as the shooting means (skills) and layout of the target video can be analyzed.
The target video may be analyzed in units of "field" and "mirror" to obtain the plurality of shot information.
(1) The topic of each field is acquired in units of "fields". As an example, the topic of each field may be identified with ECO (Efficient Convolutional Network for Online Video Understanding).
(2) In units of "mirrors", the scene type, shooting angle, character information, mirror type, camera movement and scene of each mirror are obtained.
The scene can include: long shot, full shot, medium shot, close-up, big feature.
The scene can include: indoor and outdoor. The scene may be further refined into offices, squares, coffee shops, etc.
The character information may include: character position, pose (orientation, etc.), and identity (man, woman, elderly person, police officer, lawyer, etc.).
As an example, the above shooting information is calculated, estimated and counted by artificial-intelligence algorithms, for example:
the scene and the mirror motion of each mirror may be identified by a uniform Framework for Shot Type Classification Based on Subject center shots (Unified frame for Shot Type Classification Based on Subject center Lens).
The shooting angle of each mirror may be identified, for shooting-angle detection, with Back to the Feature: Learning Robust Camera Localization from Pixels to Pose.
The orientation of each person within each mirror may be detected with fine-grained head-pose estimation, e.g. Fine-Grained Head Pose Estimation Without Keypoints or Towards Fast, Accurate and Stable 3D Dense Face Alignment.
The identity of each person in each mirror may be detected with a face recognition model, e.g. Person Search in Videos with One Portrait Through Visual and Temporal Links.
From the above, the number and distribution of each piece of shooting information within the target video, and the like, can be determined.
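Counting "the number and distribution" of a given shooting parameter across the analysed mirrors could look like this minimal sketch (the dictionary format of a mirror is an assumption for illustration):

```python
from collections import Counter

def parameter_distribution(mirrors, key):
    """Count how often each value of one shooting parameter (e.g. scene type)
    appears across the analysed mirrors of the target video."""
    return Counter(m[key] for m in mirrors if key in m)

mirrors = [{"scene": "close-up"}, {"scene": "long shot"}, {"scene": "close-up"}]
dist = parameter_distribution(mirrors, "scene")
```

The same counter, keyed by camera movement or character identity instead, would summarize those distributions as well.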
Step S204, the shooting information is marked on the time axis, and each shooting information is distributed at the corresponding position of the time axis.
That is, each piece of shooting information is displayed distributed along the direction of the time axis.
The video shooting information acquisition method provided in this embodiment deconstructs the shooting- and clipping-related information of a target video (a high-quality video) and distributes the deconstructed shooting information along a time axis according to its position in the target video. Based on the shooting information distributed on the time axis, the client can be guided, while shooting or editing a video, through the various pieces of shooting information to be used as time advances, such as scene arrangement, character arrangement and shooting technique, so as to shoot or edit a video similar to the high-quality video and improve shooting or editing efficiency.
As an example, the plurality of photographing information includes a plurality of photographing parameters. As shown in fig. 3, the step S202 may include: step S300, performing field segmentation on the target video to obtain a plurality of fields; step S302, performing mirror segmentation on the plurality of fields respectively to obtain a plurality of mirrors, wherein each field comprises one or more mirrors; step S304, analyzing each mirror in the plurality of mirrors to obtain shooting parameters of each mirror; step S306, obtaining the field information of each field according to the shooting parameters of each mirror in each field; wherein the field information of each field includes the shooting parameters of the respective mirrors in the field, and the position distribution of the shooting parameters of the respective mirrors in the field on the time axis corresponds to the position distribution of the respective mirrors in the field. In this embodiment, the target video is analyzed by taking "field" and "mirror" as units to obtain the shooting parameters of each mirror in different video bridge segments in the target video, which is convenient for storage, classification and user query.
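Steps S300-S306 above can be sketched as a pipeline in which the segmentation and analysis models are injected as callables; everything here is a toy stand-in for the real models, not the patent's implementation:

```python
def acquire_field_info(target_video, split_fields, split_mirrors, analyse_mirror):
    """S300: segment fields; S302: segment mirrors; S304: analyse each mirror;
    S306: assemble per-field information from the per-mirror parameters."""
    field_infos = []
    for fld in split_fields(target_video):             # S300
        mirrors = split_mirrors(fld)                   # S302
        params = [analyse_mirror(m) for m in mirrors]  # S304
        field_infos.append({"field": fld, "params": params})  # S306
    return field_infos

# Toy stand-ins: a "video" is a list of fields; a field is a list of mirrors.
video = [["m1", "m2"], ["m3"]]
infos = acquire_field_info(video, lambda v: v, lambda f: f,
                           lambda m: {"mirror": m, "scene": "indoor"})
```

Because the per-mirror parameter lists are built in mirror order, their positions correspond to the mirrors' positions in the field, as step S306 requires.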
As an example, the plurality of photographing information includes a plurality of photographing parameters. As shown in fig. 4, the step S202 may include: step S400, performing field segmentation on the target video to obtain a plurality of fields; step S402, performing mirror segmentation on the plurality of fields respectively to obtain a plurality of mirrors, wherein each field comprises one or more mirrors; step S404, analyzing the plurality of fields to obtain the theme of each field; step S406, analyzing each mirror of the plurality of mirrors to obtain shooting parameters of each mirror; step S408, obtaining field information of each field according to the theme of each field and the shooting parameters of each mirror in each field; wherein the field information of each field includes a subject of the field and photographing parameters of the respective mirrors within the field, and a position distribution of the photographing parameters of the respective mirrors within the field on the time axis corresponds to a position distribution of the respective mirrors within the field. In this embodiment, the target video is analyzed by taking "field" and "mirror" as units to obtain the subjects of different video bridge sections in the target video and the shooting parameters of each mirror in different video bridge sections, so as to facilitate storage, classification and information query by a user according to the subjects of different bridge sections.
As an example, the shooting parameters of each mirror include one or more of the following: scene type, shooting angle, character information, mirror type, mirror-moving operation, and scene.
As an example, a graphical split-mirror script may be generated for each mirror to better prompt the user.
As shown in fig. 5, the video capture information acquisition method may further include: step S500, generating a plurality of split-mirror scripts for each field according to the shooting parameters of the mirrors in that field, where the position distribution of the split-mirror scripts of each field on the time axis corresponds to the position distribution of the mirrors in the field. The specific steps are as follows:
(1) Elements and element vector information may be generated from the character information and the like. The element vector information includes the size, pose, and relative position of each element in a key frame of the corresponding mirror. The character information may be classified by identity (e.g., child, elderly person) or by occupation (e.g., police officer, lawyer).
(2) Vector elements associated with each element's category may be obtained from a vector-element library according to the element categories.
(3) A designated canvas matching the scene may be obtained from a canvas material library according to the scene in the corresponding mirror.
(4) According to the element vector information, each vector element is placed on the designated canvas to generate a split-mirror script. For example: the size of a vector element in the designated canvas is determined according to the size of the element; the pose of the vector element is determined according to the pose of the element; and the relative position of the vector element in the designated canvas is determined according to the relative position of the element in the reference image.
By identifying the elements in the key frames of the corresponding mirrors and placing the matching vector elements on the designated canvas accordingly, a split-mirror script in vector-diagram form is obtained. This makes the split-mirror script more efficient and easier to produce, effectively improving the user experience.
Illustratively, the scene type, shooting angle, character information, mirror type, and mirror-moving operation may be added as text to the designated canvas.
Illustratively, the split-mirror script may be an editable vector image that can be modified according to user requirements (habits). The vector modification includes at least one of: modifying the size of a vector element, modifying the pose of a vector element, modifying the relative position of a vector element in the designated canvas, deleting a vector element, or adding a new vector element. In this way, the split-mirror script can be personalized for the user.
Illustratively, the arrangement of the vector elements in the designated canvas may be adjusted according to user habits, user portraits, and the like, so as to generate a split-mirror script that better matches the user's drawing habits, further improving script-creation efficiency and user stickiness.
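Steps (1) through (4) above can be illustrated with a short sketch. The element library, canvas library, and `build_script` helper below are invented for illustration; a real system would draw from actual vector-material and canvas-material libraries.

```python
# Hypothetical material libraries, mapping element categories and scenes
# to vector assets. Names are placeholders, not from the patent.
VECTOR_LIBRARY = {"police": "police.svg", "child": "child.svg"}
CANVAS_LIBRARY = {"street": "street_canvas.svg", "indoor": "indoor_canvas.svg"}

def build_script(scene: str, elements: list) -> dict:
    """Assemble a split-mirror script for one mirror.

    `elements` carry the element vector information from step (1):
    category, size, pose, and relative position in the key frame.
    """
    canvas = CANVAS_LIBRARY.get(scene)            # step (3): canvas matching the scene
    placed = []
    for e in elements:
        asset = VECTOR_LIBRARY.get(e["category"])  # step (2): look up a vector element
        if asset is None:
            continue                               # no matching material; skip element
        placed.append({                            # step (4): apply element vector info
            "asset": asset,
            "size": e["size"],
            "pose": e["pose"],
            "position": e["relative_position"],
        })
    return {"canvas": canvas, "elements": placed}
```

Keeping the output as structured data (rather than a rendered image) is what makes the later vectorized modifications, such as resizing, reposing, or deleting an element, straightforward.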
For ease of understanding, an example of operation is provided below in connection with FIG. 6:
s600: and finding a high-quality video as a target video.
S602: the target video is segmented to obtain a plurality of fields. As shown in fig. 6, the target video may be divided into a plurality of fields.
S604: each field is divided to obtain a plurality of mirrors, as shown in fig. 6, where one scene is composed of 5 mirrors.
S606: various kinds of detection are performed for each mirror to deconstruct various kinds of shot information, i.e., information corresponding to various kinds of shot/clip elements.
Such as moving mirror detection, scene detection, person analysis (person identity analysis, person orientation analysis), shooting angle analysis, and the like.
S608: Obtain the field information of each field in units of fields. The field information of each field includes the detection information of the mirrors in that field, such as mirror movement, scene, person, and shooting angle. All of the above information is marked on the time axis.
When multiple people need to shoot or clip the same video at the same time, the video can be sent to each client in segments according to the various information marked on the time axis, and different clients can be instructed to perform different operations, thereby achieving cooperation.
When multi-person, multi-take, or multi-angle shooting is not possible (for example, the creator has only one person or one shooting device, or the scene, such as a concert recording, cannot be repeated from multiple viewpoints), a single captured video can be edited based on the above scene information to obtain a video that switches among different shot types.
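The single-video editing case above can be sketched as follows, assuming the timeline entries produced in S608 carry a start time, an end time, and a scene type. `plan_cuts` is a hypothetical helper that selects which spans of the single recording to keep.

```python
def plan_cuts(timeline: list, wanted_scene_types: set) -> list:
    """Given shooting information marked on the time axis, return the
    (start, end) spans of a single continuous recording to keep, so the
    result emulates switching between the wanted shot types."""
    return [
        (entry["start"], entry["end"])
        for entry in timeline
        if entry["scene_type"] in wanted_scene_types
    ]
```

A real editor would then cut the source video at these spans and concatenate the clips, following the same timeline order as the reference bridge segment.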
Example two
The present embodiment provides a video shooting and processing indication method; for some technical details and effects, refer to the description above.
Fig. 7 schematically shows a flowchart of a video capturing and processing instruction method according to the second embodiment of the present application.
As shown in fig. 7, the video photographing and processing instructing method may include steps S700 to S704, in which:
step S700, receiving request information from a client.
The request information may include the following:
(1) Text information, including the shooting scenario, theme, shooting location, scene, and the like;
(2) Information about video that has already been captured, such as video tags.
Step S702, obtaining target field information according to the request information, where the target field information includes multiple pieces of shooting information marked on the same time axis, and a position of each piece of shooting information on the time axis represents a time sequence of each piece of shooting information.
Based on the request information, the database is searched for the best-matching video bridge segment (the target field).
The target field carries the distribution, on the time axis, of the shooting information (mirror, scene, mirror movement, character, and the like) of the corresponding video bridge segment.
Step S704, returning the target field information to the client to instruct the client to perform video shooting or video processing according to the shooting information and the marked position of the shooting information on the time axis.
The client can guide the user to shoot or clip, or can shoot and clip automatically, according to the field information of the target field. Taking automatic clipping as an example: a video of a single scene (panorama) is shot, and in post-production the video is clipped, with reference to the retrieved shooting information, into a varied video that switches among multiple shot types, thereby achieving an effect similar to that of a high-quality video bridge segment.
According to the video shooting and processing indication method provided by this embodiment, a high-quality video bridge segment that matches the user's shooting expectations is found according to the theme, content, and the like that the user intends to shoot or clip, and the shooting information (field information) obtained by deconstructing that bridge segment, distributed on a time axis, is returned to the client. Based on the shooting information distributed on the time axis, the client can be instructed how to deploy the various kinds of shooting information over time when shooting or clipping a video, such as scene arrangement, character arrangement, shooting technique, and clipping.
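The server-side flow of steps S700 through S704 might look like the following sketch. The in-memory `DATABASE` and the naive keyword-overlap matching are stand-ins for illustration; the patent does not specify a particular matching algorithm.

```python
# A toy "database" of deconstructed fields. Each entry carries metadata
# for matching and a timeline of shooting information (invented values).
DATABASE = [
    {"theme": "street chase", "location": "city",
     "timeline": [
         {"start": 0.0, "end": 2.5, "scene_type": "wide", "camera_move": "pan"},
         {"start": 2.5, "end": 4.0, "scene_type": "close-up", "camera_move": "static"},
     ]},
]

def handle_request(request: dict) -> dict:
    """S700-S704: receive request text, find the best-matching target
    field, and return its shooting information marked on the time axis."""
    words = set(request.get("text", "").lower().split())

    def score(entry):
        # Naive relevance: keyword overlap between request and metadata.
        meta = f'{entry["theme"]} {entry["location"]}'.lower().split()
        return len(words & set(meta))

    best = max(DATABASE, key=score)
    # The position of each entry on the time axis gives the client the
    # time sequence in which to apply each piece of shooting information.
    return {"timeline": best["timeline"]}
```

The client then walks the returned timeline in order, arranging scenes, characters, and camera moves (or clip points) at the corresponding moments.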
As an example, the target field information is field information of a target field of a plurality of fields; the method further comprises pre-acquiring the target field information:
performing field segmentation on the target video to obtain the target field;
performing mirror segmentation on the target field to obtain one or more mirrors;
analyzing each mirror to obtain shooting parameters of each mirror; and
obtaining the target field information according to the shooting parameters of each mirror;
wherein the target field information includes a shooting parameter of each mirror, and a position distribution of the shooting parameter of each mirror on the time axis has a corresponding relationship with a position distribution of each mirror in the target field.
As an example, the target field information is field information of a target field among a plurality of fields; the method further comprises pre-acquiring the target field information:
performing field segmentation on the target video to obtain the target field;
performing mirror segmentation on the target field to obtain one or more mirrors;
analyzing the target field to obtain a theme of the target field;
analyzing each mirror to obtain shooting parameters of each mirror; and
obtaining the target field information according to the theme and the shooting parameters of each mirror;
the target field information comprises the subject and shooting parameters of each mirror, and the position distribution of the shooting parameters of each mirror on the time axis has a corresponding relation with the position distribution of each mirror in the target field.
For ease of understanding, an example of operation is provided below in connection with FIG. 8:
s800: the client 30000 receives search content input by a user, and initiates a search request based on the search content.
The search content may include a video subject, a shooting location, and the like.
S802: the computer device 10000 searches the database according to the search request.
S804: the computer device 10000 returns the searched field information (which may include a plurality of shot information, each of which is distributed on the time axis) of the video bridge segment most relevant to the searched content to the client 30000.
S806: the client 30000 performs shooting or clipping based on each piece of shooting information and the position of each piece of shooting information on the time axis.
S808: the client 30000 generates a video having a high-quality shooting means and content from the shooting or clipping result.
EXAMPLE III
Fig. 9 schematically shows a block diagram of a video capture information acquisition system according to a third embodiment of the present application, which may be partitioned into one or more program modules, stored in a storage medium, and executed by one or more processors to implement the embodiments of the present application. The program modules referred to in the embodiments of the present application refer to a series of computer program instruction segments that can perform specific functions, and the following description will specifically describe the functions of each program module in the embodiments of the present application.
As shown in fig. 9, the video capture information acquisition system 900 may include a determination module 910, an analysis module 920, and a labeling module 930, wherein:
a determining module 910, configured to determine a target video for analysis, where the target video corresponds to a time axis representing a video progress;
an analysis module 920, configured to analyze the target video to obtain a plurality of shooting information; and
a labeling module 930, configured to label the plurality of shooting information onto the time axis, where each shooting information is distributed at a corresponding position of the time axis.
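The three modules can be pictured as plain functions, with detection stubbed out. The function names and the `quality` score below are illustrative assumptions, not part of the described system.

```python
def determine_target(videos: list) -> dict:
    # Determining module: pick a target video for analysis (scored here
    # by an assumed "quality" metadata key, purely for illustration).
    return max(videos, key=lambda v: v.get("quality", 0))

def analyze(video: dict) -> list:
    # Analysis module: stand-in for field/mirror segmentation and the
    # various detections (mirror movement, scene, person, angle).
    return video.get("shooting_info", [])

def label_on_time_axis(shooting_info: list) -> list:
    # Labeling module: each piece of shooting information lands at its
    # corresponding position on the time axis (ordered by start time).
    return sorted(shooting_info, key=lambda info: info["start"])
```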
As an example, the plurality of photographing information includes a plurality of photographing parameters; the analysis module 920 is further configured to:
performing field segmentation on the target video to obtain a plurality of fields;
performing mirror segmentation on the plurality of fields respectively to obtain a plurality of mirrors, each field comprising one or more mirrors;
analyzing each mirror of the plurality of mirrors to obtain shooting parameters of each mirror; and
obtaining field information of each field according to the shooting parameters of each mirror in each field;
wherein the field information of each field includes the shooting parameters of the respective mirrors in the field, and the position distribution of the shooting parameters of the respective mirrors in the field on the time axis corresponds to the position distribution of the respective mirrors in the field.
As an example, the plurality of photographing information includes a plurality of photographing parameters; the analysis module 920 is further configured to:
performing field segmentation on the target video to obtain a plurality of fields;
performing mirror segmentation on the plurality of fields to obtain a plurality of mirrors, each field comprising one or more mirrors;
analyzing the plurality of fields to obtain a topic for each field;
analyzing each mirror of the plurality of mirrors to obtain shooting parameters of each mirror; and
obtaining field information of each field according to the theme of each field and the shooting parameters of each mirror in each field;
wherein the field information of each field includes a subject of the field and photographing parameters of the respective mirrors within the field, and a position distribution of the photographing parameters of the respective mirrors within the field on the time axis corresponds to a position distribution of the respective mirrors within the field.
As an example, the shooting parameters of each mirror include one or more of the following: scene type, shooting angle, character information, mirror type, mirror-moving operation, and scene.
As an example, the system further comprises a script generation module to:
generating a plurality of split-mirror scripts for each field according to the shooting parameters of the mirrors in that field;
wherein the position distribution of the split-mirror scripts of each field on the time axis corresponds to the position distribution of the mirrors in the field.
Example four
Fig. 10 schematically shows a block diagram of a video capture and processing instruction system according to a fourth embodiment of the present application, which may be divided into one or more program modules, the one or more program modules being stored in a storage medium and executed by one or more processors to implement the embodiments of the present application. The program modules referred to in the embodiments of the present application refer to a series of computer program instruction segments that can perform specific functions, and the following description will specifically describe the functions of the program modules in the embodiments of the present application.
As shown in fig. 10, the video capture and processing instruction system 1000 may include a receiving module 1010, an obtaining module 1020, and a returning module 1030, wherein:
a receiving module 1010, configured to receive request information of a client;
an obtaining module 1020, configured to obtain target field information according to the request information, where the target field information includes multiple pieces of shooting information marked on a same time axis, and a position of each piece of shooting information on the time axis represents a time sequence of each piece of shooting information; and
a returning module 1030, configured to return the target field information to the client, so as to instruct the client to perform video shooting or video processing according to the shooting information and the marked positions of the shooting information on the time axis.
Optionally, the target field information is field information of a target field in a plurality of fields; the system further comprises a preset acquisition module, configured to acquire the target field information in advance:
performing field segmentation on the target video to obtain the target field;
performing mirror segmentation on the target field to obtain one or more mirrors;
analyzing each mirror to obtain shooting parameters of each mirror; and
obtaining the target field information according to the shooting parameters of each mirror;
wherein the target field information includes a shooting parameter of each mirror, and a position distribution of the shooting parameter of each mirror on the time axis has a corresponding relationship with a position distribution of each mirror in the target field.
Optionally, the target field information is field information of a target field in a plurality of fields; the system further comprises a preset acquisition module, configured to acquire the target field information in advance:
performing field segmentation on the target video to obtain the target field;
performing mirror segmentation on the target field to obtain one or more mirrors;
analyzing the target field to obtain a theme of the target field;
analyzing each mirror to obtain shooting parameters of each mirror; and
obtaining the target field information according to the theme and the shooting parameters of each mirror;
the target field information comprises the theme and the shooting parameters of each mirror, and the position distribution of the shooting parameters of each mirror on the time axis has a corresponding relation with the position distribution of each mirror in the target field.
EXAMPLE five
Fig. 11 schematically shows a hardware architecture diagram of a computer device 10000 according to an embodiment of the present application. In this embodiment, the computer device 10000 is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance. For example, it may be a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster composed of a plurality of servers). As shown in fig. 11, the computer device 10000 at least includes, but is not limited to: a memory 10010, a processor 10020, and a network interface 10030, which may be communicatively linked to each other via a system bus. Wherein:
the memory 10010 includes at least one type of computer-readable storage medium comprising flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), random Access Memory (RAM), static Random Access Memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, etc. In some embodiments, the storage 10010 may be an internal storage module of the computer device 10000, such as a hard disk or a memory of the computer device 10000. In other embodiments, the memory 10010 may also be an external storage device of the computer device 10000, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device 10000. Of course, the memory 10010 may also comprise both an internal memory module of the computer device 10000 and an external memory device thereof. In the present embodiment, the memory 10010 is generally used for storing an operating system and various types of application software installed on the computer device 10000, such as program codes of a video shooting information acquisition method, a video shooting and processing instruction method, and the like. In addition, the memory 10010 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 10020, in some embodiments, can be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip. The processor 10020 is generally configured to control overall operations of the computer device 10000, such as performing control and processing related to data interaction or communication with the computer device 10000. In this embodiment, the processor 10020 is configured to execute program codes stored in the memory 10010 or process data.
The network interface 10030 may include a wireless network interface or a wired network interface, and is generally used to establish a communication link between the computer device 10000 and other computer devices. For example, the network interface 10030 is used to connect the computer device 10000 to an external terminal through a network, and to establish a data transmission channel and a communication link between the computer device 10000 and the external terminal. The network may be a wireless or wired network such as an Intranet, the Internet, the Global System for Mobile Communications (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, or Wi-Fi.
It should be noted that fig. 11 only illustrates a computer device having components 10010-10030, but it should be understood that not all of the illustrated components are required to be implemented, and that more or fewer components may be implemented instead.
In this embodiment, the video capturing information obtaining method, the video capturing and processing instructing method stored in the memory 10010 can be further divided into one or more program modules and executed by one or more processors (in this embodiment, the processor 10020) to complete the embodiment of the present application.
EXAMPLE six
Embodiments of the present application also provide a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the steps of the video capture information acquisition method, the video capture and processing instruction method in the embodiments.
In this embodiment, the computer-readable storage medium includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the computer readable storage medium may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. In other embodiments, the computer readable storage medium may be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device. Of course, the computer-readable storage medium may also include both internal and external storage units of the computer device. In the present embodiment, the computer-readable storage medium is generally used for storing an operating system and various types of application software installed in the computer device, for example, program codes of the video capture information acquisition method, the video capture and processing instruction method, and the like in the embodiments. In addition, the computer-readable storage medium may also be used to temporarily store various types of data that have been output or are to be output.
It should be obvious to those skilled in the art that the modules or steps of the embodiments of the present application described above can be implemented by a general-purpose computing device. They can be centralized on a single computing device or distributed over a network composed of a plurality of computing devices. Alternatively, they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be executed in an order different from that shown or described. Alternatively, they can be separately manufactured as individual integrated circuit modules, or a plurality of the modules or steps can be manufactured as a single integrated circuit module. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (11)

1. A video shooting information acquisition method, characterized in that the method comprises:
determining a target video for analysis, wherein the target video corresponds to a time axis representing video progress;
analyzing the target video to obtain a plurality of shooting information; and
and marking the shooting information on the time axis, wherein each piece of shooting information is respectively distributed at the corresponding position of the time axis.
2. The video shooting information acquisition method according to claim 1, wherein the plurality of pieces of shooting information include a plurality of shooting parameters;
the analyzing the target video to obtain a plurality of shooting information includes:
performing field segmentation on the target video to obtain a plurality of fields;
performing mirror segmentation on the plurality of fields to obtain a plurality of mirrors, each field comprising one or more mirrors;
analyzing each mirror of the plurality of mirrors to obtain shooting parameters of each mirror; and
obtaining field information of each field according to the shooting parameters of each mirror in each field;
wherein the field information of each field includes the shooting parameters of the respective mirrors in the field, and the position distribution of the shooting parameters of the respective mirrors in the field on the time axis corresponds to the position distribution of the respective mirrors in the field.
3. The video shooting information acquisition method according to claim 1, wherein the plurality of pieces of shooting information include a plurality of shooting parameters;
the analyzing the target video to obtain a plurality of shooting information includes:
performing field segmentation on the target video to obtain a plurality of fields;
performing mirror segmentation on the plurality of fields respectively to obtain a plurality of mirrors, each field comprising one or more mirrors;
analyzing the plurality of fields to obtain a topic for each field;
analyzing each mirror of the plurality of mirrors to obtain shooting parameters of each mirror; and
obtaining field information of each field according to the theme of each field and the shooting parameters of each mirror in each field;
wherein the field information of each field includes a subject of the field and shooting parameters of the respective mirrors in the field, and the position distribution of the shooting parameters of the respective mirrors in the field on the time axis corresponds to the position distribution of the respective mirrors in the field.
4. The video shooting information acquisition method according to claim 2 or 3, wherein the shooting parameters of each mirror include one or more of: scene type, shooting angle, character information, mirror type, mirror-moving operation, and scene.
5. The video capture information acquisition method according to claim 2 or 3, further comprising:
generating a plurality of split-mirror scripts for each field according to the shooting parameters of the mirrors in that field;
wherein the position distribution of the plurality of split-mirror scripts of each field on the time axis corresponds to the position distribution of the mirrors in the field.
6. A video capture information acquisition system, the system comprising:
the system comprises a determining module, a processing module and a processing module, wherein the determining module is used for determining a target video for analysis, and the target video corresponds to a time axis representing video progress;
the analysis module is used for analyzing the target video to acquire a plurality of shooting information; and
and the marking module is used for marking the shooting information on the time axis, and each shooting information is respectively distributed at the corresponding position of the time axis.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor is adapted to implement the steps of the video capturing information obtaining method according to any of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, having stored therein a computer program, the computer program being executable by at least one processor to cause the at least one processor to perform the steps of the video capture information acquisition method of any one of claims 1 to 5.
9. A video capture and processing instruction method, the method comprising:
receiving request information of a client;
acquiring target field information according to the request information, wherein the target field information comprises a plurality of shooting information marked on the same time axis, and the position of each shooting information on the time axis represents the time sequence of each shooting information; and
and returning the target field information to the client to indicate the client to shoot or process videos according to the shooting information and the marked positions of the shooting information on the time axis.
10. The video shooting and processing indication method of claim 9, wherein the target field information is field information of a target field among a plurality of fields; the method further comprises pre-acquiring the target field information:
performing field segmentation on the target video to obtain the target field;
performing mirror segmentation on the target field to obtain one or more mirrors;
analyzing each mirror to obtain shooting parameters of each mirror; and
obtaining the target field information according to the shooting parameters of each mirror;
wherein the target field information includes a shooting parameter of each mirror, and a position distribution of the shooting parameter of each mirror on the time axis has a corresponding relationship with a position distribution of each mirror in the target field.
11. The video capture and processing indication method of claim 9, wherein the target field information is field information of a target field of a plurality of fields; the method further comprises pre-acquiring the target field information:
performing field segmentation on the target video to obtain the target field;
performing mirror segmentation on the target field to obtain one or more mirrors;
analyzing the target field to obtain a theme of the target field;
analyzing each mirror to obtain shooting parameters of each mirror; and
obtaining the target field information according to the theme and the shooting parameters of each mirror;
the target field information comprises the theme and the shooting parameters of each mirror, and the position distribution of the shooting parameters of each mirror on the time axis has a corresponding relation with the position distribution of each mirror in the target field.
CN202110801309.0A 2021-07-15 2021-07-15 Video shooting information acquisition method and video shooting and processing indication method Pending CN115701093A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110801309.0A CN115701093A (en) 2021-07-15 2021-07-15 Video shooting information acquisition method and video shooting and processing indication method
PCT/CN2022/098711 WO2023284469A1 (en) 2021-07-15 2022-06-14 Video capture information acquisition method, and video capture and processing instruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110801309.0A CN115701093A (en) 2021-07-15 2021-07-15 Video shooting information acquisition method and video shooting and processing indication method

Publications (1)

Publication Number Publication Date
CN115701093A true CN115701093A (en) 2023-02-07

Family

ID=84919029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110801309.0A Pending CN115701093A (en) 2021-07-15 2021-07-15 Video shooting information acquisition method and video shooting and processing indication method

Country Status (2)

Country Link
CN (1) CN115701093A (en)
WO (1) WO2023284469A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110305438A1 (en) * 2010-06-15 2011-12-15 Kuniaki Torii Information processing apparatus, information processing method, and program
CN107613235A * 2017-09-25 2018-01-19 Beijing Dajia Internet Information Technology Co., Ltd. Video recording method and device
WO2019075617A1 * 2017-10-16 2019-04-25 SZ DJI Technology Co., Ltd. Video processing method, control terminal and mobile device
CN110855893A * 2019-11-28 2020-02-28 Vivo Mobile Communication Co., Ltd. Video shooting method and electronic equipment
CN111147779A * 2019-12-31 2020-05-12 Vivo Mobile Communication Co., Ltd. Video production method, electronic device, and medium
CN112422831A * 2020-11-20 2021-02-26 Guangzhou Pacific Computer Information Consulting Co., Ltd. Video generation method and device, computer equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7352952B2 (en) * 2003-10-16 2008-04-01 Magix Ag System and method for improved video editing
US10057537B1 (en) * 2017-08-18 2018-08-21 Prime Focus Technologies, Inc. System and method for source script and video synchronization interface
CN110012237B * 2019-04-08 2020-08-07 Xiamen University Video generation method and system based on interactive guidance and cloud enhanced rendering
CN110139159B * 2019-06-21 2021-04-06 Shanghai Moxiang Network Technology Co., Ltd. Video material processing method and device and storage medium
CN111601039B * 2020-05-28 2021-10-15 Vivo Mobile Communication Co., Ltd. Video shooting method and device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG LUN: "Design and Implementation of an Android Client for Producing Marketing Short Videos", Excellent Master's Theses, 28 May 2021 (2021-05-28) *

Also Published As

Publication number Publication date
WO2023284469A1 (en) 2023-01-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination