CN113010738B - Video processing method, device, electronic equipment and readable storage medium

Video processing method, device, electronic equipment and readable storage medium

Info

Publication number
CN113010738B
CN113010738B (application CN202110183174.6A)
Authority
CN
China
Prior art keywords
video
target
input
camera
video image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110183174.6A
Other languages
Chinese (zh)
Other versions
CN113010738A (en)
Inventor
孙兴航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110183174.6A priority Critical patent/CN113010738B/en
Publication of CN113010738A publication Critical patent/CN113010738A/en
Application granted granted Critical
Publication of CN113010738B publication Critical patent/CN113010738B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/7867: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/71: Indexing; Data structures therefor; Storage structures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73: Querying
    • G06F 16/732: Query formulation
    • G06F 16/7328: Query by example, e.g. a complete video frame or video sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a video processing method, a video processing apparatus, an electronic device, and a readable storage medium, belonging to the technical field of video. The method comprises the following steps: receiving a first input of a user, wherein the first input is an input for selecting a target camera from N cameras; in response to the first input, acquiring a target video segment from a target video according to identification information associated with each frame of video image of the target video, wherein each frame of video image of the target video segment is acquired by the target camera; and performing target processing on the target video segment. The target video is composed of video images collected by the N cameras, and the identification information indicates the camera used to collect each frame of video image. With the video processing method provided by the application, the video segments corresponding to different cameras within a video can be conveniently distinguished, so that the video segments shot by different cameras can be conveniently filtered and processed.

Description

Video processing method, device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of video, and in particular relates to a video processing method, an apparatus, an electronic device, and a readable storage medium.
Background
With the widespread use of short-video social platforms, video capture has become an important application of electronic devices. Currently, many electronic devices integrate multiple cameras to improve the shooting effect; for example, smooth switching among the multiple cameras can simulate the effect of smooth optical zooming. However, in implementing the present application, the inventors found that the prior art has at least the following problem: for a video shot by switching among a plurality of cameras, it is often difficult to distinguish the video clips shot by different cameras, so it is difficult for a user to filter the video clips shot by each camera.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video processing method, apparatus, electronic device, and readable storage medium, which can solve the problem that it is difficult for a user to filter video clips captured by different cameras.
In order to solve the above technical problem, the application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a first input of a user, wherein the first input is input for selecting a target camera from N cameras, and N is an integer greater than 1;
in response to the first input, acquiring a target video segment from a target video according to identification information associated with each frame of video image of the target video, wherein each frame of video image of the target video segment is acquired by the target camera;
performing target processing on the target video segment;
the target video is composed of video images collected by the N cameras, and the identification information is used for indicating the cameras used for collecting each frame of video image.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the first receiving module is used for receiving a first input of a user, wherein the first input is an input for selecting a target camera from N cameras, and N is an integer greater than 1;
the acquisition module is used for, in response to the first input, acquiring a target video segment from the target video according to the identification information associated with each frame of video image of the target video, wherein each frame of video image of the target video segment is acquired by the target camera;
the processing module is used for performing target processing on the target video segment;
the target video is composed of video images collected by the N cameras, and the identification information is used for indicating the cameras used for collecting each frame of video image.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiments of the present application, a first input of a user is received, where the first input is an input for selecting a target camera from N cameras and N is an integer greater than 1; in response to the first input, a target video segment is acquired from the target video according to identification information associated with each frame of video image of the target video, where each frame of video image of the target video segment is acquired by the target camera; and target processing is performed on the target video segment. The target video is composed of video images collected by the N cameras, and the identification information indicates the camera used to collect each frame of video image. Through the identification information associated with each frame of video image of the target video, the video segments corresponding to different cameras can be conveniently distinguished, so that the video segments shot by different cameras in the target video can be conveniently filtered and processed.
Drawings
FIG. 1 is a flowchart of a video processing method provided in an embodiment of the present application;
FIG. 2a is a first schematic diagram of a video recording interface according to an embodiment of the present application;
FIG. 2b is a second schematic diagram of a video recording interface according to an embodiment of the present application;
FIG. 2c is a third schematic diagram of a video recording interface according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a camera watermark setting interface according to an embodiment of the present application;
FIG. 4a is a fourth schematic diagram of a video recording interface according to an embodiment of the present application;
FIG. 4b is a fifth schematic diagram of a video recording interface according to an embodiment of the present application;
FIG. 4c is a sixth schematic diagram of a video recording interface according to an embodiment of the present application;
FIG. 5a is a first schematic diagram of a video browsing interface according to an embodiment of the present application;
FIG. 5b is a second schematic diagram of a video browsing interface according to an embodiment of the present application;
FIG. 5c is a third schematic diagram of a video browsing interface according to an embodiment of the present application;
FIG. 6a is a fourth schematic diagram of a video browsing interface according to an embodiment of the present application;
FIG. 6b is a fifth schematic diagram of a video browsing interface according to an embodiment of the present application;
FIG. 7 is a sixth schematic diagram of a video browsing interface according to an embodiment of the present application;
FIG. 8a is a seventh schematic diagram of a video browsing interface according to an embodiment of the present application;
FIG. 8b is an eighth schematic diagram of a video browsing interface according to an embodiment of the present application;
FIG. 9 is a seventh schematic diagram of a video recording interface according to an embodiment of the present application;
FIG. 10 is a block diagram of a video processing apparatus provided in an embodiment of the present application;
FIG. 11 is a block diagram of an electronic device provided in an embodiment of the present application;
FIG. 12 is a block diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second", and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The video processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a video processing method according to an embodiment of the present application, and as shown in fig. 1, the video processing method includes the following steps:
step 101, receiving a first input of a user, wherein the first input is an input for selecting a target camera from N cameras, and N is an integer greater than 1.
In this embodiment of the present application, the first input may include a voice input or a touch input, and the touch input may include, but is not limited to, a sliding input, a clicking input, a dragging input, a pressing input, or the like. The N cameras may be any N different cameras. Optionally, the N cameras correspond to different zoom factor ranges; for example, the N cameras may include a wide-angle camera, a conventional camera, a tele camera, and so on.
Specifically, when the first input is received, the target camera may be determined from the N cameras according to the first input. For example, when the first input is a voice input, the voice may be recognized to obtain a voice recognition result, and the camera among the N cameras that matches the voice recognition result is determined as the target camera; when the first input is a touch input, the camera among the N cameras that matches the touch parameters of the touch input is determined as the target camera.
Step 102, in response to the first input, acquiring a target video segment from the target video according to identification information associated with each frame of video image of the target video, wherein each frame of video image of the target video segment is acquired by the target camera, the target video is composed of video images acquired by the N cameras, and the identification information indicates the camera used to acquire each frame of video image.
In this embodiment, the target video may include video clips collected by each of the N cameras. For example, if the length of the target video is 20 seconds, the 1st to 5th seconds may be a video clip captured by camera A, the 6th to 15th seconds a video clip captured by camera B, and the 16th to 20th seconds a video clip captured by camera C.
The identification information associated with a video image indicates the camera used to capture that video image. It should be noted that in this embodiment of the present application, the identification information may include text, symbols, images, and the like used for indicating information, displayed in a control or other container, including but not limited to at least one of a text identifier, a symbol identifier, an image identifier, and so on.
For example, the identification information associated with a video image may be the watermark of the camera used to capture it, an identifier of that camera, or an identifier of the zoom factor used to capture it, where different zoom factor ranges correspond to different cameras. Optionally, the identification information may be associated with each frame of video image during the capture of the target video, or after capture is complete; for example, the identification information of each frame may be determined by performing image recognition on each frame of the target video and then associated with that frame.
In this step, when the first input of the user is received, the target camera may be determined according to the first input, and the target video segment may be obtained by extracting, from the target video according to the identification information associated with each frame of video image, the frames whose identification information indicates the target camera. The target video segment is thus a video segment composed of the frames of the target video whose identification information indicates the target camera.
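As a minimal sketch (not the patented implementation, and with illustrative names), the extraction described above can be viewed as filtering the frames of the target video by the camera indicated in each frame's identification information:

```python
# Illustrative sketch: each frame of the target video carries identification
# information naming the camera that captured it; extracting the target video
# segment then reduces to a filter over the frames.

from dataclasses import dataclass

@dataclass
class Frame:
    index: int          # position of the frame in the target video
    camera_id: str      # identification information: the capturing camera
    data: bytes = b""   # pixel payload (elided)

def extract_segment(video: list[Frame], target_camera: str) -> list[Frame]:
    """Return the frames of `video` whose identification information
    indicates `target_camera`, preserving their original order."""
    return [f for f in video if f.camera_id == target_camera]

# Example mirroring the 20-second video described above (1 frame per second):
video = ([Frame(i, "A") for i in range(0, 5)] +      # seconds 1-5: camera A
         [Frame(i, "B") for i in range(5, 15)] +     # seconds 6-15: camera B
         [Frame(i, "C") for i in range(15, 20)])     # seconds 16-20: camera C
segment = extract_segment(video, "B")                # the target video segment
```

A real implementation would read the per-frame identification information from the stored association rather than from an in-memory list.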
And 103, performing target processing on the target video segment.
In this embodiment of the present application, the target processing performed on the target video segment may be playing the target video segment, or editing it, for example adjusting the exposure parameters of the target video segment, adding a filter to it, or trimming it.
According to the video processing method of the embodiments of the present application, a first input of a user is received, where the first input is an input for selecting a target camera from N cameras and N is an integer greater than 1; in response to the first input, a target video segment is acquired from the target video according to identification information associated with each frame of video image of the target video, where each frame of video image of the target video segment is acquired by the target camera; and target processing is performed on the target video segment. The target video is composed of video images collected by the N cameras, and the identification information indicates the camera used to collect each frame of video image. Through the identification information associated with each frame of video image of the target video, the video segments corresponding to different cameras can be conveniently distinguished, so that the video segments shot by different cameras in the target video can be conveniently filtered and processed.
Optionally, the identification information includes at least one of a watermark, a camera identification, and a zoom factor identification.
In this embodiment of the present application, the identification information associated with a video image may include at least one of a watermark, a camera identifier, and a zoom factor identifier. A watermark is the watermark of a camera, and different cameras correspond to different watermarks. A camera identifier identifies a camera, and different cameras may correspond to different camera identifiers. A zoom factor identifier identifies a zoom factor; different zoom factors correspond to different zoom factor identifiers, and different cameras correspond to different zoom factor ranges.
In this embodiment, the camera used to collect each frame of video image is indicated by at least one of a watermark, a camera identifier, and a zoom factor identifier, so that the video segments collected by different cameras in the target video can be distinguished conveniently and accurately based on the identification information associated with each frame of video image; this is also simple to implement.
Optionally, before the target video segment is acquired from the target video according to the identification information associated with each frame of video image of the target video, the method further includes:
And in the process of acquiring video images through the N cameras, storing each frame of video image and the identification information of the camera used for acquiring each frame of video image in an associated mode.
In this embodiment of the present application, in the process of capturing video images by the N cameras, each frame of video image may be stored in association with the identification information of the camera used to capture it. For example, when the N cameras include a wide-angle camera, a conventional camera, and a tele camera, the user may slide the zoom bar to switch among the cameras while capturing the target video, and during capture by each camera, each frame of video image captured by that camera is stored in association with the identification information of that camera.
Storing each frame of video image in association with the identification information of the camera used to capture it may include packaging each frame of video image and the identification information into one frame of data for storage, so that each frame of video image and its associated identification information can be obtained quickly from each frame of data. Alternatively, it may include storing each frame of video image, the identification information, and the association between them separately, so that the identification information of the camera used to capture each frame can be obtained quickly based on the association.
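To illustrate the first storage strategy, packaging each frame of video image together with its identification information into one frame of data, here is a hedged sketch; the header layout and field names are assumptions, not taken from the patent:

```python
# Sketch of per-frame packaging: a small header carrying the camera
# identification information plus the payload length, followed by the pixel
# data, so that frame and identification information are read back together.

import struct

HEADER = "<BI"  # 1-byte camera id + 4-byte payload length, little-endian
HEADER_SIZE = struct.calcsize(HEADER)  # 5 bytes

def pack_frame(camera_id: int, pixels: bytes) -> bytes:
    """Package one frame of video image with its identification information
    into a single record."""
    return struct.pack(HEADER, camera_id, len(pixels)) + pixels

def unpack_frame(record: bytes) -> tuple[int, bytes]:
    """Recover the camera identification information and pixel data from
    one packaged record."""
    camera_id, length = struct.unpack_from(HEADER, record, 0)
    return camera_id, record[HEADER_SIZE:HEADER_SIZE + length]

record = pack_frame(camera_id=2, pixels=b"\x00" * 16)
cam, pixels = unpack_frame(record)
```

The second strategy would instead keep frames and identification information in separate stores plus a frame-index-to-camera mapping; the trade-off is one read per frame here versus an extra lookup there.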
After the video images are collected, a thumbnail of the collected video may be displayed in a first preset area of the video recording interface, for example in its bottom area; when the user clicks the thumbnail, the video browsing interface is entered, and the user may start playing the video by clicking the play control. When a plurality of videos have been shot, a thumbnail of each of them may be displayed in the bottom area of the video recording interface; as shown in FIG. 2a, a thumbnail 11 of video a and a thumbnail 12 of video b are displayed. In addition, an add control 20 may be displayed in the bottom area of the video recording interface; for example, the "+" control may be clicked to select a video from the gallery to add to the editing, as shown in FIG. 2b, after which a thumbnail of the selected video is displayed in the bottom area of the video recording interface, as shown in FIG. 2c, where a thumbnail 13 of the selected video c is displayed. When the user clicks a video thumbnail displayed in the bottom area of the video recording interface, the video browsing interface is entered, and browsing can be started by clicking the play control. It should be noted that a second preset area of the video browsing interface may include the video thumbnails displayed in the first preset area of the video recording interface.
In the embodiments of the present application, each captured frame of video image is stored, during video capture, in association with the identification information of the camera used to capture it. This ensures that the identification information associated with each frame accurately reflects the camera used to capture that frame, and that the identification information associated with each frame can be obtained conveniently.
Optionally, the identification information includes watermarks, each of the N cameras is associated with a watermark, and watermarks associated with different cameras are different;
the step of storing the identification information of each frame of video image and the camera used for collecting each frame of video image in an associated mode comprises the following steps:
and storing the watermark associated with each frame of video image and the camera used for acquiring each frame of video image.
In this embodiment, at least one watermark may be set in advance for each of the N cameras, with different cameras associated with different watermarks. For example, as shown in FIG. 3, the N cameras include a wide-angle camera, a main camera, and a tele camera; three watermarks, namely WIDE1, WIDE2, and WIDE3, are set for the wide-angle camera; three watermarks, namely MASTER1, MASTER2, and MASTER3, are set for the main camera; and three watermarks, namely ULTRA1, ULTRA2, and ULTRA3, are set for the tele camera. In this way, while one of the N cameras is shooting, each video image it captures can be stored in association with the watermark associated with that camera.
According to the video processing method and device, watermarks are associated with each of the N cameras in advance, different cameras are associated with different watermarks, in the process of collecting video images through the N cameras, each frame of video image is associated with the watermark associated with the camera used for collecting each frame of video image, and therefore video clips shot by the cameras can be intuitively and rapidly distinguished based on the watermarks associated with the video images of each frame of video in the video.
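A minimal sketch of the per-camera watermark association described above (the watermark names follow FIG. 3; the dictionary layout and function name are illustrative assumptions):

```python
# Each camera is associated in advance with one or more watermarks;
# different cameras have different watermarks.
WATERMARKS = {
    "wide":   ["WIDE1", "WIDE2", "WIDE3"],      # wide-angle camera
    "master": ["MASTER1", "MASTER2", "MASTER3"],  # main camera
    "ultra":  ["ULTRA1", "ULTRA2", "ULTRA3"],   # tele camera
}

def watermark_for(camera: str, choice: int = 0) -> str:
    """Return a watermark associated with `camera`; since a camera may have
    several watermarks set for it, `choice` selects one of them."""
    return WATERMARKS[camera][choice]
```

During capture, each frame would then be stored together with `watermark_for(active_camera)` as its identification information.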
Optionally, the video recording interface includes a first control;
in the process of acquiring video images through the N cameras, the method further comprises the following steps:
displaying a first watermark on a first video image displayed on the video recording interface, wherein the first video image is acquired by a first camera, and the first watermark is a watermark associated with the first camera;
when a second input of the user on a first indication identifier on the first control is received, in response to the second input, moving the first indication identifier to a target position, updating the first video image to a second video image, and displaying a second watermark on the second video image;
The second video image is a video image collected by a second camera, the second watermark is a watermark associated with the second camera, and the second camera is a camera associated with a target zoom multiple indicated by the target position.
In an embodiment of the present application, the first control may include, but is not limited to, a zoom bar. The first indication marks are in one-to-one correspondence with the zoom multiples on the zoom bar. The second input may include, but is not limited to, a drag input, a slide input, and the like. The above-described target zoom factor may also be referred to as a target focal length.
The following description takes as an example N cameras including a wide-angle camera, a main camera, and a telephoto camera. Before capturing video, the video watermark switch of the camera may be turned on to enable the video watermark function. Smooth switching between different cameras can be achieved by sliding the zoom bar during video capture; for example, as shown in FIG. 4a to FIG. 4c, dragging the zoom bar 30 to the left to a magnification of less than 1× opens the wide-angle camera, and dragging it to the right to a magnification of greater than 5× opens the tele camera.
After the zoom bar is slid to switch to a different camera during video capture, the watermark of the corresponding camera is matched automatically. For example, when video images are captured by the main camera, the video images captured by the main camera are displayed on the video recording interface together with the watermark associated with the main camera; as shown in FIG. 4a, the watermark 41 associated with the main camera, that is, the Master watermark, is displayed on the video image captured by the main camera. When video images are captured by the wide-angle camera, the video images captured by the wide-angle camera are displayed on the video recording interface together with the watermark associated with the wide-angle camera; as shown in FIG. 4b, the watermark 42 associated with the wide-angle camera, that is, the wide watermark, is displayed on the video image captured by the wide-angle camera. When video images are captured by the tele camera, the video images captured by the tele camera are displayed on the video recording interface together with the watermark associated with the tele camera; as shown in FIG. 4c, the watermark 43 associated with the tele camera, that is, the ultra watermark, is displayed on the video image captured by the tele camera.
It should be noted that the video image displayed on the video recording interface may be the captured video image used for video encoding, or a small-size image corresponding to the captured video image; the small-size image is an image whose size is smaller than that of the captured video image, for example an image obtained by downscaling the captured video image by a preset factor. When the video image displayed on the video recording interface is a small-size image corresponding to the captured video image, the embodiment of the application may store each captured frame of video image, the small-size image corresponding to it, and the watermark associated with the camera used to capture it in association with one another.
In the process of capturing video images by the N cameras, the zoom factor can be adjusted to switch among the cameras, and the watermark of the camera after switching is adaptively matched and displayed on the captured video images. The operation is convenient, and the user can intuitively see which camera is being used to capture the video images.
Optionally, different cameras among the N cameras correspond to different zoom multiple ranges, and each camera among the N cameras captures video images within its corresponding zoom multiple range;
The step of storing, in an associated manner, each frame of video image and the identification information of the camera used to capture that frame comprises:
storing, in an associated manner, each frame of video image and the zoom multiple identifier corresponding to the zoom multiple used for that frame.
In this embodiment of the present application, different cameras among the N cameras correspond to different zoom multiple ranges. For example, the N cameras include a wide-angle camera, a main camera, and a tele camera, where the zoom multiple range corresponding to the wide-angle camera is less than 1 time, that is, less than 1×; the zoom multiple range corresponding to the main camera is 1 time to 5 times, that is, 1× to 5×; and the zoom multiple range corresponding to the tele camera is greater than 5 times, that is, greater than 5×. Optionally, in the process of capturing video through the N cameras, different cameras may be switched by sliding the zoom bar, for example as shown in figs. 4a to 4c.
The zoom multiple identifier corresponding to a zoom multiple is used to identify that zoom multiple. In practical application, when each frame of video image and the zoom multiple identifier corresponding to the zoom multiple used for that frame are stored in an associated manner, the zoom multiple used for each frame can be determined from the associated zoom multiple identifier, and the camera used for each frame can in turn be determined from the zoom multiple range to which that zoom multiple belongs.
In this embodiment of the present application, different cameras among the N cameras correspond to different zoom multiple ranges, and each camera captures video images within its corresponding range. In the process of capturing video images through the N cameras, each captured frame is stored in association with the zoom multiple identifier corresponding to the zoom multiple used when capturing it. Based on the zoom multiple identifier associated with each frame in the video, the video clips shot by each camera can be quickly distinguished, and video clips corresponding to different zoom multiples can also be screened out and processed from the zoom-multiple dimension.
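The zoom-range mapping described above can be sketched as follows, assuming the example ranges given earlier (wide-angle below 1×, main 1× to 5×, tele above 5×). The record layout is hypothetical; it only shows that a camera tag never needs to be stored, since it is recoverable from the zoom multiple identifier alone.

```python
# Minimal sketch, assuming the example ranges: wide < 1x, main 1x-5x, tele > 5x.
def camera_for_zoom(zoom: float) -> str:
    """Map a zoom multiple to the camera whose range contains it."""
    if zoom < 1.0:
        return "wide"
    if zoom <= 5.0:
        return "main"
    return "tele"

def store_frames(zooms):
    """Associate each captured frame with the zoom-multiple identifier in
    use when it was captured (here the identifier is the zoom value itself)."""
    return [{"frame": i, "zoom_id": z} for i, z in enumerate(zooms)]

def frames_of_camera(records, camera):
    """Recover, from the stored zoom identifiers alone, which frames a
    given camera captured -- no separate camera tag is needed."""
    return [r["frame"] for r in records if camera_for_zoom(r["zoom_id"]) == camera]
```

Filtering by `camera_for_zoom` gives the per-camera screening, while filtering directly on `zoom_id` gives the finer per-zoom-multiple screening mentioned above.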
Optionally, before the receiving the first input of the user, the method further includes:
receiving a third input of a user to the target video;
and responding to the third input, playing the target video, and displaying the identification information associated with each played video image on the second control in real time.
In this embodiment of the present application, the third input may include a voice input or a touch input, where the touch input may include, but is not limited to, a click input, a slide input, a press input, or a drag input. The second control may be a playing progress bar, or may be an additionally set control.
In an embodiment, if the identification information associated with the video image is a watermark, the watermark associated with each frame of video image that is played may be displayed on the second control in real time during the process of playing the target video. For example, as shown in fig. 5a, the second control is a watermark selection control 51, and the watermark associated with each frame of video image that is played is displayed on the watermark selection control 51. It should be noted that, during the process of playing the target video, playing the target video may be paused to view the watermark associated with the video image of the current frame displayed on the watermark selection control 51.
In another embodiment, if the identification information associated with the video image is a zoom multiple identification, the zoom multiple identification associated with each frame of video image that is played may be displayed on the second control in real time during the process of playing the target video. For example, as shown in fig. 5b, the second control is a playing progress bar 52, and the zoom multiple identifier 62 associated with each frame of video image played is displayed on the playing progress bar 52.
In another embodiment, if the identification information associated with the video image includes a zoom multiple identification and a camera identification, the zoom multiple identification and the camera identification associated with each frame of video image that is played may be displayed on the second control in real time during the process of playing the target video. For example, as shown in fig. 5c, the second control is a playing progress bar 52, and a zoom multiple identifier 62 and a camera identifier 63 associated with each frame of video image played are displayed on the playing progress bar 52.
According to the method and the device for displaying the video images, the identification information associated with each frame of video image played is displayed on the second control in real time in the process of playing the target video, so that a user can conveniently and intuitively check the camera or zoom multiple used for collecting each frame of video image.
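The real-time display on the second control can be sketched as a per-frame metadata lookup during playback. The metadata layout and the rendered text are assumptions for illustration; the actual control could be a progress bar or a dedicated watermark control, as described above.

```python
# Hypothetical sketch: as the target video plays, the identification
# information stored with each frame is looked up and rendered on the
# second control. The record layout is assumed.
frame_info = [
    {"watermark": "Master", "zoom_id": "1x"},
    {"watermark": "Wide", "zoom_id": "0.6x"},
]

def info_for_frame(index: int) -> dict:
    """Fetch the identification information stored with the frame."""
    return frame_info[index]

def render_second_control(index: int) -> str:
    """Produce the text shown on the second control for the current frame."""
    info = info_for_frame(index)
    return f"[{info['watermark']} | {info['zoom_id']}]"
```

Pausing playback simply freezes `index`, so the information for the current frame stays visible, matching the pause-to-view behavior described above.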
Optionally, the identification information includes a watermark; before the receiving the first input of the user, the method further comprises:
receiving a fourth input of a user to the second control;
responding to the fourth input, and displaying N watermark options, wherein the N watermark options are in one-to-one correspondence with the N cameras;
the receiving a first input from a user includes:
receiving a first input of a user to a target watermark option of the N watermark options;
the target camera is a camera corresponding to the target watermark option.
In this embodiment of the present application, the fourth input may include a voice input or a touch input, where the touch input may include, but is not limited to, a click input, a slide input, a press input, or a drag input.
For example, as shown in fig. 6a, when the user clicks the watermark selection control 51, an option list 511 containing three watermark options (main shot, wide angle, and tele) is displayed, as shown in fig. 6b. When the user clicks one of the watermark options, the video clip of the camera corresponding to that option is obtained from the target video; for example, when the user clicks the main shot option, the video clip of the main camera is obtained from the target video.
According to the embodiment of the application, the N watermark options are provided for the user to select, so that convenience in selecting video clips corresponding to different cameras can be improved.
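Selecting a watermark option and extracting the matching clip(s) can be sketched as grouping the frames tagged with the target camera into contiguous segments. The per-frame camera tags are an assumed storage form; in the embodiment they would be derived from the stored watermarks or zoom multiple identifiers.

```python
# Sketch of obtaining the video clip(s) of the camera corresponding to
# the selected watermark option. Per-frame camera tags are assumed.
def clips_for_camera(frame_cameras, target):
    """Return (start, end) frame-index pairs, inclusive, for every
    contiguous run of frames captured by the target camera."""
    clips, start = [], None
    for i, cam in enumerate(frame_cameras):
        if cam == target and start is None:
            start = i                      # a run of target-camera frames begins
        elif cam != target and start is not None:
            clips.append((start, i - 1))   # the run ended on the previous frame
            start = None
    if start is not None:
        clips.append((start, len(frame_cameras) - 1))  # run reaches the last frame
    return clips
```

A video that switched cameras several times therefore yields several clips for the same watermark option, each one a segment shot entirely by that camera.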
Optionally, the identification information includes a zoom multiple identification;
the receiving a first input from a user includes:
receiving a first input of a zoom multiple identifier displayed on the second control by a user;
the target camera is a camera indicated by a target zoom multiple identifier, and the target zoom multiple identifier is a zoom multiple identifier determined according to the input parameters of the first input.
In this embodiment of the present application, the first input may include a press input, a slide input, or a drag input. Wherein, in the case that the first input is a pressing input, the input parameter of the first input may include a pressing duration; in the case that the first input is a sliding input, the input parameter of the first input may include a sliding distance; in the case where the first input is a drag input, the input parameter of the first input may include a drag distance.
In practical application, a correspondence between different zoom multiple identifiers and the input parameters of the first input can be established in advance, so that the corresponding zoom multiple identifier, namely the target zoom multiple identifier, can be determined rapidly from the input parameters of the first input. For example, as shown in fig. 7, the zoom multiple identifier 62 displayed on the playing progress bar 52 is pressed to select a target zoom multiple.
The camera indicated by the target zoom multiple identifier may refer to the camera corresponding to the zoom multiple range to which the zoom multiple indicated by that identifier belongs. Optionally, after the target zoom multiple identifier is determined, the video clip corresponding to it may be acquired directly from the target video; this is the video clip captured by the target camera. Alternatively, the video clip corresponding to the target camera indicated by the target zoom multiple identifier may be obtained from the target video; for example, all video images in the target video whose associated zoom multiple identifiers indicate zoom multiples belonging to the target zoom multiple range may be obtained, where the target zoom multiple range is the zoom multiple range corresponding to the target camera.
According to the embodiment of the application, the target zoom multiple identification is selected through the first input of the zoom multiple identification displayed on the second control, so that the operation is convenient, and the user can conveniently process the video from the granularity of the zoom multiple.
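The pre-established correspondence between the first input's parameter and zoom multiple identifiers can be sketched as a small lookup table. The thresholds and identifier labels below are illustrative assumptions; the embodiment only requires that some such correspondence exists, whether keyed on press duration, slide distance, or drag distance.

```python
# Illustrative correspondence between press duration (seconds) and
# zoom-multiple identifiers. Thresholds are assumed, not from the text.
ZOOM_ID_BY_DURATION = [
    (0.5, "0.6x"),  # press shorter than 0.5 s selects the 0.6x identifier
    (1.5, "1x"),    # 0.5 s to under 1.5 s selects the 1x identifier
    (3.0, "5x"),    # 1.5 s to under 3.0 s selects the 5x identifier
]

def target_zoom_id(press_duration: float) -> str:
    """Resolve the target zoom-multiple identifier from the input parameter."""
    for threshold, zoom_id in ZOOM_ID_BY_DURATION:
        if press_duration < threshold:
            return zoom_id
    return ZOOM_ID_BY_DURATION[-1][1]  # longest presses map to the last identifier
```

The same shape of table works for a sliding or dragging input: only the parameter being compared against the thresholds changes.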
Optionally, the identification information comprises a camera identifier and a zoom multiple identifier;
the receiving a first input from a user includes:
receiving a first input for a camera identifier or a zoom multiple identifier displayed on the second control;
the target camera is the camera corresponding to a target camera identifier, or to a target zoom multiple identifier, determined according to an input parameter of the first input. When a first input for a camera identifier displayed on the second control is received, the video clip of the camera corresponding to the target camera identifier can be obtained from the target video; when a first input for a zoom multiple identifier displayed on the second control is received, the video clip corresponding to the target zoom multiple can be obtained from the target video.
In this embodiment of the present application, the first input may include a press input, a slide input, or a drag input. Wherein, in the case that the first input is a pressing input, the input parameter of the first input may include a pressing duration; in the case that the first input is a sliding input, the input parameter of the first input may include a sliding distance; in the case where the first input is a drag input, the input parameter of the first input may include a drag distance.
For example, as shown in fig. 8a, in the case of receiving a pressing input for the camera identifier 63 displayed on the playing progress bar 52, a target camera identifier may be determined according to the pressing duration of the pressing input, and the video clip of the camera corresponding to the target camera identifier may be obtained from the target video. As shown in fig. 8b, in the case of receiving a pressing input for the zoom multiple identifier 62 displayed on the playing progress bar 52, a target zoom multiple identifier may be determined according to the pressing duration of the pressing input, and the video clip corresponding to the target zoom multiple identifier may be obtained from the target video.
In this embodiment of the present application, the identification information includes a camera identifier and a zoom multiple identifier. When a first input for a camera identifier displayed on the second control is received, the video clip of the camera corresponding to the target camera identifier may be obtained from the target video; when a first input for a zoom multiple identifier displayed on the second control is received, the video clip corresponding to the target zoom multiple identifier may be obtained from the target video. The user can thus conveniently process videos at both camera granularity and zoom multiple granularity.
Optionally, in the case that thumbnails of a plurality of videos are displayed in the second preset area of the video browsing interface, in response to the first input, a video clip corresponding to the target camera may be obtained from each video according to identification information associated with each frame of video image of each video. The second preset area may be a bottom area of the video browsing interface.
For example, as shown in fig. 6a, a thumbnail of a video a, a thumbnail of a video b, and a thumbnail of a video c are displayed in the bottom area of the video browsing interface. When the user selects the wide-angle watermark option, the video clip corresponding to the wide-angle camera in video a, the video clip corresponding to the wide-angle camera in video b, and the video clip corresponding to the wide-angle camera in video c may be acquired respectively.
In the case that thumbnails of a plurality of videos are displayed in the second preset area of the video browsing interface, the video clips corresponding to the target camera are obtained from each video according to the identification information associated with each frame of video image of that video, so the efficiency of processing a plurality of videos can be improved.
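Applying one selection across several videos can be sketched as iterating the same per-frame filter over each video's stored metadata. The storage form (bare camera tags keyed by video name) is a simplification for illustration; the embodiment would store the full identification information per frame.

```python
# Sketch: one watermark/camera selection applied to several videos at once,
# as when thumbnails of videos a, b, and c are shown together.
def clips_per_video(videos, target):
    """For each video (name -> per-frame camera tags), gather the frame
    indices captured by the target camera."""
    return {
        name: [i for i, cam in enumerate(cams) if cam == target]
        for name, cams in videos.items()
    }
```

A single selection of, say, the wide-angle option therefore yields the wide-angle material of every displayed video in one pass, rather than requiring the user to open and filter each video separately.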
Optionally, the performing target processing on the target video segment includes:
Playing the target video clip;
alternatively, the target video clip is clipped.
In the embodiment of the application, after the target video clip is obtained, the target video clip can be played; or may clip the target video clip, for example, adjust the exposure parameters of the target video clip or add filters to the target video clip or intercept the target video clip, etc.
According to the embodiment of the application, after the target video clip is obtained, it can be played, so that the user can browse only the video clips shot by the target camera; or it can be clipped, meeting the user's need to edit the video at camera granularity.
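The target processing step can be sketched as a simple dispatch between playing and clipping. Only the interception (trimming) case of clipping is shown; the exposure adjustment and filter operations mentioned above are omitted, and the frame representation is an assumption for the example.

```python
# Sketch of target processing on an obtained clip: play it back, or
# intercept (trim) a sub-range of it. Frames are modeled as a plain list.
def process_clip(frames, action, start=None, end=None):
    """Dispatch the target processing for a clip.

    'play' returns the frames in order (standing in for playback);
    'clip' intercepts the inclusive sub-range [start, end].
    """
    if action == "play":
        return list(frames)
    if action == "clip":
        return list(frames[start:end + 1])
    raise ValueError(f"unsupported action: {action}")
```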
After video playing is completed, the return control at the upper left corner of the video browsing interface may be clicked to return to the video recording interface; the bottom area of the video recording interface may include the saved video thumbnails from the bottom area of the video browsing interface, as shown in fig. 9.
It should be noted that, in the video processing method provided in the embodiment of the present application, the execution body may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiments of the present application, the video processing apparatus provided herein is described by taking as an example a video processing apparatus that executes the video processing method.
Referring to fig. 10, fig. 10 is a block diagram of a video processing apparatus according to an embodiment of the present application, and as shown in fig. 10, the video processing apparatus 1000 includes:
a first receiving module 1001, configured to receive a first input of a user, where the first input is an input of selecting a target camera from N cameras, and N is an integer greater than 1;
an obtaining module 1002, configured to respond to the first input, and obtain a target video segment from the target video according to identification information associated with each frame of video image of the target video, where each frame of video image of the target video segment is collected by the target camera;
a processing module 1003, configured to perform target processing on the target video segment;
the target video is composed of video images collected by the N cameras, and the identification information is used for indicating the cameras used for collecting each frame of video image.
Optionally, the identification information includes at least one of a watermark, a camera identification, and a zoom factor identification.
Optionally, the apparatus further comprises:
and a storage module, configured to store, in an associated manner, each frame of video image and the identification information of the camera used to capture that frame, in the process of capturing video images through the N cameras, before the identification information corresponding to each frame of video image of the target video is acquired.
Optionally, the identification information includes watermarks, each of the N cameras is associated with a watermark, and watermarks associated with different cameras are different;
the storage module is specifically used for:
storing, in an associated manner, each frame of video image and the watermark associated with the camera used to capture that frame.
Optionally, the video recording interface includes a first control;
the apparatus further comprises:
the first display module is used for displaying a first watermark on a first video image displayed on the video recording interface in the process of acquiring video images through the N cameras, wherein the first video image is acquired by a first camera, and the first watermark is related to the first camera;
the second display module is used for responding to the second input, moving the first indication mark to a target position, updating the first video image into a second video image and displaying a second watermark on the second video image when receiving the second input of the first indication mark on the first control by the user;
the second video image is a video image collected by a second camera, the second watermark is a watermark associated with the second camera, and the second camera is a camera associated with a target zoom multiple indicated by the target position.
Optionally, different cameras in the N cameras correspond to different zoom multiple ranges, and each camera in the N cameras acquires a video image in the corresponding zoom multiple range;
the storage module is specifically used for:
storing, in an associated manner, each frame of video image and the zoom multiple identifier corresponding to the zoom multiple used for that frame.
Optionally, the apparatus further comprises:
the second receiving module is used for receiving a third input of the user to the target video before the first input of the user is received;
and the playing module is used for responding to the third input, playing the target video and displaying the identification information associated with each played video image on the second control in real time.
Optionally, the identification information includes a watermark; the apparatus further comprises:
the third receiving module is used for receiving a fourth input of the user to the second control before the first input of the user is received;
the third display module is used for responding to the fourth input and displaying N watermark options, wherein the N watermark options are in one-to-one correspondence with the N cameras;
the first receiving module is specifically configured to:
receiving a first input of a user to a target watermark option of the N watermark options;
The target camera is a camera corresponding to the target watermark option.
Optionally, the identification information includes a zoom multiple identification;
the first receiving module is specifically configured to:
receiving a first input of a zoom multiple identifier displayed on the second control by a user;
the target camera is a camera indicated by a target zoom multiple identifier, and the target zoom multiple identifier is a zoom multiple identifier determined according to the input parameters of the first input.
Optionally, the processing module is specifically configured to:
playing the target video clip;
alternatively, the target video clip is clipped.
The video processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The video processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video processing apparatus 1000 provided in the embodiment of the present application can implement each process of the embodiment of the video processing method, and in order to avoid repetition, a description thereof is omitted here.
The video processing device 1000 of the embodiment of the present application, a first receiving module 1001 is configured to receive a first input of a user, where the first input is an input of selecting a target camera from N cameras, and N is an integer greater than 1; an obtaining module 1002, configured to respond to the first input, and obtain a target video segment from the target video according to identification information associated with each frame of video image of the target video, where each frame of video image of the target video segment is collected by the target camera; a processing module 1003, configured to perform target processing on the target video segment; the target video is composed of video images collected by the N cameras, and the identification information is used for indicating the cameras used for collecting each frame of video image. The video clips corresponding to different cameras can be distinguished conveniently through the identification information associated with each frame of video image of the target video, and then the video clips shot by different cameras in the target video can be screened conveniently and rapidly for processing.
Optionally, referring to fig. 11, fig. 11 is a block diagram of an electronic device provided in an embodiment of the present application, and as shown in fig. 11, an electronic device 1100 provided in an embodiment of the present application includes a processor 1101, a memory 1102, and a program or an instruction stored in the memory 1102 and capable of running on the processor 1101, where the program or the instruction implements each process of the video processing method embodiment described above when being executed by the processor 1101, and the same technical effect can be achieved, and is not repeated herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Referring to fig. 12, fig. 12 is a block diagram of an electronic device according to another embodiment of the present application, and as shown in fig. 12, the electronic device 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensor 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, and processor 1210.
Those skilled in the art will appreciate that the electronic device 1200 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1210 by a power management system, such as to perform functions such as managing charging, discharging, and power consumption by the power management system. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than illustrated, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The input unit 1204 is configured to receive a first input of a user, where the first input is an input of selecting a target camera from N cameras, and N is an integer greater than 1;
the processor 1210 is configured to obtain, in response to the first input, a target video segment from the target video according to identification information associated with each frame of video image of the target video, where each frame of video image of the target video segment is collected by the target camera; performing target processing on the target video segment; the target video is composed of video images collected by the N cameras, and the identification information is used for indicating the cameras used for collecting each frame of video image.
Optionally, the identification information includes at least one of a watermark, a camera identification, and a zoom factor identification.
Optionally, the memory 1209 is configured to:
before the target video clip is acquired from the target video according to the identification information associated with each frame of video image of the target video, storing, in an associated manner and in the process of capturing video images through the N cameras, each frame of video image and the identification information of the camera used to capture that frame.
Optionally, the identification information includes watermarks, each of the N cameras is associated with a watermark, and watermarks associated with different cameras are different;
the memory 1209 is configured to:
storing, in an associated manner, each frame of video image and the watermark associated with the camera used to capture that frame.
Optionally, the video recording interface includes a first control;
the display unit 1206 is configured to:
displaying a first watermark on a first video image displayed on the video interface in the process of acquiring video images through the N cameras, wherein the first video image is acquired by a first camera, and the first watermark is related to the first camera;
in case of receiving a second input of a first indication identifier on the first control by a user, responding to the second input, moving the first indication identifier to a target position, updating the first video image into a second video image, and displaying a second watermark on the second video image;
the second video image is a video image collected by a second camera, the second watermark is a watermark associated with the second camera, and the second camera is a camera associated with a target zoom multiple indicated by the target position.
Optionally, different cameras in the N cameras correspond to different zoom multiple ranges, and each camera in the N cameras acquires a video image in the corresponding zoom multiple range;
the memory 1209 is configured to:
storing, in an associated manner, each frame of video image and the zoom multiple identifier corresponding to the zoom multiple used for that frame.
Optionally, the input unit 1204 is further configured to receive a third input of the target video by the user before the first input by the user is received;
the display unit 1206 is further configured to play the target video in response to the third input, and display, on the second control, identification information associated with each video image played in real time.
Optionally, the identification information includes a watermark;
the input unit 1204 is further configured to receive a fourth input from the user to the second control before the first input from the user is received;
the display unit 1206 is further configured to display N watermark options in response to the fourth input, where the N watermark options are in one-to-one correspondence with the N cameras;
the input unit 1204 is further configured to receive a first input of a target watermark option of the N watermark options from a user; the target camera is a camera corresponding to the target watermark option.
Optionally, the identification information includes a zoom multiple identification;
the input unit 1204 is further configured to receive a first input of a zoom multiple identifier displayed on the second control by a user;
the target camera is a camera indicated by a target zoom multiple identifier, and the target zoom multiple identifier is a zoom multiple identifier determined according to the input parameters of the first input.
Optionally, the processor 1210 is further configured to:
playing the target video clip;
alternatively, the target video clip is clipped.
It should be understood that in the embodiment of the present application, the input unit 1204 may include a graphics processor (Graphics Processing Unit, GPU) 12041 and a microphone 12042, and the graphics processor 12041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1206 may include a display panel 12061, and the display panel 12061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1207 includes a touch panel 12071 and other input devices 12072. The touch panel 12071 is also called a touch screen. The touch panel 12071 may include two parts, a touch detection device and a touch controller. Other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 1209 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. Processor 1210 may integrate an application processor that primarily processes operating systems, user interfaces, applications, etc., with a modem processor that primarily processes wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1210.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction realizes each process of the embodiment of the video processing method, and the same technical effect can be achieved, so that repetition is avoided, and no redundant description is provided herein.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the video processing method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in a reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Enlightened by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (20)

1. A video processing method, comprising:
in a process of capturing video images through N cameras, storing each frame of video image in association with identification information of the camera used to capture that frame;
receiving a first input of a user, wherein the first input is an input for selecting a target camera from the N cameras, and N is an integer greater than 1;
in response to the first input, acquiring a target video segment from a target video according to the identification information associated with each frame of video image of the target video, wherein each frame of video image of the target video segment is captured by the target camera; and
performing target processing on the target video segment;
wherein the target video is a video composed of the video images captured by the N cameras, and the identification information indicates the camera used to capture each frame of video image.
2. The method of claim 1, wherein the identification information comprises at least one of a watermark, a camera identifier, and a zoom factor identifier.
3. The method of claim 1, wherein the identification information comprises a watermark, each of the N cameras is associated with a watermark, and the watermarks associated with different cameras are different;
wherein storing each frame of video image in association with the identification information of the camera used to capture that frame comprises:
storing each frame of video image in association with the watermark associated with the camera used to capture that frame.
4. The method of claim 3, wherein a video recording interface comprises a first control;
during the capture of video images through the N cameras, the method further comprises:
displaying a first watermark on a first video image displayed on the video recording interface, wherein the first video image is captured by a first camera and the first watermark is the watermark associated with the first camera;
in a case where a second input by the user on a first indication identifier on the first control is received, in response to the second input, moving the first indication identifier to a target position, updating the first video image to a second video image, and displaying a second watermark on the second video image;
wherein the second video image is a video image captured by a second camera, the second watermark is the watermark associated with the second camera, and the second camera is the camera associated with a target zoom factor indicated by the target position.
5. The method of claim 1, wherein different cameras of the N cameras correspond to different zoom factor ranges, and each of the N cameras captures video images within its corresponding zoom factor range;
wherein storing each frame of video image in association with the identification information of the camera used to capture that frame comprises:
storing each frame of video image in association with the zoom factor identifier corresponding to the zoom factor used for that frame.
6. The method of claim 1, wherein, before receiving the first input of the user, the method further comprises:
receiving a third input of the user on the target video; and
in response to the third input, playing the target video and displaying, on a second control in real time, the identification information associated with each video image being played.
7. The method of claim 6, wherein the identification information comprises a watermark, and before receiving the first input of the user, the method further comprises:
receiving a fourth input of the user on the second control; and
in response to the fourth input, displaying N watermark options, wherein the N watermark options correspond one-to-one with the N cameras;
wherein receiving the first input of the user comprises:
receiving a first input of the user on a target watermark option of the N watermark options;
wherein the target camera is the camera corresponding to the target watermark option.
8. The method of claim 6, wherein the identification information comprises a zoom factor identifier;
wherein receiving the first input of the user comprises:
receiving a first input of the user on a zoom factor identifier displayed on the second control;
wherein the target camera is the camera indicated by a target zoom factor identifier, and the target zoom factor identifier is a zoom factor identifier determined according to an input parameter of the first input.
9. The method of claim 1, wherein performing the target processing on the target video segment comprises:
playing the target video segment;
or clipping the target video segment.
10. A video processing apparatus, comprising:
a storage module configured to store, in a process of capturing video images through N cameras, each frame of video image in association with identification information of the camera used to capture that frame;
a first receiving module configured to receive a first input of a user, wherein the first input is an input for selecting a target camera from the N cameras, and N is an integer greater than 1;
an acquisition module configured to acquire, in response to the first input, a target video segment from a target video according to the identification information associated with each frame of video image of the target video, wherein each frame of video image of the target video segment is captured by the target camera; and
a processing module configured to perform target processing on the target video segment;
wherein the target video is a video composed of the video images captured by the N cameras, and the identification information indicates the camera used to capture each frame of video image.
11. The apparatus of claim 10, wherein the identification information comprises at least one of a watermark, a camera identifier, and a zoom factor identifier.
12. The apparatus of claim 10, wherein the identification information comprises a watermark, each of the N cameras is associated with a watermark, and the watermarks associated with different cameras are different;
wherein the storage module is specifically configured to:
store each frame of video image in association with the watermark associated with the camera used to capture that frame.
13. The apparatus of claim 12, wherein a video recording interface comprises a first control;
and the apparatus further comprises:
a first display module configured to display, in the process of capturing video images through the N cameras, a first watermark on a first video image displayed on the video recording interface, wherein the first video image is captured by a first camera and the first watermark is the watermark associated with the first camera; and
a second display module configured to, in a case where a second input by the user on a first indication identifier on the first control is received, in response to the second input, move the first indication identifier to a target position, update the first video image to a second video image, and display a second watermark on the second video image;
wherein the second video image is a video image captured by a second camera, the second watermark is the watermark associated with the second camera, and the second camera is the camera associated with a target zoom factor indicated by the target position.
14. The apparatus of claim 10, wherein different cameras of the N cameras correspond to different zoom factor ranges, and each of the N cameras captures video images within its corresponding zoom factor range;
wherein the storage module is specifically configured to:
store each frame of video image in association with the zoom factor identifier corresponding to the zoom factor used for that frame.
15. The apparatus of claim 10, further comprising:
a second receiving module configured to receive, before the first input of the user is received, a third input of the user on the target video; and
a playing module configured to, in response to the third input, play the target video and display, on a second control in real time, the identification information associated with each video image being played.
16. The apparatus of claim 15, wherein the identification information comprises a watermark, and the apparatus further comprises:
a third receiving module configured to receive, before the first input of the user is received, a fourth input of the user on the second control; and
a third display module configured to display, in response to the fourth input, N watermark options, wherein the N watermark options correspond one-to-one with the N cameras;
wherein the first receiving module is specifically configured to:
receive a first input of the user on a target watermark option of the N watermark options;
wherein the target camera is the camera corresponding to the target watermark option.
17. The apparatus of claim 15, wherein the identification information comprises a zoom factor identifier;
wherein the first receiving module is specifically configured to:
receive a first input of the user on a zoom factor identifier displayed on the second control;
wherein the target camera is the camera indicated by a target zoom factor identifier, and the target zoom factor identifier is a zoom factor identifier determined according to an input parameter of the first input.
18. The apparatus of claim 10, wherein the processing module is specifically configured to:
play the target video segment;
or clip the target video segment.
19. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method according to any one of claims 1-9.
20. A readable storage medium, wherein the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the video processing method according to any one of claims 1-9.
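As a rough editorial illustration of the method of claims 1-9 (not part of the patent text), the per-frame association between video images and camera identification information, and the extraction of a target segment for a user-selected camera, can be sketched in Python. All class names, fields, and data values below are hypothetical and chosen only for this sketch:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """One captured video image plus the identification info stored with it."""
    image: bytes          # raw frame data (placeholder)
    camera_id: int        # identifies which of the N cameras captured the frame
    zoom_factor: float    # zoom factor in effect when the frame was captured

@dataclass
class TargetVideo:
    """A video composed of frames captured by N cameras."""
    frames: List[Frame] = field(default_factory=list)

    def record(self, image: bytes, camera_id: int, zoom_factor: float) -> None:
        # Store each frame in association with its camera's identification info,
        # as in claim 1 (and claim 5 for the zoom factor identifier).
        self.frames.append(Frame(image, camera_id, zoom_factor))

    def segment_by_camera(self, target_camera: int) -> List[Frame]:
        # In response to the user's selection, gather every frame whose
        # stored identification info matches the target camera.
        return [f for f in self.frames if f.camera_id == target_camera]

# Usage: two cameras interleave frames; the user then selects camera 1.
video = TargetVideo()
video.record(b"f0", camera_id=0, zoom_factor=1.0)
video.record(b"f1", camera_id=1, zoom_factor=3.0)
video.record(b"f2", camera_id=1, zoom_factor=3.5)
clip = video.segment_by_camera(1)
print(len(clip))  # prints 2: both frames captured by camera 1
```

The "target processing" of claim 9 (playing or clipping) would then operate on the returned frame list; how frames are actually encoded and stored on a real device is outside the scope of this sketch.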
CN202110183174.6A 2021-02-08 2021-02-08 Video processing method, device, electronic equipment and readable storage medium Active CN113010738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110183174.6A CN113010738B (en) 2021-02-08 2021-02-08 Video processing method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113010738A CN113010738A (en) 2021-06-22
CN113010738B true CN113010738B (en) 2024-01-30

Family

ID=76402183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110183174.6A Active CN113010738B (en) 2021-02-08 2021-02-08 Video processing method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113010738B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113316021B (en) * 2021-07-27 2021-10-29 北京天工异彩影视科技有限公司 Movie and television editing work management system
CN116709016B (en) * 2022-02-24 2024-06-18 荣耀终端有限公司 Multiplying power switching method and multiplying power switching device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989063A (en) * 2015-02-09 2016-10-05 大唐软件技术股份有限公司 Video retrieval method and device
CN109729294A (en) * 2019-01-15 2019-05-07 深圳市云歌人工智能技术有限公司 Video image acquisition methods, device, equipment and storage medium
CN109819188A (en) * 2019-01-30 2019-05-28 维沃移动通信有限公司 The processing method and terminal device of video
CN110072070A (en) * 2019-03-18 2019-07-30 华为技术有限公司 A kind of multichannel kinescope method and equipment
CN110267010A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110650294A (en) * 2019-11-06 2020-01-03 深圳传音控股股份有限公司 Video shooting method, mobile terminal and readable storage medium
CN111131902A (en) * 2019-12-13 2020-05-08 华为技术有限公司 Method for determining target object information and video playing equipment
CN111741247A (en) * 2020-06-23 2020-10-02 浙江大华技术股份有限公司 Video playback method and device and computer equipment
CN111770386A (en) * 2020-05-29 2020-10-13 维沃移动通信有限公司 Video processing method, video processing device and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100201815A1 (en) * 2009-02-09 2010-08-12 Vitamin D, Inc. Systems and methods for video monitoring
JP6747158B2 (en) * 2016-08-09 2020-08-26 ソニー株式会社 Multi-camera system, camera, camera processing method, confirmation device, and confirmation device processing method
US20180103197A1 (en) * 2016-10-06 2018-04-12 Gopro, Inc. Automatic Generation of Video Using Location-Based Metadata Generated from Wireless Beacons




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant