CN115278139A - Video processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115278139A
Authority
CN
China
Prior art keywords
video
editing
target
recording
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210900735.4A
Other languages
Chinese (zh)
Inventor
吴杰
马堃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202210900735.4A priority Critical patent/CN115278139A/en
Publication of CN115278139A publication Critical patent/CN115278139A/en
Withdrawn legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording
    • H04N 5/91: Television signal processing therefor
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The method includes: in response to a video recording instruction triggered on a preset page, recording the acquired video displayed in the preset page, where the preset page is a display page of a target picture acquired by a preset camera device; during recording of the acquired video, in response to a first editing instruction for the acquired video, displaying first editing information corresponding to the first editing instruction on a recording page of the acquired video; and, when a recording end instruction is detected, generating a target video including the recorded video and the first editing information. Embodiments of the disclosure improve interactivity and video effectiveness during video generation, reduce the generation of invalid acquired videos, and thereby reduce the waste of system resources and improve system performance.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
In daily life or work, it is sometimes necessary to remotely view a site or the people in it. In the related art, for example, an infant monitoring device may capture video of an infant in real time and automatically generate videos periodically. However, the automatically generated videos are not targeted, user participation and interactivity are weak, and a large number of invalid videos are easily produced, which wastes resources of the remote service system and degrades system performance.
Disclosure of Invention
The present disclosure provides a video processing method, a video processing apparatus, an electronic device, and a storage medium, which can improve interactivity and video effectiveness in the video generation process, reduce the generation of invalid videos, and thereby reduce system resource waste and improve system performance. The technical scheme of the disclosure is as follows:
according to an aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
recording the acquired video displayed in a preset page in response to a video recording instruction triggered on the preset page, wherein the preset page is a display page of a target picture acquired by preset camera equipment;
in the recording process of the acquired video, responding to a first editing instruction aiming at the acquired video, and displaying first editing information corresponding to the first editing instruction on a recording page of the acquired video;
and under the condition that a recording ending instruction is detected, generating a target video comprising the recorded video and the first editing information.
In the above embodiment, recording of the acquired video can be started by a video recording instruction triggered on the display page of the target picture acquired by the preset camera device. During recording of the acquired video, in response to a first editing instruction for the acquired video, first editing information corresponding to the first editing instruction is displayed on the recording page of the acquired video; and when a recording end instruction is detected, a target video including the recorded video and the first editing information is generated. Video editing during recording is thus realized, which greatly improves interactivity and video effectiveness in the video generation process, effectively reduces the generation of invalid acquired videos, reduces the waste of system resources, and improves system performance.
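The claimed flow above (start recording on an instruction, accept editing instructions mid-recording, and finalize a target video that carries the editing information) can be sketched as a small state machine. The sketch below is illustrative only: the class and method names (`VideoRecorder`, `on_record_instruction`, and so on) are invented for this sketch and do not appear in the disclosure, and frames and edits are modelled as plain Python values rather than real video data.

```python
from dataclasses import dataclass


@dataclass
class TargetVideo:
    frames: list  # recorded frames of the target picture
    edits: list   # (timestamp_s, text) first-editing-information entries


class VideoRecorder:
    """Illustrative sketch of the claimed flow; all names are hypothetical."""

    def __init__(self):
        self.recording = False
        self._frames = []
        self._edits = []

    def on_record_instruction(self):
        # Video recording instruction triggered on the preset page.
        self.recording = True

    def on_frame(self, frame):
        # Frames from the preset camera device are kept only while recording.
        if self.recording:
            self._frames.append(frame)

    def on_first_edit_instruction(self, timestamp_s, text):
        # First editing information is shown on the recording page and
        # retained, so it is already part of the video when recording ends.
        if self.recording:
            self._edits.append((timestamp_s, text))

    def on_end_instruction(self):
        # Recording end instruction: generate the target video.
        self.recording = False
        return TargetVideo(frames=list(self._frames), edits=list(self._edits))
```

A frame arriving before the recording instruction is simply dropped, which mirrors the claim that only the acquired video displayed after the instruction is recorded.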
In an optional embodiment, the method further comprises:
and responding to a sharing instruction aiming at the target video, and sending the target video to at least one target sharing object.
In the embodiment, the target video is sent to the at least one target sharing object, so that the target video can be shared, and the interactivity of the video can be better improved.
In an optional embodiment, in a case that the at least one target sharing object includes at least one editing right object, and any editing right object edits the target video, the method further includes:
in response to a video viewing instruction, displaying the target video and at least one updated video on a video viewing page;
the at least one updated video is generated based on the target video and the editing information corresponding to at least one target editing permission object, and the at least one target editing permission object is an object for editing the target video in the at least one editing permission object.
In the above embodiment, when the at least one target sharing object includes at least one editing right object and any editing right object edits the target video, the terminal recording the collected video may view the target video and the at least one updated video on the video viewing page, so that the user can know the editing condition of the video, and the interactivity is improved better.
In an optional embodiment, the method further comprises:
and responding to a first video fusion instruction, and performing fusion processing on the target video and the at least one updated video to obtain a first target fusion video.
In the above embodiment, under the condition that the first video fusion instruction is triggered, the vividness and the comprehensiveness of the first target fusion video are greatly improved by performing fusion processing on the target video and at least one updated video.
In an optional embodiment, the fusing the target video and the at least one updated video to obtain a first target fused video includes:
determining newly added editing information corresponding to the at least one updated video and editing position information corresponding to the newly added editing information;
and adding the newly added editing information to the target video based on the editing position information to obtain the first target fusion video.
In the above embodiment, in the fusion process, the newly added editing information corresponding to at least one updated video and the editing position information corresponding to the newly added editing information are determined, so that the newly added editing information can be quickly and accurately added to the target video, and the effectiveness of the obtained first target fusion video can be ensured on the basis of greatly improving the vividness and comprehensiveness of the obtained first target fusion video.
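Under the assumption that editing information can be modelled as (position, text) pairs, the fusion step described above (determine the newly added editing information in each updated video and add it to the target video at its editing position) might look like the following sketch; the function and variable names are hypothetical and the disclosure does not specify a data model:

```python
def fuse_first(target_edits, updated_videos):
    """Merge newly added editing information from updated videos into the
    target video's edit track. Each edit is a (position_s, text) pair."""
    fused = list(target_edits)
    seen = set(target_edits)
    for edits in updated_videos:
        for edit in edits:
            if edit not in seen:      # keep only newly added editing information
                seen.add(edit)
                fused.append(edit)
    fused.sort(key=lambda e: e[0])    # place each edit by its position information
    return fused
```

Deduplicating against the target video's own edits is what isolates the "newly added" information; sorting by position then realizes the "add based on editing position information" step.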
In an optional embodiment, the method further comprises:
in response to the second video fusion instruction, performing fusion processing on at least one target update video to obtain a second target fusion video;
wherein the at least one target update video is an update video generated within a preset time period.
In the above embodiment, under the condition that the second video fusion instruction is triggered, the effectiveness and timeliness of the generated video are greatly improved on the basis of improving the vividness and comprehensiveness of the obtained second target fusion video by performing fusion processing on the updated video generated within the preset time period.
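A minimal sketch of the second fusion, assuming each update video carries a creation timestamp and an edit track. Both field names are invented for illustration; the disclosure only states that target update videos are those generated within a preset time period:

```python
def fuse_second(update_videos, period_start, period_end):
    """Keep only update videos generated within the preset time period,
    then merge their edit tracks in creation order."""
    in_period = [v for v in update_videos
                 if period_start <= v["created_at"] <= period_end]
    in_period.sort(key=lambda v: v["created_at"])  # fuse in generation order
    merged = []
    for v in in_period:
        merged.extend(v["edits"])
    return merged
```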
In an optional embodiment, the method further comprises:
under the condition of monitoring voice information, triggering the first editing instruction;
the displaying, in response to a first editing instruction for the acquired video, first editing information corresponding to the first editing instruction on a recording page of the acquired video includes:
responding to the first editing instruction, and performing voice recognition on the monitored voice information to obtain first editing information;
and displaying the first editing information on the recording page.
In the above embodiment, the acquired video is edited during recording by performing speech recognition on the monitored voice information. This greatly improves the convenience of editing during recording, avoids occluding the target picture being recorded while editing, and makes it easier for the user to control the video recording.
In an optional embodiment, after the responding to the first editing instruction for the captured video and displaying the first editing information corresponding to the first editing instruction on the recording page of the captured video, the method further includes:
and in the recording process of the acquired video, adding the first editing information in the currently recorded acquired video.
In the above embodiment, in the process of recording the acquired video, the first editing information is added to the currently recorded acquired video, so that the target video including the recorded video and the first editing information can be quickly generated when the video recording is finished.
In an optional embodiment, in the case that a recording end instruction is detected, generating a target video including a recorded video and the first editing information includes:
and in the case that a recording ending instruction is detected, taking the recorded video added with the first editing information as the target video.
In the above embodiment, in the process of recording the acquired video, the first editing information is added to the currently recorded acquired video, so that the target video including the recorded video and the first editing information can be quickly generated under the condition that the video recording is finished, and the generation speed of the video including the editing information is greatly increased on the basis of improving the vividness and the effectiveness of the generated target video.
In an optional embodiment, the generating the target video including the recorded video and the first editing information in the case that the recording end instruction is detected includes:
and under the condition that a recording ending instruction is detected, generating a target video based on the recorded video and the first editing information.
In the above embodiment, when the recording is finished, the target video is generated by combining the recorded video and the first editing information, so that a user who subsequently watches the target video can be helped based on the first editing information, the video content can be better known, and the vividness and the effectiveness of the generated target video can be improved.
In an optional embodiment, after the generating the target video including the recorded video and the first editing information, the method further includes:
responding to a second editing instruction aiming at the target video, and displaying second editing information corresponding to the second editing instruction on a preset editing page;
updating the target video based on the second editing information in response to an editing confirmation instruction.
In the embodiment, after the target video is generated, the editing function of the target video is provided for the user who records the collected video, so that the video editing requirement of the user can be better met, and the interactivity and the video quality of the target video are improved.
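The confirm-before-apply behaviour of the second editing instruction can be sketched as follows; representing editing information as a plain list is an assumption made only for this illustration:

```python
def on_second_edit(target_video_edits, second_edit, confirmed):
    """Show second editing information on a preset editing page and apply it
    to the target video only when the editing confirmation instruction
    arrives; otherwise the target video is left unchanged."""
    preview = target_video_edits + [second_edit]  # shown on the editing page
    if confirmed:
        return preview           # target video updated with the second edit
    return target_video_edits    # no confirmation: discard the preview
```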
In an optional embodiment, before the recording the captured video shown in the preset page in response to the video recording instruction triggered on the preset page, the method further includes:
carrying out object detection on the target picture to obtain an object detection result;
and displaying preset recording prompt information under the condition that the object detection result indicates that the target picture comprises a preset object.
In the above embodiment, by performing object detection on the acquired target picture, under the condition that the object detection result indicates that the target picture includes the preset object, the user can be automatically prompted to record the acquired video through the preset recording prompt information, and then the convenience and user experience of recording the acquired video can be effectively improved.
According to another aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
the video recording module is used for recording the acquired video displayed in a preset page in response to a video recording instruction triggered by the preset page, wherein the preset page is a display page of a target picture acquired by preset camera equipment;
the first editing information display module is used for responding to a first editing instruction aiming at the acquired video in the recording process of the acquired video and displaying first editing information corresponding to the first editing instruction on a recording page of the acquired video;
and the target video generating module is used for generating a target video comprising the recorded video and the first editing information under the condition that a recording ending instruction is detected.
In an optional embodiment, the apparatus further comprises:
and the video sharing module is used for responding to a sharing instruction aiming at the target video and sending the target video to at least one target sharing object.
In an optional embodiment, in a case that the at least one target sharing object includes at least one editing right object, and any editing right object edits the target video, the apparatus further includes:
the video display module is used for responding to a video viewing instruction and displaying the target video and at least one updated video on a video viewing page;
the at least one update video is generated based on the target video and the editing information corresponding to at least one target editing permission object, and the at least one target editing permission object is an object for editing the target video in the at least one editing permission object.
In an optional embodiment, the apparatus further comprises:
and the first fusion processing module is used for responding to a first video fusion instruction and performing fusion processing on the target video and the at least one updated video to obtain a first target fusion video.
In an optional embodiment, the first fusion processing module includes:
an information determining unit, configured to determine newly added editing information corresponding to the at least one updated video and editing position information corresponding to the newly added editing information;
and the editing information adding unit is used for adding the newly added editing information to the target video based on the editing position information to obtain the first target fusion video.
In an optional embodiment, the apparatus further comprises:
the second fusion processing module is used for responding to a second video fusion instruction and performing fusion processing on at least one target update video to obtain a second target fusion video;
wherein the at least one target update video is an update video generated within a preset time period.
In an optional embodiment, the apparatus further comprises:
the first editing instruction triggering module is used for triggering the first editing instruction under the condition of monitoring voice information;
the first editing information display module comprises:
the voice recognition unit is used for responding to the first editing instruction and performing voice recognition on the monitored voice information to obtain first editing information;
and the first editing information display unit is used for displaying the first editing information on the recording page.
In an optional embodiment, after the responding to the first editing instruction for the captured video and displaying the first editing information corresponding to the first editing instruction on the recording page of the captured video, the apparatus further includes:
and the first editing information adding module is used for adding the first editing information into the currently recorded acquired video in the recording process of the acquired video.
In an optional embodiment, the target video generation module is specifically configured to, when a recording end instruction is detected, use the recorded video added with the first editing information as the target video.
In an optional embodiment, the target video generation module is specifically configured to generate the target video based on the recorded video and the first editing information when a recording end instruction is detected.
In an optional embodiment, the apparatus further comprises:
the second editing information display module is used for responding to a second editing instruction aiming at the target video after the target video comprising the recorded video and the first editing information is generated, and displaying second editing information corresponding to the second editing instruction on a preset editing page;
and the target video updating module is used for responding to an editing confirmation instruction and updating the target video based on the second editing information.
In an optional embodiment, the apparatus further comprises:
the object detection module is used for detecting an object of the target picture before the acquired video displayed in the preset page is recorded in response to the video recording instruction triggered on the preset page, so that an object detection result is obtained;
and the preset recording prompt information display module is used for displaying the preset recording prompt information under the condition that the object detection result indicates that the target picture comprises the preset object.
According to another aspect of the embodiments of the present disclosure, there is provided a video processing system including:
the preset camera equipment is used for acquiring a video of a target picture;
the video processing device is used for recording the acquired video displayed in a preset page in a video recording instruction triggered by the preset page, wherein the preset page is a page displaying the target picture acquired by the preset camera equipment; the method comprises the steps of acquiring a video, and displaying first editing information corresponding to a first editing instruction on a recording page of the acquired video in response to the first editing instruction aiming at the acquired video in the recording process of the acquired video; and the video editing device is used for generating a target video comprising the recorded video and the first editing information under the condition that a recording ending instruction is detected.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of any one of the above.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform any one of the methods of the embodiments of the present disclosure.
According to another aspect of the embodiments of the present disclosure, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of any one of the above-mentioned embodiments of the present disclosure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating an application environment in accordance with an illustrative embodiment;
FIG. 2 is a flow diagram illustrating a method of video processing according to an exemplary embodiment;
FIG. 3 is a schematic illustration of a preset page provided in accordance with an exemplary embodiment;
fig. 4 is a schematic diagram of a variation of a recording page provided according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a page change of an editing rights object for editing a target video according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating a video processing device according to an exemplary embodiment;
fig. 7 is a block diagram of a terminal shown in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment according to an exemplary embodiment, which may include a preset image pickup apparatus 100 and a video processing terminal 200, as shown in fig. 1.
In a specific embodiment, the preset image capturing apparatus 100 may be configured to capture an image in a target area, and optionally, the target area may be a certain place, such as a certain room, a certain area of a mall, and the like; specifically, the preset image pickup apparatus 100 may include, but is not limited to, a dome camera, a gun camera, and the like.
In a specific embodiment, the video processing terminal 200 may be configured to record, edit, share, and otherwise process a captured video acquired by a preset image capturing device. Specifically, the video processing terminal 200 may include, but is not limited to, electronic devices such as a smart phone, a desktop computer, a tablet computer, a notebook computer, a smart speaker, a digital assistant, an Augmented Reality (AR)/Virtual Reality (VR) device, or a smart wearable device, and may also be software running on such devices, such as an application program. Optionally, the operating system running on the electronic device may include, but is not limited to, Android, iOS, Linux, Windows, and the like.
In addition, it should be noted that fig. 1 shows only one application environment provided by the present disclosure, and in practical applications, other application environments may also be included, for example, more preset image capturing devices may be included.
In the embodiment of the present specification, the preset image capturing apparatus 100 and the video processing terminal 200 may be connected by a communication method such as a network, and the present disclosure is not limited herein.
Fig. 2 is a flowchart illustrating a video processing method according to an exemplary embodiment, which is used in an electronic device such as a video processing terminal, as shown in fig. 2, and includes the following steps:
s201: and recording the acquired video displayed in the preset page in response to a video recording instruction triggered on the preset page.
In a specific embodiment, the preset page may be a display page of a target picture acquired by a preset image pickup device. The target picture may be a picture (image) of the target area. Optionally, the video recording instruction may be triggered by clicking a preset recording start control or by shaking the device. The acquired video may be a video of the target picture acquired by the preset image pickup device (a video composed of multiple frames of target pictures). Optionally, the preset camera device may transmit the target picture acquired in real time to the video processing terminal; correspondingly, the acquired video displayed in the preset page may be a video of a remote picture acquired in real time (that is, the target picture acquired in real time by the preset camera device).
In an optional embodiment, the user may trigger the video recording instruction according to actual requirements, for example, when seeing a person or an object of interest while viewing the target picture on the preset page. Optionally, the acquired video may be recorded starting from the target picture corresponding to the video recording instruction; optionally, when triggering the video recording instruction, the user may choose to start recording a preset duration earlier or to delay the start of recording by a preset duration.
In a specific embodiment, as shown in fig. 3, fig. 3 is a schematic diagram of a preset page provided according to an exemplary embodiment. The control corresponding to 301 may be a preset recording start control. Correspondingly, when a video recording instruction is triggered based on the preset recording start control, a recording page can be entered and recording of the acquired video can begin.
In an optional embodiment, before recording the captured video shown in the preset page in response to the video recording instruction triggered on the preset page, the method further includes:
carrying out object detection on the target picture to obtain an object detection result;
and displaying preset recording prompt information under the condition that the object detection result indicates that the target picture comprises a preset object.
In a specific embodiment, the preset object may be a preset object the user is interested in; for example, in some home care scenarios, the preset object may be the face of a family member, or any human face. Optionally, object detection may be performed on the currently acquired target picture periodically or in real time, depending on the actual application requirements.
In a specific embodiment, object detection may be performed on the target picture using a pre-stored image of the preset object, or using a pre-trained object detection network.
In a specific embodiment, the preset recording prompting message may be a message for prompting a user to record a captured video. Optionally, the preset recording prompt information may be text information, audio information, or the like.
In the above embodiment, by performing object detection on the acquired target picture, under the condition that the object detection result indicates that the target picture includes the preset object, the user can be automatically prompted to record the acquired video through the preset recording prompt information, and then the convenience and user experience of recording the acquired video can be effectively improved.
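As a rough illustration of this gating logic, the sketch below runs a detector over the target picture and returns the preset recording prompt only when a preset object is found. The detector is passed in as a callable because the disclosure does not name a specific detection network; all names here, including the prompt text, are hypothetical:

```python
def maybe_show_prompt(target_picture, detect, preset_objects,
                      prompt="A preset object appeared, tap to start recording"):
    """Return the preset recording prompt information only when the object
    detection result indicates the target picture contains a preset object."""
    detection_result = detect(target_picture)   # stand-in for a detection network
    if any(obj in preset_objects for obj in detection_result):
        return prompt
    return None
```

In practice the prompt could equally be audio information, as the embodiment notes; the gate itself is the same either way.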
S203: in the recording process of the acquired video, responding to a first editing instruction aiming at the acquired video, and displaying first editing information corresponding to the first editing instruction on a recording page of the acquired video.
In a specific embodiment, in order to avoid occluding the target picture being recorded during editing, the editing may be performed by voice. Optionally, in scenarios where text information is added to the recorded captured video, a text input box may also be set on the preset page. Correspondingly, the method may further include:
and triggering a first editing instruction under the condition of monitoring the voice information.
In a specific embodiment, during recording of the captured video, the first editing instruction may be automatically triggered if voice information is detected.
Correspondingly, the displaying, on the recording page of the captured video, the first editing information corresponding to the first editing instruction in response to the first editing instruction for the captured video includes:
responding to the first editing instruction, and performing voice recognition on the monitored voice information to obtain first editing information;
and displaying the first editing information on the recording page.
In a specific embodiment, the first editing information may include text information corresponding to the voice information, and may further include information such as a trigger time and a trigger object corresponding to the first editing instruction.
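The voice-triggered editing flow above can be sketched as follows. This is a hypothetical sketch, not the patent's implementation: `recognize` is a placeholder for a real speech-recognition engine, and the field names of the editing-information record are illustrative.

```python
# Illustrative sketch: when voice information is monitored, trigger the first
# editing instruction, transcribe the voice, and package the first editing
# information (text, trigger time, trigger object) for display on the
# recording page.
import time


def recognize(audio_chunk):
    # Placeholder for a real speech-to-text engine.
    return audio_chunk["transcript"]


def on_voice_detected(audio_chunk, trigger_object="recording_user"):
    """Build the first editing information from monitored voice input."""
    text = recognize(audio_chunk)
    return {
        "text": text,                 # text corresponding to the voice info
        "trigger_time": time.time(),  # when the first editing instruction fired
        "trigger_object": trigger_object,
    }


info = on_voice_detected({"transcript": "Grandma is smiling"})
print(info["text"])  # Grandma is smiling
```

The returned record is what the recording page would render as the first editing information while recording continues uninterrupted.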
In a specific embodiment, as shown in fig. 4, fig. 4 is a schematic diagram of the changes of a recording page provided according to an exemplary embodiment. Fig. 4 (a) is the initial recording page; fig. 4 (b) is the recording page during voice recognition after voice information is detected, where the information corresponding to 401 indicates the progress of the voice recognition; fig. 4 (c) is the recording page showing the first editing information.
In the above embodiment, the recorded captured video is edited by performing speech recognition on the monitored voice information. This greatly improves the convenience of editing during recording, avoids occluding the target picture being recorded during the editing process, and makes it easier for the user to monitor the video recording.
In an alternative embodiment, multimedia resources such as audio and text can be added and edited during recording of the captured video by means of a preset control or operation. Optionally, one or more edits may be performed during recording, and the first editing information displayed on the recording page may itself be edited, for example deleted or updated.
S205: and generating a target video comprising the recorded video and the first editing information under the condition that a recording ending instruction is detected.
In a specific embodiment, the recording end instruction may be triggered based on a preset recording end control or a preset recording end operation. Optionally, the recording end instruction may be triggered by clicking the control corresponding to 402 in fig. 4 (c). Specifically, the target video may be a remote-picture video (a video of the target picture) including the recorded video and the first editing information.
In an optional embodiment, when recording of the video is finished, the target video may be generated by combining the recorded video with the first editing information generated during recording. Correspondingly, generating the target video including the recorded video and the first editing information in the case that the recording end instruction is detected may include:
and under the condition that a recording ending instruction is detected, generating a target video based on the recorded video and the first editing information.
In a specific embodiment, the generating the target video based on the recorded captured video and the first editing information includes: adding first editing information in each video frame image of the recorded video to obtain a target video; or adding first editing information in a first preset video frame image of the recorded video to obtain the target video.
In a specific embodiment, the first preset video frame image may be a specified frame image, for example, a preset number of frame images, and may be specifically set according to the actual application requirement.
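The two frame-annotation strategies just described can be sketched as follows. This is an illustrative sketch under the assumption that frames can be modeled as dictionaries; a real implementation would render the editing text onto the frame images themselves, and all function names here are hypothetical.

```python
# Illustrative sketch: generate the target video either by adding the first
# editing information to every video frame image, or only to a first preset
# number of frame images.

def add_edit_to_all_frames(frames, edit_text):
    """Attach the editing info to every frame of the recorded video."""
    return [dict(f, overlay=edit_text) for f in frames]


def add_edit_to_first_n_frames(frames, edit_text, n):
    """Attach the editing info only to the first `n` (preset) frame images."""
    return [
        dict(f, overlay=edit_text) if i < n else dict(f)
        for i, f in enumerate(frames)
    ]


frames = [{"index": i} for i in range(5)]
target = add_edit_to_first_n_frames(frames, "Happy birthday!", n=2)
print([("overlay" in f) for f in target])  # [True, True, False, False, False]
```

The value of `n` corresponds to the preset number of frame images, which, as noted above, would be set according to the actual application requirements.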
In the above embodiment, when recording ends, the target video is generated by combining the recorded video with the first editing information, so that users who subsequently watch the target video can better understand the video content based on the first editing information, improving the vividness and effectiveness of the generated target video.
In an optional embodiment, the first editing information may be added to the currently recorded captured video during the recording process, and accordingly, after the first editing information corresponding to the first editing instruction is displayed on the recording page of the captured video in response to the first editing instruction for the captured video, the method may further include:
and in the recording process of the acquired video, adding first editing information in the currently recorded acquired video.
Accordingly, in the case where the recording end instruction is detected, generating the target video including the recorded video and the first editing information includes:
and in the case that a recording end instruction is detected, the recorded video added with the first editing information is taken as a target video.
In the above embodiment, the first editing information is added to the currently recorded captured video during recording, so that the target video including the recorded video and the first editing information can be generated quickly once recording ends. In addition to improving the vividness and effectiveness of the generated target video, this greatly improves the generation speed of a video that includes editing information.
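The in-recording variant above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the `Recorder` class and its method names are assumptions, and frames are again modeled as dictionaries rather than images.

```python
# Illustrative sketch: attach the first editing information to frames as they
# are captured, so the recorded video already contains the editing info and
# can be used as the target video as soon as the recording-end instruction
# is detected.

class Recorder:
    def __init__(self):
        self.frames = []
        self.pending_edit = None

    def on_first_edit(self, text):
        # First editing instruction: applies to frames captured from now on.
        self.pending_edit = text

    def capture(self, frame):
        f = dict(frame)
        if self.pending_edit is not None:
            f["overlay"] = self.pending_edit
        self.frames.append(f)

    def stop(self):
        # Recording-end instruction: the frames already carry the edit info.
        return self.frames


rec = Recorder()
rec.capture({"index": 0})
rec.on_first_edit("Look at this!")
rec.capture({"index": 1})
print(rec.stop()[1]["overlay"])  # Look at this!
```

Because the editing information is baked in as frames arrive, no extra pass over the recorded video is needed at the end, which is the speed benefit the embodiment describes.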
In an optional embodiment, after generating the target video including the recorded video and the first editing information, the method may further include:
responding to a second editing instruction aiming at the target video, and displaying second editing information corresponding to the second editing instruction on a preset editing page;
and updating the target video based on the second editing information in response to the editing confirmation instruction.
In a specific embodiment, a user who records a captured video may edit a target video in combination with actual requirements after generating the target video. Optionally, the preset editing page may be provided with at least one piece of editing operation information, so as to edit the target video based on at least one editing mode.
In a specific embodiment, taking the example of performing the additional editing on the target video, the at least one piece of editing operation information may include at least one of an information input box and a voice recording control.
In a specific embodiment, updating the target video based on the second editing information may include adding the second editing information in each video frame image of the target video; or adding second editing information in a second preset video frame image of the target video.
In the embodiment, after the target video is generated, the editing function of the target video is provided for the user who records the collected video, so that the video editing requirement of the user can be better met, and the interactivity and the video quality of the target video are improved.
In an optional embodiment, the method may further include:
and responding to a sharing instruction aiming at the target video, and sending the target video to at least one target sharing object.
In an optional embodiment, the at least one target sharing object may be at least one user account with which the target video is shared. Specifically, after the target video is generated, the user may share the target video with at least one target sharing object according to actual requirements. Optionally, the sharing instruction may be triggered by a preset sharing control or a preset sharing operation.
In the embodiment, the target video is sent to the at least one target sharing object, so that the target video can be shared, and the interactivity of the video can be better improved.
In an optional embodiment, in a case that at least one target sharing object includes at least one editing right object, and any editing right object edits a target video, the method may further include:
in response to a video viewing instruction, displaying a target video and at least one updating video on a video viewing page;
in this embodiment, the editing right object may be an object having an editing right for the target video. In a specific embodiment, any editing permission object in the at least one target sharing object may edit the target video. In a specific embodiment, as shown in fig. 5, fig. 5 is a schematic diagram of page changes of a target video edited by an editing rights object according to an exemplary embodiment. Fig. 5 (a) may be a page in the process of text information addition editing based on voice for a certain editing rights object, and fig. 5 (b) may be a display page of an updated video after corresponding editing information is added to the editing rights object. Optionally, in a case that the editing right object stores the updated video, the updated video may be automatically synchronized to the terminal (video processing terminal) that records the acquired video through the background server, or in a case that the terminal that records the acquired video triggers a video viewing instruction, the updated video may be synchronized to the terminal that records the acquired video through the background server.
In a specific embodiment, the editing right object may or may not edit the target video. Specifically, the at least one update video may be generated based on the target video and the editing information corresponding to the at least one target editing right object, and the at least one target editing right object may be an object for editing the target video in the at least one editing right object.
In the above embodiment, under the condition that at least one target sharing object includes at least one editing permission object and any editing permission object edits a target video, the terminal recording the collected video can view the target video and at least one updated video on the video viewing page, so that the user can know the editing condition of the video conveniently, and the interactivity is improved better.
In an optional embodiment, the method may further include:
and responding to the first video fusion instruction, and performing fusion processing on the target video and at least one updated video to obtain a first target fusion video.
In a specific embodiment, the first video fusion instruction may be triggered by the user, or triggered automatically on a schedule. Optionally, when the first video fusion instruction is triggered, the target video and the at least one updated video may be fused to obtain the first target fusion video.
In an optional embodiment, the fusing the target video and the at least one updated video to obtain the first target fused video may include: determining newly-added editing information corresponding to at least one updated video and editing position information corresponding to the newly-added editing information; and adding the newly added editing information to the target video based on the editing position information to obtain a first target fusion video.
In a specific embodiment, the editing position information may be video frame information corresponding to newly added editing information added to the target video; optionally, newly added editing information may be added to the corresponding video frame of the target video in combination with the editing position information, so as to obtain the first target fusion video.
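The fusion steps above can be sketched as follows. This is an illustrative sketch under simplifying assumptions: editing position information is modeled as a frame index, editing information as text strings attached to frames, and the function names are hypothetical.

```python
# Illustrative sketch of the first fusion: determine the newly added editing
# information in each updated video together with its editing position (here,
# a frame index), then add that information to the corresponding frames of
# the target video.

def diff_edits(target_frames, updated_frames):
    """Editing info present in the update but not the target, with positions."""
    new_edits = []
    for i, (t, u) in enumerate(zip(target_frames, updated_frames)):
        for text in u.get("overlays", []):
            if text not in t.get("overlays", []):
                new_edits.append((i, text))
    return new_edits


def fuse(target_frames, updates):
    """Add each update's new editing info to a copy of the target video."""
    fused = [dict(f, overlays=list(f.get("overlays", []))) for f in target_frames]
    for updated in updates:
        for pos, text in diff_edits(target_frames, updated):
            if text not in fused[pos]["overlays"]:
                fused[pos]["overlays"].append(text)
    return fused


target = [{"overlays": []} for _ in range(3)]
update = [{"overlays": ["So cute!"]}, {"overlays": []}, {"overlays": []}]
print(fuse(target, [update])[0]["overlays"])  # ['So cute!']
```

Keying each new edit by its position is what lets the fusion add the information to the correct video frame of the target video, as the embodiment describes.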
In the above embodiment, when the first video fusion instruction is triggered, the target video and the at least one updated video are fused. During fusion, the newly added editing information corresponding to the at least one updated video and its editing position information are determined, so the newly added editing information can be added to the target video quickly and accurately, ensuring the validity of the first target fusion video while greatly improving its vividness and comprehensiveness.
In an optional embodiment, the method may further include:
in response to the second video fusion instruction, performing fusion processing on at least one target update video to obtain a second target fusion video;
In a specific embodiment, the second video fusion instruction may be triggered by the user, or triggered automatically on a schedule. The at least one target update video is an update video generated within a preset time period; the preset time period may be a preset video fusion period, or a specified time period. Specifically, identical information in the at least one target update video may be de-duplicated before fusion, and the de-duplicated newly added editing information is added to the target video to obtain the second target fusion video.
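The de-duplicating merge just described can be sketched as follows. This is an illustrative sketch, not the patent's implementation: editing information is modeled as `(position, text)` pairs, and identical pairs appearing in several target update videos are merged only once.

```python
# Illustrative sketch of the second fusion: de-duplicate identical editing
# information across the update videos generated within the preset time
# period, then merge the remaining new edits into the target video's edits.

def second_fusion(target_edits, update_edits_list):
    """target_edits and each element of update_edits_list: list of (pos, text)."""
    seen = set(target_edits)
    merged = list(target_edits)
    for edits in update_edits_list:
        for edit in edits:
            if edit not in seen:  # drop duplicates across the updates
                seen.add(edit)
                merged.append(edit)
    return merged


target = [(0, "Hello")]
updates = [[(0, "Hello"), (1, "So sweet")], [(1, "So sweet"), (2, "Miss you")]]
print(second_fusion(target, updates))
# [(0, 'Hello'), (1, 'So sweet'), (2, 'Miss you')]
```

The duplicate `(1, "So sweet")` edit, which appears in two update videos, survives only once in the fused result, matching the de-duplication step described above.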
In the above embodiment, when the second video fusion instruction is triggered, the update videos generated within the preset time period are fused, which greatly improves the effectiveness and timeliness of the generated video while improving the vividness and comprehensiveness of the second target fusion video.
As can be seen from the technical solutions provided by the embodiments of the present specification, a remotely captured picture (i.e., the captured video) can be recorded via a video recording instruction triggered on the display page of the target picture captured by the camera device. During recording of the captured video, in response to a first editing instruction for the captured video, first editing information corresponding to the first editing instruction is displayed on the recording page of the captured video; when a recording end instruction is detected, a target video including the recorded video and the first editing information is generated. This enables video editing during the recording of the captured video corresponding to the remote picture, greatly improves the interactivity of editing the remote-picture video and the effectiveness of that video, and effectively reduces the generation of invalid remote-picture videos, which in turn reduces the waste of system resources and greatly improves system performance.
Fig. 6 is a block diagram illustrating a video processing device according to an example embodiment. Referring to fig. 6, the apparatus includes:
the video recording module 610 is configured to record a collected video displayed in a preset page in response to a video recording instruction triggered on the preset page, where the preset page is a display page of a target picture collected by preset camera equipment;
the first editing information displaying module 620 is configured to, in the process of recording the acquired video, respond to a first editing instruction for the acquired video and display first editing information corresponding to the first editing instruction on a recording page of the acquired video;
and a target video generating module 630, configured to generate a target video including the recorded video and the first editing information in a case where the recording end instruction is detected.
In an optional embodiment, the apparatus further comprises:
and the video sharing module is used for responding to a sharing instruction aiming at the target video and sending the target video to at least one target sharing object.
In an optional embodiment, in a case that at least one target sharing object includes at least one editing right object, and any editing right object edits a target video, the apparatus further includes:
the video display module is used for responding to the video viewing instruction and displaying the target video and at least one updating video on a video viewing page;
the at least one updating video is generated based on the target video and the editing information corresponding to the at least one target editing permission object, and the at least one target editing permission object is an object for editing the target video in the at least one editing permission object.
In an optional embodiment, the apparatus further comprises:
and the first fusion processing module is used for responding to the first video fusion instruction and carrying out fusion processing on the target video and at least one updated video to obtain a first target fusion video.
In an optional embodiment, the first fusion processing module includes:
the information determining unit is used for determining newly added editing information corresponding to at least one updated video and editing position information corresponding to the newly added editing information;
and the editing information adding unit is used for adding the newly added editing information to the target video based on the editing position information to obtain a first target fusion video.
In an optional embodiment, the apparatus further comprises:
the second fusion processing module is used for responding to a second video fusion instruction and performing fusion processing on at least one target update video to obtain a second target fusion video;
wherein, at least one target update video is an update video generated in a preset time period.
In an optional embodiment, the apparatus further comprises:
the first editing instruction triggering module is used for triggering a first editing instruction under the condition of monitoring voice information;
the first edit information presentation module 620 includes:
the voice recognition unit is used for responding to the first editing instruction and carrying out voice recognition on the monitored voice information to obtain first editing information;
and the first editing information display unit is used for displaying the first editing information on the recording page.
In an optional embodiment, after responding to the first editing instruction for the captured video and displaying the first editing information corresponding to the first editing instruction on the recording page of the captured video, the apparatus further includes:
and the first editing information adding module is used for adding first editing information in the currently recorded acquired video in the recording process of the acquired video.
In an optional embodiment, the target video generating module 630 is specifically configured to, in a case that the recording end instruction is detected, use the recorded video added with the first editing information as the target video.
In an optional embodiment, the target video generating module 630 is specifically configured to generate the target video based on the recorded video and the first editing information when the recording end instruction is detected.
In an optional embodiment, the apparatus further comprises:
the second editing information display module is used for responding to a second editing instruction aiming at the target video after the target video comprising the recorded video and the first editing information is generated, and displaying second editing information corresponding to the second editing instruction on a preset editing page;
and the target video updating module is used for responding to the editing confirmation instruction and updating the target video based on the second editing information.
In an optional embodiment, the apparatus further comprises:
the object detection module is used for detecting an object of a target picture before recording the acquired video displayed in the preset page in response to a video recording instruction triggered on the preset page to obtain an object detection result;
and the preset recording prompt information display module is used for displaying the preset recording prompt information under the condition that the object detection result indicates that the target picture comprises the preset object.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
The embodiment of the present disclosure further provides a schematic structural diagram of a terminal, as shown in fig. 7. The terminal may be used to implement the video processing method provided in the foregoing embodiments. Specifically:
the terminal may include RF (Radio Frequency) circuitry 710, memory 720 including one or more computer-readable storage media, input unit 730, display unit 740, sensor 750, audio circuitry 760, wiFi (wireless fidelity) module 770, processor 780 including one or more processing cores, and power supply 790. Those skilled in the art will appreciate that the terminal structure shown in fig. 7 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
RF circuit 710 may be used for receiving and transmitting signals during a message transmission or call, and in particular, for receiving downlink information from a base station and processing the received downlink information by one or more processors 780; in addition, data relating to uplink is transmitted to the base station. In general, RF circuit 710 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with a network and other terminals through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), and the like.
The memory 720 may be used to store software programs and modules, and the processor 780 performs various functional applications and data processing by operating the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for functions, and the like; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 720 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, memory 720 may also include a memory controller to provide access to memory 720 by processor 780 and input unit 730.
The input unit 730 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 730 may include a touch-sensitive surface 731 as well as other input devices 732. Touch-sensitive surface 731, also referred to as a touch display screen or touch pad, can collect touch operations by a user (e.g., operations by a user using a finger, a stylus, or any other suitable object or attachment to touch-sensitive surface 731 or near touch-sensitive surface 731) on or near touch-sensitive surface 731, and drive corresponding connection devices according to a predetermined program. Alternatively, the touch sensitive surface 731 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and provides them to processor 780, where they can receive commands from processor 780 and execute them. In addition, the touch sensitive surface 731 can be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 730 may also include other input devices 732 in addition to the touch-sensitive surface 731. In particular, other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 740 may be used to display information input by or provided to the user, as well as the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 731 can overlie the display panel 741, such that when the touch-sensitive surface 731 detects a touch operation on or near it, the operation is passed to the processor 780 to determine the type of touch event, and the processor 780 then provides a corresponding visual output on the display panel 741 according to the type of touch event. Although in fig. 7 the touch-sensitive surface 731 and the display panel 741 are shown as two separate components implementing input and output functions respectively, in some embodiments the touch-sensitive surface 731 and the display panel 741 may be integrated to implement the input and output functions.
The terminal may also include at least one sensor 750, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 741 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 741 and/or a backlight when the terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when the terminal is stationary, and can be used for applications of recognizing terminal gestures (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and the like which can be configured on the terminal are not described in detail herein.
Audio circuitry 760, speaker 761, and microphone 762 may provide an audio interface between the user and the terminal. The audio circuit 760 can transmit the electrical signal converted from received audio data to the speaker 761, which converts it into a sound signal for output; conversely, the microphone 762 converts collected sound signals into electrical signals, which the audio circuit 760 receives and converts into audio data. The audio data is then processed by the processor 780 and, for example, transmitted to another terminal via the RF circuit 710, or output to the memory 720 for further processing. The audio circuitry 760 may also include an earbud jack to allow peripheral headphones to communicate with the terminal.
WiFi is a short-range wireless transmission technology; through the WiFi module 770, the terminal can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 7 shows the WiFi module 770, it is understood that it is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 780 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 720 and calling data stored in the memory 720, thereby monitoring the terminal as a whole. Optionally, processor 780 may include one or more processing cores; preferably, the processor 780 may integrate an application processor, which primarily handles operating system, user interface, application programs, etc. and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 780.
The terminal also includes a power supply 790 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 780 via a power management system that may be used to manage charging, discharging, and power consumption. The power supply 790 may also include any component including one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like, which are not described herein. Specifically, in this embodiment, the display unit of the terminal is a touch-screen display, and the terminal further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors; the one or more programs contain instructions for performing the method embodiments of the present invention.
In an exemplary embodiment, there is also provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video processing method as in the embodiments of the present disclosure.
In an exemplary embodiment, there is also provided a computer-readable storage medium, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform a video processing method in an embodiment of the present disclosure.
In an exemplary embodiment, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the video processing method in the embodiments of the present disclosure.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), rambus (Rambus) direct RAM (RDRAM), direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. A video processing method, comprising:
recording the acquired video displayed in a preset page in response to a video recording instruction triggered on the preset page, wherein the preset page is a display page of a target picture acquired by preset camera equipment;
in the recording process of the acquired video, responding to a first editing instruction aiming at the acquired video, and displaying first editing information corresponding to the first editing instruction on a recording page of the acquired video;
and under the condition that a recording ending instruction is detected, generating a target video comprising the recorded video and the first editing information.
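The three steps of claim 1 (record on instruction, show editing information during recording, generate the target video on the end instruction) can be sketched as follows. This is an illustrative model only, and all class, method, and variable names (`VideoRecorder`, `TargetVideo`, `on_first_editing_instruction`, and so on) are hypothetical rather than taken from the disclosure:

```python
from dataclasses import dataclass


@dataclass
class TargetVideo:
    frames: list          # the recorded video frames
    editing_info: list    # first editing information shown during recording


class VideoRecorder:
    """Toy model of the claimed flow: record, accept editing
    instructions mid-recording, and emit a target video at the end."""

    def __init__(self):
        self.recording = False
        self.frames = []
        self.editing_info = []

    def start_recording(self):
        # video recording instruction triggered on the preset page
        self.recording = True

    def capture_frame(self, frame):
        if self.recording:
            self.frames.append(frame)

    def on_first_editing_instruction(self, info):
        # display (here: store) the editing information on the recording page
        if self.recording:
            self.editing_info.append(info)

    def end_recording(self) -> TargetVideo:
        # recording end instruction detected: bundle video and editing info
        self.recording = False
        return TargetVideo(list(self.frames), list(self.editing_info))
```

The key point of the claim is that editing information is collected while recording is still in progress, so the target video can be produced in a single step when recording ends.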
2. The video processing method of claim 1, wherein the method further comprises:
and responding to a sharing instruction aiming at the target video, and sending the target video to at least one target sharing object.
3. The video processing method according to claim 2, wherein, in a case that the at least one target sharing object includes at least one editing permission object and any editing permission object edits the target video, the method further comprises:
in response to a video viewing instruction, displaying the target video and at least one updated video on a video viewing page;
the at least one updated video is generated based on the target video and the editing information corresponding to at least one target editing permission object, and the at least one target editing permission object is an object for editing the target video in the at least one editing permission object.
4. The video processing method of claim 3, wherein the method further comprises:
and responding to a first video fusion instruction, and performing fusion processing on the target video and the at least one updated video to obtain a first target fusion video.
5. The video processing method according to claim 4, wherein the fusing the target video and the at least one updated video to obtain a first target fused video comprises:
determining newly-added editing information corresponding to the at least one updated video and editing position information corresponding to the newly-added editing information;
and adding the newly added editing information to the target video based on the editing position information to obtain the first target fusion video.
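A minimal sketch of the fusion described in claim 5, assuming each piece of editing information carries a timestamp as its editing position information; the dictionary layout and the function name `fuse_videos` are hypothetical:

```python
def fuse_videos(target_video, updated_videos):
    """Determine the newly-added editing information in each updated
    video and add it to the target video at its editing position."""
    fused = dict(target_video)
    fused["edits"] = list(target_video["edits"])
    for upd in updated_videos:
        # edits already present in the fused result so far
        known = {(e["pos"], e["text"]) for e in fused["edits"]}
        for edit in upd["edits"]:
            if (edit["pos"], edit["text"]) not in known:  # newly added
                fused["edits"].append(edit)
    # keep edits ordered by their editing position (timestamp)
    fused["edits"].sort(key=lambda e: e["pos"])
    return fused
```

The target video itself is left unmodified; the fusion produces a new video object whose edit track merges the original and newly-added editing information.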
6. The video processing method of claim 3, wherein the method further comprises:
in response to a second video fusion instruction, performing fusion processing on at least one target updated video to obtain a second target fusion video;
wherein the at least one target updated video is an updated video generated within a preset time period.
7. The video processing method of claim 1, wherein the method further comprises:
under the condition of monitoring voice information, triggering the first editing instruction;
the displaying, in response to a first editing instruction for the acquired video, first editing information corresponding to the first editing instruction on a recording page of the acquired video includes:
responding to the first editing instruction, and performing voice recognition on the monitored voice information to obtain the first editing information;
and displaying the first editing information on the recording page.
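The voice-triggered path of claim 7 can be sketched as below; `recognize` stands in for an arbitrary speech-recognition backend, and all names are hypothetical:

```python
def handle_voice(recording_page, audio, recognize):
    """If voice information is monitored, trigger the first editing
    instruction: run speech recognition on the monitored audio and
    display the result on the recording page."""
    if not audio:
        # no voice information monitored; no editing instruction fires
        return None
    text = recognize(audio)       # first editing information
    recording_page.append(text)   # "display" it on the recording page
    return text
```

In other words, the voice event itself acts as the first editing instruction, and the recognized text becomes the first editing information.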
8. The video processing method according to any one of claims 1 to 7, wherein, after the first editing information corresponding to the first editing instruction is displayed on the recording page of the acquired video in response to the first editing instruction for the acquired video, the method further comprises:
and in the recording process of the acquired video, adding the first editing information in the currently recorded acquired video.
9. The video processing method according to claim 8, wherein, in a case where a recording end instruction is detected, generating a target video including the recorded video and the first editing information includes:
and under the condition that a recording ending instruction is detected, taking the recorded video added with the first editing information as the target video.
10. The video processing method according to any one of claims 1 to 7, wherein the generating a target video including the recorded video and the first editing information in a case where the recording end instruction is detected includes:
and under the condition that a recording ending instruction is detected, generating a target video based on the recorded video and the first editing information.
11. The video processing method according to any one of claims 1 to 7, wherein after the generating of the target video including the recorded video and the first editing information, the method further comprises:
responding to a second editing instruction aiming at the target video, and displaying second editing information corresponding to the second editing instruction on a preset editing page;
and responding to an edit confirmation instruction, and updating the target video based on the second editing information.
12. The video processing method according to any of claims 1 to 7, wherein before the recording the captured video shown in the preset page in response to the video recording instruction triggered on the preset page, the method further comprises:
carrying out object detection on the target picture to obtain an object detection result;
and displaying preset recording prompt information under the condition that the object detection result indicates that the target picture comprises a preset object.
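The pre-recording check of claim 12 (object detection on the target picture, then a recording prompt when a preset object is found) can be sketched as a simple membership test over detected object labels; the function and parameter names are hypothetical:

```python
def maybe_show_prompt(detected_objects, preset_objects,
                      prompt="Ready to record"):
    """If the object detection result for the target picture contains
    any preset object, return the preset recording prompt; otherwise
    return None and show nothing."""
    if set(detected_objects) & set(preset_objects):
        return prompt
    return None
```

A real system would run a detector on the camera frame to produce `detected_objects`; only the gating logic is shown here.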
13. A video processing apparatus, comprising:
the video recording module is used for recording the acquired video displayed in a preset page in response to a video recording instruction triggered by the preset page, wherein the preset page is a display page of a target picture acquired by preset camera equipment;
the first editing information display module is used for responding to a first editing instruction aiming at the acquired video in the recording process of the acquired video and displaying first editing information corresponding to the first editing instruction on a recording page of the acquired video;
and the target video generating module is used for generating a target video comprising the recorded video and the first editing information under the condition that a recording ending instruction is detected.
14. A video processing system, comprising:
the preset camera equipment is used for acquiring a video of a target picture;
the video processing device is used for recording the acquired video displayed in a preset page in response to a video recording instruction triggered on the preset page, wherein the preset page is a page displaying the target picture acquired by the preset camera equipment; for displaying, in the recording process of the acquired video, first editing information corresponding to a first editing instruction on a recording page of the acquired video in response to the first editing instruction for the acquired video; and for generating a target video comprising the recorded video and the first editing information in a case that a recording end instruction is detected.
15. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video processing method of any of claims 1 to 12.
16. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method of any one of claims 1 to 12.
CN202210900735.4A 2022-07-28 2022-07-28 Video processing method and device, electronic equipment and storage medium Withdrawn CN115278139A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210900735.4A CN115278139A (en) 2022-07-28 2022-07-28 Video processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115278139A true CN115278139A (en) 2022-11-01

Family

ID=83770821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210900735.4A Withdrawn CN115278139A (en) 2022-07-28 2022-07-28 Video processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115278139A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135395A (en) * 2023-08-24 2023-11-28 中电金信软件有限公司 Page recording method and device
WO2024099370A1 (en) * 2022-11-08 2024-05-16 北京字跳网络技术有限公司 Video production method and apparatus, device and medium


Similar Documents

Publication Publication Date Title
CN106791892B (en) Method, device and system for live broadcasting of wheelhouses
CN108021305B (en) Application association starting method and device and mobile terminal
CN107333162B (en) Method and device for playing live video
CN111309218A (en) Information display method and device and electronic equipment
CN106302996B (en) Message display method and device
CN106254910B (en) Method and device for recording image
CN109756767B (en) Preview data playing method, device and storage medium
CN115278139A (en) Video processing method and device, electronic equipment and storage medium
WO2015131768A1 (en) Video processing method, apparatus and system
CN108551525B (en) State determination method of movement track and mobile terminal
CN115022653A (en) Information display method and device, electronic equipment and storage medium
CN108989554B (en) Information processing method and terminal
CN103905837A (en) Image processing method and device and terminal
US11243668B2 (en) User interactive method and apparatus for controlling presentation of multimedia data on terminals
CN115643445A (en) Interaction processing method and device, electronic equipment and storage medium
KR102263977B1 (en) Methods, devices, and systems for performing information provision
CN109728918B (en) Virtual article transmission method, virtual article reception method, device, and storage medium
CN115017406A (en) Live broadcast picture display method and device, electronic equipment and storage medium
CN114547436A (en) Page display method and device, electronic equipment and storage medium
CN115017340A (en) Multimedia resource generation method and device, electronic equipment and storage medium
CN111143805A (en) Operation method and device and electronic equipment
CN111966271B (en) Screen panorama screenshot method and device, terminal equipment and storage medium
CN115361590B (en) Live video display method and device, electronic equipment and storage medium
CN114866640B (en) Touch panel failure communication method and related device
CN115237317B (en) Data display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221101