CN114401368A - Method and device for processing co-shooting video - Google Patents


Info

Publication number
CN114401368A
Authority
CN
China
Prior art keywords
video
video recording
snap
recording area
shot
Prior art date
Legal status
Granted
Application number
CN202210082644.4A
Other languages
Chinese (zh)
Other versions
CN114401368B (en)
Inventor
黄璜
岱钦夫
李泽宇
段凌宇
曾博文
Current Assignee
Beijing Calorie Information Technology Co ltd
Hangzhou Sports Co ltd
Original Assignee
Beijing Calorie Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Calorie Information Technology Co., Ltd.
Priority to CN202210082644.4A
Publication of CN114401368A
Application granted
Publication of CN114401368B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a method and a device for processing a co-shot video. The method includes: in response to a co-shooting operation, displaying a video co-shooting page, where the page contains a plurality of video recording areas; in response to an operation to start shooting, recording video synchronously in the video recording areas; and, once video recording is complete, generating a co-shot video from the videos recorded in the video recording areas. The method and device solve the technical problem that, in the related art, co-shooting can only place the user in the same frame as an already-shot video, which cannot meet the need for multiple users to shoot simultaneously to generate a co-shot video.

Description

Method and device for processing co-shooting video
Technical Field
The invention relates to the field of video shooting, and in particular to a method and a device for processing a co-shot video.
Background
As the wave of short video continues, the key question is how to motivate users to create and consume more videos. Many users pick up the camera and do not know where to start: they need rich video material as shooting inspiration, and more creative formats to spark the desire to shoot. In the related art, co-shooting mostly means a user shooting in the same frame as a given already-shot video, producing a two-video same-frame effect; this cannot meet the need for multiple users to shoot simultaneously to generate a co-shot video.
Disclosure of Invention
Embodiments of the invention provide a method and a device for processing a co-shot video, to at least solve the technical problem that co-shooting in the related art can only place the user in the same frame as an already-shot video and cannot meet the need for multiple users to co-shoot simultaneously to generate a co-shot video.
According to one aspect of the embodiments of the invention, a method for processing a co-shot video is provided, including: in response to a co-shooting operation, displaying a video co-shooting page, where the video co-shooting page contains a plurality of video recording areas; in response to an operation to start shooting, recording video synchronously in the video recording areas; and, once video recording is complete, generating a co-shot video from the videos recorded in the video recording areas.
Optionally, displaying the video co-shooting page includes: in response to the co-shooting operation, displaying a main video recording area, where the main video recording area is one of the plurality of video recording areas; and, in response to join requests from user terminals, adding a corresponding number of video recording areas, where the added areas follow a preset layout rule and the combined area of the main video recording area and the other video recording areas is rectangular.
Optionally, adding a corresponding number of video recording areas in response to join requests from user terminals includes: when multiple user terminals join, generating each terminal's video recording area at the position determined by the order of joining and a preset position sequence.
Optionally, after adding video recording areas in response to join requests, the method further includes: in response to a switching operation on the main video recording area, swapping the positions of the main video recording area and a selected video recording area.
Optionally, after adding video recording areas in response to join requests, the method further includes: in response to a replacement operation on the structure of the combined area, replacing it with a predefined structure selected by the operation, where the combined area of the predefined structure is rectangular and differs from the previous combined area in the display positions and/or display sizes of the main video recording area and the other video recording areas.
Optionally, generating a co-shot video from the videos recorded in the video recording areas once recording is complete includes: generating the co-shot video according to the finally determined structure of the combined area.
Optionally, after the co-shot video is generated, the method further includes: in response to a generation operation, generating a three-dimensional virtual object of the target object in the main video recorded in the main video recording area, where the main video is a video of the target object.
Optionally, the method further includes: in response to an angle-adjustment operation on the three-dimensional virtual object, adjusting its angle; and, when the target video has been played and a co-shot video is generated from the target video and/or the videos recorded in the video recording areas, adding the angle-adjusted three-dimensional virtual object to the co-shot video.
Optionally, after the three-dimensional virtual object of the target object is generated, the method further includes: storing the three-dimensional virtual object; and, when the three-dimensional virtual object is clicked, displaying a three-dimensional display page in which the object's angle can be adjusted in response to angle-adjustment operations.
Optionally, the video co-shooting page further includes a display area for a target video, where the target video is a video that has already been shot or for which a co-shoot has been completed.
According to another aspect of the embodiments of the invention, a device for processing a co-shot video is also provided, including: a display module, configured to display a video co-shooting page in response to a co-shooting operation, where the page contains a plurality of video recording areas; a recording module, configured to record video synchronously in the video recording areas in response to an operation to start shooting; and a synthesis module, configured to generate a co-shot video from the videos recorded in the video recording areas once recording is complete.
According to another aspect of the embodiments of the invention, a processor is also provided, configured to run a program, where the program, when running, executes any of the above methods for processing a co-shot video.
According to another aspect of the embodiments of the invention, a computer storage medium is also provided, configured to store a program, where the program, when running, controls the device on which the storage medium resides to execute the method for processing a co-shot video.
In the embodiments of the invention, a video co-shooting page containing a plurality of video recording areas is displayed in response to a co-shooting operation; video is recorded synchronously in the video recording areas in response to an operation to start shooting; and a co-shot video is generated from the recorded videos once recording is complete. By recording synchronously in multiple recording windows and generating the co-shot video from the synchronously recorded videos, different users can co-shoot at the same time, and a user can choose any number of partners for the co-shoot as needed. This satisfies the demand for several users to shoot a co-shot video simultaneously, makes co-shooting more engaging, and thereby solves the technical problem that co-shooting in the related art can only place the user in the same frame as an already-shot video and cannot meet the need for multiple users to generate a co-shot video simultaneously.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flowchart of a method for processing a co-shot video according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a two-person co-shooting user interface according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a three-person co-shooting user interface according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a five-person co-shooting user interface according to an embodiment of the invention;
FIG. 5 is a schematic diagram of co-shooting a video together with an already-captured video according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a video co-shoot with a three-dimensional virtual object according to an embodiment of the invention;
FIG. 7 is a schematic diagram of a device for processing a co-shot video according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the invention, a method embodiment for processing a co-shot video is provided. Note that the steps illustrated in the flowchart of the figure may be performed in a computer system, for example as a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from the one shown here.
Fig. 1 is a flowchart of a method for processing a co-shot video according to an embodiment of the invention. As shown in Fig. 1, the method includes the following steps:
Step S102: in response to a co-shooting operation, display a video co-shooting page, where the page contains a plurality of video recording areas;
Step S104: in response to an operation to start shooting, record video synchronously in the video recording areas;
Step S106: once video recording is complete, generate a co-shot video from the videos recorded in the video recording areas.
Through these steps, a video co-shooting page containing a plurality of video recording areas is displayed in response to a co-shooting operation; video is recorded synchronously in the video recording areas in response to an operation to start shooting; and a co-shot video is generated from the recorded videos once recording is complete. By recording synchronously in multiple recording windows and generating the corresponding co-shot video from the synchronously recorded videos, different users can co-shoot at the same time, and a user can select any number of partners for the co-shoot as needed. This satisfies the demand for several users to shoot a co-shot video simultaneously, increases users' interest and enthusiasm during co-shooting, and thereby solves the technical problem that co-shooting in the related art can only place the user in the same frame as an already-shot video and cannot meet the need for multiple users to generate a co-shot video simultaneously.
These steps may be executed by a terminal device running sports (fitness) software. The software can interact with the user in response to operations on the terminal device, push workout courses to the user according to preset rules, offer course categories, course lists, and search, and shoot workout videos, either alone or as a co-shoot.
The co-shooting operation may be clicking, in the sports software, an already-shot video that supports co-shooting, or clicking a dedicated control that starts a co-shot video. After the operation is triggered, a video co-shooting page is entered. The page may be in landscape or portrait orientation, depending on the number of video recording areas on it: for example, Figs. 2 to 4 show pages with two, three, and five recording areas respectively, and on a page with any given number of areas the user can choose landscape or portrait shooting to determine the software's video recording area.
The plurality of video recording areas correspond to the sports software on different video terminals; each user who enters the co-shooting page gets a corresponding video recording area displayed on the page. When the page has multiple recording areas, it can be configured, as required, to show only the local terminal's recording area or to show all recording areas.
The recording areas may include a main video recording area corresponding to a master user, i.e. the user who initiates the co-shoot. This role can be transferred among the users shown on the co-shooting page: any user corresponding to a recording area may become the new master user, either by succession or by the current master user's selection.
The operation to start shooting may be performed by the master user; specifically, the master user may click a start-recording button on the co-shooting page to trigger it. In other embodiments, shooting may start automatically at a preset time after users enter the co-shooting page, with a countdown reminder displayed beforehand.
After shooting starts, the recording areas record the corresponding users' videos synchronously, and once recording is complete a co-shot video is generated from the videos recorded in the areas. Like the operation that starts the shoot, the operation that finishes recording may also be performed by the master user.
When the co-shot video is generated from the videos recorded in the several recording areas, it can follow the layout of the recording areas on the co-shooting page.
Optionally, displaying the video co-shooting page in response to the co-shooting operation includes: displaying a main video recording area, which is one of the plurality of video recording areas; and, in response to join requests from user terminals, adding a corresponding number of video recording areas, where the added areas follow a preset layout rule and the combined area of the main video recording area and the other video recording areas is rectangular.
After entering the co-shooting page and before recording starts, other users may be invited, or may join the page actively. When a new user terminal joins, a corresponding video recording area is generated automatically; when several new terminals join at once, the corresponding number of recording areas are generated and displayed according to the page's current structure.
Note that before a new terminal joins, the master user may vet it: the terminal is admitted to the co-shooting page only if the master user allows it; otherwise, the terminal is informed that the master user rejected the request and the new user is not admitted to the co-shooting page.
Optionally, adding a corresponding number of video recording areas in response to join requests includes: when multiple user terminals join, generating each terminal's video recording area at the position determined by the order of joining and a preset position sequence.
When several user terminals join, recording areas are generated in join order and displayed according to the preset position sequence. For example, as shown in Fig. 3, area 1 may be the recording area of the most recently joined terminal, area B the master user's main recording area, and area A the recording area of the first terminal the master user admitted.
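The placement rule described above — new terminals take the next slot in a preset position sequence, in the order they join — can be sketched as follows. This is an illustrative assumption, not the patent's implementation; the class and slot names (`CoShootPage`, `"B"`, `"A"`, `"1"`) merely mirror the areas in Fig. 3.

```python
from dataclasses import dataclass, field

@dataclass
class CoShootPage:
    # Preset position sequence: "B" is the main area, then "A", "1", ...
    slot_order: tuple = ("B", "A", "1", "2", "3")
    areas: dict = field(default_factory=dict)  # terminal id -> slot name

    def __post_init__(self):
        # The master user always starts in the main area.
        self.areas["master"] = self.slot_order[0]

    def join(self, terminal_id: str) -> str:
        """Assign the next free slot, in join order, to a new terminal."""
        used = set(self.areas.values())
        for slot in self.slot_order:
            if slot not in used:
                self.areas[terminal_id] = slot
                return slot
        raise RuntimeError("co-shooting page is full")

page = CoShootPage()
page.join("user-1")  # first admitted terminal -> area "A"
page.join("user-2")  # second admitted terminal -> area "1"
```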
Optionally, after adding video recording areas in response to join requests, the method further includes: in response to a switching operation on the main video recording area, swapping the positions of the main video recording area and a selected video recording area.
Among the recording areas of the co-shooting page, the master user can exchange the main recording area with another terminal's recording area, switching the positions of the main area and the selected area. The switch may happen before recording starts or during recording.
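The position switch is a plain swap of two slots in the area map. A minimal sketch, under the same assumed `terminal id -> slot` mapping as above (the function name is illustrative):

```python
def swap_main(areas: dict, master_id: str, selected_id: str) -> None:
    """Swap the display positions of the master's area and a selected area."""
    areas[master_id], areas[selected_id] = areas[selected_id], areas[master_id]

areas = {"master": "B", "user-1": "A"}
swap_main(areas, "master", "user-1")  # master now occupies area "A"
```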
Optionally, after adding video recording areas in response to join requests, the method further includes: in response to a replacement operation on the structure of the combined area, replacing it with a predefined structure selected by the operation, where the combined area of the predefined structure is rectangular and differs from the previous combined area in the display positions and/or display sizes of the main video recording area and the other video recording areas.
The co-shooting page offers different structures for landscape and portrait recording, and several structures may exist for landscape recording: as shown in Fig. 4, area C can be exchanged with area B, area A, or area 1, producing different structures. The master user can select among predefined structures as needed to replace the page's current structure.
Optionally, generating a co-shot video from the videos recorded in the recording areas once recording is complete includes: generating the co-shot video according to the finally determined structure of the combined area.
When the structure of the co-shooting page changes, the structure of the co-shot video the system generates by default changes with it. To avoid confusion, once recording is complete the co-shot video is generated from the recorded videos according to the finally determined structure of the combined area.
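One way to model a predefined structure is as a set of normalized (x, y, w, h) rectangles that tile the rectangular output frame; the composite is then produced by scaling each recorded video into its rectangle. This is a sketch under that assumption — the structure names, the coverage check, and the rect convention are all illustrative, not from the patent.

```python
# Two hypothetical predefined structures; each value is (x, y, w, h)
# in normalized [0, 1] coordinates of the rectangular combined area.
STRUCTURES = {
    "two_up":   {"B": (0.0, 0.0, 1.0, 0.5), "A": (0.0, 0.5, 1.0, 0.5)},
    "three_up": {"B": (0.0, 0.0, 0.5, 1.0),
                 "A": (0.5, 0.0, 0.5, 0.5),
                 "1": (0.5, 0.5, 0.5, 0.5)},
}

def covers_frame(structure: dict) -> bool:
    """Necessary (not sufficient) check that the areas can tile the
    unit frame: every rect lies inside it and the areas sum to 1."""
    total = sum(w * h for (_x, _y, w, h) in structure.values())
    inside = all(x >= 0 and y >= 0 and x + w <= 1 and y + h <= 1
                 for (x, y, w, h) in structure.values())
    return inside and abs(total - 1.0) < 1e-9

def pixel_rects(structure: dict, width: int, height: int) -> dict:
    """Convert normalized rects to pixel rects for the output frame."""
    return {name: (round(x * width), round(y * height),
                   round(w * width), round(h * height))
            for name, (x, y, w, h) in structure.items()}
```

For example, `pixel_rects(STRUCTURES["three_up"], 1280, 720)` places area "B" at (0, 0, 640, 720) and area "1" at (640, 360, 640, 360); a compositor (e.g. an FFmpeg `xstack`-style filter) would then scale each recorded video into its rect.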
Optionally, after the co-shot video is generated, the method further includes: in response to a generation operation, generating a three-dimensional virtual object of the target object in the main video recorded in the main video recording area, where the main video is a video of the target object.
After recording finishes, the master user can choose to generate a three-dimensional virtual object of their own image in the recorded video. Specifically, the master user's movements can be recognized from the video recorded in the main recording area to obtain motion information, and the three-dimensional virtual object is generated from that information.
Once the object is generated, the master user can choose to be displayed as the three-dimensional virtual object instead of their real image, which protects the user's privacy to some extent.
As another optional embodiment, other user terminals may also choose to generate their own three-dimensional virtual objects. However, the generation algorithm is computationally demanding, so this embodiment may generate a three-dimensional virtual object only for the master user, or only for a subset of the terminals. Once computing power is no longer a concern, or the generation algorithm has been optimized, the generation service can be offered to all user terminals.
Optionally, the method further includes: in response to an angle-adjustment operation on the three-dimensional virtual object, adjusting its angle; and, when the target video has been played and a co-shot video is generated from the target video and/or the videos recorded in the recording areas, adding the angle-adjusted three-dimensional virtual object to the co-shot video.
After the three-dimensional virtual object is generated, it may be attached to a blank area of the co-shot video or overlaid on the corresponding recording area, so that the angle-adjusted object appears in the co-shot video.
Before the object is added to the co-shot video, its angle or size can be adjusted and its appearance edited.
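The angle adjustment amounts to rotating the object's vertices. As a minimal sketch (the patent does not specify the math), rotating a 3-D point about the vertical axis looks like this; the axis choice and function name are assumptions:

```python
import math

def rotate_y(point: tuple, angle_deg: float) -> tuple:
    """Rotate a 3-D point (x, y, z) about the vertical y axis,
    the kind of angle adjustment a viewer applies to the object."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))
```

Applying the same rotation to every vertex of the virtual object turns it in place; a uniform scale factor handles the size adjustment mentioned above.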
Optionally, after the three-dimensional virtual object of the target object is generated, the method further includes: storing the three-dimensional virtual object; and, when the object is clicked, displaying a three-dimensional display page in which the object's angle can be adjusted in response to angle-adjustment operations.
After generation, the three-dimensional virtual object can be stored and associated with the co-shot video. When the master user, or any other user watching the co-shot video, clicks the object, a three-dimensional display page opens in which the object can be examined from all angles by adjusting its orientation.
Optionally, the video co-shooting page further includes a display area for a target video, where the target video is a video that has already been shot or for which a co-shoot has been completed.
The co-shooting page may also include a display area, treated as equivalent to a recording area. When the page is shown, the display area is added to it by default, and its position can be swapped with other recording areas at the master user's direction.
The target video may be the already-shot video of the co-shooting partner in a two-person co-shoot, or, in a multi-person co-shoot, the already-shot or already co-shot videos of earlier participants. That is, the method also supports real-time co-shooting of multi-person videos on the basis of videos that have already been shot or co-shot.
It should be noted that the present application also provides an alternative implementation, described in detail below.
The embodiment provides a video close-shot mode, which is combined with a user motion training scene, and based on a produced course and a published video as a close-shot source video, a user can continuously shoot and create in a continuous shooting mode, so that the effect that a plurality of videos are in the same frame is realized. After the first person publishes the video, the second person and the video are in close shot to generate a double same-frame video, the third person and the double same-frame video are in close shot to generate a three-person same-frame video, the fourth person and the fifth person continue to be in close shot in the same mode, and finally the video effect of the five videos in the same frame is presented.
The overall co-shoot flow: select a video and start co-shooting -> shoot two videos in the same frame -> finish co-shooting -> generate a two-person same-frame video -> continue co-shooting -> shoot three videos in the same frame -> finish co-shooting -> generate a three-person same-frame video -> and so on.
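The iterative flow above can be sketched as follows. This is an illustrative model only: the names (`SameFrameVideo`, `co_shoot`) are assumptions, not the patent's API, and the five-video cap is the limit stated in this embodiment.

```python
from dataclasses import dataclass

MAX_VIDEOS_IN_FRAME = 5  # cap taken from this embodiment's description


@dataclass
class SameFrameVideo:
    num_clips: int  # how many source videos currently share the frame


def co_shoot(source: SameFrameVideo) -> SameFrameVideo:
    """Merge one newly recorded clip into the co-shot source video."""
    if source.num_clips >= MAX_VIDEOS_IN_FRAME:
        raise ValueError("at most five videos may share one frame")
    return SameFrameVideo(num_clips=source.num_clips + 1)


# The first person publishes a single video; four more users co-shoot in turn.
video = SameFrameVideo(num_clips=1)
for _ in range(4):
    video = co_shoot(video)
print(video.num_clips)  # 5
```

A sixth attempt would raise an error, matching the stated limit of five videos in one frame.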
Co-shootable videos mainly come from two sources: course segment clips, and videos published by the user or by other users. The latter may be single-person videos or already same-frame two-, three-, or even four-person videos.
On the publisher page or a video playing page, the user selects a co-shootable video and taps to start co-shooting; the aspect ratio of the video to be co-shot is then read to decide whether the video shooting page opens in a vertical or a horizontal split-screen layout.
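The orientation decision can be sketched in a few lines; the function name is an illustrative assumption, and the rule follows the FIG. 2 description (portrait or square sources split left-right, landscape sources split top-bottom):

```python
def split_orientation(width: int, height: int) -> str:
    """Pick the split-screen direction from the co-shot source's
    aspect ratio: portrait or square gets a left-right split,
    landscape gets a top-bottom split."""
    return "left-right" if width <= height else "top-bottom"


print(split_orientation(1080, 1920))  # left-right (9:16 portrait source)
print(split_orientation(1080, 1080))  # left-right (square source)
print(split_orientation(1920, 1080))  # top-bottom (16:9 landscape source)
```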
Entering the video shooting page also enables a screen-swap function. Before recording starts, the user can swap the positions of the recording viewfinder and the video to be co-shot: left and right halves can be exchanged in a left-right split, and top and bottom halves in a top-bottom split. During shooting, the user's own footage is primary; no matter how many people already share the frame in the video to be co-shot, the user's viewfinder always occupies half of the screen, with the video to be co-shot in the other half.
During recording, the user's video and the video to be co-shot stay synchronized throughout, starting and pausing together; the user's recording may be shorter than the video to be co-shot but can never exceed it. The co-shot video serves as the reference: the user only needs to follow its actions in sync and does not have to think up a theme or content. Front/rear camera switching, flash, countdown, filters, and music remain available while recording. The user can also pause and resume shooting, producing multiple video segments; unsatisfactory segments can be deleted in reverse order.
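The pause-and-resume constraint can be modeled as a list of segments whose total duration is clamped to the source video's length. This is a sketch under stated assumptions: the class and method names are invented for illustration and are not the patent's implementation.

```python
class SegmentedRecorder:
    """Toy model: segments may accumulate across pauses, but their
    total can never exceed the co-shot source video's duration."""

    def __init__(self, source_duration_s: float):
        self.source_duration_s = source_duration_s
        self.segments: list[float] = []  # durations of recorded segments

    def total(self) -> float:
        return sum(self.segments)

    def record_segment(self, duration_s: float) -> float:
        # Clamp so the recording never outruns the co-shot video.
        allowed = self.source_duration_s - self.total()
        taken = min(duration_s, allowed)
        if taken > 0:
            self.segments.append(taken)
        return taken

    def delete_last_segment(self) -> None:
        # Unsatisfactory takes are deleted in reverse order.
        if self.segments:
            self.segments.pop()


rec = SegmentedRecorder(source_duration_s=30.0)
rec.record_segment(12.0)
rec.record_segment(25.0)  # clamped to the remaining 18 s
print(rec.total())        # 30.0 — equal to, never beyond, the source
```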
After recording finishes, a video editing page opens and a new same-frame video is composed; the newly composed video supports trimming, adding filters, adjusting volume, and so on.
After the same-frame video is published, viewers can tap through from the video playing page to view the source video. If another user wants to co-shoot with the video, they can tap to start co-shooting on the playing page, repeat the shooting process, and compose a new same-frame video. At most five videos are allowed in one frame.
The key technical points are: 1. judging and processing the split-screen data; 2. supporting up to five videos in one frame.
FIG. 2 is a schematic diagram of a two-person co-shoot user interface according to an embodiment of the invention. As shown in FIG. 2, when two videos share the frame: if the width of the video (1) to be co-shot is less than or equal to its height (portrait or square), the shooting interface defaults to a left-right split; the shooting pane (A) is 9:16, the video (1) to be co-shot is also fitted to 9:16 with black bars filling any shortfall, and the generated two-person same-frame video (1+A) is 18:16.
If the width of the video (1) to be co-shot is greater than its height (landscape), the shooting interface defaults to a top-bottom split; the shooting pane (A) is 16:9, the video (1) to be co-shot is also fitted to 16:9 with black bars filling any shortfall, and the generated two-person same-frame video (1+A) is 16:18.
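The two-pane arithmetic can be written out directly: placing two equal panes side by side doubles the width term of the ratio, and stacking them doubles the height term. The function name is illustrative only.

```python
def compose_two(pane_w: int, pane_h: int, split: str) -> tuple[int, int]:
    """Ratio of a two-person same-frame canvas built from two equal
    panes; each source is first padded with black bars to the pane
    ratio, as in FIG. 2."""
    if split == "left-right":
        return (2 * pane_w, pane_h)   # panes side by side
    return (pane_w, 2 * pane_h)       # panes stacked


print(compose_two(9, 16, "left-right"))  # (18, 16): two portrait panes
print(compose_two(16, 9, "top-bottom"))  # (16, 18): two landscape panes
```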
Fig. 3 is a schematic diagram of a three-person co-shoot user interface according to an embodiment of the present invention. As shown in fig. 3, when three videos share the frame, the width and height of the two-person same-frame video must be determined first.
If the two-person same-frame video (1+A) is 16:18 (portrait), the shooting interface defaults to a left-right split; the shooting pane (B) is 16:18, the video (1+A) to be co-shot is also 16:18, and the generated three-person same-frame video (1+A+B) is 16:9.
If the two-person same-frame video (1+A) is 18:16 (landscape), the shooting interface defaults to a top-bottom split; the shooting pane (B) is 18:16, the video (1+A) to be co-shot is 18:16, and the generated three-person same-frame video (1+A+B) is 9:16.
FIG. 4 is a schematic diagram of a five-person co-shoot user interface according to an embodiment of the invention; as shown in FIG. 4, the four-person and five-person layouts are calculated by analogy.
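The "by analogy" rule generalizes to a simple recurrence: each co-shoot round adds a pane matching the current video's ratio, splitting opposite to its orientation. This sketch only reproduces the worked examples above; the function names are assumptions, not the patent's code.

```python
from math import gcd


def next_frame_ratio(w: int, h: int) -> tuple[int, int]:
    """One co-shoot round: portrait/square current video -> left-right
    split (width doubles); landscape -> top-bottom split (height doubles)."""
    if w <= h:
        return (2 * w, h)
    return (w, 2 * h)


def reduced(w: int, h: int) -> tuple[int, int]:
    """Reduce a ratio to lowest terms, e.g. (18, 32) -> (9, 16)."""
    g = gcd(w, h)
    return (w // g, h // g)


ratio = (9, 16)  # start from a portrait source video
for people in range(2, 6):  # up to five videos in one frame
    ratio = next_frame_ratio(*ratio)
    print(people, "in frame:", ratio, "=", reduced(*ratio))
```

Starting from 9:16 this yields 18:16 for two people and (after reduction) 9:16 for three people, matching the FIG. 2 and FIG. 3 examples.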
Fig. 5 is a schematic diagram of co-shooting with an already-shot video according to an embodiment of the present invention; as shown in fig. 5, real-time multi-screen recording is also supported. At present, the multi-person same-frame video is not real-time: shooting it must build on a previously existing same-frame video. For example, to produce a three-person same-frame video, a published two-person same-frame video must first be found and then co-shot again, so this kind of same-frame capture has some lag. A mode to be realized later resembles a video conference: one host decides when recording starts and ends, multiple screens begin and finish recording simultaneously, and users can interact with one another in real time during recording, giving the session the feel of crossing spatial barriers; the result is likewise a multi-screen same-frame video.
Fig. 6 is a schematic diagram of 3D user video capture according to an embodiment of the present invention. As shown in fig. 6, this embodiment can also incorporate a 3D effect. For example, after user 1 shoots a video, user 2 can choose to switch to a 3D shooting mode while co-shooting with user 1's video; the video recorded by user 2 is then given 3D processing. The 3D effect applies only to user 2, or to any other shooting user who opts in, since each shooting user can decide whether to apply 3D processing according to their own needs; the already-completed source video of user 1 is not 3D-processed. The 3D video of user 2 lets user 2 rotate the view 360 degrees on the video playing page to examine their own movements from all sides and judge whether those movements are standard and in place, something the flat video produced by ordinary shooting cannot offer. The effect is similar to VR shoe or house viewing, but unlike VR house viewing it does not require multi-point, multi-angle scanning: a virtual portrait is first modeled from complete three-dimensional point-cloud data, longitude-latitude data, and multi-exposure high-definition color photos; the virtual portrait is then automatically matched to user 2's movements during recording, converting user 2's recorded video directly into a 3D effect.
Fig. 7 is a schematic diagram of an apparatus for processing a co-shot video according to an embodiment of the present invention. As shown in fig. 7, according to another aspect of the embodiments of the present invention, there is also provided an apparatus for processing a co-shot video, including a display module 72, a recording module 74, and a composition module 76, described in detail below.
The display module 72 is configured to respond to a co-shoot operation and display a video co-shoot page, where the video co-shoot page includes a plurality of video recording areas. The recording module 74, connected to the display module 72, is configured to respond to a start-shooting operation and record video synchronously in the video recording areas. The composition module 76, connected to the recording module 74, is configured to generate a co-shot video from the videos recorded in the video recording areas once recording is complete.
In summary, the above apparatus uses the display module 72 to respond to a co-shoot operation and display a video co-shoot page containing a plurality of video recording areas; the recording module 74 responds to a start-shooting operation and records video synchronously in those areas; and once recording is complete, the composition module 76 generates a co-shot video from the recorded videos. By recording several video windows synchronously and composing the corresponding co-shot video from them, multiple users selected as desired can co-shoot at the same time, giving different users the experience of synchronized co-shooting. This meets users' demand to co-shoot videos simultaneously, achieves the technical effect of improving users' interest and enthusiasm for co-shooting, and solves the technical problem in the related art that co-shooting could only be performed against an already-shot video, so the need for several users to co-shoot a video simultaneously could not be met.
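The coupling between the three modules can be sketched as below. The class names mirror the modules in Fig. 7, but the method names and bodies are placeholders invented for illustration, since the patent specifies behavior rather than an implementation.

```python
class DisplayModule:
    """Module 72: responds to the co-shoot operation."""

    def show_co_shoot_page(self, num_areas: int) -> list[str]:
        # Lay out the plurality of video recording areas.
        return [f"area-{i}" for i in range(num_areas)]


class RecordingModule:
    """Module 74: responds to the start-shooting operation."""

    def record(self, areas: list[str]) -> dict[str, str]:
        # Record all areas synchronously (stubbed as labeled clips).
        return {a: f"clip-for-{a}" for a in areas}


class SynthesisModule:
    """Module 76: composes the co-shot video when recording completes."""

    def compose(self, clips: dict[str, str]) -> str:
        return "+".join(clips.values())


display, recorder, synth = DisplayModule(), RecordingModule(), SynthesisModule()
areas = display.show_co_shoot_page(num_areas=3)
co_shot_video = synth.compose(recorder.record(areas))
print(co_shot_video)
```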
According to another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, where the program, when running, executes any one of the above methods for processing a co-shot video.
According to another aspect of the embodiments of the present invention, there is also provided a computer storage medium including a stored program, where, when the program runs, a device on which the computer storage medium is located is controlled to execute the above method for processing a co-shot video.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements should also fall within the protection scope of the present invention.

Claims (13)

1. A method for processing a snap video, comprising:
responding to a close-up operation, and displaying a video close-up page, wherein the video close-up page comprises a plurality of video recording areas;
responding to the operation of starting shooting, and synchronously recording videos in the video recording area;
and under the condition that the video recording is finished, generating a snap-shot video according to the video recorded in the video recording area.
2. The method of claim 1, wherein displaying a video snap page in response to a snap operation comprises:
responding to a close-up operation, and displaying a main video recording area, wherein the main video recording area is one of the plurality of video recording areas;
responding to a joining request of a user terminal, and increasing a corresponding number of video recording areas, wherein the increased video recording areas are added according to a preset adding mode, and the combined area of the main video recording area and the video recording areas is rectangular.
3. The method of claim 2, wherein increasing the corresponding number of video recording areas in response to the user terminal's join request comprises:
and in a case that a plurality of user terminals join, generating a video recording area for each user terminal at a corresponding position according to the order of joining and a preset position sequence.
4. The method of claim 2, wherein after increasing the corresponding number of video recording areas in response to a join request from the user terminal, the method further comprises:
and responding to the switching operation of switching the main video recording area, and switching the positions of the main video recording area and the selected video recording area.
5. The method of claim 2, wherein after increasing the corresponding number of video recording areas in response to a join request from the user terminal, the method further comprises:
in response to a replacement operation for replacing the structure of the combined area, replacing the structure of the combined area with a predefined structure selected by the replacement operation;
the combined area of the predefined structure is rectangular, and the display positions and/or the display sizes of the main video recording area and the video recording area in the combined area of the predefined structure and the combined area before updating are different.
6. The method of claim 5, wherein generating a snap video from the video recorded in the video recording area upon completion of the video recording comprises:
and under the condition that the video recording is finished, generating a snap-shot video from the video recorded in the video recording area according to the structure of the finally determined combined area.
7. The method according to claim 1, wherein after generating a snap-shot video from the video recorded in the video recording area in a case where the video recording is completed, the method further comprises:
and responding to the generation operation of generating the three-dimensional virtual object, and generating the three-dimensional virtual object of the target object in the main video recorded in the main video recording area, wherein the main video is the video of the target object.
8. The method of claim 7, further comprising:
responding to the angle adjustment operation of the three-dimensional virtual object, and performing angle adjustment on the three-dimensional virtual object;
and in a case that the video recording is finished, when a snap-shot video is generated according to the video recorded in the video recording area, superimposing the angle-adjusted three-dimensional virtual object on the snap-shot video.
9. The method according to claim 8, wherein after generating the three-dimensional virtual object of the target object in the main video recorded in the main video recording area in response to the generation operation of generating the three-dimensional virtual object, the method further comprises:
storing the three-dimensional virtual object;
and in a case that the three-dimensional virtual object is clicked, displaying a three-dimensional display page of the three-dimensional virtual object, wherein the three-dimensional display page performs angle adjustment on the three-dimensional virtual object in response to the angle adjustment operation.
10. The method of any of claims 1-9, wherein the video-on page further comprises: a display area of a target video, wherein the target video is a video which has been shot or completed with a snap shot.
11. A device for processing a snap video, comprising:
the display module is used for responding to the close-up operation and displaying a video close-up page, wherein the video close-up page comprises a plurality of video recording areas;
the recording module is used for responding to the operation of starting shooting and synchronously recording the video in the video recording area;
and the synthesis module is used for generating a snap-shot video according to the video recorded in the video recording area under the condition that the video recording is finished.
12. A processor, characterized in that the processor is configured to run a program, wherein the program, when running, performs the method for processing the snap video according to any one of claims 1 to 10.
13. A computer storage medium for storing a program, wherein, when the program runs, a device in which the computer storage medium is located is controlled to perform the method for processing the snap video according to any one of claims 1 to 10.
CN202210082644.4A 2022-01-24 2022-01-24 Processing method and device for simultaneous video Active CN114401368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210082644.4A CN114401368B (en) 2022-01-24 2022-01-24 Processing method and device for simultaneous video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210082644.4A CN114401368B (en) 2022-01-24 2022-01-24 Processing method and device for simultaneous video

Publications (2)

Publication Number Publication Date
CN114401368A true CN114401368A (en) 2022-04-26
CN114401368B CN114401368B (en) 2024-05-03

Family

ID=81233193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210082644.4A Active CN114401368B (en) 2022-01-24 2022-01-24 Processing method and device for simultaneous video

Country Status (1)

Country Link
CN (1) CN114401368B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109862412A (en) * 2019-03-14 2019-06-07 广州酷狗计算机科技有限公司 It is in step with the method, apparatus and storage medium of video
CN111866434A (en) * 2020-06-22 2020-10-30 阿里巴巴(中国)有限公司 Video co-shooting method, video editing device and electronic equipment
CN112511853A (en) * 2020-11-26 2021-03-16 北京乐学帮网络技术有限公司 Video processing method and device, electronic equipment and storage medium
CN112714337A (en) * 2020-12-22 2021-04-27 北京百度网讯科技有限公司 Video processing method and device, electronic equipment and storage medium
CN113033242A (en) * 2019-12-09 2021-06-25 上海幻电信息科技有限公司 Action recognition method and system
CN113766168A (en) * 2021-05-31 2021-12-07 腾讯科技(深圳)有限公司 Interactive processing method, device, terminal and medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115442519A (en) * 2022-08-08 2022-12-06 珠海普罗米修斯视觉技术有限公司 Video processing method, device and computer readable storage medium
CN115442519B (en) * 2022-08-08 2023-12-15 珠海普罗米修斯视觉技术有限公司 Video processing method, apparatus and computer readable storage medium

Also Published As

Publication number Publication date
CN114401368B (en) 2024-05-03

Similar Documents

Publication Publication Date Title
CN111698390B (en) Virtual camera control method and device, and virtual studio implementation method and system
JP7135141B2 (en) Information processing system, information processing method, and information processing program
Matsuyama et al. 3D video and its applications
US20180182114A1 (en) Generation apparatus of virtual viewpoint image, generation method, and storage medium
EP3374992A1 (en) Device and method for creating videoclips from omnidirectional video
CN114302214B (en) Virtual reality equipment and anti-jitter screen recording method
WO1995007590A1 (en) Time-varying image processor and display device
WO2018196658A1 (en) Virtual reality media file generation method and device, storage medium, and electronic device
CN110266983A (en) A kind of image processing method, equipment and storage medium
CN105324994A (en) Method and system for generating multi-projection images
CN103813094A (en) Electronic device and related method capable of capturing images, and machine readable storage medium
JP2019512177A (en) Device and related method
CN114401368B (en) Processing method and device for simultaneous video
KR101843025B1 (en) System and Method for Video Editing Based on Camera Movement
WO2023236656A1 (en) Method and apparatus for rendering interactive picture, and device, storage medium and program product
Foote et al. One-man-band: A touch screen interface for producing live multi-camera sports broadcasts
CN109872400B (en) Panoramic virtual reality scene generation method
WO2020017354A1 (en) Information processing device, information processing method, and program
KR20180021623A (en) System and method for providing virtual reality content
CN111064947A (en) Panoramic-based video fusion method, system, device and storage medium
JP4395082B2 (en) Video generation apparatus and program
JP2023057124A (en) Image processing apparatus, method, and program
JP5457668B2 (en) Video display method and video system
US20150375109A1 (en) Method of Integrating Ad Hoc Camera Networks in Interactive Mesh Systems
WO2013137840A1 (en) Method for recording small interactive video scenes and device for implementing same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240328

Address after: Room 5801, Building 5, Times Future City, Cangqian Street, Yuhang District, Hangzhou City, Zhejiang Province, 311113

Applicant after: Hangzhou Sports Co.,Ltd.

Country or region after: China

Applicant after: BEIJING CALORIE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 501, building 10-2, 94 Dongsishitiao, Dongcheng District, Beijing

Applicant before: BEIJING CALORIE INFORMATION TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant