CN114302071A - Video processing method and device, storage medium and electronic equipment - Google Patents
Abstract
The application discloses a video processing method and apparatus, a storage medium, and an electronic device. The method includes: acquiring a panoramic video in which each frame of panoramic image contains at least one identical subject; determining a target subject from those subjects; determining the position of the target subject of each frame of panoramic image in a corresponding plane area to obtain a plurality of positions; dividing the plane area into a plurality of sub-plane areas according to the plurality of positions; for each frame of panoramic image, converting the image data of the area corresponding to the sub-plane area where the target subject is located into image data of the plane image corresponding to that sub-plane area, obtaining a plurality of pieces of image data for each sub-plane area; fusing the pieces of image data corresponding to different sub-plane areas to obtain multiple frames of target plane images; and determining a target panoramic video according to the multiple frames of target plane images. The method and apparatus lower the threshold for producing a video with the special effect of the same subject appearing as multiple clones in the same scene.
Description
Technical Field
The present application belongs to the field of electronic technologies, and in particular, to a video processing method, an apparatus, a storage medium, and an electronic device.
Background
With the rapid development of self-media, user demand for producing various fun special-effect videos is growing — for example, a clone effect in which the same shooting subject appears simultaneously in the same scene in several identical or different postures. However, in the related art, the threshold for producing a video that realizes such a special effect of the same subject appearing as multiple clones in the same scene is high.
Disclosure of Invention
The embodiments of the application provide a video processing method, a video processing apparatus, a storage medium, and an electronic device, which can lower the threshold for producing a video in which the same subject appears as multiple clones in the same scene.
In a first aspect, an embodiment of the present application provides a video processing method, including:
acquiring a panoramic video, wherein the panoramic video comprises multiple frames of panoramic images, and each frame of panoramic image comprises at least one identical subject;
determining a target subject from the at least one identical subject;
determining the position of the target subject in each frame of panoramic image in a corresponding plane area to obtain a plurality of positions, wherein the plane area is the same size as the plane image into which the panoramic image is converted;
dividing the plane area into a plurality of sub-plane areas according to the plurality of positions, wherein the difference between the numbers of positions corresponding to the sub-plane areas is smaller than a preset difference;
for each frame of panoramic image, converting the image data of the area corresponding to the sub-plane area where the target subject is located into image data of the plane image corresponding to that sub-plane area, to obtain a plurality of pieces of image data corresponding to each sub-plane area;
fusing the pieces of image data corresponding to different sub-plane areas to obtain multiple frames of target plane images; and
determining a target panoramic video according to the multiple frames of target plane images, wherein each frame of target panoramic image in the target panoramic video comprises a plurality of target subjects.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
a video acquisition module, configured to acquire a panoramic video, wherein the panoramic video comprises multiple frames of panoramic images, and each frame of panoramic image comprises at least one identical subject;
a subject determining module, configured to determine a target subject from the at least one identical subject;
a position determining module, configured to determine the position of the target subject in each frame of panoramic image in a corresponding plane area to obtain a plurality of positions, wherein the plane area is the same size as the plane image into which the panoramic image is converted;
an area dividing module, configured to divide the plane area into a plurality of sub-plane areas according to the plurality of positions, wherein the difference between the numbers of positions corresponding to the sub-plane areas is smaller than a preset difference;
a data conversion module, configured to, for each frame of panoramic image, convert the image data of the area corresponding to the sub-plane area where the target subject is located into image data of the plane image corresponding to that sub-plane area, to obtain a plurality of pieces of image data corresponding to each sub-plane area;
a data fusion module, configured to fuse the pieces of image data corresponding to different sub-plane areas to obtain multiple frames of target plane images; and
a video determining module, configured to determine a target panoramic video according to the multiple frames of target plane images, wherein each frame of target panoramic image in the target panoramic video comprises a plurality of target subjects.
In a third aspect, an embodiment of the present application provides a storage medium on which a computer program is stored, wherein, when the computer program is executed on a computer, the computer is caused to perform the video processing method provided by the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the video processing method provided in the embodiment of the present application by calling a computer program stored in the memory.
In the embodiments of the application, a panoramic video can be automatically processed into a corresponding target panoramic video in which each frame of target panoramic image comprises a plurality of target subjects, which lowers the threshold for producing a video with the special effect of the same subject appearing as multiple clones in the same scene.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a first flowchart of a video processing method according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a second video processing method according to an embodiment of the present application.
Fig. 3 to fig. 7 are schematic scene diagrams of a video processing method according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
An embodiment of the present application provides a video processing method, a video processing apparatus, a storage medium, and an electronic device. The execution entity of the video processing method may be the video processing apparatus provided in the embodiment of the present application, or an electronic device integrating the video processing apparatus, where the video processing apparatus may be implemented in hardware or software. The electronic device may be a device with data processing capability and a processor, such as a smartphone, a tablet computer, a palmtop computer, or a notebook computer.
Referring to fig. 1, fig. 1 is a first schematic flow chart of a video processing method according to an embodiment of the present application, where the flow chart may include:
101. Acquire a panoramic video.
A panoramic video is a video captured omnidirectionally (360 degrees) with a 3D camera device; when watching it, the user can freely adjust the view up, down, left, and right.
For example, the electronic device may obtain the panoramic video over a network or in another manner (e.g., shot by the owner of the electronic device). The panoramic video comprises multiple frames of panoramic images, and each frame of panoramic image comprises at least one identical subject. The positions of the same subject in different panoramic images may be the same or different. That is, assuming the panoramic video includes panoramic images M1 to M5, if a subject A1 (e.g., a person) exists in panoramic image M1, subject A1 also exists in panoramic images M2 to M5.
102. A target subject is determined from at least one identical subject.
For example, assuming that only one identical subject is included in the multi-frame panoramic image, the electronic device may automatically determine the subject as the target subject.
For another example, if the multi-frame panoramic images include a plurality of identical subjects, the electronic device may determine the target subject from them according to a preset subject determination policy. For example, the policy may preferentially determine a human subject as the target subject; then, when the identical subjects included in the multi-frame panoramic images include a human and an animal, the human is automatically determined as the target subject.
For another example, assuming that a plurality of identical subjects are included in the multi-frame panoramic image, the electronic device may provide a selection interface for the user to select a target subject from the plurality of identical subjects. The electronic device may take a subject indicated by a trigger operation as a target subject by receiving the trigger operation for the selection interface.
It should be noted that the above are only some examples of determining the target subject, and are not intended to limit the present application.
103. Determine the position of the target subject in each frame of panoramic image in the corresponding plane area to obtain a plurality of positions.
It will be appreciated that the panoramic image may be converted into a plane image, such as a 2:1 latitude-longitude (equirectangular) image. A 2:1 latitude-longitude image has its coordinate origin at the lower-left corner, and its width is twice its height — for example, 1920 pixels × 960 pixels, where the longitude ranges over 0–1920 pixels and the latitude over 0–960 pixels.
Wherein the planar area is the same size as the planar image into which the panoramic image is converted. That is, it is assumed that the size of a plane image into which an arbitrary panoramic image is converted is 1920 pixels × 960 pixels, and the size of a plane area is also 1920 pixels × 960 pixels.
In the present embodiment, assuming that the panoramic video includes panoramic images M1 to M5, the electronic device may determine, for each of panoramic images M1 to M5, the position of the target subject in that image in the plane area, resulting in a plurality of positions.
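The mapping from a viewing direction on the panoramic sphere to a position in the 2:1 latitude-longitude plane area is not spelled out in the text; as a minimal sketch assuming the convention described above (origin at the lower-left corner, 1920 × 960 pixels, longitude along the width, latitude along the height), it could look like:

```python
def direction_to_plane(yaw_deg, pitch_deg, width=1920, height=960):
    """Map a viewing direction on the sphere (yaw in [-180, 180),
    pitch in [-90, 90]) to a pixel position in a 2:1 equirectangular
    plane area with the origin at the lower-left corner."""
    x = (yaw_deg + 180.0) / 360.0 * width    # longitude spans the width
    y = (pitch_deg + 90.0) / 180.0 * height  # latitude spans the height
    return x, y
```

Under these assumptions, a subject straight ahead of the camera (yaw 0, pitch 0) lands at the center of the plane area, (960, 480).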
104. The planar area is divided into a plurality of sub-planar areas according to the plurality of positions.
The difference between the numbers of positions corresponding to the sub-plane areas is smaller than a preset difference. The preset difference may be set according to actual conditions and is not specifically limited here; for example, it may be 1, 2, or 3.
For example, assuming the preset difference is 2 and the electronic device obtains 10 positions, the plane area may be divided into 5 sub-plane areas, each containing 2 positions; or into 3 sub-plane areas, where two sub-plane areas each contain 3 positions and the remaining sub-plane area contains 4 positions.
It should be noted that, in the embodiments of the present application, the number of sub-plane areas may be set according to practical situations and is not specifically limited here. For example, the number of sub-plane areas may be reduced to synthesize as many images containing multiple identical subjects as possible, or increased to include as many identical subjects as possible in each synthesized image.
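The patent leaves the division rule open; one illustrative sketch that keeps the per-area position counts balanced (here, differing by at most 1) is to sort the positions horizontally and cut the plane into vertical strips midway between near-equal groups:

```python
def divide_plane(positions, num_regions, width=1920):
    """Split the plane area into vertical sub-plane strips so the number
    of target-subject positions per strip differs by at most 1 (one
    hedged reading of the 'preset difference' constraint)."""
    xs = sorted(p[0] for p in positions)
    n = len(xs)
    base, extra = divmod(n, num_regions)      # near-equal group sizes
    bounds, i = [0.0], 0
    for r in range(num_regions - 1):
        i += base + (1 if r < extra else 0)
        bounds.append((xs[i - 1] + xs[i]) / 2.0)  # cut midway between groups
    bounds.append(float(width))
    return [(bounds[j], bounds[j + 1]) for j in range(num_regions)]
```

For 10 positions and 5 strips this yields 2 positions per strip; the strip boundaries themselves are an illustration, not the patent's method.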
105. For each frame of panoramic image, convert the image data of the area corresponding to the sub-plane area where the target subject is located into image data of the plane image corresponding to that sub-plane area, to obtain a plurality of pieces of image data corresponding to each sub-plane area.
For example, assume that the panoramic video includes panoramic images M1 to M4, the plane area includes sub-plane areas a1 and a2, the target subject in panoramic images M1 and M2 is in sub-plane area a1, and the target subject in panoramic images M3 and M4 is in sub-plane area a2. Then the image data of the area of panoramic image M1 corresponding to sub-plane area a1 is converted into image data of the corresponding plane image, yielding one piece of image data for sub-plane area a1, and likewise for panoramic image M2, yielding a second piece of image data for sub-plane area a1. Similarly, the image data of the areas of panoramic images M3 and M4 corresponding to sub-plane area a2 are converted, yielding two pieces of image data for sub-plane area a2.
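As a hypothetical NumPy sketch of this step — assuming each panoramic frame has already been converted to its 2:1 plane image (the projection itself depends on the camera and is omitted) and modeling a sub-plane area as a vertical strip — the image data corresponding to a sub-plane area is simply the pixel columns falling inside the strip:

```python
import numpy as np

def extract_strip(pano_plane, strip):
    """Take the plane-image pixels that fall inside one sub-plane strip.
    pano_plane: HxWx3 array, a panoramic frame already converted to its
    2:1 plane image. strip: (x_left, x_right) in pixels."""
    x0, x1 = int(strip[0]), int(strip[1])
    return pano_plane[:, x0:x1].copy()
```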
106. Fuse the pieces of image data corresponding to different sub-plane areas to obtain multiple frames of target plane images.
For example, assume that the plane area includes sub-plane areas a1 and a2, the image data corresponding to sub-plane area a1 includes image data D1 and D2, and the image data corresponding to sub-plane area a2 includes image data D3 and D4, where D1 is converted from the area of panoramic image M1 corresponding to sub-plane area a1, D2 from the area of panoramic image M2 corresponding to sub-plane area a1, D3 from the area of panoramic image M3 corresponding to sub-plane area a2, and D4 from the area of panoramic image M4 corresponding to sub-plane area a2. The electronic device may fuse image data D1 and D3 into one frame of target plane image and D2 and D4 into another, obtaining two frames of target plane images; alternatively, it may fuse D1 with D4 and D2 with D3, likewise obtaining two frames of target plane images. It will be appreciated that each of these target plane images includes two target subjects.
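Continuing that sketch, the fusion step can be illustrated by pasting one piece of strip data per sub-plane area into a single target plane image (a simplification: a real implementation would also blend the seams between strips):

```python
import numpy as np

def fuse_strips(strip_images, strips, height=960, width=1920):
    """Paste one piece of image data per sub-plane strip into a single
    target plane image, so every target subject appears once."""
    out = np.zeros((height, width, 3), dtype=np.uint8)
    for img, (x0, x1) in zip(strip_images, strips):
        out[:, int(x0):int(x1)] = img
    return out
```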
107. Determine a target panoramic video according to the multiple frames of target plane images, wherein each frame of target panoramic image in the target panoramic video comprises a plurality of target subjects.
For example, after obtaining multiple frames of target plane images, the electronic device may determine the target panoramic video according to the multiple frames of target plane images. It is understood that each frame of the target panoramic image includes a plurality of target subjects.
In this embodiment, the panoramic video can be automatically processed into a corresponding target panoramic video in which each frame of target panoramic image includes a plurality of target subjects, so the threshold for producing a video with the special effect of the same subject appearing as multiple clones in the same scene is low.
In some embodiments, after determining the position (coordinate) of the target subject of each frame of panoramic image in the corresponding plane area and obtaining a plurality of positions, the method may further include:
determining any one of the coordinates whose mutual distances are smaller than a preset distance as a first target coordinate;
determining the coordinates, among the plurality of coordinates, other than those whose mutual distances are smaller than the preset distance as second target coordinates;
dividing the plane area into a plurality of sub-plane areas according to the plurality of positions may include:
dividing the plane area into a plurality of sub-plane areas according to the first target coordinate and the second target coordinate;
for each frame of panoramic image, converting image data of an area corresponding to a sub-plane area where a target subject is located into image data corresponding to a plane image corresponding to the sub-plane area, to obtain a plurality of image data corresponding to each sub-plane area, which may include:
and for each frame of panoramic image among the panoramic images corresponding to the first target coordinates and the second target coordinates, converting the image data of the area corresponding to the sub-plane area where the target subject is located into image data of the plane image corresponding to that sub-plane area, to obtain a plurality of pieces of image data corresponding to each sub-plane area.
When the positions of the target subject in different panoramic images are close to each other or identical, the fused images may overlap or fail to blend cleanly. Therefore, in this embodiment, after determining the coordinates of the target subject of each frame of panoramic image in the plane area to obtain a plurality of coordinates, the electronic device may further determine any one of the coordinates whose mutual distances are smaller than a preset distance as a first target coordinate, determine the remaining coordinates as second target coordinates, and divide the plane area into a plurality of sub-plane areas according to the first target coordinates and the second target coordinates. The preset distance is set according to actual conditions and is not specifically limited here.
For example, assuming the electronic device obtains 10 coordinates C1 to C10, where C1, C2, and C3 are mutually within the preset distance, C4 and C5 are within the preset distance, and C8 and C9 are within the preset distance, then one of C1–C3 (say C1), one of C4–C5 (say C5), and one of C8–C9 (say C9) are determined as first target coordinates, and coordinates C6, C7, and C10 are determined as second target coordinates.
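The grouping rule behind the first and second target coordinates is not fixed by the text; one simple greedy sketch, which collapses any cluster of coordinates closer than the preset distance into a single representative, is:

```python
import math

def pick_target_coords(coords, preset_distance):
    """Greedy merge: coordinates closer than preset_distance collapse
    into one representative (a 'first target coordinate'); all isolated
    coordinates are kept as 'second target coordinates'."""
    first, second, used = [], [], [False] * len(coords)
    for i, c in enumerate(coords):
        if used[i]:
            continue
        group = [i]
        for j in range(i + 1, len(coords)):
            if not used[j] and math.dist(c, coords[j]) < preset_distance:
                used[j] = True
                group.append(j)
        (first if len(group) > 1 else second).append(c)
        used[i] = True
    return first, second
```

This is one reading among several (the patent does not say whether merging chains through intermediate coordinates); any rule that keeps one representative per cluster would satisfy the description.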
In some embodiments, dividing the planar area into a plurality of sub-planar areas according to the first target coordinates and the second target coordinates may include:
determining the target number of the sub-plane areas, the position of each sub-plane area and the area size according to the first target coordinate and the second target coordinate;
and dividing the plane area into a target number of sub-plane areas according to the position and the area size of each sub-plane area.
For example, the electronic device may determine the target number of sub-plane areas, the position of each sub-plane area, and the area size according to the first target coordinates and the second target coordinates, and then divide the plane area into the target number of sub-plane areas according to the position and area size of each sub-plane area.
In some embodiments, determining the position of the target subject in the corresponding planar region in each frame of the panoramic image may include:
acquiring tracking data that keeps the target subject at the center of the screen, to obtain tracking data corresponding to each frame of panoramic image;
and determining the position of the target subject in the corresponding plane area in each frame of panoramic image according to the tracking data corresponding to each frame of panoramic image.
Because the motion of the target subject within one panoramic video is continuous, the target subject can be tracked throughout, producing tracking data that keeps it at the center of the screen for each frame of panoramic image; the position of the target subject of each frame in the corresponding plane area can then be determined from that tracking data. Compared with directly recognizing the target subject in every frame and converting its position into the plane area, this makes the position determination more accurate and avoids excessive memory usage.
In some embodiments, determining the target subject from at least one of the same subjects may include:
in response to a target subject selection operation, determining the subject selected by the operation among the at least one identical subject as the target subject.
For example, assuming that a plurality of identical subjects are included in the multi-frame panoramic image, the electronic device may provide a selection interface for a user to select a target subject from the plurality of identical subjects. The electronic device may take a subject indicated by a trigger operation as a target subject by receiving the trigger operation for the selection interface.
In some embodiments, determining the target panoramic video from the plurality of frames of target planar images includes:
converting each frame of target plane image into a target panoramic image to obtain a plurality of frames of target panoramic images;
and determining a target panoramic video according to the multi-frame target panoramic image.
For example, after obtaining multiple frames of target planar images, the electronic device may perform conversion processing on each frame of target planar image to obtain multiple frames of target panoramic images. Subsequently, the electronic device may generate a target panoramic video from the multi-frame target panoramic image.
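The plane-to-panorama conversion is projection-specific; as a sketch under the same assumed 2:1 latitude-longitude convention, the inverse mapping takes a pixel of a target plane image back to a viewing direction, from which the target panoramic image can be resampled:

```python
def plane_to_direction(x, y, width=1920, height=960):
    """Inverse of the 2:1 latitude-longitude mapping: a pixel position in
    the plane image back to a (yaw, pitch) viewing direction in degrees."""
    yaw = x / width * 360.0 - 180.0
    pitch = y / height * 180.0 - 90.0
    return yaw, pitch
```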
In some embodiments, after determining the target panoramic video according to the multiple frames of target plane images, the method further includes:
in response to a play operation directed at the target panoramic video, playing the target panoramic video.
For example, after determining the target panoramic video, the electronic device may display a play control. When the triggering operation of the playing control is received, the electronic equipment can determine that the playing operation of the target panoramic video is received. In response to a play operation directed to the target panoramic video, the electronic device may play the target panoramic video such that the user may view the target panoramic video.
It should be noted that the above is only an example of receiving the playing operation for the target panoramic video, and is not intended to limit the present application, and in practical applications, the playing operation for the target panoramic video may also be received by other manners, and is not specifically limited herein.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a second video processing method according to an embodiment of the present application, where the process may include:
201. Acquire a panoramic video.
A panoramic video is a video captured omnidirectionally (360 degrees) with a 3D camera device; when watching it, the user can freely adjust the view up, down, left, and right.
In this embodiment, the panoramic video includes a plurality of frames of panoramic images, and each frame of panoramic image includes at least one same subject. Here, the positions of the same subject in different panoramic images may be the same or different.
For example, a user may obtain a section of panoramic video by shooting with a 3D camera, or download a section of panoramic video from a network, so that the electronic device obtains the panoramic video.
202. In response to a target subject selection operation, determine the subject selected by the operation among the at least one identical subject as the target subject.
For example, after acquiring the panoramic video, the electronic device may perform target detection on each frame of panoramic image in the panoramic video to determine the subjects in each frame. The electronic device may then compare the subjects across frames to determine which subjects appear in every frame of panoramic image: a subject that appears in every frame is a subject common to all frames. Through this process, the electronic device can determine that each frame of panoramic image includes at least one identical subject.
For example, the electronic device may provide a selection interface for the user to select a target subject from the at least one identical subject. When the electronic device receives a trigger operation on the selection interface, it receives the target subject selection operation, and in response, it may take the subject indicated by that operation as the target subject.
It is to be appreciated that when each frame of the panoramic image includes only one identical subject, the electronic device may automatically determine the subject as the target subject.
In some embodiments, the electronic device may play the panoramic video under the user's operation, and during playback the user may perform a selection operation at any time to select a subject. If the selected subject is not included in every frame of panoramic image, the electronic device may prompt the user to reselect; if it is, the electronic device determines the selected subject as the target subject.
203. Acquire tracking data that keeps the target subject at the center of the screen, obtaining tracking data corresponding to each frame of panoramic image.
For example, after the target subject is determined, the electronic device may track the target subject to obtain tracking data that keeps it at the center of the screen of the electronic device, resulting in tracking data corresponding to each frame of panoramic image. The tracking data may include a rotation amount.
204. Determine the coordinates of the target subject in each frame of panoramic image in the corresponding plane area according to the tracking data corresponding to each frame of panoramic image, obtaining a plurality of coordinates.
It will be appreciated that the panoramic image may be converted into a plane image, such as a 2:1 latitude-longitude (equirectangular) image. A 2:1 latitude-longitude image has its coordinate origin at the lower-left corner, and its width is twice its height — for example, 1920 pixels × 960 pixels, where the longitude ranges over 0–1920 pixels and the latitude over 0–960 pixels.
Wherein the planar area is the same size as the planar image into which the panoramic image is converted. That is, it is assumed that the size of a plane image into which an arbitrary panoramic image is converted is 1920 pixels × 960 pixels, and the size of a plane area is also 1920 pixels × 960 pixels.
For example, after obtaining the tracking data corresponding to each frame of panoramic image, the electronic device may perform trigonometric calculations to convert the position of the target subject in each frame of panoramic image into coordinates in the corresponding planar area, obtaining a plurality of coordinates. Each coordinate represents the position of the target subject, in a different panoramic image, within the planar area.
For example, assuming the panoramic video includes panoramic images M1 to M12, the electronic device may determine, from the tracking data corresponding to each panoramic image Mi (i = 1 to 12), the coordinate Ci of the target subject in that image within the planar area, thereby obtaining coordinates C1 to C12.
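The conversion of step 204 can be sketched as follows. The patent does not specify the form of the tracking data, so the assumption here is that it reduces to a yaw/pitch rotation in degrees; the 1920 × 960 size and lower-left origin match the latitude-longitude example above.

```python
# Sketch: map assumed yaw/pitch tracking angles (degrees) to pixel
# coordinates in a 2:1 latitude-longitude plane area.

WIDTH, HEIGHT = 1920, 960  # 2:1 latitude-longitude image

def angles_to_plane(yaw_deg, pitch_deg, width=WIDTH, height=HEIGHT):
    """Map yaw in [-180, 180) and pitch in [-90, 90] to (x, y) pixels,
    with the coordinate origin at the lower-left corner."""
    x = (yaw_deg + 180.0) / 360.0 * width    # longitude -> 0..1920
    y = (pitch_deg + 90.0) / 180.0 * height  # latitude  -> 0..960
    return x, y
```

With this mapping, a subject straight ahead (yaw 0, pitch 0) lands at the center of the plane area, (960, 480).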
205. Among the plurality of coordinates, determine any one coordinate of a group whose mutual distances are smaller than a preset distance as a first target coordinate.
206. Determine the coordinates, among the plurality of coordinates, other than those whose mutual distances are smaller than the preset distance as second target coordinates.
The preset distance may be set according to actual requirements, and is not limited herein.
For example, assuming the coordinates C1 to C12 obtained by the electronic device are as shown in fig. 3, where the distance between C1 and C2 is smaller than the preset distance and the mutual distances among C3, C4, and C5 are each smaller than the preset distance, the electronic device may determine C1 or C2 as a first target coordinate, determine C3, C4, or C5 as another first target coordinate, and determine C6 to C12 as second target coordinates.
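Steps 205-206 can be sketched as a simple clustering pass: within each group of coordinates closer than the preset distance, keep one representative as a first target coordinate; coordinates with no close neighbour become second target coordinates. The greedy grouping below is an illustrative assumption, not the patent's exact rule.

```python
import math

def select_target_coords(coords, preset_distance):
    """Return (first_target_coords, second_target_coords) per steps 205-206.

    Greedy sketch: each unvisited coordinate collects later neighbours
    within preset_distance; a multi-member group contributes one
    representative (first target), a singleton is a second target.
    """
    first, second = [], []
    used = [False] * len(coords)
    for i, c in enumerate(coords):
        if used[i]:
            continue
        cluster = [i] + [j for j in range(i + 1, len(coords))
                         if not used[j] and math.dist(c, coords[j]) < preset_distance]
        for j in cluster:
            used[j] = True
        (first if len(cluster) > 1 else second).append(c)
    return first, second
```

On the example above, the C1/C2 pair and the C3/C4/C5 group would each yield one first target coordinate, and the remaining coordinates would all become second target coordinates.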
207. Determine the target number of sub-plane areas, and the position and size of each sub-plane area, according to the first target coordinates and the second target coordinates.
208. Divide the plane area into the target number of sub-plane areas according to the position and size of each sub-plane area.
The difference between the numbers of coordinates corresponding to any two sub-plane areas is smaller than a preset difference, which may be set according to actual requirements and is not specifically limited herein.
For example, as shown in fig. 4, assuming the first and second target coordinates include coordinates C1 and C5 to C12 and the preset difference is 1, the electronic device may divide plane area A into areas A1, A2, and A3 as shown in fig. 5, where area A1 contains coordinates C1, C5, and C6, area A2 contains coordinates C7, C8, and C9, and area A3 contains coordinates C10, C11, and C12.
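One way to realize steps 207-208 is sketched below: sort the coordinates by horizontal position, split them into groups of nearly equal size, and place each sub-plane area boundary midway between neighbouring groups. Splitting only along the x-axis of the 1920-pixel-wide plane is an assumption for illustration; the patent does not fix the partitioning geometry.

```python
def divide_plane(coords, num_areas, width=1920):
    """Split coords into num_areas balanced groups (counts differ by less
    than num_areas, satisfying a preset difference of that size) and
    return the groups plus the (x_left, x_right) bounds of each area."""
    pts = sorted(coords)                      # order by x coordinate
    per = len(pts) // num_areas
    groups = [pts[i * per:(i + 1) * per] for i in range(num_areas)]
    groups[-1].extend(pts[num_areas * per:])  # remainder goes to last area
    bounds, left = [], 0.0
    for g_cur, g_next in zip(groups, groups[1:]):
        right = (g_cur[-1][0] + g_next[0][0]) / 2.0  # midpoint between groups
        bounds.append((left, right))
        left = right
    bounds.append((left, float(width)))
    return groups, bounds
```

For nine coordinates and three areas, as in the C1, C5-C12 example, each sub-plane area receives exactly three coordinates, so the count difference is 0, smaller than the preset difference of 1.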
209. For each frame of panoramic image corresponding to a first target coordinate or a second target coordinate, convert the image data of the area corresponding to the sub-plane area where the target subject is located into the image data corresponding to the plane image corresponding to that sub-plane area, obtaining a plurality of image data corresponding to each sub-plane area.
For example, if the coordinate C1 of the target subject in panoramic image M1 within the planar area, determined from the tracking data corresponding to M1, is a first target coordinate, then the panoramic image corresponding to that first target coordinate is M1; likewise, if the coordinate C9 determined from the tracking data corresponding to panoramic image M9 is a second target coordinate, then the panoramic image corresponding to that second target coordinate is M9.
Following the above, assuming the first target coordinates obtained by the electronic device are C1 and C5 and the second target coordinates are C6 to C12, the panoramic images corresponding to coordinates C1 and C5 to C12 are panoramic images M1 and M5 to M12, respectively. With continued reference to fig. 4, the sub-plane area where the target subject is located is A1 for panoramic images M1, M5, and M6, A2 for panoramic images M7 to M9, and A3 for panoramic images M10 to M12. The electronic device may therefore convert the image data of the area corresponding to sub-plane area A1 in panoramic images M1, M5, and M6 into image data D1, D5, and D6 of the corresponding plane images, convert the image data of the area corresponding to sub-plane area A2 in panoramic images M7 to M9 into image data D7 to D9, and convert the image data of the area corresponding to sub-plane area A3 in panoramic images M10 to M12 into image data D10 to D12. Thus, image data D1, D5, and D6 corresponding to sub-plane area A1, image data D7 to D9 corresponding to sub-plane area A2, and image data D10 to D12 corresponding to sub-plane area A3 are obtained.
Specifically, take image data D1 as an example. Assuming panoramic image M1 can be converted into planar image M111, the plane image corresponding to M1 is M111. Assuming sub-plane area A1 lies at the left side of the plane area, the area corresponding to A1 in planar image M111 is likewise at the left side of M111, with the same position and size in M111 as A1 has in plane area A; image data D1 is the image data forming that area. The other image data are obtained analogously and are not described again here.
210. Fuse the corresponding image data of different sub-plane areas to obtain multiple frames of target plane images.
For example, assuming the electronic device obtains image data D1, D5, and D6 corresponding to sub-plane area A1, D7 to D9 corresponding to A2, and D10 to D12 corresponding to A3, then, as shown in fig. 5, the electronic device may fuse D1, D7, and D10 into one frame of target plane image M1710; as shown in fig. 6, fuse D5, D8, and D11 into target plane image M5811; and as shown in fig. 7, fuse D6, D9, and D12 into target plane image M6912, thereby obtaining 3 frames of target plane images. Referring to fig. 5 to 7, each of the target plane images M1710, M5811, and M6912 includes 3 target subjects P.
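The fusion of step 210 can be sketched as strip compositing: for each output frame, every sub-plane area contributes its own column range of pixels taken from its own source frame. Representing frames as plain row-lists of pixel values and assuming vertical-strip areas are simplifications for illustration.

```python
def fuse_frames(frames_per_area, x_bounds, height=960, width=1920):
    """frames_per_area: one equal-length frame list per sub-plane area,
    each frame a height x width grid (list of rows) of pixel values;
    x_bounds: the (x0, x1) column range each area contributes."""
    n = len(frames_per_area[0])  # frames per area, e.g. 3 in the example
    fused = []
    for t in range(n):
        out = [[0] * width for _ in range(height)]
        for area_frames, (x0, x1) in zip(frames_per_area, x_bounds):
            src = area_frames[t]
            for y in range(height):
                out[y][x0:x1] = src[y][x0:x1]  # copy this area's strip
        fused.append(out)
    return fused
```

Fusing D1, D7, and D10 this way yields one frame in which the target subject appears once per sub-plane area, producing the multiple-clone effect.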
211. Convert each frame of target plane image into a target panoramic image to obtain multiple frames of target panoramic images.
For example, assuming the electronic device obtains the target plane images M1710, M5811, and M6912, it may convert them into target panoramic images M17100, M58110, and M69120, respectively, thereby obtaining 3 frames of target panoramic images.
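The plane-to-panorama conversion of step 211 begins by mapping each pixel of the latitude-longitude plane image back to a direction on the unit sphere. The patent does not specify its projection, so the standard equirectangular mapping below is an assumption.

```python
import math

def plane_pixel_to_sphere(x, y, width=1920, height=960):
    """Map an (x, y) pixel of a 2:1 latitude-longitude plane image
    (origin at the lower-left corner) to a unit-sphere direction."""
    lon = (x / width) * 2.0 * math.pi - math.pi   # longitude: -pi .. pi
    lat = (y / height) * math.pi - math.pi / 2.0  # latitude:  -pi/2 .. pi/2
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))
```

Resampling the plane image along these directions for the target view yields the target panoramic image.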
212. Determine the target panoramic video according to the multiple frames of target panoramic images, where each frame of target panoramic image in the target panoramic video includes a plurality of target subjects.
For example, assuming that the electronic device obtains the target panoramic images M17100, M58110, and M69120, the electronic device may generate the target panoramic video from the target panoramic images M17100, M58110, and M69120. It is understood that the target panoramic images M17100, M58110, and M69120 each include a plurality of target subjects.
213. Play the target panoramic video in response to a playing operation directed at the target panoramic video.
For example, after determining the target panoramic video, the electronic device may display a play control. When a trigger operation for the play control is received, the electronic device may determine that a play operation for the target panoramic video is received. In response to a play operation directed to the target panoramic video, the electronic device may play the target panoramic video such that the user may view the target panoramic video.
It should be noted that the above is only one example of receiving the playing operation for the target panoramic video and is not intended to limit the present application; in practical applications, the playing operation may also be received in other manners, which are not specifically limited herein.
In the embodiments of the present application, "a plurality" means two or more.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure. The video processing apparatus 300 includes: a video acquisition module 301, a subject determination module 302, a position determination module 303, a region division module 304, a data conversion module 305, a data fusion module 306, and a video determination module 307.
The video obtaining module 301 is configured to obtain a panoramic video, where the panoramic video includes multiple frames of panoramic images, and each frame of panoramic image includes at least one same main body.
A subject determination module 302 for determining a target subject from at least one identical subject.
A position determining module 303, configured to determine the position of the target subject in each frame of the panoramic image within a corresponding planar area, obtaining a plurality of positions, where the planar area is the same size as the planar image into which the panoramic image is converted.
The area dividing module 304 is configured to divide the planar area into a plurality of sub-planar areas according to the plurality of positions, where a difference between the numbers of positions corresponding to each sub-planar area is smaller than a preset difference.
The data conversion module 305 is configured to, for each frame of panoramic image, convert image data of a region corresponding to a sub-plane region where a target subject is located into image data corresponding to a plane image corresponding to the sub-plane region, and obtain a plurality of image data corresponding to each sub-plane region.
And the data fusion module 306 is configured to fuse corresponding image data corresponding to different sub-plane regions to obtain a multi-frame target plane image.
The video determining module 307 is configured to determine a target panoramic video according to multiple frames of target planar images, where each frame of panoramic image in the target panoramic video includes multiple target subjects.
In some embodiments, the location comprises coordinates, and the location determining module 303 may be configured to: determining any coordinate of the plurality of coordinates with the distance smaller than the preset distance as a first target coordinate; determining coordinates of the plurality of coordinates except for coordinates with a distance smaller than a preset distance as second target coordinates;
a region division module 304, operable to: dividing a plane area into a plurality of sub-plane areas according to the first target coordinate and the second target coordinate;
a data conversion module 305 operable to: and for each frame of panoramic image in the panoramic images corresponding to the first target coordinate and the second target coordinate, converting the image data of the area corresponding to the sub-plane area where the target main body is located into the image data corresponding to the plane image corresponding to the sub-plane area, and obtaining a plurality of image data corresponding to each sub-plane area.
In some embodiments, the region dividing module 304 may be configured to: determining the target number of the sub-plane areas, the position of each sub-plane area and the area size according to the first target coordinate and the second target coordinate; and dividing the plane area into the sub-plane areas with the target number according to the position and the area size of each sub-plane area.
In some embodiments, the location determination module 303 may be configured to: acquiring tracking data enabling a target main body to be always positioned in the center of a screen to obtain tracking data corresponding to each frame of panoramic image; and determining the position of the target subject in the corresponding plane area in each frame of panoramic image according to the tracking data corresponding to each frame of panoramic image.
In some embodiments, the subject determination module 302 may be configured to: in response to a target subject selection operation, determine the subject selected by the selection operation, among the at least one same subject, as the target subject.
In some embodiments, the video determination module 307 may be configured to: convert each frame of target plane image into a target panoramic image to obtain multiple frames of target panoramic images; and determine the target panoramic video according to the multiple frames of target panoramic images.
In some embodiments, the video processing apparatus 300 may further include a video playing module, and the video playing module may be configured to: and responding to the playing operation aiming at the target panoramic video, and playing the target panoramic video.
It should be noted that the video processing apparatus 300 provided in the embodiment of the present application and the video processing method in the foregoing embodiment belong to the same concept, and specific implementation processes thereof are detailed in the foregoing related embodiments, and are not described herein again.
The embodiment of the present application provides a storage medium, on which a computer program is stored, and when the stored computer program is executed on a processor of an electronic device provided in the embodiment of the present application, the processor of the electronic device is caused to execute any of the steps in the video processing method suitable for the electronic device. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
Referring to fig. 9, an electronic device 400 includes a processor 401 and a memory 402. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The processor 401 in the embodiment of the present application may be a general-purpose processor, such as a processor of an ARM architecture.
The memory 402 stores a computer program. The memory 402 may be a high-speed random access memory, or a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402. The processor 401 is operable to execute any of the above video processing methods, such as:
acquiring a panoramic video, wherein the panoramic video comprises a plurality of frames of panoramic images, and each frame of panoramic image comprises at least one same main body;
determining a target subject from at least one identical subject;
determining the position of a target main body in each frame of panoramic image in a corresponding plane area to obtain a plurality of positions, wherein the size of the plane area is the same as that of the plane image converted from the panoramic image;
dividing the plane area into a plurality of sub-plane areas according to the plurality of positions, wherein the difference value of the number of the positions corresponding to each sub-plane area is smaller than a preset difference value;
for each frame of panoramic image, converting image data of an area corresponding to a sub-plane area where a target main body is located into image data corresponding to a plane image corresponding to the sub-plane area, and obtaining a plurality of image data corresponding to each sub-plane area;
fusing corresponding image data corresponding to different sub-plane areas to obtain a multi-frame target plane image;
determining a target panoramic video according to multiple frames of target plane images, wherein each frame of target panoramic image in the target panoramic video comprises multiple target subjects.
The foregoing detailed description has provided a video processing method, apparatus, storage medium, and electronic device, and the principles and embodiments of the present application have been described herein using specific examples, and the description of the foregoing examples is only provided to help understand the method and core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. A video processing method, comprising:
acquiring a panoramic video, wherein the panoramic video comprises a plurality of frames of panoramic images, and each frame of panoramic image comprises at least one same main body;
determining a target subject from at least one identical subject;
determining the position of a target main body in each frame of panoramic image in a corresponding plane area to obtain a plurality of positions, wherein the size of the plane area is the same as that of the plane image converted from the panoramic image;
dividing the plane area into a plurality of sub-plane areas according to the plurality of positions, wherein the difference value of the number of the positions corresponding to each sub-plane area is smaller than a preset difference value;
for each frame of panoramic image, converting image data of an area corresponding to a sub-plane area where a target main body is located into image data corresponding to a plane image corresponding to the sub-plane area, and obtaining a plurality of image data corresponding to each sub-plane area;
fusing corresponding image data corresponding to different sub-plane areas to obtain a multi-frame target plane image;
determining a target panoramic video according to multiple frames of target plane images, wherein each frame of target panoramic image in the target panoramic video comprises multiple target subjects.
2. The video processing method according to claim 1, wherein the position includes coordinates, and after determining the position of the target subject in the corresponding planar area in each frame of the panoramic image and obtaining a plurality of positions, the method further comprises:
determining any coordinate of the plurality of coordinates with the distance smaller than the preset distance as a first target coordinate;
determining coordinates of the plurality of coordinates except for coordinates with a distance smaller than a preset distance as second target coordinates;
said dividing said planar area into a plurality of sub-planar areas according to said plurality of positions, comprising:
dividing a plane area into a plurality of sub-plane areas according to the first target coordinate and the second target coordinate;
for each frame of panoramic image, converting image data of an area corresponding to a sub-plane area where a target main body is located into image data corresponding to a plane image corresponding to the sub-plane area, and obtaining a plurality of image data corresponding to each sub-plane area, wherein the image data comprises:
and for each frame of panoramic image in the panoramic images corresponding to the first target coordinate and the second target coordinate, converting the image data of the area corresponding to the sub-plane area where the target main body is located into the image data corresponding to the plane image corresponding to the sub-plane area, and obtaining a plurality of image data corresponding to each sub-plane area.
3. The video processing method according to claim 2, wherein said dividing a planar area into a plurality of sub-planar areas according to the first target coordinates and the second target coordinates comprises:
determining the target number of the sub-plane areas, the position of each sub-plane area and the area size according to the first target coordinate and the second target coordinate;
and dividing the plane area into the sub-plane areas with the target number according to the position and the area size of each sub-plane area.
4. The video processing method according to claim 1, wherein the determining the position of the target subject in the corresponding planar area in each frame of the panoramic image comprises:
acquiring tracking data enabling a target main body to be always positioned in the center of a screen to obtain tracking data corresponding to each frame of panoramic image;
and determining the position of the target subject in the corresponding plane area in each frame of panoramic image according to the tracking data corresponding to each frame of panoramic image.
5. The video processing method according to any one of claims 1 to 4, wherein said determining a target subject from at least one same subject comprises:
and responding to a target subject selection operation, and determining a subject selected by the selection operation in the at least one same subject as a target subject.
6. The video processing method according to any one of claims 1 to 4, wherein the determining a target panoramic video from the plurality of frames of target plane images comprises:
converting each frame of target plane image into a target panoramic image to obtain a plurality of frames of target panoramic images;
and determining a target panoramic video according to the multi-frame target panoramic image.
7. The video processing method according to any one of claims 1 to 4, wherein after determining the target panoramic video from the plurality of frames of target plane images, the method further comprises:
and responding to the playing operation aiming at the target panoramic video, and playing the target panoramic video.
8. A video processing apparatus, comprising:
the video acquisition module is used for acquiring a panoramic video, wherein the panoramic video comprises a plurality of frames of panoramic images, and each frame of panoramic image comprises at least one same main body;
the main body determining module is used for determining a target main body from at least one same main body;
the position determining module is used for determining the position of a target main body in each frame of panoramic image in a corresponding plane area to obtain a plurality of positions, and the size of the plane area is the same as that of a plane image converted from the panoramic image;
the area dividing module is used for dividing the plane area into a plurality of sub-plane areas according to the plurality of positions, and the difference value of the number of the positions corresponding to each sub-plane area is smaller than a preset difference value;
the data conversion module is used for, for each frame of panoramic image, converting the image data of the area corresponding to the sub-plane area where the target main body is located into the image data corresponding to the plane image corresponding to the sub-plane area, and obtaining a plurality of image data corresponding to each sub-plane area;
the data fusion module is used for fusing corresponding image data corresponding to different sub-plane areas to obtain a multi-frame target plane image;
the video determining module is used for determining a target panoramic video according to multiple frames of target plane images, wherein each frame of panoramic image in the target panoramic video comprises multiple target subjects.
9. A storage medium having stored therein a computer program which, when run on a computer, causes the computer to execute the video processing method of any one of claims 1 to 7.
10. An electronic device, characterized in that the electronic device comprises a processor and a memory, wherein a computer program is stored in the memory, and the processor is configured to execute the video processing method according to any one of claims 1 to 7 by calling the computer program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111626520.XA CN114302071B (en) | 2021-12-28 | 2021-12-28 | Video processing method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114302071A true CN114302071A (en) | 2022-04-08 |
CN114302071B CN114302071B (en) | 2024-02-20 |
Family
ID=80971606
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111626520.XA Active CN114302071B (en) | 2021-12-28 | 2021-12-28 | Video processing method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114302071B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104767911A (en) * | 2015-04-28 | 2015-07-08 | 腾讯科技(深圳)有限公司 | Method and device for processing image |
US20170295318A1 (en) * | 2014-03-04 | 2017-10-12 | Gopro, Inc. | Automatic generation of video from spherical content using audio/visual analysis |
CN111010511A (en) * | 2019-12-12 | 2020-04-14 | 维沃移动通信有限公司 | Panoramic body-separating image shooting method and electronic equipment |
CN111601033A (en) * | 2020-04-27 | 2020-08-28 | 北京小米松果电子有限公司 | Video processing method, device and storage medium |
CN111832538A (en) * | 2020-07-28 | 2020-10-27 | 北京小米松果电子有限公司 | Video processing method and device and storage medium |
CN112073748A (en) * | 2019-06-10 | 2020-12-11 | 北京字节跳动网络技术有限公司 | Panoramic video processing method and device and storage medium |
CN112188113A (en) * | 2019-07-01 | 2021-01-05 | 北京新唐思创教育科技有限公司 | Video decomposition method and device, and terminal |
CN112637490A (en) * | 2020-12-18 | 2021-04-09 | 咪咕文化科技有限公司 | Video production method and device, electronic equipment and storage medium |
CN113497973A (en) * | 2021-09-06 | 2021-10-12 | 北京市商汤科技开发有限公司 | Video processing method and device, computer readable storage medium and computer equipment |
WO2021238325A1 (en) * | 2020-05-29 | 2021-12-02 | 华为技术有限公司 | Image processing method and apparatus |
CN113841112A (en) * | 2020-08-06 | 2021-12-24 | 深圳市大疆创新科技有限公司 | Image processing method, camera and mobile terminal |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170295318A1 (en) * | 2014-03-04 | 2017-10-12 | Gopro, Inc. | Automatic generation of video from spherical content using audio/visual analysis |
CN104767911A (en) * | 2015-04-28 | 2015-07-08 | 腾讯科技(深圳)有限公司 | Method and device for processing image |
CN112073748A (en) * | 2019-06-10 | 2020-12-11 | 北京字节跳动网络技术有限公司 | Panoramic video processing method and device and storage medium |
CN112188113A (en) * | 2019-07-01 | 2021-01-05 | 北京新唐思创教育科技有限公司 | Video decomposition method and device, and terminal |
CN111010511A (en) * | 2019-12-12 | 2020-04-14 | 维沃移动通信有限公司 | Panoramic body-separating image shooting method and electronic equipment |
CN111601033A (en) * | 2020-04-27 | 2020-08-28 | 北京小米松果电子有限公司 | Video processing method, device and storage medium |
WO2021238325A1 (en) * | 2020-05-29 | 2021-12-02 | 华为技术有限公司 | Image processing method and apparatus |
CN113810587A (en) * | 2020-05-29 | 2021-12-17 | 华为技术有限公司 | Image processing method and device |
CN111832538A (en) * | 2020-07-28 | 2020-10-27 | 北京小米松果电子有限公司 | Video processing method and device and storage medium |
CN113841112A (en) * | 2020-08-06 | 2021-12-24 | 深圳市大疆创新科技有限公司 | Image processing method, camera and mobile terminal |
CN112637490A (en) * | 2020-12-18 | 2021-04-09 | 咪咕文化科技有限公司 | Video production method and device, electronic equipment and storage medium |
CN113497973A (en) * | 2021-09-06 | 2021-10-12 | 北京市商汤科技开发有限公司 | Video processing method and device, computer readable storage medium and computer equipment |
Also Published As
Publication number | Publication date |
---|---|
CN114302071B (en) | 2024-02-20 |
Similar Documents
Publication | Title |
---|---|
CN110300264B (en) | Image processing method, image processing device, mobile terminal and storage medium |
US10051180B1 (en) | Method and system for removing an obstructing object in a panoramic image |
CN110111364B (en) | Motion detection method and device, electronic equipment and storage medium |
CN110213492B (en) | Device imaging method and device, storage medium and electronic device |
CN111630523A (en) | Image feature extraction method and device |
US20240348928A1 (en) | Image display method, device and electronic device for panorama shooting to improve the user's visual experience |
CN114390201A (en) | Focusing method and device thereof |
CN111105351B (en) | Video sequence image splicing method and device |
CN115278084A (en) | Image processing method, image processing device, electronic equipment and storage medium |
CN113286084B (en) | Terminal image acquisition method and device, storage medium and terminal |
CN114119701A (en) | Image processing method and device |
US20230215018A1 (en) | Electronic device including camera and method for generating video recording of a moving object |
CN114302071B (en) | Video processing method and device, storage medium and electronic equipment |
CN115514887A (en) | Control method and device for video acquisition, computer equipment and storage medium |
CN112672057B (en) | Shooting method and device |
KR20210080334A (en) | Method, apparatus, and device for identifying human body and computer readable storage |
CN115086625A (en) | Correction method, device and system of projection picture, correction equipment and projection equipment |
CN114241127A (en) | Panoramic image generation method and device, electronic equipment and medium |
CN104754201B (en) | Electronic equipment and information processing method |
CN111629126A (en) | Audio and video acquisition device and method |
CN113112610B (en) | Information processing method and device and electronic equipment |
CN113364985B (en) | Live broadcast lens tracking method, device and medium |
US20240193824A1 (en) | Computing device and method for realistic visualization of digital human |
CN113822990A (en) | Image processing method and device based on artificial intelligence and electronic equipment |
CN116320740A (en) | Shooting focusing method, shooting focusing device, electronic equipment and storage medium |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |