CN115861430A - Detection method, device, equipment and system of free-view system - Google Patents

Detection method, device, equipment and system of free-view system

Info

Publication number
CN115861430A
CN115861430A (application number CN202111117189.9A)
Authority
CN
China
Legal status
Pending
Application number
CN202111117189.9A
Other languages
Chinese (zh)
Inventor
曹世明
李明
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Application filed by Huawei Technologies Co Ltd
Priority to CN202111117189.9A
Publication of CN115861430A

Abstract

The application discloses a method, an apparatus, a device, and a system for detecting a free-view system, relates to the field of free-view video, and solves the problem of how to accurately detect, in real time, the shift of a shooting device in a free-view system. The free-view system includes N shooting devices that synchronously shoot a first scene. After a first shooting device shifts, it may preliminarily detect the shift from its IMU information, determine an offset amount from the image it acquires after the shift and the images of the other, unshifted shooting devices in the system, and determine the degree of its shift from that offset amount. In this way, while the N shooting devices in the free-view system synchronously shoot the first scene, a shifted shooting device can be detected accurately and in real time.

Description

Detection method, device, equipment and system of free-view system
Technical Field
The present application relates to the field of image processing, and in particular, to a method, an apparatus, a device, and a system for detecting a free-view system.
Background
The free-view technique realizes a real-time interactive video mode in which a user can freely choose a 360° viewing angle of a free-view video on a terminal. It is mainly applied in scenes such as special-effect program shooting and live broadcast, and may also be referred to as the free-viewpoint technique. In a free-view system, multiple shooting devices trigger their shutters according to a control signal and synchronously shoot the same scene to obtain frames from multiple angles. A computing device processes these multi-angle frames according to the parameters of the shooting devices to generate the free-view video. During shooting, if a shooting device in the free-view system is bumped or shaken, its position, angle, and so on will shift; if the free-view video is then generated by processing images with the camera parameters from before the shift, the video will judder. Conventionally, the shift of a shooting device in a free-view system is detected by a worker based on the subjective viewing experience of the final free-view video. The delay between a shift occurring and its detection is therefore large, and subtle shifts are easily missed by subjective judgment. How to accurately detect, in real time, the shift of a shooting device in a free-view system has thus become an urgent problem to be solved.
Disclosure of Invention
The method, apparatus, device, and system for detecting a free-view system provided by the application solve the problem of how to accurately detect, in real time, the shift of a shooting device in a free-view system.
In a first aspect, a method for detecting a free-view system is provided, where the free-view system includes N shooting devices that synchronously shoot a first scene, the N shooting devices include a first shooting device, and the first shooting device includes a camera and an Inertial Measurement Unit (IMU). The method is performed by the first shooting device, which may be any shooting device in the free-view system. The method includes: when the IMU information of the first shooting device changes, acquiring a first image at a first moment through the camera, acquiring from M shooting devices the M images they acquired at the first moment, and determining an offset amount from the first image and the M images, the offset amount indicating the degree to which the changed external parameters of the first shooting device at the first moment deviate from its initial camera external parameters. The M shooting devices are the devices in the free-view system other than the first shooting device, their IMU information is unchanged, N is an integer greater than or equal to 3, M is an integer greater than or equal to 1, and M < N.
In this way, after the first shooting device shifts, it can preliminarily detect the shift from its IMU information, determine an offset amount from the image acquired after the shift and the images of the other, unshifted shooting devices in the system, and determine the degree of its shift from that offset amount. Thus, while the N shooting devices in the free-view system synchronously shoot the first scene, a shifted shooting device can be detected accurately and in real time.
If the M images differ greatly from the first image, the accuracy of the offset amount determined from the first image and the M images is low. To improve the accuracy with which the first shooting device determines the offset amount, it may acquire images from designated shooting devices that have not shifted.
For example, the M shooting devices include devices adjacent to the first shooting device.
For another example, the M shooting devices include shooting devices within a preset camera-position range in the free-view system.
In one possible implementation, determining the offset amount from the first image and the M images includes: determining, by the first shooting device, a reprojection error from the first image and the M images, where the reprojection error represents the coordinate error of the feature points in the detection regions of the first image and the M images; and determining the offset amount from the reprojection error.
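As an illustration of the error measure described in this implementation, a reprojection error can be computed as the mean distance between matched feature-point coordinates. The function name and the choice of mean Euclidean distance are assumptions for illustration, not the patent's exact formula:

```python
import numpy as np

def reprojection_error(observed_pts, projected_pts):
    """Mean Euclidean distance between feature points observed in the
    shifted device's image and the corresponding points projected from
    the unshifted devices' images (matched pairs, shape (n, 2))."""
    observed = np.asarray(observed_pts, dtype=float)
    projected = np.asarray(projected_pts, dtype=float)
    return float(np.mean(np.linalg.norm(observed - projected, axis=1)))
```

A larger error indicates a larger deviation of the changed external parameters from the initial ones, which is how an offset amount can be derived from it.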
In another possible implementation, after determining the offset amount from the first image and the M images, the method further includes: sending, by the first shooting device, prompt information to the computing device. The computing device is thus informed in time that the pose of the first shooting device has shifted, which prevents it from post-processing the images acquired after the shift with that device's pre-shift camera parameters and thereby producing juddering images.
After the first shooting device determines that a shift has occurred, a method for recalibrating the first shooting device may be chosen according to its offset amount.
In another possible implementation, after determining the offset amount from the first image and the M images, the method further includes: if the offset amount is smaller than a preset offset amount, the degree of shift of the first shooting device is small, and the first shooting device determines recalibrated external parameters from the offset amount and the initial camera external parameters and then sends them to the computing device. The recalibrated external parameters are used to post-process the images acquired by the first shooting device when generating the free-view video of the first scene. Alternatively, the first shooting device determines projection points in the M images acquired by the M shooting devices at a second moment according to the background point cloud of the first scene, determines its external parameters from the feature points of the second image it acquired at the second moment and the projection points, and sends the external parameters to the computing device.
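When the offset amount is below the preset threshold, folding the measured offset into the initial external parameters avoids a full recalibration. A sketch of such a composition, where the names (R_off, t_off) and the left-multiplication order are illustrative assumptions rather than the patent's stated procedure:

```python
import numpy as np

def recalibrated_extrinsics(R_init, t_init, R_off, t_off):
    """Compose a small measured pose offset with the initial camera
    external parameters (rotation matrix R_init, translation t_init)
    to obtain recalibrated external parameters."""
    R_new = R_off @ R_init
    t_new = R_off @ t_init + t_off
    return R_new, t_new
```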
In another possible implementation, after determining the offset amount from the first image and the M images, the method further includes: if the offset amount is greater than or equal to the preset offset amount, the degree of shift of the first shooting device is large, and the first shooting device triggers the pan-tilt head on which it is mounted to adjust its pose.
Optionally, after the pan-tilt head on which the first shooting device is mounted is triggered to adjust the pose of the first shooting device, the method further includes: determining, by the first shooting device, projection points in the M images acquired by the M shooting devices at a second moment according to the background point cloud of the first scene; determining the external parameters of the first shooting device from the feature points of the second image acquired at the second moment and the projection points; and sending the external parameters of the first shooting device to the computing device.
In a second aspect, a method for generating a free-view video is provided, where the free-view system includes N shooting devices that synchronously shoot a first scene, and the N shooting devices include a first shooting device. The method is performed by a computing device and includes: post-processing, by the computing device, the images acquired by the N shooting devices according to their camera parameters to obtain a first free-view video, where the camera parameters include initial camera internal parameters and initial camera external parameters; if the computing device obtains the recalibrated external parameters of the first shooting device, post-processing the images acquired by the first shooting device according to the recalibrated external parameters and the initial camera internal parameters to obtain a second free-view video; and combining the first free-view video and the second free-view video to obtain the free-view video of the first scene. The recalibrated external parameters are the updated version of the initial camera external parameters.
In one possible implementation, before post-processing the images acquired by the first shooting device according to its recalibrated external parameters and initial camera internal parameters, the method further includes: receiving, by the computing device, the prompt information sent by the first shooting device, and acquiring the recalibrated external parameters of the first shooting device. The prompt information indicates that the pose of the first shooting device has shifted. The recalibrated external parameters are used to post-process the images acquired by the first shooting device when generating the free-view video of the first scene.
In another possible implementation, acquiring the recalibrated external parameters of the first shooting device includes: receiving the recalibrated external parameters sent by the first shooting device.
In another possible implementation, acquiring the external parameters of the first shooting device includes: determining, by the computing device, projection points in M images acquired by M shooting devices according to the background point cloud of the first scene, where the M shooting devices are the devices in the free-view system other than the first shooting device, their IMU information is unchanged, N is an integer greater than or equal to 3, M is an integer greater than or equal to 1, and M < N; and determining the external parameters of the first shooting device from the feature points of the second image acquired by the first shooting device and the projection points.
In another possible implementation, before acquiring the external parameters of the first shooting device, the method further includes: suspending the post-processing of the images sent by the first shooting device.
In a third aspect, a detection apparatus is provided, the apparatus comprising modules for performing the detection method of the free-view system according to the first aspect or any one of the possible designs of the first aspect.
In a fourth aspect, an apparatus for generating a free-view video is provided, the apparatus comprising modules for performing the method for generating a free-view video according to the second aspect or any one of the possible designs of the second aspect.
In a fifth aspect, a shooting device is provided, comprising: at least one processor, a memory, a camera, and an IMU, where the camera is configured to acquire images, the IMU is configured to acquire the IMU information of the shooting device, the memory is configured to store computer programs and instructions, and the processor is configured to invoke the computer programs and instructions to cooperate with the camera and the IMU in performing the detection method of the free-view system according to the first aspect or any one of the possible designs of the first aspect.
In a sixth aspect, a data processing system is provided, comprising: at least one processor and a memory, the memory being configured to store computer programs and instructions, and the processor being configured to invoke the computer programs and instructions to perform the method for generating a free-view video according to the second aspect or any one of the possible designs of the second aspect.
In a seventh aspect, a free-view system is provided, where the free-view system includes N shooting devices that synchronously shoot a first scene, the N shooting devices include a first shooting device, and the first shooting device includes a camera and an IMU; when the IMU information of the first shooting device changes, the first shooting device performs the detection method of the free-view system according to the first aspect or any one of the possible designs of the first aspect.
In an eighth aspect, a computer-readable storage medium is provided, comprising computer software instructions; when the computer software instructions are executed by a shooting device in a free-view system, the shooting device is caused to perform the operational steps of the method according to the first aspect or any one of the possible implementations of the first aspect.
In a ninth aspect, a computer program product is provided which, when run on a computer, causes a shooting device in a free-view system to perform the operational steps of the method according to the first aspect or any one of the possible implementations of the first aspect.
On the basis of the implementations provided by the above aspects, the present application can be further combined to provide additional implementations.
Drawings
Fig. 1 is a schematic diagram of a free-viewing system according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of an offset provided herein;
fig. 3 is a flowchart of a detection method of a free-viewing angle system according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an IMU information change provided in the present application;
fig. 5 is a schematic diagram of M shooting devices provided in the present application;
fig. 6 is a schematic diagram of feature point matching provided in the present application;
fig. 7 is a flowchart of a detection method of a free-viewing angle system according to an embodiment of the present disclosure;
fig. 8 is a flowchart of a method for generating a freeview video according to an embodiment of the present application;
fig. 9 is a schematic view of a detection device according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram of an apparatus for generating freeview video according to an embodiment of the present application;
fig. 11 is a schematic diagram of a shooting device according to an embodiment of the present application;
fig. 12 is a schematic diagram of a computing device according to an embodiment of the present application.
Detailed Description
For clarity and conciseness of the description of the embodiments described below, a brief introduction of the related art is first given.
Before multiple shooting devices in the free-view system synchronously shoot the same scene, the shooting devices are mounted in advance on a track, a truss, or a pan-tilt head and deployed at the shooting site. A shooting device may also be referred to as a camera station. A truss is a structure in which rod members are connected to each other at both ends by hinges. The embodiments of the present application do not limit the deployment manner or the number of shooting devices. The deployment may be annular, for example circular or elliptical. Understandably, the more shooting devices there are, the richer the images from different angles and the more viewing angles the user has; conversely, the fewer the shooting devices, the fewer the images from different angles and the fewer the user's viewing angles.
The free-view system may also include a controller and a computing device. The controller is used to control the plurality of shooting devices to synchronously shoot the same scene. The computing device is used to post-process the images acquired by the plurality of shooting devices with the camera parameters of those devices to obtain the free-view video. The camera parameters include camera internal parameters and camera external parameters.
Understandably, the camera external parameters are pose information of the camera relative to other reference objects under a world coordinate system. The world coordinate system is a space coordinate system established according to a photographed field, and the origin of the world coordinate system is located at a certain point of the field. The pose includes position and orientation. The camera external parameters include a rotation matrix and a translation matrix. The rotation matrix and the translation matrix together describe a transformation relationship of the point between the world coordinate system and the camera coordinate system. The rotation matrix describes the orientation of the coordinate axes of the world coordinate system relative to the coordinate axes of the camera coordinate system. The translation matrix describes the position of the spatial origin in the camera coordinate system.
The camera intrinsic parameters are parameters related to the characteristics of the camera itself, such as the focal length and pixel size of the camera. The camera intrinsic parameters represent a conversion relationship of three-dimensional coordinates of an object in a camera coordinate system and two-dimensional coordinates in an image coordinate system on a captured image.
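Putting the two together, the pinhole model maps a world point to pixel coordinates: the external parameters (R, t) take world coordinates to camera coordinates, and the internal parameter matrix K takes camera coordinates to image coordinates. A minimal sketch under the assumption of no lens distortion, with illustrative names:

```python
import numpy as np

def project_point(X_world, R, t, K):
    """World point -> pixel coordinates under the pinhole model.
    R (3x3 rotation) and t (3-vector translation) are the camera
    external parameters; K (3x3) holds the internal parameters
    (focal lengths and principal point)."""
    X_cam = R @ np.asarray(X_world, dtype=float) + t  # world -> camera
    x_hom = K @ X_cam                                 # camera -> homogeneous image
    return x_hom[:2] / x_hom[2]                       # perspective divide
```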
The controller may be a separate physical device, or may be a Virtual Machine (VM) on a physical device. For example, a VM having the function of a controller is mounted on one of a plurality of photographing apparatuses.
The computing device may be a server, a cloud device, or an edge device (e.g., a box carrying a chip with processing capability). The computing device has strong computing power and can perform computations such as post-processing the images acquired by the plurality of shooting devices to obtain the free-view video.
Fig. 1 is a schematic diagram of a free-view system according to an embodiment of the present application. The free-view system 100 includes N shooting devices (e.g., shooting device 1_1 through shooting device 1_N), a controller 120, and a cloud device 130. The controller 120 is connected to the N shooting devices in a wired or wireless manner, and the N shooting devices communicate with the cloud device 130 in a wired or wireless manner. System maintenance personnel can set the number of shooting devices according to deployment requirements; N is an integer greater than or equal to 3. As shown in fig. 1, the N shooting devices may be deployed in a ring at the shooting site.
The shooting device may be an integrated module including a camera, an Inertial Measurement Unit (IMU), a communication module, and a processor. For example, the shooting device may be a terminal, such as a mobile phone, a tablet computer, a notebook computer, a Virtual Reality (VR) device, an Augmented Reality (AR) device, a Mixed Reality (MR) device, an Extended Reality (XR) device, or a camera.
Before the N shooting devices synchronously shoot the first scene, the N shooting devices are calibrated, so that the precision of the N shooting devices in synchronously shooting the same scene is improved. The calibration comprises coarse calibration and fine calibration. The rough calibration refers to adjusting the position and the orientation of each shooting device, so that the center point of the picture of each shooting device points to the same surrounding center. For example, if the photographing apparatus is mounted on the pan/tilt head, the orientation of the photographing apparatus is adjusted by a servo system of the pan/tilt head. As another example, the position and orientation of the photographing apparatus are manually adjusted by a system maintenance person.
Fine calibration computes the camera parameters from the co-visibility relationships between the shooting devices, such as camera distortion parameters, camera internal parameters, and camera external parameters. One example is calibration based on a manually designed calibration object. Common calibration objects include calibration boards and calibration towers, whose surfaces carry feature patterns. The calibration object is placed in the shooting site as required; after a shooting device photographs the calibration object printed with the specific pattern through its camera, the processor identifies the feature points in the pattern with a calibration algorithm, associates the same feature points captured by different shooting devices, and computes the camera parameters. Another example is calibration without a manually designed calibration object: after a shooting device captures images of the shooting site through its camera, feature points in the images are identified with a calibration algorithm, and the processor associates the same feature points captured by different shooting devices to compute the camera parameters. After each of the N shooting devices is calibrated, it sends its calibrated camera parameters to the cloud device 130.
After the N shooting devices receive the control signal sent by the controller 120, they synchronously shoot the first scene through their cameras. The control signal may also be referred to as a synchronization control signal; it may be a periodic pulse signal (related to the frame rate) or a periodic signal triggered by an instruction of a communication synchronization protocol. After acquiring images, the N shooting devices send image streams containing the acquired images to the cloud device 130 through their communication modules. For example, a shooting device sends images to the cloud device 130 over the network 140, which may be an internetwork.
The cloud device 130 performs post-processing on the images acquired by the N shooting devices by using the camera parameters of the N shooting devices to obtain a free view video. Optionally, when the user needs to watch the free view video, the terminal may download the free view video from the cloud device 130 for watching, so that the user autonomously selects a 360 ° watching view angle of the free view video on the terminal. For example, the terminal downloads the free-view video from the cloud device 130 over the network 140.
During image acquisition by a first shooting device (e.g., shooting device 1_1), if the first shooting device is bumped or shaken so that its position and orientation shift, and the cloud device 130 post-processes the images acquired by the first shooting device with that device's pre-shift camera parameters, the processed images will judder and the user will perceive a discontinuous video experience.
Exemplarily, as shown in fig. 2, the first row shows images acquired after coarse calibration by three consecutive shooting devices in the free-view system. Limited by the precision of coarse calibration, the position and angle of the central focal axis (represented by the upward arrow) are not accurately aligned with the center of the image. The second row shows the result of post-processing the first-row images with the camera parameters obtained after fine calibration of the three devices: the central focal axis is accurately aligned with the image center, with consistent size and direction, so the resulting free-view video switches views smoothly around a fixed center. The third row shows that after the 2nd of the three shooting devices shifts, post-processing its images with its pre-shift camera parameters yields juddering images and a discontinuous experience.
In the embodiments of the present application, when the IMU information acquired by a first shooting device (e.g., shooting device 1_1) through its IMU changes, the first shooting device determines the offset amount from the first image it acquires through its camera at a first moment and the images acquired at the first moment by the unshifted shooting devices in the system. In addition, the first shooting device can recalibrate in a quantified manner, that is, decide from the offset amount whether coarse calibration is needed before fine calibration. This realizes both accurate real-time detection of a shifted shooting device in the free-view system and automatic real-time recalibration when a shooting device shifts.
Next, a method for detecting a free-viewing angle system provided in an embodiment of the present application is described in detail with reference to fig. 3 to 12. Fig. 3 is a schematic flowchart of a detection method of a free-viewing angle system according to an embodiment of the present disclosure. Here, the explanation will be made assuming that the first photographing apparatus is shifted. The first photographing apparatus may be any one of photographing apparatuses in a free view system. As shown in fig. 3, the method includes the following steps.
Step 310: the first shooting device judges whether its IMU information has changed.
When N shooting devices in the free visual angle system synchronously shoot a first scene, the IMU of each shooting device continuously collects IMU information (or called IMU signals), and the camera of each shooting device continuously collects images. The photographing apparatus stores the IMU information into an IMU buffer queue storage space in the main memory for storing IMU information, and stores the image into an image buffer queue storage space in the main memory for storing the image. The method for keeping the clocks synchronized between the IMU information and the image may refer to conventional synchronization techniques, which are not illustrated.
The first shooting device can acquire IMU information of the first shooting device in a period of continuous time from the IMU buffer queue storage space, and judge whether the IMU information changes according to the IMU information in the period of continuous time, namely, preliminarily judge whether the first shooting device deviates. The time unit of a continuous period of time may be seconds or milliseconds. For example, the period of continuous time may be 1 second.
In some embodiments, the first shooting device may judge whether the IMU information over a continuous period shows variation. If it does, the IMU information has changed, and the first shooting device is preliminarily determined to have shifted. If it does not, the IMU information has not changed, the first shooting device is preliminarily determined not to have shifted, no recalibration is needed, and the process ends.
IMU information includes, but is not limited to, three-axis accelerometer signals and three-axis gyroscope signals. As shown in (a) of fig. 4, α, β, and γ represent the angular velocity values on the three orthogonal axes of the gyroscope, and X, Y, and Z represent the acceleration values on the three axes of the accelerometer. When the first shooting device is stationary, there is no angular velocity on any rotation axis, but because of the gyroscope's own noise and random walk the measured angular velocity is a small value near 0. Since the IMU is subject to gravitational acceleration toward the geocenter, the acceleration values actually detected on the three accelerometer axes are the orthogonal decomposition of gravitational acceleration along those three directions and, affected by the accelerometer's noise and random walk, fluctuate slightly around fixed values. When the first shooting device shakes or shifts, however, the measured value on at least one of the six axes fluctuates strongly (as shown in (b) of fig. 4); that is, when the variation of the IMU information exceeds a threshold, the first shooting device is preliminarily determined to have shifted. The thresholds of the gyroscope and the accelerometer are set to the variance of their respective noise, which is determined by calibrating the IMU; the specific calibration method follows conventional techniques and is not repeated here.
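The preliminary check described above can be sketched as a per-axis variance test. The function name, the fixed sample window, and the use of the population variance are illustrative assumptions rather than the patent's exact procedure:

```python
from statistics import pvariance

def imu_offset_suspected(axis_samples, noise_variance):
    """Flag a possible shift on one IMU axis: the fluctuation (variance)
    of a short window of samples is compared against the noise variance
    obtained by calibrating the IMU. A device would run this for each of
    the six axes and report a shift if any axis trips its threshold."""
    return pvariance(axis_samples) > noise_variance
```

In this sketch, a stationary axis whose samples only wander within the calibrated noise level stays below the threshold, while a bump or shake pushes the window variance well above it.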
Optionally, the first photographing apparatus may denoise and smooth the IMU information over the continuous period and then determine whether the denoised, smoothed IMU information varies. This reduces noise interference and improves the accuracy of judging whether the IMU information has changed.
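The smoothing-and-threshold check described above can be sketched as follows. This is a minimal numpy sketch; the function name, the moving-average window, and the use of peak-to-peak variation as the "variation amount" are illustrative assumptions, with the per-axis thresholds taken as the calibrated noise variances as stated in the text.

```python
import numpy as np

def imu_offset_suspected(samples, gyro_noise_var, accel_noise_var, window=5):
    """Preliminary offset check over a continuous stretch of 6-axis IMU samples.

    samples: (T, 6) array; columns 0-2 are gyroscope axes, columns 3-5 are
    accelerometer axes. Per-axis thresholds are the calibrated noise variances.
    """
    # Moving-average smoothing to reduce sensor noise before thresholding.
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(samples[:, k], kernel, mode="valid") for k in range(6)]
    )
    # Peak-to-peak variation of each smoothed axis over the period.
    variation = smoothed.max(axis=0) - smoothed.min(axis=0)
    thresholds = np.concatenate([gyro_noise_var, accel_noise_var])
    # An offset is suspected if any axis fluctuates beyond its noise threshold.
    return bool(np.any(variation > thresholds))
```

A stationary device (constant gravity on one accelerometer axis, near-zero gyroscope readings) stays below the thresholds, while a bump or shift makes at least one axis exceed them.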
Before the N photographing apparatuses in the free-view system synchronously photograph the first scene, they are mounted in advance on a fixed frame such as a rail, a truss, or a pan-tilt head. If the fixed frame on which the N photographing apparatuses are mounted shifts, all N photographing apparatuses shift together, the IMU information of all N photographing apparatuses changes, and the first photographing apparatus has not in fact shifted relative to the other photographing apparatuses in the free-view system. Therefore, when the IMU information of the first photographing apparatus changes, the first photographing apparatus is only preliminarily determined to have shifted; it must be further determined whether the first photographing apparatus has actually shifted, so step 320 and step 330 are performed.
In step 320, the first photographing device acquires a first image at a first time through the camera.
It can be understood that the first image acquired by the first photographing apparatus through its camera at the first time is an image acquired after the first photographing apparatus has shifted.
And step 330, the first shooting device acquires M images acquired by the M shooting devices at the first time.
The M photographing apparatuses are apparatuses in which IMU information is unchanged except for the first photographing apparatus in the free view system. M is an integer greater than or equal to 1, M < N. It is understood that the M photographing devices are photographing devices in which no shift occurs in the free view system. The M images are images acquired by the M photographing apparatuses at a first time after the first photographing apparatus is offset.
In some embodiments, each of the N photographing apparatuses in the free-view system may maintain a flag bit. A flag bit of 0 indicates that the IMU information of the photographing apparatus is unchanged, i.e., the photographing apparatus has not shifted; a flag bit of 1 indicates that the IMU information has changed, i.e., the photographing apparatus has shifted. When the IMU information of a photographing apparatus changes, its flag bit is set to 1; otherwise, the flag bit remains 0.
After the IMU information of the first photographing apparatus changes, the first photographing apparatus sends an image request message to the photographing apparatuses in the free-view system, and the M photographing apparatuses whose flag bit is 0 may send the M images acquired at the first time to the first photographing apparatus.
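A minimal sketch of the flag-bit bookkeeping, assuming a hypothetical `CameraStatus` record and selection helper (the patent does not prescribe a data structure or API; these names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class CameraStatus:
    cam_id: str
    flag: int = 0  # 0: IMU information unchanged, 1: IMU information changed

def reference_cameras(statuses, requester_id):
    """Cameras that should answer the image request: flag == 0 devices,
    excluding the requesting (shifted) device itself."""
    return [s.cam_id for s in statuses if s.flag == 0 and s.cam_id != requester_id]
```

With this bookkeeping, the shifted device 1_1 would request images only from the un-shifted devices whose flag bit is 0.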
If the M images differ greatly from the first image, the offset amount determined from the first image and the M images is of low accuracy. To improve the accuracy with which the first photographing apparatus determines the offset amount, the first photographing apparatus may acquire images only from designated photographing apparatuses that have not shifted.
For example, the M photographing apparatuses include apparatuses adjacent to the first photographing apparatus. As shown in fig. 5 (a), it is assumed that the IMU information of the photographing device 1_1 has changed while the IMU information of the photographing device 1_2 and of the photographing device 1_N is unchanged; the M photographing apparatuses then include, for example, the photographing device 1_2 and the photographing device 1_N.
For another example, the M photographing apparatuses include the photographing apparatuses within a preset station range in the free-view system. The preset station range may be a preset angle range centered on the photographing apparatus whose IMU information has changed, or a range given by a number of other photographing apparatuses. As shown in fig. 5 (b), it is assumed that the preset station range specifies between 4 and 6 photographing apparatuses. Assuming that the IMU information of the photographing device 1_1 has changed while the IMU information of each of the photographing devices 1_2, 1_3, 1_N-1, and 1_N is unchanged, the M photographing apparatuses include, for example, the photographing devices 1_2, 1_3, 1_N-1, and 1_N.
It should be understood that if the first photographing apparatus can acquire the M images captured at the first time by the M un-shifted photographing apparatuses, this indicates that only the first photographing apparatus has shifted, and the first photographing apparatus performs step 340. If the N photographing apparatuses have shifted as a whole, the first photographing apparatus cannot acquire M images from un-shifted photographing apparatuses at the first time.
In step 340, the first camera determines a shift amount from the first image and the M images.
The offset amount indicates the degree to which the changed extrinsic of the first photographing apparatus when acquiring the first image at the first time deviates from the initial camera extrinsic. In some embodiments, the first photographing apparatus determines a reprojection error from the first image and the M images and determines the offset amount from the reprojection error. The reprojection error characterizes the coordinate error of the feature points in the detected region between the first image and the M images. The reprojection error satisfies the following formula (1):

ΔT = argmin_{ΔT'} Σ_j ‖ ũ_j − (1 / z_j) · K · ΔT' · T_init · W_j ‖²    (1)

where z_j denotes the depth value of point j in the camera coordinate system of the first photographing apparatus, K denotes the intrinsic matrix of the first photographing apparatus and K^{-1} its inverse (so that z_j · K^{-1} · ũ_j gives the camera-frame coordinates of point j), ũ_j denotes the homogeneous pixel coordinates of point j in the first image acquired by the first photographing apparatus, ũ_j^m denotes the pixel coordinates of point j in the M images, from which the world point W_j of point j is obtained using the known poses of the M photographing apparatuses, T_init denotes the initial camera extrinsic of the first photographing apparatus, and ΔT denotes the reprojection error, i.e., the amount by which the pose of the first photographing apparatus changed over the period from before to after the shift.
The offset amount satisfies the following formula (2):

T_rel = T_1^w · ( T_m^w )^{-1}    (2)

where T_1^w denotes the initial camera extrinsic of the first photographing apparatus in the world coordinate system, T_m^w denotes the initial camera extrinsic of an un-shifted photographing apparatus in the world coordinate system, and T_rel denotes the relative extrinsic of the first photographing apparatus with respect to the un-shifted photographing apparatus.
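A sketch of the relative-extrinsic composition and of evaluating a reprojection error, using plain 4x4 homogeneous world-to-camera transforms (numpy; the function names and matrix conventions are assumptions, not the patent's notation):

```python
import numpy as np

def relative_extrinsic(T_first_w, T_ref_w):
    """Relative extrinsic of the first camera with respect to an un-shifted
    camera, both given as 4x4 world-to-camera transforms (formula (2) sketch)."""
    return T_first_w @ np.linalg.inv(T_ref_w)

def mean_reprojection_error(K, T, points_w, pixels):
    """Mean pixel error when projecting world points with intrinsics K and
    world-to-camera extrinsic T; minimizing this error over a pose
    perturbation yields the offset amount in the spirit of formula (1)."""
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    cam = (T @ pts_h.T)[:3]        # world -> camera coordinates
    uv = K @ cam                   # camera -> homogeneous pixel coordinates
    uv = (uv[:2] / uv[2]).T        # dehomogenize
    return float(np.mean(np.linalg.norm(uv - pixels, axis=1)))
```

With the true pose the error is zero; perturbing the pose increases it, which is what the minimization over ΔT exploits.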
It should be noted that, based on the epipolar constraint theory, each feature point in the first image corresponds to an epipolar line in each of the M images. As shown in fig. 6, when matching features between the first image at the first viewing angle and the M images at other viewing angles, for each feature point of the first photographing apparatus, an un-shifted viewing angle sharing a common viewing area is determined from the images acquired by the un-shifted photographing apparatuses, candidate feature points are sought only in the vicinity of the corresponding epipolar line in each of the M images, and the feature points of the first photographing apparatus are matched in turn against those candidates. This significantly reduces the time consumed by matching.
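The epipolar pruning can be sketched as follows, assuming pinhole cameras with known intrinsics and a known relative pose between the shifted device and an un-shifted device. This is the standard fundamental-matrix construction, not code from the patent:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K1, K2, R, t):
    """F maps a pixel in image 1 to its epipolar line in image 2, where
    (R, t) is the pose of camera 2 relative to camera 1 (x2 = R x1 + t)."""
    E = skew(t) @ R
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

def epipolar_distance(pixel2, line):
    """Distance of a pixel (x, y) in image 2 to the line ax + by + c = 0."""
    a, b, c = line
    return abs(a * pixel2[0] + b * pixel2[1] + c) / np.hypot(a, b)
```

Only detected feature points within a few pixels of the line `F @ [u, v, 1]` need to be considered as match candidates, which is what cuts the matching time.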
In this way, after the first photographing apparatus shifts, it may preliminarily determine the shift from the IMU information, determine an offset amount from the image acquired after the shift and the images of the other, un-shifted photographing apparatuses in the system, and confirm from the offset amount that the first photographing apparatus has actually shifted. Therefore, while the N photographing apparatuses synchronously photograph the first scene in the free-view system, a shifted photographing apparatus can be detected accurately and in real time.
After the first photographing apparatus determines the offset amount from the first image and the M images, it may further perform step 350. In step 350, the first photographing apparatus sends prompt information to the cloud device 130. The prompt information indicates that the posture of the first photographing apparatus has shifted, so that the cloud device 130 can promptly suspend post-processing of images from the shifted photographing apparatus.
If, whenever any photographing apparatus in the free-view system shifts, the user is immediately prompted to redo coarse calibration regardless of how small the offset amount is, the free-view system becomes very inflexible and the other photographing apparatuses in the system are affected. Therefore, if the offset amount is small, coarse calibration can be skipped; that is, fine calibration can be performed directly without coarse calibration.
After the first photographing apparatus determines the shift amount from the first image and the M images, the method further includes the following steps.
And step 360, the first shooting equipment judges whether the offset is greater than or equal to a preset offset.
If the offset is smaller than the preset offset, it indicates that the offset angle of the first shooting device is small, and no coarse calibration is needed, and step 370 and step 380 are executed.
If the offset amount is greater than or equal to the preset offset amount, indicating that the offset angle of the first photographing apparatus is relatively large, for example, the first photographing apparatus has rotated by 180°, the first photographing apparatus needs coarse calibration, and step 390 is executed.
In some embodiments, the first photographing apparatus may determine whether to perform the coarse calibration according to the following criterion one or criterion two.
Criterion one: the pose of the first photographing apparatus is computed from the offset amount, and the change of the included angle between the first photographing apparatus and each of the remaining un-shifted photographing apparatuses is computed; if the average change of the included angles exceeds 20%, coarse calibration is performed again. The change of the included angles between the first photographing apparatus and all of the remaining un-shifted photographing apparatuses satisfies formula (3):

Δθ = (1 / M) · Σ_{i=1}^{M} Δθ_{cur,i}    (3)

where Δθ denotes the average change of the included angle, and Δθ_{cur,i} denotes the change of the attitude angle between the first photographing apparatus and the i-th of the M photographing apparatuses, which can be computed from the offset amount and the initial camera extrinsics.
Criterion two: when criterion one does not determine that coarse calibration is needed, the three-dimensional points corresponding to all of the remaining un-shifted photographing apparatuses are obtained; all of the three-dimensional points are projected using both the initial pose and the current pose of the first photographing apparatus, and the average parallax in the horizontal and vertical directions of the image is computed. If the average parallax in either direction exceeds 10% of the resolution in that direction, coarse calibration is performed again. The average parallax in the horizontal and vertical directions can be computed by formula (4):

d_x = (1 / J) · Σ_{j=1}^{J} | x_j^{cur} − x_j^{init} |,    d_y = (1 / J) · Σ_{j=1}^{J} | y_j^{cur} − y_j^{init} |    (4)

where (x_j^{init}, y_j^{init}) and (x_j^{cur}, y_j^{cur}) denote the projections of the j-th three-dimensional point using the initial and current poses of the first photographing apparatus respectively, and J denotes the number of projected points.
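The two criteria can be combined into a single decision routine. A sketch assuming the angle changes are supplied as relative (fractional) changes and the parallaxes as mean pixel offsets; the function name and argument layout are illustrative:

```python
import numpy as np

def needs_coarse_calibration(angle_change_ratios, mean_parallax_x,
                             mean_parallax_y, width, height):
    """Criterion one: average relative change of the inter-camera included
    angles above 20 %. Criterion two: average parallax above 10 % of the
    resolution in either direction."""
    if np.mean(angle_change_ratios) > 0.20:       # criterion one
        return True
    return (mean_parallax_x > 0.10 * width or     # criterion two
            mean_parallax_y > 0.10 * height)
```

Small offsets pass both criteria and go straight to fine calibration; large angle changes or large parallax trigger coarse calibration first.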
And step 370, the first shooting device determines the re-calibration external parameters of the first shooting device according to the offset and the initial camera external parameters.
The recalibrated extrinsic is used for post-processing the images acquired by the first photographing apparatus when generating the free-view video of the first scene.
The recalibrated extrinsic satisfies the following formula (5):

T_recal = ΔT · T_init    (5)

where T_init denotes the initial camera extrinsic of the first photographing apparatus, T_recal denotes the recalibrated extrinsic of the first photographing apparatus, and ΔT denotes the offset amount.
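Formula (5) composes the offset with the initial extrinsic; a numpy sketch using 4x4 homogeneous world-to-camera transforms (the left-multiplication convention is an assumption):

```python
import numpy as np

def recalibrated_extrinsic(T_init, delta_T):
    """Formula (5) sketch: compose the detected offset delta_T with the
    initial camera extrinsic T_init (both 4x4 homogeneous transforms)."""
    return delta_T @ T_init
```

For example, an offset of a 5° rotation about the optical z-axis leaves the camera position unchanged but rotates the orientation, so the recalibrated extrinsic carries the rotated orientation with the original translation.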
Step 380: the first photographing apparatus sends the recalibrated extrinsic to the cloud device 130.
Step 390: the pan-tilt head on which the first photographing apparatus is mounted adjusts the posture of the first photographing apparatus.
Optionally, the first photographing apparatus may prompt a system maintainer that an offset has occurred, and the maintainer may adjust the first photographing apparatus, i.e., manually adjust its posture. However, the camera extrinsic of the first photographing apparatus after manual adjustment still deviates from its initial camera extrinsic.
Step 3100, the cloud device 130 performs fine calibration on the first shooting device.
After the pan-tilt head on which the first photographing apparatus is mounted adjusts the posture of the first photographing apparatus, the first photographing apparatus may send fine-calibration prompt information to the cloud device 130, where the fine-calibration prompt information indicates that fine calibration is to be performed on the first photographing apparatus.
The cloud device 130 stores a background point cloud. It converts the background point cloud into three-dimensional coordinates in the camera coordinate system using the camera extrinsics, converts the three-dimensional coordinates into two-dimensional coordinates using the camera intrinsics, and thereby determines the projection points in the M images acquired by the M photographing apparatuses at a second time. It then performs feature matching between the feature points of the second image acquired by the first photographing apparatus at the second time and the projection points, and determines the recalibrated extrinsic of the first photographing apparatus.
The recalibrated extrinsic satisfies the following formula (6):

T_recal = argmin_T Σ_j ‖ u_j − π( K_cur · T · P_j ) ‖²    (6)

where T_recal denotes the recalibrated extrinsic of the first photographing apparatus, P_j denotes the three-dimensional coordinates of a background point-cloud point, u_j denotes the matched two-dimensional pixel coordinates in the second image of the first photographing apparatus, K_cur denotes the initial camera intrinsic of the first photographing apparatus, and π(·) denotes the dehomogenization of the projected coordinates.
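A sketch of the projection-and-match step that produces the 2D-3D correspondences for fine calibration (numpy; the greedy nearest-neighbour association and the 5-pixel gate are illustrative assumptions, and the resulting pairs would feed a pose solver such as a PnP routine):

```python
import numpy as np

def project_points(K, T, points_w):
    """Project world points with intrinsics K and 4x4 world-to-camera T."""
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    cam = (T @ pts_h.T)[:3]
    uv = K @ cam
    return (uv[:2] / uv[2]).T

def match_to_projections(projected, detected, max_dist=5.0):
    """Greedy nearest-neighbour association between projected background
    points and detected feature points; returns (point index, feature index)
    pairs used as 2D-3D correspondences for the pose solver of formula (6)."""
    pairs = []
    for i, p in enumerate(projected):
        d = np.linalg.norm(detected - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            pairs.append((i, j))
    return pairs
```

Features detected close to the predicted projections are accepted as matches; points with no nearby detection are simply dropped from the optimization.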
In this way, the first photographing apparatus can be automatically recalibrated after its offset is detected. The calibration process does not affect the normal operation of the remaining, un-shifted photographing apparatuses. After recalibration is completed, the recovered first photographing apparatus can rejoin the system and work normally alongside the un-shifted photographing apparatuses.
In another example, if the computing resources of the photographing apparatus provide sufficient computing power for determining the offset amount and recalibrating, the photographing-apparatus offset detection method and recalibration method provided by the embodiments of the present application may be performed entirely by the photographing apparatus. As shown in fig. 7, the difference from fig. 3 is that if the offset amount is smaller than the preset offset amount, the first photographing apparatus may itself perform step 3100 to determine the recalibrated extrinsic. Alternatively, if the offset amount is greater than or equal to the preset offset amount, the first photographing apparatus performs coarse calibration and then fine calibration, and sends the recalibrated extrinsic to the cloud device 130. For the specific coarse-calibration method, reference may be made to the description of step 390, and for the specific fine-calibration method, to the description of step 3100.
Fig. 8 is a flowchart illustrating a method for generating a freeview video according to an embodiment of the present disclosure. It is assumed that the first capturing device is shifted, and the computing device is the cloud device 130 for example. The first photographing apparatus may be any one of photographing apparatuses in a free-view system. With regard to the free view system and the offset detection of the first photographing apparatus, reference may be made to the description of the above-described embodiments. As shown in fig. 8, the method includes the following steps.
Step 810, the cloud device 130 performs post-processing on the images acquired by the N shooting devices according to the camera parameters of the N shooting devices to obtain a first free view video.
Before the first photographing device deviates, the cloud device 130 performs post-processing on the images acquired by the N photographing devices according to the camera parameters of the N photographing devices.
In some embodiments, based on a three-dimensional reconstruction method, the cloud device 130 performs three-dimensional reconstruction using simultaneously captured images from the multiple cameras, recovers a time-varying geometric representation of the scene, such as a triangular mesh or a three-dimensional point cloud, sets the position and orientation of a virtual camera (e.g., one surrounding a subject), and renders a frame for each virtual-camera pose to generate a surround video.
In other embodiments, based on a two-dimensional image synthesis method, the layout of the multiple cameras is fixed, and the cloud device 130 generates continuous image frames from simultaneously captured multi-camera images through two-dimensional image processing, such as frame interpolation, and then synthesizes a smooth surround video.
If the first photographing apparatus shifts, the cloud device 130 suspends post-processing of the images sent by the first photographing apparatus. The images acquired by the other, un-shifted photographing apparatuses in the system continue to be post-processed according to their camera parameters.
For example, assume that the N photographing apparatuses are un-shifted in a first time period, the first photographing apparatus shifts in a second time period, and the N photographing apparatuses are again stable in a third time period. The cloud device 130 post-processes the images acquired by the N photographing apparatuses in the first time period according to the camera parameters of the N photographing apparatuses, obtaining the first-period free-view video. The cloud device 130 post-processes the images acquired in the second time period according to the camera parameters of the N-1 un-shifted photographing apparatuses, obtaining the second-period free-view video; it does not post-process the images acquired by the first photographing apparatus in the second time period. The cloud device 130 post-processes the images acquired in the third time period according to the camera parameters of the N-1 un-shifted photographing apparatuses, obtaining the third-period free-view video of the N-1 photographing apparatuses, and post-processes the images acquired in the third time period according to the initial camera intrinsic and the recalibrated extrinsic of the first photographing apparatus, obtaining the third-period free-view video of the first photographing apparatus. The first free-view video includes the first-period free-view video, the second-period free-view video, and the third-period free-view video of the N-1 photographing apparatuses.
It can be understood that the first free view video is obtained by post-processing the image according to the initial camera internal parameter and the initial camera external parameter, for example, the first free view video includes a first period free view video, a second period free view video, and a third period free view video of N-1 shooting devices.
Optionally, after the first capturing device determines the offset according to the first image and the M images, and before the first capturing device performs the fine calibration, the first capturing device does not send an image to the cloud device 130.
In step 820, the cloud device 130 performs post-processing on the image acquired by the first shooting device according to the re-calibration external parameter and the initial camera internal parameter of the first shooting device, so as to obtain a second free view video.
It can be understood that the second free view video is obtained by post-processing the image according to the re-labeled external parameters and the initial camera internal parameters, for example, the second free view video includes a third period free view video of the first shooting device.
Step 830, the cloud device 130 merges the first free view video and the second free view video to obtain a free view video of the first scene.
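A minimal sketch of the merge in step 830, assuming each video is a list of per-period segment records carrying a start time (the segment structure is hypothetical, not the patent's format):

```python
def merge_freeview_videos(first_video, second_video):
    """Interleave per-period segments from the two free-view videos into a
    single timeline ordered by start time."""
    return sorted(first_video + second_video, key=lambda seg: seg["start"])
```

In the three-period example above, the second free-view video contributes the third-period segment of the recalibrated first photographing apparatus, which slots into the timeline alongside the segments of the un-shifted devices.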
It should be noted that before the cloud device 130 post-processes the images acquired by the first photographing apparatus according to the recalibrated extrinsic and the initial camera intrinsic of the first photographing apparatus, the cloud device 130 may further receive prompt information sent by the first photographing apparatus, where the prompt information indicates that the posture of the first photographing apparatus has shifted.
The cloud device 130 may obtain the recalibrated extrinsic of the first photographing apparatus, where the recalibrated extrinsic is used for post-processing the images acquired by the first photographing apparatus when generating the free-view video of the first scene. In some embodiments, the cloud device 130 determines projection points in the M images acquired by the M photographing apparatuses according to the background point cloud of the first scene, and determines the recalibrated extrinsic of the first photographing apparatus according to the feature points of the second image acquired by the first photographing apparatus and the projection points. In other embodiments, the cloud device 130 receives the recalibrated extrinsic sent by the first photographing apparatus.
Therefore, when the first photographing apparatus shifts, the cloud device 130 can temporarily remove it from the free-view system while the remaining, un-shifted photographing apparatuses keep working normally, and the first photographing apparatus is automatically recalibrated using the information from the un-shifted photographing apparatuses. After recalibration, the first photographing apparatus rejoins the free-view system, fully restoring the quality of the generated free-viewpoint video. In this way, the cloud device 130 can automatically handle a shifted station without affecting the normal operation of the other photographing apparatuses, making the free-view system very flexible for live-broadcast scenarios.
It is to be understood that, in order to implement the functions in the above-described embodiments, the photographing apparatus includes a hardware structure and/or a software module corresponding to each function. Those of skill in the art will readily appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed in hardware or computer software driven hardware depends on the specific application scenario and design constraints of the solution.
Fig. 9 is a schematic structural diagram of a possible detection apparatus provided in an embodiment of the present application. The detection apparatus can be used to realize the functions of the photographing apparatus in the method embodiments, and can therefore achieve the beneficial effects of those embodiments. In the embodiment of the present application, the detection apparatus may be any one of the photographing devices 1_1 to 1_N in the free-view system shown in fig. 1, or a module (e.g., a chip) applied to a photographing device. The photographing devices 1_1 to 1_N synchronously photograph the first scene. Each photographing device contains a camera and an IMU. The camera is used for capturing images, and the IMU is used for acquiring IMU information.
As shown in FIG. 9, the detection apparatus 900 includes an image acquisition module 910, a detection module 920, a recalibration module 930, and a communication module 940. The detecting apparatus 900 may be applied to a photographing device as shown in fig. 1.
The camera is used for acquiring a first image at a first time when the IMU information of the first photographing apparatus (such as the photographing device 1_1) changes.
An image obtaining module 910, configured to obtain M images obtained by M shooting devices at the first time, where the M shooting devices are devices other than the first shooting device in the free-viewing angle system, IMU information of the M shooting devices is unchanged, N is an integer greater than or equal to 3, M is an integer greater than or equal to 1, and M < N. The image obtaining module 910 is configured to perform step 330.
A detecting module 920, configured to determine an offset according to the first image and the M images, where the offset is used to indicate a degree to which a change external parameter of the first shooting device shifts from an initial external parameter when the first shooting device obtains the first image at the first time. The detecting module 920 is configured to perform step 340.
A recalibration module 930, configured to determine a recalibration external parameter of the first shooting device according to the offset and the initial camera external parameter, where the recalibration external parameter is used to perform post-processing on an image acquired by the first shooting device for generating a free-view video of the first scene. The recalibration module 930 is configured to perform step 370.
Optionally, the recalibration module 930 is configured to determine, according to the background point cloud of the first scene, projection points in M images acquired by the M shooting devices at the second time; and determining a re-marking external parameter of the first shooting device according to the feature point of the second image of the second moment acquired by the first shooting device and the projection point, wherein the re-marking external parameter is used for post-processing the image acquired by the first shooting device for generating the free-view video of the first scene. The recalibration module 930 is configured to perform step 390 and step 3100.
The communication module 940 is configured to send the re-labeled external parameters to the cloud device 130. The communication module 940 is configured to perform step 380.
The storage module 950 is used to store images and applications needed to perform iterative training.
More detailed descriptions about the image obtaining module 910, the detecting module 920, the recalibrating module 930, and the communication module 940 can be directly obtained by referring to the related descriptions in the method embodiments shown in fig. 3 or fig. 7, which are not repeated herein.
As shown in fig. 10, the apparatus 1000 for generating a freeview video includes a post-processing module 1001, a generating module 1002, and a communicating module 1003.
A post-processing module 1001, configured to perform post-processing on the images acquired by the N shooting devices according to the camera parameters of the N shooting devices to obtain a first free-view video, where the camera parameters include initial camera internal parameters and initial camera external parameters;
the post-processing module 1001 is further configured to, if the computing device obtains the re-calibration external parameter of the first shooting device, perform post-processing on the image obtained by the first shooting device according to the re-calibration external parameter of the first shooting device and the initial camera internal parameter to obtain a second free-view video, where the re-calibration external parameter is an updated external parameter of the initial camera external parameter. The post-processing module 1001 is configured to perform steps 810 and 820.
A generating module 1002, configured to combine the first free view video and the second free view video to obtain a free view video of the first scene. The generating module 1002 is configured to perform step 830.
Optionally, the apparatus 1000 for generating freeview video may further include a retargeting module 1004. A recalibration module 1004, configured to obtain a recalibration external parameter of the first shooting device, where the recalibration external parameter is used to perform post-processing on an image obtained by the first shooting device for generating a free-view video of the first scene.
A communication module 1003, configured to receive prompt information sent by the first shooting device, where the prompt information is used to indicate that the posture of the first shooting device has shifted.
The communication module 1003 is further configured to receive the external reference of the re-calibration sent by the first shooting device.
The storage module 1006 is used for storing the free-view video and the application programs needed for performing the iterative training.
The more detailed description about the post-processing module 1001, the generating module 1002, and the communicating module 1003 can be directly obtained by directly referring to the related description in the embodiment of the method shown in fig. 3 or fig. 7, which is not repeated herein.
Fig. 11 is a schematic structural diagram of a shooting device 1100 provided in this embodiment. As shown, the photographing apparatus 1100 includes a processor 1110, a bus 1120, a memory 1130, a communication interface 1140, and a camera 1150.
It should be appreciated that in the present embodiment, the processor 1110 may be a CPU, and the processor 1110 may also be another general-purpose processor, a Digital Signal Processor (DSP), an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The processor may also be a Graphics Processing Unit (GPU), a neural Network Processing Unit (NPU), a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of programs according to the present disclosure.
The communication interface 1140 is used to enable the photographing apparatus 1100 to communicate with an external apparatus or device. In this embodiment, when the photographing apparatus 1100 is used to realize the functions of the photographing apparatus shown in fig. 3 or 7, the camera 1150 is used to acquire an image, and the communication interface 1140 is used to transmit the image, the external reference for re-labeling, and the indication information.
Bus 1120 may include a path for communicating information between the above components, such as the processor 1110 and the memory 1130. In addition to a data bus, the bus 1120 may include a power bus, a control bus, a status signal bus, and the like. However, for clarity of illustration, the various buses are labeled in the figure as bus 1120.
As an example, the shooting device 1100 may include multiple processors. The processor may be a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or computing units for processing data (e.g., computer program instructions).
It should be noted that fig. 11 takes the shooting device 1100 including one processor 1110 and one memory 1130 as an example. Here, the processor 1110 and the memory 1130 each indicate a type of device or apparatus, and in a specific embodiment, the number of each type of device or apparatus may be determined according to business requirements.
The memory 1130 may correspond to a storage medium, such as a magnetic disk (for example, a mechanical hard disk or a solid-state drive), for storing the information in the above method embodiments, such as computer instructions and images.
The shooting device 1100 may be a general-purpose device or a dedicated device. For example, the shooting device 1100 may be a mobile phone terminal, a tablet computer, a notebook computer, a VR device, an AR device, an MR device, or an ER device, a vehicle-mounted shooting device, or the like, and may also be an edge device (e.g., a box carrying a chip with processing capability).
It should be understood that the shooting device 1100 according to this embodiment may correspond to the detection apparatus 900, and may correspond to the main body executing any one of the methods of fig. 3 or fig. 7. The above and other operations and/or functions of each module in the detection apparatus 900 implement the corresponding flows of the methods in fig. 3 or fig. 7, respectively, and are not repeated herein for brevity.
Since the modules in the apparatus 1000 for generating a free-view video provided by the present application can be deployed in a distributed manner on a plurality of computers in the same environment or in different environments, the present application also provides a data processing system as shown in fig. 12. The data processing system includes a plurality of computers 1200, and each computer 1200 includes a memory 1201, a processor 1202, a communication interface 1203, and a bus 1204. The memory 1201, the processor 1202, and the communication interface 1203 are communicatively connected to one another through the bus 1204.
The memory 1201 may be a read-only memory, a static storage device, a dynamic storage device, or a random access memory. The memory 1201 may store computer instructions; when the computer instructions stored in the memory 1201 are executed by the processor 1202, the processor 1202 and the communication interface 1203 are configured to perform part of the data processing method of the software system. The memory may also store data sets; for example, a part of the storage resources in the memory 1201 is divided into an area for storing the images and the programs that implement the function of generating a free-view video according to the embodiments of the present application.
The processor 1202 may be a general-purpose CPU, an application-specific integrated circuit (ASIC), a GPU, or any combination thereof. The processor 1202 may include one or more chips, and may include an AI accelerator such as an NPU.
The communication interface 1203 enables communication between the computer 1200 and other devices or communication networks using a transceiver module such as, but not limited to, a transceiver. For example, the images and the re-calibration external parameters may be acquired through the communication interface 1203, or the re-calibration external parameters may be fed back to the shooting device.
The bus 1204 may include pathways for communicating information between various components of the computer 1200 (e.g., memory 1201, processor 1202, communication interface 1203).
A communication path is established between the computers 1200 through a communication network. Any one or more of the post-processing module 1001 and the generating module 1002 runs on each computer 1200. Any computer 1200 may be a computer (e.g., a server) in a cloud data center, a computer in an edge data center, or a terminal computing device.
The functions of the cloud device 130 may be deployed on each computer 1200. For example, the GPU is used to implement the functionality of the cloud device 130.
The method steps in this embodiment may be implemented by hardware, or may be implemented by software instructions executed by a processor. The software instructions may be composed of corresponding software modules, which may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a terminal device. Of course, the processor and the storage medium may also reside as discrete components in a network device or a terminal device.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be implemented wholly or partially in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network appliance, a user device, or another programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; an optical medium, such as a digital video disc (DVD); or a semiconductor medium, such as a solid-state drive (SSD).

While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (31)

1. A method for detecting a free-view system, wherein the free-view system comprises N shooting devices, the N shooting devices synchronously shoot a first scene, the N shooting devices comprise a first shooting device, the first shooting device comprises a camera and an inertial measurement unit (IMU), and the method is performed by the first shooting device, the method comprising:
when IMU information of the first shooting device changes, acquiring a first image at a first moment through the camera;
acquiring M images acquired by M shooting devices at the first moment, wherein the M shooting devices are devices except the first shooting device in the free-view system, IMU information of the M shooting devices is unchanged, N is an integer greater than or equal to 3, M is an integer greater than or equal to 1, and M < N;
determining an offset according to the first image and the M images, wherein the offset is used for indicating a degree to which a changed external parameter is offset relative to an initial external parameter when the first shooting device acquires the first image at the first moment.
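The flow of claim 1 can be sketched as follows; this is a minimal illustration only, and every helper name and the stubbed data are hypothetical, not part of the claim:

```python
import numpy as np

def detect_offset(imu_changed, capture_first_image, fetch_peer_images, compute_offset):
    """Sketch of the claim-1 flow: only when the IMU information changes is a
    first image captured and compared against the M synchronized peer images."""
    if not imu_changed:
        return None                              # attitude unchanged: nothing to do
    first_image = capture_first_image()          # first image at the first moment
    peer_images = fetch_peer_images()            # M images from unshifted devices
    return compute_offset(first_image, peer_images)

# Stub usage with dummy data:
offset = detect_offset(
    imu_changed=True,
    capture_first_image=lambda: np.zeros((2, 2)),
    fetch_peer_images=lambda: [np.zeros((2, 2))] * 3,
    compute_offset=lambda img, peers: 0.0,
)
```

The point of the gating on `imu_changed` is that the (comparatively expensive) image comparison runs only after the IMU preliminarily signals an attitude change.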
2. The method of claim 1, wherein the M shooting devices comprise a device adjacent to the first shooting device.
3. The method of claim 1, wherein the M shooting devices comprise shooting devices within a preset position range in the free-view system.
4. The method of any of claims 1-3, wherein the determining an offset from the first image and the M images comprises:
determining a re-projection error according to the first image and the M images, wherein the re-projection error is used for representing a coordinate error of feature points in a detection area in the first image and the M images;
and determining the offset according to the reprojection error.
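The re-projection error of claim 4 can be illustrated with a short NumPy sketch. This is a toy model under the assumption of an ideal pinhole camera; the cross-image matching of feature points is stubbed out as known 2D-3D correspondences, which the claim itself does not prescribe:

```python
import numpy as np

def project(K, R, t, pts3d):
    """Pinhole projection of Nx3 world points under extrinsics (R, t)."""
    cam = (R @ pts3d.T + t.reshape(3, 1)).T          # world -> camera frame
    return cam[:, :2] / cam[:, 2:3] * np.diag(K)[:2] + K[:2, 2]

def reprojection_offset(K, R, t, pts3d, observed_uv):
    """Mean pixel distance between reprojected and detected feature points;
    this scalar plays the role of the offset in the claim."""
    err = project(K, R, t, pts3d) - observed_uv
    return float(np.mean(np.linalg.norm(err, axis=1)))
```

With the stored initial extrinsics the offset is near zero; once the device's attitude shifts, the reprojected points drift away from the detected features and the offset grows with the shift.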
5. The method according to any of claims 1-4, wherein after said determining an offset from said first image and said M images, said method further comprises:
and sending prompt information to the computing equipment, wherein the prompt information is used for indicating that the posture of the first shooting equipment is deviated.
6. The method according to any of claims 1-5, wherein after determining an offset from the first image and the M images, the method further comprises:
if the offset is smaller than a preset offset, determining a re-calibration external parameter of the first shooting device according to the offset and the initial camera external parameter, wherein the re-calibration external parameter is used for post-processing an image acquired by the first shooting device for generating the free-view video of the first scene.
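One plausible realization of claim 6 (an assumption; the claim does not fix the parameterization) treats the measured offset as a small corrective pose and composes it with the initial camera external parameters:

```python
import numpy as np

def axis_angle_to_R(axis, angle):
    """Rodrigues' formula: rotation matrix from a unit axis and an angle."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def recalibrate(R_init, t_init, R_off, t_off):
    """Re-calibration external parameters: the offset pose composed with the
    initial extrinsics (left-multiplication convention assumed)."""
    return R_off @ R_init, R_off @ t_init + t_off
```

The left-multiplication convention (offset applied in the camera frame) is a design choice here; with a right-handed world-frame convention the composition order would flip.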
7. The method of any of claims 1-5, wherein after determining the offset from the first image and the M images, the method further comprises:
if the offset is smaller than a preset offset, determining projection points in M images acquired by the M shooting devices at a second moment according to the background point cloud of the first scene;
and determining a re-calibration external parameter of the first shooting device according to the feature points of a second image at the second moment acquired by the first shooting device and the projection points, wherein the re-calibration external parameter is used for post-processing the image acquired by the first shooting device for generating the free-view video of the first scene.
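The second step of claim 7 (recovering the shifted device's pose from background point-cloud points and their matched image feature points) is classically a Perspective-n-Point problem. Below is a textbook Direct Linear Transform sketch; it assumes known intrinsics and noise-free correspondences, whereas a production system would likely use a robust PnP solver:

```python
import numpy as np

def dlt_pose(K, pts3d, uv):
    """Direct Linear Transform: recover extrinsics (R, t) from n >= 6
    2D-3D correspondences, then project onto a proper rotation matrix."""
    ones = np.ones((len(uv), 1))
    xn = np.linalg.solve(K, np.hstack([uv, ones]).T).T[:, :2]  # normalized coords
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, xn):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)                 # solution up to scale and sign
    if np.linalg.det(P[:, :3]) < 0:
        P = -P                               # enforce det(R) > 0
    U, S, Vt3 = np.linalg.svd(P[:, :3])
    return U @ Vt3, P[:, 3] / S.mean()       # nearest rotation, de-scaled t
```

Because the background point cloud is static, the unshifted peers supply stable 3D anchors, and only the shifted device's pose needs to be re-solved.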
8. The method of any of claims 1-5, wherein after determining the offset from the first image and the M images, the method further comprises:
and if the offset is greater than or equal to the preset offset, adjusting, by a gimbal on which the first shooting device is mounted, the posture of the first shooting device.
9. The method according to claim 8, wherein after the gimbal on which the first shooting device is mounted adjusts the posture of the first shooting device, the method further comprises:
determining projection points in M images acquired by the M shooting devices at a second moment according to the background point cloud of the first scene;
and determining a re-calibration external parameter of the first shooting device according to the feature points of a second image at the second moment acquired by the first shooting device and the projection points, wherein the re-calibration external parameter is used for post-processing the image acquired by the first shooting device for generating the free-view video of the first scene.
10. The method of claim 6, 7 or 9, further comprising:
sending the re-calibration external parameters to a computing device.
11. A method for generating a free-view video, wherein a free-view system comprises N shooting devices, the N shooting devices synchronously shoot a first scene, the N shooting devices comprise a first shooting device, and the method is performed by a computing device, the method comprising:
performing post-processing on the images acquired by the N shooting devices according to the camera parameters of the N shooting devices to obtain a first free view video, wherein the camera parameters comprise initial camera internal parameters and initial camera external parameters;
if the computing device obtains the re-calibration external parameters of the first shooting device, performing post-processing on the image obtained by the first shooting device according to the re-calibration external parameters of the first shooting device and the initial camera internal parameters to obtain a second free-view video, wherein the re-calibration external parameters are the updated external parameters of the initial camera external parameters;
and combining the first free-view video and the second free-view video to obtain the free-view video of the first scene.
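How the two partial videos are spliced is not pinned down by claim 11; one plausible reading merges them per timestamp, letting the re-processed frames replace the originals (the frame containers and timestamp keys below are hypothetical):

```python
def combine_freeview(first_video, second_video):
    """Merge two partial free-view videos keyed by timestamp; frames
    post-processed with the re-calibration external parameters win."""
    merged = dict(first_video)      # {timestamp: frame}
    merged.update(second_video)     # re-processed frames replace originals
    return [merged[ts] for ts in sorted(merged)]
```

For example, if the second video covers only the interval after the offset was detected, the merged result uses the first video up to that moment and the re-calibrated frames thereafter.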
12. The method of claim 11, wherein prior to post-processing the image acquired by the first capture device according to the re-calibration extrinsic parameters and the initial camera intrinsic parameters of the first capture device, the method further comprises:
receiving prompt information sent by the first shooting device, wherein the prompt information is used for indicating that the posture of the first shooting device deviates;
and acquiring a re-calibration external parameter of the first shooting device, wherein the re-calibration external parameter is used for post-processing an image acquired by the first shooting device for generating the free-view video of the first scene.
13. The method of claim 12, wherein obtaining the re-calibration extrinsic parameters of the first capture device comprises:
receiving the re-calibration external parameters sent by the first shooting device;
or determining projection points in M images acquired by M shooting devices according to the background point cloud of the first scene, wherein the M shooting devices are devices except the first shooting device in the free-view system, IMU information of the M shooting devices is unchanged, N is an integer greater than or equal to 3, M is an integer greater than or equal to 1, and M < N;
and determining the re-calibration external parameters of the first shooting device according to the feature points of a second image acquired by the first shooting device and the projection points.
14. The method of claim 12 or 13, wherein prior to said obtaining the re-calibration extrinsic parameters of the first capture device, the method further comprises:
and suspending post-processing of the image sent by the first shooting device.
15. A detection apparatus, wherein the detection apparatus is applied to a shooting device in a free-view system, the free-view system includes N shooting devices, the N shooting devices shoot a first scene synchronously, the N shooting devices include a first shooting device, the first shooting device includes a camera and an Inertial Measurement Unit (IMU), the detection apparatus includes:
when the IMU information of the first shooting device changes, the camera is used for acquiring a first image at a first moment;
an image obtaining module, configured to obtain M images obtained by M shooting devices at the first moment, wherein the M shooting devices are devices other than the first shooting device in the free-view system, IMU information of the M shooting devices is unchanged, N is an integer greater than or equal to 3, M is an integer greater than or equal to 1, and M < N;
and a detection module, configured to determine an offset according to the first image and the M images, wherein the offset is used for indicating a degree to which a changed external parameter is offset relative to an initial external parameter when the first shooting device acquires the first image at the first moment.
16. The apparatus of claim 15, wherein the M shooting devices comprise a device adjacent to the first shooting device.
17. The apparatus of claim 15, wherein the M shooting devices comprise shooting devices within a preset position range in the free-view system.
18. The apparatus according to any one of claims 15-17, wherein the detection module, when determining the offset amount from the first image and the M images, is specifically configured to:
determining a reprojection error according to the first image and the M images, wherein the reprojection error is used for representing coordinate errors of feature points in the detection areas in the first image and the M images;
and determining the offset according to the reprojection error.
19. The apparatus according to any one of claims 15-18, wherein the apparatus further comprises a communication module;
the communication module is used for sending prompt information to the computing device, wherein the prompt information is used for indicating that the posture of the first shooting device deviates.
20. The apparatus of any one of claims 15-19, further comprising a recalibration module;
if the offset is smaller than a preset offset, the recalibration module is configured to determine a recalibration external parameter of the first shooting device according to the offset and the initial camera external parameter, where the recalibration external parameter is used to perform post-processing on an image acquired by the first shooting device for generating the free-view video of the first scene.
21. The apparatus of any one of claims 15-19, further comprising a recalibration module;
if the offset is smaller than a preset offset, the recalibration module is used for determining projection points in the M images acquired by the M shooting devices at the second moment according to the background point cloud of the first scene;
and determining a re-calibration external parameter of the first shooting device according to the feature points of the second image at the second moment acquired by the first shooting device and the projection points, wherein the re-calibration external parameter is used for post-processing the image acquired by the first shooting device for generating the free-view video of the first scene.
22. The apparatus of any one of claims 15-19, further comprising a triggering module;
if the offset is greater than or equal to a preset offset, the triggering module is configured to trigger the gimbal on which the first shooting device is mounted to adjust the posture of the first shooting device.
23. The apparatus of claim 22, further comprising a recalibration module;
the recalibration module is used for determining projection points in M images acquired by the M shooting devices at a second moment according to the background point cloud of the first scene; and determining a relabeling external parameter of the first shooting device according to the feature point of the second image at the second moment acquired by the first shooting device and the projection point, wherein the relabeling external parameter is used for post-processing the image acquired by the first shooting device for generating the free-view video of the first scene.
24. The apparatus of claim 20, 21 or 23, further comprising a communication module;
the communication module is used for sending the re-marked external parameters to the computing equipment.
25. An apparatus for generating a free-view video, wherein the apparatus for generating a free-view video is applied to a computing device in a free-view system, the free-view system includes N shooting devices, the N shooting devices synchronously shoot a first scene, the N shooting devices include a first shooting device, and the apparatus includes:
the post-processing module is used for performing post-processing on the images acquired by the N shooting devices according to the camera parameters of the N shooting devices to obtain a first free view video, wherein the camera parameters comprise initial camera internal parameters and initial camera external parameters;
the post-processing module is further configured to, if the computing device obtains the re-calibration external parameter of the first shooting device, perform post-processing on the image obtained by the first shooting device according to the re-calibration external parameter of the first shooting device and the initial camera internal parameter to obtain a second free-view video, where the re-calibration external parameter is an updated external parameter of the initial camera external parameter;
and the generating module is used for combining the first free view video and the second free view video to obtain the free view video of the first scene.
26. The apparatus of claim 25, further comprising a communication module and a recalibration module;
the communication module is configured to receive prompt information sent by the first shooting device, where the prompt information is used to indicate that the posture of the first shooting device is shifted;
the recalibration module is used for acquiring recalibration external parameters of the first shooting device, and the recalibration external parameters are used for performing post-processing on images acquired by the first shooting device for generating the free-view video of the first scene.
27. The apparatus according to claim 26, wherein the recalibration module, when acquiring the recalibration external reference of the first photographing device, is specifically configured to:
receiving the re-calibration external parameters sent by the first shooting device;
or determining projection points in M images acquired by M shooting devices according to the background point cloud of the first scene, wherein the M shooting devices are devices except the first shooting device in the free-view system, IMU information of the M shooting devices is unchanged, N is an integer greater than or equal to 3, M is an integer greater than or equal to 1, and M < N;
and determining the re-calibration external parameters of the first shooting device according to the feature points of a second image acquired by the first shooting device and the projection points.
28. The apparatus according to claim 26 or 27, wherein the post-processing module is further configured to suspend post-processing of the image sent by the first shooting device.
29. A shooting device, comprising: at least one processor, a memory, a camera, and an inertial measurement unit (IMU), wherein the camera is configured to acquire an image, the IMU is configured to obtain IMU information of the shooting device, the memory is configured to store computer programs and instructions, and the processor is configured to invoke the computer programs and instructions to cooperate with the camera and the IMU to perform the method of any one of claims 1-10.
30. A data processing system, comprising: at least one processor and a memory, wherein the memory is configured to store a computer program and instructions, and the processor is configured to invoke the computer program and instructions to perform the method of any one of claims 11-14.
31. A free-view system, wherein the free-view system comprises N shooting devices that synchronously shoot a first scene, the N shooting devices comprise a first shooting device, the first shooting device comprises a camera and an inertial measurement unit (IMU), and when IMU information of the first shooting device changes, the first shooting device performs the method of any one of claims 1-10.
CN202111117189.9A 2021-09-23 2021-09-23 Detection method, device, equipment and system of free visual angle system Pending CN115861430A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111117189.9A CN115861430A (en) 2021-09-23 2021-09-23 Detection method, device, equipment and system of free visual angle system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111117189.9A CN115861430A (en) 2021-09-23 2021-09-23 Detection method, device, equipment and system of free visual angle system

Publications (1)

Publication Number Publication Date
CN115861430A true CN115861430A (en) 2023-03-28

Family

ID=85652355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111117189.9A Pending CN115861430A (en) 2021-09-23 2021-09-23 Detection method, device, equipment and system of free visual angle system

Country Status (1)

Country Link
CN (1) CN115861430A (en)

Similar Documents

Publication Publication Date Title
CN107646126B (en) Camera pose estimation for mobile devices
US10404915B1 (en) Method and system for panoramic video image stabilization
CN110022444B (en) Panoramic photographing method for unmanned aerial vehicle and unmanned aerial vehicle using panoramic photographing method
CN110249626B (en) Method and device for realizing augmented reality image, terminal equipment and storage medium
EP3296952B1 (en) Method and device for blurring a virtual object in a video
US20170374256A1 (en) Method and apparatus for rolling shutter compensation
US11272153B2 (en) Information processing apparatus, method for controlling the same, and recording medium
TWI700000B (en) Image stabilization method and apparatus for panoramic video, and method for evaluating image stabilization algorithm
JP2017017689A (en) Imaging system and program of entire-celestial-sphere moving image
CN112565725B (en) Projection picture anti-shake method and device, projection equipment and storage medium
JP2018207252A (en) Image processing system, control method for image processing system, and program
CN108153417B (en) Picture compensation method and head-mounted display device adopting same
KR101349347B1 (en) System for generating a frontal-view image for augmented reality based on the gyroscope of smart phone and Method therefor
CN111712857A (en) Image processing method, device, holder and storage medium
JP5321417B2 (en) Perspective transformation parameter generation device, image correction device, perspective transformation parameter generation method, image correction method, and program
JP6563300B2 (en) Free viewpoint image data generating apparatus and free viewpoint image data reproducing apparatus
JP2019121945A (en) Imaging apparatus, control method of the same, and program
CN111955005B (en) Method and system for processing 360-degree image content
US9807302B1 (en) Offset rolling shutter camera model, and applications thereof
CN115861430A (en) Detection method, device, equipment and system of free visual angle system
CN111416943B (en) Camera anti-shake method, camera anti-shake apparatus, aerial survey camera, and computer-readable storage medium
US10891805B2 (en) 3D model establishing device and calibration method applying to the same
US11856298B2 (en) Image processing method, image processing device, image processing system, and program
CN113763544A (en) Image determination method, image determination device, electronic equipment and computer-readable storage medium
CN114371819A (en) Augmented reality screen system and augmented reality screen display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination