CN114500849B - Multi-view surrounding shooting correction method and system - Google Patents

Multi-view surrounding shooting correction method and system

Info

Publication number
CN114500849B
Authority
CN
China
Prior art keywords
coordinate
midpoint
coordinates
picture
correction
Prior art date
Legal status
Active
Application number
CN202210158375.5A
Other languages
Chinese (zh)
Other versions
CN114500849A (en)
Inventor
杨君蔚
谈新
李瑞民
Current Assignee
Shanghai Media Tech Co ltd
Original Assignee
Shanghai Media Tech Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Media Tech Co ltd
Priority to CN202210158375.5A
Publication of CN114500849A
Application granted
Publication of CN114500849B

Classifications

    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

The invention relates to the technical field of multi-view surrounding shooting correction, and in particular to a multi-view surrounding shooting correction method and system suitable for correcting the pictures of N cameras, specifically comprising the following steps: step S1, taking one of the N cameras as a reference camera, marking a reference point in a reference picture shot by the reference camera, and shooting target pictures containing the reference point with the remaining N-1 cameras; step S2, respectively determining the reference coordinates of the reference point in the reference picture and its coordinates in each target picture; and step S3, calculating correction parameters of each target picture relative to the reference picture according to the coordinates and the reference coordinates, and correcting the target picture with the correction parameters. By constructing a virtual correction rod, the invention solves the problem that the placement of a correction rod in a real scene is limited.

Description

Multi-view surrounding shooting correction method and system
Technical Field
The invention relates to the technical field of live broadcast picture processing, in particular to a multi-view surrounding shooting correction method and system.
Background
A multi-view surrounding shooting method and system of the prior art (application No. 202110600717.X) discloses a picture correction method for multiple cameras. In that prior art, multi-camera correction adopts a single-rod method: before shooting, a correction rod is placed in the middle of the shot scene, the camera positions are aimed at the correction rod through preliminary focusing, and each camera then shoots a picture. However, the correction method described in the prior art cannot handle certain situations that arise during actual live broadcasting. For example, the shooting venue may be restricted so that no correction rod can be placed at all; or, even where a rod is allowed on site, the time it may stay there is too short for the shooting process, so that correction of all camera positions cannot be completed within the limited time; or, after correction is completed, a camera position is moved during shooting, the original correction parameters can no longer be used, and the correction rod would have to be placed again. In all these cases the prior-art method relies on the correction rod to correct the camera pictures, and its implementation in the field is constrained by these conditions.
In addition, in the multi-view surrounding shooting system described in the prior art, the pictures shot by each camera position must undergo spatial consistency correction to ensure that the generated animation is smooth. The correction system there only records the video signals and extracts frames at given time points; correction of the camera pictures and rendering of the video animation are completed by the rendering and broadcasting software. In practical application this structure cannot be split up to meet the time requirement: a 360-degree surround clip of 300 frames needs to be output in about 10 seconds, while in actual use rendering a 300-frame video takes about 30 seconds, and the rendering time increases linearly with the length of the video, so the requirement cannot be met.
Disclosure of Invention
The invention aims to provide a multi-view surrounding shooting correction method and system that solve the above technical problems; the specific scheme is as follows:
a multi-view surrounding shooting correction method is suitable for correcting pictures of N cameras, and specifically comprises the following steps:
s1, taking one camera of the N cameras as a reference camera, marking a reference point in a reference picture shot by the reference camera, and shooting target pictures containing the reference point by the cameras of the rest N-1 cameras;
step S2, respectively determining the reference coordinates of the reference point in the reference picture and the coordinates in each target picture;
step S3, calculating correction parameters of each target picture relative to the reference picture according to the coordinates and the reference coordinates;
and S4, correcting the target picture by using the correction parameters.
Preferably, the step S3 includes the following steps:
step S30, determining a first reference midpoint coordinate, a second reference midpoint coordinate and a reference anchor point coordinate according to the reference coordinates of the reference points in the reference picture, wherein the first reference midpoint coordinate is the midpoint between the first reference point and the third reference point in the reference picture, the second reference midpoint coordinate is the midpoint between the second reference point and the fourth reference point in the reference picture, and the reference anchor point coordinate is the midpoint between the first reference midpoint coordinate and the second reference midpoint coordinate;
step S31, determining a first midpoint coordinate, a second midpoint coordinate and an anchor point coordinate according to the coordinates of the reference point in the target picture, wherein the first midpoint coordinate is the midpoint of the first reference point and the third reference point in the target picture, the second midpoint coordinate is the midpoint of the second reference point and the fourth reference point in the target picture, and the anchor point coordinate is the midpoint of the first midpoint coordinate and the second midpoint coordinate;
and S32, determining a rotation angle according to the first midpoint coordinate, the second midpoint coordinate and the anchor point coordinate.
Preferably, the specific steps of the step S32 are:
step S321, determining whether the lateral coordinate of the first midpoint coordinate is equal to the lateral coordinate of the second midpoint coordinate or the lateral coordinate of the anchor point coordinate, if yes, the rotation angle is 0, and if no, executing step S322;
step S322, the rotation angle is calculated by the following formula:
θ = arctan((X₆ - X₅) / (Y₆ - Y₅))
or alternatively
θ = arctan((X₇ - X₅) / (Y₇ - Y₅))
where θ is the rotation angle, X₅ and Y₅ are the transverse and longitudinal coordinates of the first midpoint coordinate, X₆ and Y₆ are those of the second midpoint coordinate, and X₇ and Y₇ are those of the anchor point coordinate.
Preferably, the step S32 further includes:
step S33, in the reference picture, the length of a line segment of a connecting line between the first reference midpoint coordinate and the second reference midpoint coordinate is taken as a standard length;
and step S34, determining a scaling ratio according to the first midpoint coordinate, the second midpoint coordinate and the standard length.
Preferably, the step S34 specifically includes:
step S341, calculating the midpoint line segment length of each target picture according to the following formula:
len = √((X₆ - X₅)² + (Y₆ - Y₅)²)
where len is the midpoint segment length;
step S342, calculating the scaling of each target picture by the following formula:
s = len / l
where s is the scaling and l is the standard length.
Preferably, the step S4 specifically includes:
step S40, translating the anchor point coordinates of each picture to the coordinate origin, and rotating and/or scaling the target picture around the translated anchor point coordinates;
step S41, judging whether the rotation angle is equal to 0, and if so, not rotating; otherwise, executing step S42;
step S42, judging whether the rotation angle is smaller than 0, if so, rotating the target picture clockwise around the anchor point coordinates according to the rotation angle; otherwise, step S43 is performed;
and S43, rotating the target picture anticlockwise around the anchor point coordinates according to the rotation angle.
Preferably, the step S43 further includes:
step S44, judging whether the scaling ratio is equal to 1, if so, not scaling; otherwise, executing step S45;
step S45, judging whether the scaling is smaller than 1, if so, shrinking the target picture around the anchor point coordinates according to the scaling; otherwise, executing step S46;
and step S46, amplifying the target picture around the anchor point coordinates according to the scaling ratio.
Preferably, the step S46 further includes:
step S47, intercepting the minimum effective area of the target picture around the anchor point coordinates, and scaling the effective area to the resolution of the standard size;
and step S48, placing the anchor point coordinates at the center point of the picture, and uploading the result to an external rendering and broadcasting server.
Preferably, the reference point includes:
a first reference point located in the upper left corner, a second reference point located in the lower left corner, a third reference point located in the upper right corner, and a fourth reference point located in the lower right corner.
A multi-view surrounding shooting correction system, applying the above multi-view surrounding shooting correction method, comprises:
the recording module comprises all cameras;
the input end of the frame extraction module is connected with the output end of the recording module;
the input end of the correction module is connected with the output end of the frame extraction module, and a GPU processor and a filter are arranged in the correction module;
the input end of the transmission module is connected with the output end of the correction module, and the output end of the transmission module is connected with an external rendering and broadcasting server.
The invention has the beneficial effects that: by adopting the above technical scheme and constructing a virtual correction rod, the problem that the placement of a correction rod in a real scene is limited is solved.
Drawings
FIG. 1 is a schematic diagram of a mark location in an embodiment of the present invention;
FIG. 2 is a schematic view of a virtual correction bar according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating steps of a correction method for multi-view surrounding photographing according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the step S3 in the embodiment of the present invention;
FIG. 5 is a schematic diagram showing the steps of step S32 according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating the step S34 in the embodiment of the present invention;
FIG. 7 is a schematic diagram showing the steps of step S4 according to an embodiment of the present invention;
FIG. 8 is a flow chart of correction in an embodiment of the present invention;
FIG. 9 is a schematic diagram of an affine transformation matrix according to an embodiment of the invention;
fig. 10 is a schematic diagram of a module composition of a multi-view surround shooting correction system according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
The multi-view surrounding shooting correction method is suitable for correcting the pictures of N cameras and, as shown in fig. 3, specifically comprises the following steps:
step S1, taking one of the N cameras as a reference camera, marking a reference point in a reference picture shot by the reference camera, and shooting target pictures containing the reference point with the remaining N-1 cameras;
step S2, respectively determining the reference coordinates of the reference point in the reference picture and its coordinates in each target picture;
step S3, calculating correction parameters of each target picture relative to the reference picture according to the coordinates and the reference coordinates;
and step S4, correcting the target picture with the correction parameters.
Specifically, in this embodiment, after the multi-view surrounding camera array is built, each camera must go through a focusing process (i.e. aiming the lens at the object to be photographed) because of the site environment and the installation process. After this alignment, the pictures of the camera positions are still not spatially consistent, and a surround video generated from them shows strong picture shake and cannot actually be broadcast; step S1 therefore only completes the preliminary focusing action. As described in the background art, the conventional solution is to place a reference object in the real scene, for example a rod, ensure that the camera of every position can shoot the reference object and keep it at the center of the picture, make the level, size, displacement and brightness of the pictures shot by the positions roughly consistent to the naked eye, and then process the picture input by each position in real time through an algorithm to guarantee pixel-level consistency. In an actual live shooting scene, however, there are situations where a reference object cannot be placed at all, such as a sports event; and even where it can be placed, its position cannot be adjusted in time as the relative position between the shot picture and the reference object changes. Therefore, in step S3, the correction parameters needed for each target picture are calculated by constructing a virtual correction rod inside the shot image.
Specifically, the software provided in this embodiment calculates the zoom, rotation and translation parameters of the picture of each camera position relative to the picture of the first, reference, camera position, and can export a correction parameter file in the export format of the Adobe After Effects software.
In particular, the reference points include:
a first reference point located in the upper left corner, a second reference point located in the lower left corner, a third reference point located in the upper right corner, and a fourth reference point located in the lower right corner.
Further, these four reference points must appear simultaneously in the pictures of all camera positions. The four spots are marked on site with a clearly colored marker, for example tape in a highlighting color.
In a preferred embodiment, as shown in fig. 4, step S3 includes the steps of:
step S30, determining a first reference midpoint coordinate, a second reference midpoint coordinate and a reference anchor point coordinate according to the reference coordinates of the reference points in the reference picture, wherein the first reference midpoint coordinate is the midpoint between the first reference point and the third reference point in the reference picture, the second reference midpoint coordinate is the midpoint between the second reference point and the fourth reference point in the reference picture, and the reference anchor point coordinate is the midpoint between the first reference midpoint coordinate and the second reference midpoint coordinate;
step S31, determining a first midpoint coordinate, a second midpoint coordinate and an anchor point coordinate according to the coordinates of the reference point in the target picture, wherein the first midpoint coordinate is the midpoint of the first reference point and the third reference point, the second midpoint coordinate is the midpoint of the second reference point and the fourth reference point, and the anchor point coordinate is the midpoint of the first midpoint coordinate and the second midpoint coordinate;
and S32, determining the rotation angle according to the first midpoint coordinate, the second midpoint coordinate and the anchor point coordinate.
Specifically, as shown in fig. 1 and 2, the four reference points in a captured image are marked in image processing software, and their coordinates in the image are determined:
first reference point A(X₁, Y₁);
second reference point B(X₂, Y₂);
third reference point C(X₃, Y₃);
fourth reference point D(X₄, Y₄).
The first midpoint coordinate, the second midpoint coordinate and the anchor point coordinate are then determined from the coordinates of the reference points in the target picture; specifically, the first midpoint coordinate is E(X₅, Y₅) and the second midpoint coordinate is F(X₆, Y₆), where
X₅ = (X₁ + X₃)/2;
X₆ = (X₂ + X₄)/2;
Y₅ = (Y₁ + Y₃)/2;
Y₆ = (Y₂ + Y₄)/2.
Specifically, every camera position shoots one picture, and the four mark points of each position's picture are then clicked in the software provided in this embodiment to determine the coordinates of the mark points in the picture.
Further, A denotes the upper-left reference point, B the lower-left, C the upper-right and D the lower-right. Suppose reference points A and B are connected to form a left rod, with A and B as its upper and lower points, and reference points C and D are connected to form a right rod, with C and D as its upper and lower points. By experience, the cameras can be aimed approximately at the center point between the two rods (i.e. the G point). Since no correction rod stands in the middle in this embodiment, the algorithm takes the center line between rods AB and CD, namely EF, as a virtual correction rod; as long as the scaling, rotation and translation parameters between the EF virtual correction rods of the camera position pictures are calculated, multi-position correction can be realized, and the result is compatible with the rendering and broadcasting software provided in the prior art.
Meanwhile, after each camera shoots its picture, the pictures are gathered into a system that uses this algorithm, the coordinates of the four points A, B, C and D are marked manually on each picture, and the system calculates the coordinates of points E, F and G.
Specifically, the anchor point is the midpoint of the virtual correction rod, with coordinates G(X₇, Y₇).
Specifically, as shown in fig. 5, the specific steps of step S32 are:
step S321, judging whether the transverse coordinate of the first midpoint coordinate is equal to the transverse coordinate of the second midpoint coordinate or the transverse coordinate of the anchor point coordinate, if so, the rotation angle is 0;
step S322, if not, the rotation angle is calculated by the following formula:
θ = arctan((X₆ - X₅) / (Y₆ - Y₅))
or alternatively
θ = arctan((X₇ - X₅) / (Y₇ - Y₅))
where θ is the rotation angle, X₅ and Y₅ are the transverse and longitudinal coordinates of the first midpoint coordinate, X₆ and Y₆ are those of the second midpoint coordinate, and X₇ and Y₇ are those of the anchor point coordinate.
From the calculation results, the two formulas give identical results. It should be noted that the Adobe After Effects software used with the prior-art algorithm expresses the required rotation angle in the range -180° to 180°, whereas most programming languages return angles in the range -π to π, so a conversion of the angle representation is also required in order to be compatible with the original algorithm.
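A sketch of this angle computation and conversion, following steps S321/S322 (the helper name and sample coordinates are illustrative assumptions):

#include <math.h>
#include <stdio.h>

/* rotation angle of the virtual rod EF, converted to After-Effects-style degrees */
static double rod_angle_deg(double x5, double y5, double x6, double y6) {
    if (x5 == x6)                                /* rod already vertical: angle is 0 */
        return 0.0;
    double theta = atan((x6 - x5) / (y6 - y5));  /* radians, in (-pi/2, pi/2) */
    return theta * 180.0 / 3.14159265359;        /* degrees, as expected in the AE file */
}

int main(void) {
    printf("angle = %.3f deg\n", rod_angle_deg(1911.0, 237.0, 1901.5, 1836.0));
    return 0;
}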
Specifically, step S32 further includes:
step S33, in the reference picture, the length of a line segment connecting the first reference midpoint coordinate and the second reference midpoint coordinate is taken as a standard length;
step S34, determining the scaling according to the first midpoint coordinates, the second midpoint coordinates and the standard length.
In a preferred embodiment, as shown in fig. 6, step S34 specifically includes:
step S341, calculating the midpoint line segment length of each target picture according to the following formula:
len = √((X₆ - X₅)² + (Y₆ - Y₅)²)
where len is the midpoint segment length;
step S342, calculating the scaling of each target picture by the following formula:
s = len / l
where s is the scaling and l is the standard length.
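Continuing the sketch above, the rod length and scaling of steps S341/S342 can be written as follows (the function names are again illustrative assumptions):

#include <math.h>

/* length of the virtual correction rod EF (step S341) */
static double rod_length(double x5, double y5, double x6, double y6) {
    return sqrt((x6 - x5) * (x6 - x5) + (y6 - y5) * (y6 - y5));
}

/* scaling of a target picture against the reference rod length l (step S342);
   per steps S44-S46: s < 1 shrinks, s > 1 enlarges, s == 1 leaves the picture unscaled */
static double rod_scale(double len, double l) {
    return len / l;
}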
In a preferred embodiment, the target picture is corrected with image processing software whose main interface is divided into four regions: a file list region lies across the top of the interface; the lower part is divided into left and right, with the left being a preview area and the middle holding a result area, a step operation area and a debugging area; the right is again divided into an upper part carrying the various function buttons and a lower part used for previewing pictures.
Specifically, once running normally, the program reads all the images shot by the camera positions under the designated path and lists them in the file list region. Each row represents one picture file: column 1 of the list is a serial number, and the second column is the name of the read image file, which follows a 17-digit naming rule. Digits 1 to 5 are the stream number, digits 6 to 9 are the camera number (for example, "0012" indicates camera number 12), and digits 10 to 12 are the frame number; in general, the program of this embodiment analyses the 1st frame, so digits 10 to 12 are always 001. The 8 columns starting from column 3 are, in turn, the coordinate values of points A, B, C and D, which all default to 0 when the system is first run. The columns on the right side give, in turn, the coordinate values of points E, F and G, the included angle with the vertical line, and the length of the center line; these are intermediate transition values used only for troubleshooting, while the two values that follow, the clockwise rotation angle and the scaling, are used later. The rightmost columns, a rightward and downward translation and temporary test data, are used for temporary tests.
Specifically, in the program of this embodiment, the main interface includes buttons for the main functions. Clicking "exit system" makes the system fully save the parameter values and exit, while clicking "no exit" exits directly without saving. The "display picture" button displays a picture: when it is clicked, the system reads the selected position in the file list region and displays the corresponding picture. For example, if the picture file with serial number 12 is selected in the list, clicking "display picture" displays the picture 0012_012_001.y422 in the lower part of the interface.
Further, clicking the "calculate in batch with the first as the reference point" button calculates the values of all marked pictures, and clicking "generate AE file" saves the calculated values into an AE file.
For convenience of positioning, the image displayed on the main interface is enlarged to twice the original; because the original image is too large, only part of it is displayed, and the other parts are brought into view with the up, down, left and right buttons. The schematic area changes accordingly during the adjustment.
After the reference point is found in the main interface by adjusting the position, clicking the mouse in that area displays the coordinate values in the X and Y text boxes in the lower-left area of the interface. If fine-tuning is desired, it can be done with the "-" and "+" buttons.
Specifically, clicking "set to top left" fills the values in the "X" and "Y" text boxes into the "top left X" and "top left Y" columns of the currently selected row in the file list. Similarly, clicking "set to bottom left" fills them into the "bottom left X" and "bottom left Y" columns of the currently selected row, and the "set top right" and "set bottom right" buttons of the main interface work in the same way.
Clicking the "calculate" button of the main interface operates on the values of the currently selected row in the file list. Clicking the "set reference standard" button calculates the values of the currently selected row and takes that row as the reference standard: its rod length automatically becomes 100%, and the other lengths are referenced to it.
Specifically, the correction program provided by the image processing software mainly performs the following functions: reading the picture files shot by all camera positions under the designated path, displayable at any time; displaying an image so that the user can manually mark the reference points at the marks on the image; computing the anchor point, rotation angle and scaling according to the algorithm described above; and generating a file compatible with the Adobe After Effects software.
Specifically, as shown in fig. 7 and 8, step S4 specifically includes:
step S40, translating the anchor point coordinates of each picture to the coordinate origin, and rotating and/or scaling the target picture around the translated anchor point coordinates;
step S41, judging whether the rotation angle is equal to 0, if so, not rotating; otherwise, executing step S42;
step S42, judging whether the rotation angle is smaller than 0, if so, rotating the target picture clockwise around the anchor point coordinates according to the rotation angle; otherwise, step S43 is performed;
step S43, the target picture rotates anticlockwise around the anchor point coordinates according to the rotation angle.
In a preferred embodiment, step S43 further comprises:
step S44, judging whether the scaling ratio is equal to 1, if so, not scaling; otherwise, executing step S45;
step S45, judging whether the scaling ratio is smaller than 1, if so, reducing the target picture around the anchor point coordinates according to the scaling ratio; otherwise, executing step S46;
step S46, amplifying the target picture around the anchor point coordinates according to the scaling.
Specifically, the AE file expresses the deviation of each camera position relative to the world coordinate system. The anchor point position is the key: the center of each position's picture is moved around the anchor point, and rotation and scaling are likewise performed around it.
The AE file contains the following key contents: the anchor point, the rotation angle relative to the anchor point, and the scaling value relative to the anchor point.
Specifically, in the above steps the processing of these parameters is implemented through an affine matrix. From the known parameters, namely the anchor point coordinates (cx, cy), the rotation angle θ, the scaling percentage, the effective-picture (crop) ratio and the center point of the picture (hw, hh), the affine transformation matrix is derived as follows:
the affine transformation matrix is shown in fig. 9;
according to the matrix formula described above, the pseudo code processed in the transfer_npp filter of the present embodiment is as follows:
double aCoeffs[2][3];                                      /* affine matrix */
double angle = (0.0 - s->angle) * 3.14159265359 / 180.0;   /* rotation angle in radians */
double scale = s->scale / 100.0 * s->crop;                 /* zoom */
double alpha = cos(angle) * scale;
double beta = sin(angle) * scale;
double cx = iw / 2.0 + (s->cx - stage->planes_in[0].width / 2) * stage->planes_in[i].width / stage->planes_in[0].width;
double cy = ih / 2.0 + (s->cy - stage->planes_in[0].height / 2) * stage->planes_in[i].height / stage->planes_in[0].height;
double hw = iw / 2.0;
double hh = ih / 2.0;
aCoeffs[0][0] = alpha;
aCoeffs[0][1] = beta;
aCoeffs[0][2] = hw - alpha * cx - beta * cy;
aCoeffs[1][0] = -beta;
aCoeffs[1][1] = alpha;
aCoeffs[1][2] = hh + beta * cx - alpha * cy;
err = nppiWarpAffine_8u_C1R(in->data[i], (NppiSize){iw, ih}, in->linesize[i],
                            (NppiRect){0, 0, iw, ih}, out->data[i], out->linesize[i],
                            (NppiRect){0, 0, ow, oh}, aCoeffs, NPPI_INTER_NN);
Replacing the corresponding image processing function of the transpose_npp filter with the above code realizes correction of each camera position's picture with a single GPU filter, replaces the 4 CPU filters of the prior-art method, and greatly improves picture correction performance.
In a preferred embodiment, step S46 is followed by:
step S47, intercepting the minimum effective area of the target picture around the anchor point coordinates, and scaling the effective area to the resolution of the standard size;
and step S48, placing the anchor point coordinates at the center point of the picture, and uploading the result to an external rendering and broadcasting server.
A multi-view surrounding shooting correction system, applied to the method of any one of the above embodiments, as shown in fig. 10, includes:
a recording module 1 comprising all cameras;
a frame extraction module 2, the input end of which is connected with the output end of the recording module 1;
a correction module 3, the input end of which is connected with the output end of the frame extraction module 2, a GPU processor and a filter being arranged in the correction module 3;
and a transmission module 4, the input end of which is connected with the output end of the correction module 3, and the output end of which is connected with an external rendering and broadcasting server.
In a preferred embodiment, several correction systems record and correct the pictures shot by the cameras simultaneously. In the multi-view surrounding shooting process, video rendering is split, in a distributed-architecture manner, into two parts, picture correction and digital zoom; the picture correction part is split off into the correction systems, and the filter described above performs the picture correction.
Specifically, in the traditional multi-view surround shooting mode, every frame shot needs spatial consistency correction to ensure the smoothness of the generated animation. The correction system there only records the video signal and extracts frames at given time points, while correction of the camera pictures and rendering of the video animation are completed by the rendering and broadcasting software. In practical application this structure cannot be split up to meet the time requirement (a 360-degree clip of 300 frames needs to be output in about 10 seconds): in actual use, rendering a 300-frame video takes about 30 seconds, and the rendering time increases linearly with the length of the video.
This embodiment instead optimizes correction performance in two ways: first, the software architecture is changed into a distributed architecture that processes the video correction of multiple camera positions; second, the correction algorithm is processed on the GPU.
The specific treatment is as follows. The distributed-architecture improvement splits video rendering into two parts: correction and digital zoom. Video correction is tied to the camera, and the correction data are fixed once the camera position is fixed; correction requires scaling, rotating and cropping the picture and consumes a great deal of computing resources. Digital zoom varies with the parameters of the template, and the computation it requires is smaller than that of correction. On this basis, the method of this embodiment splits correction off into the correction systems and pairs it with GPU processing, which in theory can greatly reduce the computing pressure on the external rendering and broadcasting server. For the first and last camera positions, 1 to 2 seconds of continuous lead-in footage are needed, and this work can likewise be split across the correction systems, i.e. one camera can be recorded and corrected by several correction systems simultaneously. Testing so far shows that correction and rendering in a correction system basically reach 1:1 performance: one frame of picture is corrected within 27 ms using the GPU, while CPU correction completes within 111 ms. Compared with the time consumed by video frame extraction, transmission and disk writing after correction, the added correction time does not affect frame extraction performance much.
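As a quick arithmetic check of these figures against the 10-second requirement mentioned above:
300 frames × 27 ms/frame ≈ 8.1 s with GPU correction (within the roughly 10-second target);
300 frames × 111 ms/frame ≈ 33.3 s with CPU correction (consistent with the roughly 30 seconds previously observed).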
In the traditional method, the software correction algorithm uses the pad, rotate, crop and scale filters of the ffmpeg filter framework to apply the AE correction parameters (rotation, scaling and translation) to each camera position's picture in real time. The pictures are processed on the CPU, and in practical application this consumes considerable CPU resources and takes a long processing time.
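For illustration only, such a CPU filter chain might look as follows; the file names and parameter values are hypothetical assumptions, not figures from the patent:

ffmpeg -i cam0012.mp4 \
  -vf "rotate=-0.012:fillcolor=black,scale=iw*1.034:ih*1.034,crop=3840:2160:52:31,pad=3840:2160:0:0" \
  cam0012_corrected.mp4

Here rotate applies the correction angle (in radians), scale applies the zoom, crop re-frames the picture at an offset to realize the translation, and pad guarantees the standard output size.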
In this embodiment the correction algorithm is optimized while the ffmpeg filter framework is still used to maintain compatibility, but the filter that processes the picture is replaced with the transpose_npp filter, which transposes pictures in ffmpeg using NVIDIA CUDA hardware. The native ffmpeg transpose_npp filter only supports rotating a picture by 90, 180 or 270 degrees and does not support processing pictures according to AE correction parameters; this embodiment therefore modifies the transpose_npp filter and processes the picture correction data with the affine processing function nppiWarpAffine_8u_C1R provided by the NVIDIA NPP framework.
Specifically, the multiple cameras of the embodiment of the invention are preferably 4K cameras used for multi-position picture acquisition. The 4K camera selected for the system is a dedicated broadcast camera supporting 3840x2160-resolution 12G-SDI baseband signals and capturing video in 50p mode, and the system is compatible with input signals from high definition up to 8K. Because implementation issues such as keeping the horizon level arise during actual installation, the pictures actually captured by the cameras show certain differences in level, zoom, offset and the like once the camera positions are installed. These differences must be corrected and cropped during animation video rendering, so that the output picture is smooth and free of jitter after the common effective area of the multiple positions is extracted. Therefore, the higher the input resolution of the selected cameras, the higher the resolution of the processed output picture; the specific number of cameras placed depends on the scene actually required and the final picture display effect.
Specifically, the correction system can record the pictures shot by the cameras in real time and build a program material library with storage and database retrieval for use by post-production, automatic broadcasting and VOD systems. When an operator on site decides to capture the picture at a certain time point, one frame picture or a run of consecutive frame pictures can be requested through a capture instruction. The advantage of using cameras as the shooting front end is that continuous pictures can be acquired: for example, one second of continuous motion at one camera position can be taken and the pictures of the other positions then switched in, generating more varied video effects. The recorded video pictures are stored in the memory and hard disk of the correction system; the video kept in memory guarantees timeliness and provides the frame pictures needed for rendering within 1 second. Since memory is not unlimited, however, video pictures older than a certain time (depending on the memory size of the correction system) are stored on the hard disk after H.264 or H.265 encoding, and can also be used for post-production after the live broadcast.
The foregoing description is only illustrative of the preferred embodiments of the present invention and is not to be construed as limiting the scope of the invention, and it will be appreciated by those skilled in the art that equivalent substitutions and obvious variations may be made using the description and illustrations of the present invention, and are intended to be included within the scope of the present invention.

Claims (9)

1. A multi-view surrounding shooting correction method suitable for correcting the pictures of N cameras, characterized by comprising the following steps:
step S1, taking one of the N cameras as a reference camera, marking a reference point in a reference picture shot by the reference camera, and shooting target pictures containing the reference point with the remaining N-1 cameras;
step S2, respectively determining the reference coordinates of the reference point in the reference picture and its coordinates in each target picture;
step S3, calculating correction parameters of each target picture relative to the reference picture according to the coordinates and the reference coordinates;
and step S4, correcting the target picture with the correction parameters;
the step S3 includes the steps of:
step S30, determining a first reference midpoint coordinate, a second reference midpoint coordinate and a reference anchor point coordinate according to the reference coordinates of the reference points in the reference picture, wherein the first reference midpoint coordinate is the midpoint between the first reference point and the third reference point in the reference picture, the second reference midpoint coordinate is the midpoint between the second reference point and the fourth reference point in the reference picture, and the reference anchor point coordinate is the midpoint between the first reference midpoint coordinate and the second reference midpoint coordinate;
step S31, determining a first midpoint coordinate, a second midpoint coordinate and an anchor point coordinate according to the coordinates of the reference point in the target picture, wherein the first midpoint coordinate is the midpoint of the first reference point and the third reference point in the target picture, the second midpoint coordinate is the midpoint of the second reference point and the fourth reference point in the target picture, and the anchor point coordinate is the midpoint of the first midpoint coordinate and the second midpoint coordinate;
and S32, determining a rotation angle according to the first midpoint coordinate, the second midpoint coordinate and the anchor point coordinate.
2. The multi-view surrounding photographing correction method according to claim 1, wherein the specific steps of step S32 are as follows:
step S321, determining whether the lateral coordinate of the first midpoint coordinate is equal to the lateral coordinate of the second midpoint coordinate or the lateral coordinate of the anchor point coordinate, if yes, the rotation angle is 0, and if no, executing step S322;
step S322, the rotation angle is calculated by the following formula:
θ = arctan((X₆ - X₅) / (Y₆ - Y₅))
or alternatively
θ = arctan((X₇ - X₅) / (Y₇ - Y₅))
where θ is the rotation angle, X₅ and Y₅ are the transverse and longitudinal coordinates of the first midpoint coordinate, X₆ and Y₆ are those of the second midpoint coordinate, and X₇ and Y₇ are those of the anchor point coordinate.
3. The multi-view surrounding photographing correction method as claimed in claim 2, wherein the step S32 further comprises:
step S33, in the reference picture, the length of a line segment of a connecting line between the first reference midpoint coordinate and the second reference midpoint coordinate is taken as a standard length;
and step S34, determining a scaling ratio according to the first midpoint coordinate, the second midpoint coordinate and the standard length.
4. The multi-view surrounding photographing correction method as claimed in claim 3, wherein the step S34 specifically comprises:
step S341, calculating the midpoint line segment length of each target picture according to the following formula:
len = √((X₆ - X₅)² + (Y₆ - Y₅)²)
where len is the midpoint segment length;
step S342, calculating the scaling of each target picture by the following formula:
s = len / l
where s is the scaling and l is the standard length.
5. The multi-view surrounding photographing correction method as claimed in claim 3, wherein the step S4 specifically comprises:
step S40, translating the anchor point coordinates of each picture to the coordinate origin, and rotating and/or scaling the target picture around the translated anchor point coordinates;
step S41, judging whether the rotation angle is equal to 0, and if so, not rotating; otherwise, executing step S42;
step S42, judging whether the rotation angle is smaller than 0, if so, rotating the target picture clockwise around the anchor point coordinates according to the rotation angle; otherwise, step S43 is performed;
and S43, rotating the target picture anticlockwise around the anchor point coordinates according to the rotation angle.
6. The multi-view surrounding photographing correction method according to claim 5, wherein the step S43 further comprises:
step S44, judging whether the scaling ratio is equal to 1, if so, not scaling; otherwise, executing step S45;
step S45, judging whether the scaling is smaller than 1, if so, shrinking the target picture around the anchor point coordinates according to the scaling; otherwise, executing step S46;
and step S46, amplifying the target picture around the anchor point coordinates according to the scaling ratio.
7. The multi-view surrounding photographing correction method as claimed in claim 6, wherein the step S46 further comprises:
step S47, intercepting the minimum effective area of the target picture around the anchor point coordinates, and scaling the effective area to the resolution of the standard size;
and step S48, placing the anchor point coordinates at the center point of the picture, and uploading the result to an external rendering and broadcasting server.
8. The multi-view surrounding photographing correction method of claim 1, wherein the reference points include:
a first reference point located in the upper left corner, a second reference point located in the lower left corner, a third reference point located in the upper right corner, and a fourth reference point located in the lower right corner.
9. A multi-view surrounding photographing correction system, applied to the multi-view surrounding photographing correction method as set forth in any one of claims 1 to 8, comprising:
the recording module comprises all cameras;
the input end of the frame extraction module is connected with the output end of the recording module;
the input end of the correction module is connected with the output end of the frame extraction module, and a GPU processor and a filter are arranged in the correction module;
the input end of the transmission module is connected with the output end of the correction module, and the output end of the transmission module is connected with an external rendering and broadcasting server.
CN202210158375.5A 2022-02-21 2022-02-21 Multi-view surrounding shooting correction method and system Active CN114500849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210158375.5A CN114500849B (en) 2022-02-21 2022-02-21 Multi-view surrounding shooting correction method and system


Publications (2)

Publication Number Publication Date
CN114500849A (en) 2022-05-13
CN114500849B (en) 2023-11-24

Family

ID=81483242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210158375.5A Active CN114500849B (en) 2022-02-21 2022-02-21 Multi-view surrounding shooting correction method and system

Country Status (1)

Country Link
CN (1) CN114500849B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115426460B (en) * 2022-09-01 2023-11-10 上海东方传媒技术有限公司 Multi-view surrounding shooting bifocal shooting method


Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
DE102018206190A1 (en) * 2018-04-23 2019-10-24 Robert Bosch Gmbh Method for detecting an arrangement of at least two cameras of a multi-camera system of a mobile carrier platform to each other and method for detecting an arrangement of the camera to an object outside the mobile carrier platform
US11256214B2 (en) * 2019-10-18 2022-02-22 Looking Glass Factory, Inc. System and method for lightfield capture

Patent Citations (9)

Publication number Priority date Publication date Assignee Title
CN103905741A (en) * 2014-03-19 2014-07-02 合肥安达电子有限责任公司 Ultra-high-definition panoramic video real-time generation and multi-channel synchronous play system
KR20180016187A (en) * 2016-08-05 2018-02-14 한국전자통신연구원 Multiple image analysis method for aligning multiple camera, and image analysis display apparatus
CN107203984A (en) * 2017-02-21 2017-09-26 合肥安达创展科技股份有限公司 Correction system is merged in projection for third party software
CN108629732A (en) * 2017-03-15 2018-10-09 比亚迪股份有限公司 Vehicle looks around panorama image generation method, device and vehicle
CN109118545A (en) * 2018-07-26 2019-01-01 深圳市易尚展示股份有限公司 3-D imaging system scaling method and system based on rotary shaft and binocular camera
CN112970249A (en) * 2018-12-21 2021-06-15 康蒂-特米克微电子有限公司 Assembly for calibrating a camera and measurement of such an assembly
CN111457886A (en) * 2020-04-01 2020-07-28 北京迈格威科技有限公司 Distance determination method, device and system
CN112365537A (en) * 2020-10-13 2021-02-12 天津大学 Active camera repositioning method based on three-dimensional point cloud alignment
CN113382177A (en) * 2021-05-31 2021-09-10 上海东方传媒技术有限公司 Multi-view-angle surrounding shooting method and system

Non-Patent Citations (1)

Title
Spherical angular coordinate system of a multi-camera combination system and its experimental measurement method; Li Wenrui; Liu Peixin; Journal of Chengdu University of Information Technology (Issue 06); full text *

Also Published As

Publication number Publication date
CN114500849A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN111698390B (en) Virtual camera control method and device, and virtual studio implementation method and system
US8963951B2 (en) Image processing apparatus, moving-image playing apparatus, and processing method and program therefor to allow browsing of a sequence of images
US9531965B2 (en) Controller in a camera for creating a registered video image
US7424218B2 (en) Real-time preview for panoramic images
US8289345B2 (en) Display device
CN104365083B (en) Image processing apparatus, image processing method and program
US6674461B1 (en) Extended view morphing
US20060165310A1 (en) Method and apparatus for a virtual scene previewing system
CN113382177B (en) Multi-view-angle surrounding shooting method and system
CN104103068B (en) For controlling the method and apparatus of virtual camera
WO2013186056A1 (en) Method and apparatus for fusion of images
JPWO2005024723A1 (en) Image composition system, image composition method and program
CN110401795A (en) Image processing apparatus, image processing method and program
CN110691175B (en) Video processing method and device for simulating motion tracking of camera in studio
JP2014529930A (en) Selective capture and display of a portion of a native image
CN114500849B (en) Multi-view surrounding shooting correction method and system
JPH08149356A (en) Moving picture display device
CN110730340B (en) Virtual audience display method, system and storage medium based on lens transformation
CN112533005B (en) Interaction method and system for VR video slow live broadcast
CN112003999A (en) Three-dimensional virtual reality synthesis algorithm based on Unity 3D
CN107396133A (en) Free viewpoint video director method and system
CN113507575B (en) Human body self-photographing lens generation method and system
WO2021032105A1 (en) Code stream processing method and device, first terminal, second terminal and storage medium
CN113240700A (en) Image processing method and device, computer readable storage medium and electronic device
KR100321904B1 (en) An apparatus and method for extracting of camera motion in virtual studio

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant