CN111986133A - Virtual advertisement implanting method applied to bullet time - Google Patents

Virtual advertisement implanting method applied to bullet time

Info

Publication number
CN111986133A
Authority
CN
China
Prior art keywords
plane
advertisement
scene
image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010844356.9A
Other languages
Chinese (zh)
Inventor
Yang Wenkang (杨文康)
Zhang Yingliang (张迎梁)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Plex VR Digital Technology Shanghai Co Ltd
Original Assignee
Plex VR Digital Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Plex VR Digital Technology Shanghai Co Ltd filed Critical Plex VR Digital Technology Shanghai Co Ltd
Priority to CN202010844356.9A priority Critical patent/CN111986133A/en
Publication of CN111986133A publication Critical patent/CN111986133A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0276Advertisement creation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention discloses a virtual advertisement implantation method applied to bullet time, comprising the following steps. S1: scene designated-plane detection: perform three-dimensional reconstruction of the scene, obtain a sparse three-dimensional point cloud of the scene, search for planes in the sparse point cloud, and let the user select the plane into which an advertisement is to be implanted. S2: specify the implantation position of the advertisement on the plane; the position is selected by the user through a UI (user interface) on an image of one viewing angle, and the corresponding three-dimensional point coordinates on the scene plane are computed; finally, the advertisement plane is aligned with the scene and the virtual advertisement is generated. The invention is simple to operate and requires little manual intervention; it supports three-dimensional reconstruction of the scene; the position, size and angle of the advertisement can be adjusted interactively to meet personalized requirements; and it satisfies the requirement of high-resolution real-time playback.

Description

Virtual advertisement implanting method applied to bullet time
Technical Field
The invention relates to the fields of three-dimensional reconstruction and image processing in computer vision and computer graphics, and in particular to a virtual advertisement implantation method applied to bullet time.
Background
Bullet time is a computer-assisted photographic special-effect technique, often used in television, advertising or computer games to achieve visual effects such as enhanced slow motion and frozen time, giving viewers or users a brand-new experience. Bullet time can be divided into static bullet time and dynamic bullet time. In static bullet time, the camera system captures the subject at a single instant and presents the picture from different angles, achieving a time-freeze effect. In dynamic bullet time, the camera system presents pictures from different angles at different moments, so that a dynamic live-action scene can be watched from multiple angles.
Virtual advertising is an emerging digital marketing technique of recent years that uses virtual-reality technology to implant a customer's advertisement into a live environment. Virtual advertising can make a static billboard in a scene come alive, enables targeted delivery to different regional markets, and supports individual customization for different customers, thereby maximizing revenue.
Implanting virtual advertisements into bullet time has great potential commercial value. It not only lets the audience experience the visual effects brought by bullet time, but also allows secondary content synthesis on the captured video, enabling fully personalized customization of the advertised product. The key point, and the difficulty, is how to ensure that the implanted advertisement remains consistent and coordinated with the live-action picture across the different viewing angles of the bullet time.
Therefore, those skilled in the art are dedicated to developing a virtual advertisement implantation method applied to bullet time that performs secondary content synthesis of video under multiple viewing angles and combines the virtual advertisement with the real scene to generate an accurate, realistic advertisement picture in real time. At the same time, the user can interactively adjust the position, size and angle of the advertisement in the image to meet personalized requirements.
Disclosure of Invention
The invention aims to overcome the defects of the prior art: to realize secondary content synthesis of video under multiple viewing angles, combine the virtual advertisement with the real scene, and generate accurate, realistic advertisement pictures in real time.
In order to achieve the above object, the present invention provides a virtual advertisement implanting method applied to a bullet time, comprising the steps of:
S1: scene designated-plane detection: in step S1, three-dimensional reconstruction of the scene is performed to obtain a sparse three-dimensional point cloud of the scene; planes are searched for in the sparse three-dimensional point cloud, and the user selects the plane into which the advertisement is to be implanted;
S2: advertisement plane alignment and virtual advertisement generation: the implantation position of the advertisement on the plane is specified, the position being selected by the user through a UI (user interface) on an image of one viewing angle, and the corresponding three-dimensional point coordinates on the scene plane are computed; finally, the advertisement plane is aligned with the scene and the virtual advertisement is generated.
Further, the step S1 further includes:
s11, algorithm input: synchronously acquiring a group of images from a multi-camera system, wherein each camera acquires a frame of picture; calibrating the camera by utilizing the group of images to acquire internal and external parameters of the camera; the internal and external parameters of the camera and the group of images are used as the input of a plane detection algorithm;
s12, feature matching: performing feature matching on the group of images to obtain matching feature points of two adjacent images;
s13, triangulation: performing triangulation by using the camera internal and external parameters and the corresponding feature point image coordinates, and calculating three-dimensional coordinates of the feature points in a scene to obtain the sparse three-dimensional point cloud of the scene;
S14, plane fitting: fitting all planes present in the sparse three-dimensional point cloud using the RANSAC algorithm;
s15, selecting a designated plane: selecting an image anchor point of an advertisement plane to be implanted by a user through a UI (user interface) in an image with a specified view angle; and determining a parameter equation of the designated plane according to the anchor point.
Further, the step S2 further includes:
S21, algorithm input: this comprises the advertisement image to be implanted, the camera intrinsic and extrinsic parameters calibrated in step S1, the scene plane equation fitted in step S1, the image anchor point specifying where the advertisement is to be implanted, and the captured video sequence into which the advertisement is to be implanted;
s22, calculating the three-dimensional coordinates of the advertisement implantation anchor point: calculating the three-dimensional point coordinates of the advertisement plane to be placed according to the camera internal and external parameters and the scene plane equation;
s23, aligning the advertisement plane with the scene plane: calculating a rotation matrix and a translation vector required by aligning the advertisement plane and the scene plane according to the advertisement plane anchor point, the advertisement plane normal vector, the scene plane normal vector and the three-dimensional position of the advertisement implantation anchor point; setting a transformation matrix for adjusting the size and the angle of the advertisement;
s24, calculating a homography matrix: re-projecting the advertisement plane corner points to the images under all the viewing angles according to the camera internal and external parameters, and calculating corresponding image coordinates, namely re-projecting the advertisement plane corner points; calculating a homography matrix according to the reprojection coordinates of the corner points of the advertisement plane and the corner point coordinates of the advertisement image;
s25, image perspective transformation and synthesis: and carrying out perspective transformation on the Logo image by utilizing the homography matrix, and synthesizing the transformed image and the live-action image.
Further, the step S12 further includes:
s121, feature detection: performing feature detection on an input image, and detecting image feature points by using an AKAZE detection algorithm;
S122, feature matching: performing brute-force matching on the features of two adjacent images, and determining the matched feature points of the two images;
S123, inlier detection: the brute-force matches may contain outliers; erroneous matches are removed with a Cross-Ratio-Check method, i.e., brute-force matching is performed twice on the feature descriptors of the two images (once in each direction), and a match that satisfies the Ratio threshold in both sets of matching results is taken as an inlier.
Further, the RANSAC algorithm in step S14 is:
a mathematical model of a plane is estimated iteratively from a point cloud containing outliers; after a plane is found in each iteration, the inliers belonging to that plane are removed from the point cloud; the RANSAC iteration is executed again on the remaining point cloud to search for another plane; execution of the RANSAC algorithm stops when the number of points in the remaining point cloud is below a threshold, yielding the set of all candidate planes in the three-dimensional point cloud.
Further, the step S15 further includes:
S151: selecting image anchor points: an image of a specified viewing angle is selected, the detected feature points are drawn in it, and N feature points on a certain plane are picked up; once a feature point is picked, its three-dimensional coordinates are determined;
S152: ground plane determination: after the ground plane feature points are determined, their corresponding three-dimensional coordinates are substituted into all plane equations, and the loss is computed according to the following formula:
loss = Σ_{i=1}^{N} |A·x_i + B·y_i + C·z_i + D|
wherein N is the number of picked anchor points, (x_i, y_i, z_i) are the three-dimensional coordinates of the i-th anchor point, and (A, B, C, D) are the parameters of a plane in the set S; the plane with the minimum loss is selected as the designated plane.
Further, in the step S21:
the plane equation is the scene plane determined in step S15, and is a general parameter equation of a plane, where a normal vector n of a plane unit is 0p=(A,B,C);
The user selects an image coordinate point to be implanted with the advertisement in the specified view angle image through a UI (user interface); when the advertisement implantation position needs to be changed, the image point coordinates are picked up again.
Further, the three-dimensional anchor point is computed from the following system of equations, in which R, K, (u, v), t and (A, B, C, D) are all known parameters:

s·[u, v, 1]^T = K·(R·P_p + t),   A·x_p + B·y_p + C·z_p + D = 0

wherein [R|t] are the extrinsic parameters of the current camera, K the intrinsic parameters, (u, v) the implantation position of the advertisement in the current view, (A, B, C, D) the scene plane equation parameters, s the projective depth, and P_p = (x_p, y_p, z_p) the three-dimensional anchor point to be solved for.
Further, in the step S23:
the advertisement plane is a rectangle which is preset in a world coordinate system and is positioned on an xy axis; the center of the advertisement image is positioned at the origin of a world coordinate system, the length is 2, the height is 2r, and r is the aspect ratio of the advertisement image; its normal vector is na(0,0,1), four corner points of the advertisement plane are pa1=(1,-r,0),pa2=(-1,-r,0),pa3=(-1,r,0),pa4(1, r,0), the anchor point of the advertisement plane is Pa0=(0,0,0);
In step S23, the rotation and translation needed to align the advertisement plane with the scene in three-dimensional space are computed; according to the scene plane unit normal vector n_p, the advertisement plane normal vector n_a, the scene plane anchor point P_p and the advertisement plane anchor point P_a, the rotation matrix aligning the advertisement plane to the scene plane is

R_a2p = I + [v]_× + [v]_×^2/(1 + c)

and the translation vector is t_a2p = p_p, wherein c = n_p·n_a, v = n_a×n_p, I is the identity matrix, and [v]_× is the antisymmetric matrix of v;
setting a scaling matrix s and a rotation matrix R around the z-axis of the modelzWherein
Figure BDA0002642535450000042
Figure BDA0002642535450000043
wherein (s_x, s_y) are the scaling factors of the advertisement plane along the x and y axes and α is the rotation angle about the z axis, all adjusted by the user through the UI according to actual requirements; thus, the total rotation matrix is

R = R_a2p·R_z·S

and the translation vector is

t = t_a2p
After the advertisement plane is aligned with the scene plane, its corner coordinates are

p'_ai = R·p_ai + t, wherein i ∈ {1, 2, 3, 4};
the step S23 is performed by the user before generating the final virtual advertisement, and when the proper dimension and angle of the advertisement placement are obtained, the step S23 is terminated.
Further, in the step S24:
the re-projection advertisement plane corner points are re-projection coordinates of the advertisement plane corner points on images of all viewing angles under the condition that the internal and external parameters of each camera and the three-dimensional coordinates of the advertisement plane corner points are known;
and calculating the homography matrix, namely calculating the homography matrix of the advertisement image and the images of all visual angles according to the projection image point and the advertisement image corner point of the advertisement plane corner point.
The present invention combines virtual advertisement technology with bullet time to implement video advertisement implantation under multiple viewing angles in a simple and efficient manner. The invention is characterized in that:
1. the operation is simple and requires little manual intervention;
2. three-dimensional reconstruction of the scene is supported;
3. the position, size and angle of the advertisement can be adjusted interactively to meet personalized requirements;
4. the requirement of high-resolution real-time playback is met.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a flow chart of a virtual advertisement placement method applied to bullet time according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart of the scene specific plane detection of the preferred embodiment of the present invention;
FIG. 3 is a flowchart of the advertisement plane alignment and virtual advertisement generation of a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of a bullet time camera array according to a preferred embodiment of the present invention;
FIG. 5 is an image anchor point pick-up of a preferred embodiment of the present invention;
FIG. 6 is an advertisement Logo in accordance with a preferred embodiment of the present invention;
FIG. 7 is a schematic diagram of virtual advertisement placement according to a preferred embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
In the drawings, structurally identical elements are represented by like reference numerals, and structurally or functionally similar elements are represented by like reference numerals throughout the several views. The size and thickness of each component shown in the drawings are arbitrarily illustrated, and the present invention is not limited to the size and thickness of each component. The thickness of the components may be exaggerated where appropriate in the figures to improve clarity.
As shown in fig. 1, the method of the present patent comprises two stages, each stage comprising the following steps:
stage 1: scene specific plane detection
As shown in fig. 2, in this stage a three-dimensional reconstruction of the scene is performed to obtain a sparse three-dimensional point cloud; candidate planes are searched for in the point cloud, and the user selects the plane into which the advertisement is to be implanted.
1. Algorithm input: synchronously acquire a set of color (RGB) images from a multi-camera system, each camera capturing only one frame; calibrate the cameras with this set of images to obtain their intrinsic and extrinsic parameters; the camera parameters and the image set serve as the input of the plane detection algorithm
2. Feature matching: perform feature matching on the image set to obtain the matched feature points of each pair of adjacent images
3. Triangulation: triangulate using the camera intrinsic and extrinsic parameters and the corresponding feature point image coordinates, and compute the three-dimensional coordinates of the feature points in the scene to obtain its sparse three-dimensional point cloud
4. Plane fitting: fit all planes present in the three-dimensional point cloud using the RANSAC algorithm
5. Selecting the designated plane: the user selects, through a UI, the image anchor points of the advertisement plane to be implanted in an image of a specified viewing angle; the parametric equation of the designated plane is determined from the anchor points
And (2) stage: advertisement plane alignment and virtual advertisement generation
As shown in fig. 3, the step of stage 2 is:
1. Algorithm input: the advertisement image to be implanted; the camera intrinsic and extrinsic parameters calibrated in stage 1; the plane equation fitted in stage 1 (i.e., the scene plane); the image anchor point specifying where the advertisement is to be implanted (picked by the user in the UI); and the captured video sequence into which the advertisement is to be implanted
2. Computing the three-dimensional coordinates of the advertisement implantation anchor: compute, from the camera intrinsic and extrinsic parameters and the scene plane equation, the three-dimensional point coordinates on the plane where the advertisement is to be placed
3. Aligning the advertisement plane with the scene plane: compute the rotation matrix and translation vector required to align the advertisement plane with the scene plane from the advertisement plane anchor, the advertisement plane normal vector, the scene plane normal vector and the three-dimensional position of the implantation anchor; in addition, set a transformation matrix for adjusting the size and angle of the advertisement as needed
4. Homography computation: re-project the advertisement plane corner points into the image of each viewing angle according to the camera intrinsic and extrinsic parameters, and compute the corresponding image coordinates; compute the homography matrix from the re-projected coordinates of the advertisement plane corner points and the corner coordinates of the advertisement image
5. Image perspective transformation and synthesis: perspective-transform the Logo image using the homography matrix, and composite the transformed image with the live-action image
Virtual advertisement implantation in bullet time is realized using stereoscopic vision techniques: the camera intrinsic and extrinsic parameters are calibrated from RGB images synchronously acquired by a multi-camera system; the scene is three-dimensionally reconstructed to compute its three-dimensional point cloud and set of plane equations, from which the plane equation for advertisement implantation is selected; the advertisement plane is then aligned with the scene plane according to the image position specified by the user, and the advertisement picture is transformed to generate the virtual advertisement. The method mainly comprises determining the three-dimensional scene plane, aligning the advertisement with the scene plane, and synthesizing the virtual advertisement.
Example one
Stage 1: scene specific plane detection
This stage is a preprocessing stage: after the bullet time camera system is built, a set of data needs to be acquired synchronously for the scene in advance to determine the parametric equation of the plane used for advertisement implantation; as long as the camera poses remain unchanged in the same scene, this stage only needs to be executed once.
1. Construction, calibration and data acquisition of a multi-camera system
Building the multi-camera system: the method imposes no special requirements on the camera model, and the number of cameras can be chosen according to the actual requirements of the scene size or of the bullet time; the assembled cameras synchronously acquire a set of RGB images from different angles. In this embodiment, a system built from 40 industrial cameras is used for data acquisition. FIG. 4 is a schematic diagram of a typical bullet time camera array.
Calibrating the multi-camera system: calibration means computing the cameras' Intrinsic Parameters and Extrinsic Parameters. The camera parameters used in this patent are calibrated with the AGI software.
2. Feature matching
Feature detection: feature detection is performed on the input images; algorithms such as SIFT, SURF, ORB or AKAZE can be chosen. This patent uses the AKAZE detection algorithm to detect the image feature points;
Feature matching: Brute Force Matching is performed on the features of each pair of adjacent images to determine the matched feature points of the two images;
Inlier detection: the brute-force matches may contain Outliers; this patent removes erroneous matches with a Cross-Ratio-Check method, i.e., brute-force matching is run twice, with the feature descriptors of the two images used in turn as the QueryDescriptors and the TrainDescriptors, and a match that satisfies the Ratio threshold in both sets of results is kept as an Inlier.
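By way of illustration only (not part of the patent text), the detection and Cross-Ratio-Check matching described above might be sketched in Python with OpenCV as follows; the ratio threshold of 0.8 is an assumed value:

    import cv2

    def match_features(img1, img2, ratio=0.8):
        # AKAZE feature detection on both images
        akaze = cv2.AKAZE_create()
        kp1, des1 = akaze.detectAndCompute(img1, None)
        kp2, des2 = akaze.detectAndCompute(img2, None)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING)  # AKAZE descriptors are binary

        def ratio_matches(d_query, d_train):
            # brute-force kNN matching with a Lowe-style ratio test
            good = set()
            for pair in bf.knnMatch(d_query, d_train, k=2):
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    good.add((pair[0].queryIdx, pair[0].trainIdx))
            return good

        fwd = ratio_matches(des1, des2)  # image 1 as query, image 2 as train
        bwd = ratio_matches(des2, des1)  # roles swapped
        # Cross-Ratio-Check: keep a match only if it passes in both directions
        return kp1, kp2, [(q, t) for (q, t) in fwd if (t, q) in bwd]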
3. Triangulation
Triangulation: from the camera intrinsic and extrinsic parameters and the matched feature point coordinates, the corresponding three-dimensional coordinates of each point can be determined; the three-dimensional points of the features in all images are computed to form the sparse point cloud of the scene.
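A minimal triangulation sketch (illustrative, assuming two calibrated cameras with projection matrices P1 = K1·[R1|t1] and P2 = K2·[R2|t2] and matched pixel coordinates given as 2xN float arrays):

    import cv2

    def triangulate(P1, P2, pts1, pts2):
        # pts1, pts2: 2xN float32 arrays of matched image coordinates
        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous points
        return (X_h[:3] / X_h[3]).T                      # Nx3 Euclidean scene points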
4. Fitting of planes
Random sample consensus algorithm (RANSAC): a mathematical model of a plane is iteratively estimated from a point cloud containing outliers. To find all existing planes, after a plane is found each time, the inliers belonging to that plane are removed from the point cloud; RANSAC is then run again on the remaining points to search for another plane; the algorithm stops when the number of points in the remaining cloud falls below a certain threshold (e.g., 10). This step yields the set S of all candidate planes in the three-dimensional point cloud.
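The multi-plane extraction above might be sketched as follows (illustrative only; the inlier distance threshold of 0.05 and the iteration count are assumed values, while the stopping threshold of 10 points follows the text):

    import numpy as np

    def fit_planes(points, dist_thresh=0.05, min_points=10, iters=1000, seed=0):
        # points: Nx3 sparse point cloud; returns a list of (A, B, C, D) planes
        rng = np.random.default_rng(seed)
        planes, pts = [], points.copy()
        while len(pts) >= min_points:
            best_in, best_plane = None, None
            for _ in range(iters):
                sample = pts[rng.choice(len(pts), 3, replace=False)]
                n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                if np.linalg.norm(n) < 1e-9:
                    continue                      # degenerate (collinear) sample
                n = n / np.linalg.norm(n)
                d = -n.dot(sample[0])
                inliers = np.abs(pts @ n + d) < dist_thresh
                if best_in is None or inliers.sum() > best_in.sum():
                    best_in, best_plane = inliers, (n[0], n[1], n[2], d)
            if best_in is None or best_in.sum() < min_points:
                break
            planes.append(best_plane)             # record this plane's parameters
            pts = pts[~best_in]                   # remove its inliers, search again
        return planes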
5. Selecting a scene three-dimensional plane:
selecting an image anchor point: in the step, selecting a specified view angle picture, drawing detected feature points in the view angle picture, and picking up N feature points on a certain plane; this process is done by the user through the UI (typically three points can be selected); after the feature point is picked up, the three-dimensional coordinates of the feature point can be determined at the same time (the triangulation result in query step 3)
Determining the scene three-dimensional plane: after the plane feature points are determined, their three-dimensional coordinates are substituted into every plane equation in the set S, and the loss is computed according to the following formula:
loss = Σ_{i=1}^{N} |A·x_i + B·y_i + C·z_i + D|
where N is the number of picked anchor points, (x_i, y_i, z_i) are the three-dimensional coordinates of the i-th anchor point, and (A, B, C, D) are the parameters of a plane in the set S. The plane with the smallest loss is selected as the designated plane.
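For illustration, the loss can be evaluated against every plane in the set S as follows (a sketch assuming each plane is stored with a unit-normalized (A, B, C), as in the text):

    import numpy as np

    def pick_plane(anchors, planes):
        # anchors: Nx3 three-dimensional coordinates of the picked feature points
        # planes:  list of (A, B, C, D) tuples, the set S from plane fitting
        losses = [float(np.abs(anchors @ np.asarray(p[:3]) + p[3]).sum()) for p in planes]
        return planes[int(np.argmin(losses))]     # plane with minimal loss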
And (2) stage: advertisement plane alignment and virtual advertisement generation
In stage 1, the parametric equation of the scene plane into which the advertisement will be implanted is determined using feature matching, plane fitting and user interaction; this can be done as preprocessing. In stage 2, the implantation position of the advertisement on that plane must be specified: the position is selected by the user on an image of one viewing angle through the UI, and the corresponding three-dimensional point coordinates on the scene plane are computed from it; finally, the advertisement plane is aligned with the scene and the virtual advertisement is generated.
1. Data preparation
The plane equation: the plane equation is that of the scene plane determined in stage 1; it is the general parametric equation of a plane, A·x + B·y + C·z + D = 0, whose unit normal vector is n_p = (A, B, C);
Picking up the advertisement implantation position: as shown in fig. 5, the user selects through the UI, in the image of the designated viewing angle, the image coordinate point where the advertisement is to be implanted; when the implantation position needs to be changed, the image point coordinates are simply picked up again;
an advertisement image, as shown in fig. 6.
2. Computing three-dimensional anchor points for advertisement placement
Given the camera intrinsic and extrinsic parameters, the plane equation and the anchor point's image coordinates, the corresponding three-dimensional point P_p = (x_p, y_p, z_p) on the plane can be computed. The calculation is as follows:
s·[u, v, 1]^T = K·(R·P_p + t),   A·x_p + B·y_p + C·z_p + D = 0

wherein [R|t] are the extrinsic parameters of the current camera, K the intrinsic parameters, (u, v) the implantation position of the advertisement in the current view, s the projective depth, and (A, B, C, D) the parameters of the scene plane equation.
3. Advertisement plane and scene plane alignment
Advertisement plane: the advertisement plane is a rectangle preset in the world coordinate system, lying in the xy plane; its center is at the origin of the world coordinate system, with length 2 and height 2r (r is the aspect ratio of the advertisement image). Thus its normal vector is n_a = (0, 0, 1), the four corner points of the advertisement plane are p_a1 = (1, -r, 0), p_a2 = (-1, -r, 0), p_a3 = (-1, r, 0) and p_a4 = (1, r, 0), and the anchor point of the advertisement plane is P_a = (0, 0, 0);
In this step, the rotation and translation needed to align the advertisement plane with the scene in three-dimensional space are computed. According to the scene plane unit normal vector n_p, the advertisement plane normal vector n_a, the scene plane anchor point P_p and the advertisement plane anchor point P_a, the rotation matrix aligning the advertisement plane to the scene plane is

R_a2p = I + [v]_× + [v]_×^2/(1 + c)

and the translation vector is t_a2p = p_p, where c = n_p·n_a, v = n_a×n_p, I is the identity matrix, and [v]_× is the antisymmetric matrix of v;
in order to facilitate the user to adjust the model, a scale transformation matrix s and a rotation matrix R around the z axis of the model need to be setzWherein
Figure BDA0002642535450000083
Wherein(s)x,sy) The scaling factor of the advertisement plane on the xy axis and the rotation angle alpha around the z axis are adjusted by the user according to actual requirements through the UI. Thus, the total rotation matrix
Figure BDA0002642535450000084
Translation vector
Figure BDA0002642535450000085
After the advertisement plane is aligned with the scene plane, its corner coordinates are

p'_ai = R·p_ai + t, where i ∈ {1, 2, 3, 4};
This step is adjusted interactively by the user before the final virtual advertisement is generated; once a suitable size and angle for the implanted advertisement are obtained, the step need not be executed again.
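An illustrative sketch of step 3 above (the Rodrigues-style construction of R_a2p; s_x, s_y and α stand in for the user's UI adjustments, and the normals are assumed not to be antiparallel):

    import numpy as np

    def align_corners(n_p, p_p, r, s_x=1.0, s_y=1.0, alpha=0.0):
        # Ad rectangle in its own frame: normal n_a = (0,0,1), corners (+-1, +-r, 0)
        n_a = np.array([0.0, 0.0, 1.0])
        v = np.cross(n_a, n_p)
        c = float(np.dot(n_p, n_a))                      # assumes c != -1
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])                # antisymmetric matrix of v
        R_a2p = np.eye(3) + vx + vx @ vx / (1.0 + c)     # rotates n_a onto n_p
        S = np.diag([s_x, s_y, 1.0])                     # user scaling
        ca, sa = np.cos(alpha), np.sin(alpha)
        R_z = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
        R = R_a2p @ R_z @ S                              # total transform
        corners = np.array([[1, -r, 0], [-1, -r, 0], [-1, r, 0], [1, r, 0]], float)
        return corners @ R.T + p_p                       # p'_ai = R p_ai + t_a2p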
4. Homography matrix computation
Re-projecting the advertisement plane corner points: given the intrinsic and extrinsic parameters of each camera and the three-dimensional coordinates of the advertisement plane corner points, the re-projected coordinates of those corner points in each view's image can be computed;
Calculating the homography matrix: the homography between the advertisement image and each view's image is computed from the projected image points of the advertisement plane corner points and the corner points of the advertisement image.
5. Image perspective transformation and synthesis
As shown in fig. 7, the advertisement image is perspective-transformed using the homography matrix, and the transformed advertisement image is overlaid onto the live-action image.
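Steps 4 and 5 for a single viewing angle might be sketched as below (illustrative; the corner correspondence between the 3D rectangle and the advertisement image, and the simple overwrite composite, are assumptions rather than the patent's exact procedure):

    import numpy as np
    import cv2

    def implant_ad(frame, ad_img, K, R, t, corners_3d):
        # Re-project the four aligned 3D ad corners (4x3 array) into this camera view
        rvec, _ = cv2.Rodrigues(R)
        proj, _ = cv2.projectPoints(corners_3d, rvec, t.reshape(3, 1), K, None)
        dst = proj.reshape(4, 2).astype(np.float32)
        h, w = ad_img.shape[:2]
        # Assumed ad-image corners matching the 3D order (1,-r), (-1,-r), (-1,r), (1,r)
        src = np.float32([[w - 1, 0], [0, 0], [0, h - 1], [w - 1, h - 1]])
        H = cv2.getPerspectiveTransform(src, dst)        # homography: ad image -> view
        size = (frame.shape[1], frame.shape[0])
        warped = cv2.warpPerspective(ad_img, H, size)
        mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
        out = frame.copy()
        out[mask > 0] = warped[mask > 0]                 # overlay onto the live-action frame
        return out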
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A virtual advertisement implanting method applied to bullet time is characterized by comprising the following steps:
S1: scene designated-plane detection: in step S1, three-dimensional reconstruction of the scene is performed to obtain a sparse three-dimensional point cloud of the scene; planes are searched for in the sparse three-dimensional point cloud, and the user selects the plane into which the advertisement is to be implanted;
S2: advertisement plane alignment and virtual advertisement generation: the implantation position of the advertisement on the plane is specified, the position being selected by the user through a UI (user interface) on an image of one viewing angle, and the corresponding three-dimensional point coordinates on the scene plane are computed; finally, the advertisement plane is aligned with the scene and the virtual advertisement is generated.
2. The virtual advertisement implanting method applied to bullet time of claim 1, wherein the step S1 further comprises:
s11, algorithm input: synchronously acquiring a group of images from a multi-camera system, wherein each camera acquires a frame of picture; calibrating the camera by utilizing the group of images to acquire internal and external parameters of the camera; the internal and external parameters of the camera and the group of images are used as the input of a plane detection algorithm;
s12, feature matching: performing feature matching on the group of images to obtain matching feature points of two adjacent images;
s13, triangulation: performing triangulation by using the camera internal and external parameters and the corresponding feature point image coordinates, and calculating three-dimensional coordinates of the feature points in a scene to obtain the sparse three-dimensional point cloud of the scene;
S14, plane fitting: fitting all planes present in the sparse three-dimensional point cloud using the RANSAC algorithm;
s15, selecting a designated plane: selecting an image anchor point of an advertisement plane to be implanted by a user through a UI (user interface) in an image with a specified view angle; and determining a parameter equation of the designated plane according to the anchor point.
3. The virtual advertisement implanting method applied to bullet time of claim 2, wherein the step S2 further comprises:
S21, algorithm input: this comprises the advertisement image to be implanted, the camera intrinsic and extrinsic parameters calibrated in step S1, the scene plane equation fitted in step S1, the image anchor point specifying where the advertisement is to be implanted, and the captured video sequence into which the advertisement is to be implanted;
s22, calculating the three-dimensional coordinates of the advertisement implantation anchor point: calculating the three-dimensional point coordinates of the advertisement plane to be placed according to the camera internal and external parameters and the scene plane equation;
s23, aligning the advertisement plane with the scene plane: calculating a rotation matrix and a translation vector required by aligning the advertisement plane and the scene plane according to the advertisement plane anchor point, the advertisement plane normal vector, the scene plane normal vector and the three-dimensional position of the advertisement implantation anchor point; setting a transformation matrix for adjusting the size and the angle of the advertisement;
s24, calculating a homography matrix: re-projecting the advertisement plane corner points to the images under all the viewing angles according to the camera internal and external parameters, and calculating corresponding image coordinates, namely re-projecting the advertisement plane corner points; calculating a homography matrix according to the reprojection coordinates of the corner points of the advertisement plane and the corner point coordinates of the advertisement image;
s25, image perspective transformation and synthesis: and carrying out perspective transformation on the Logo image by utilizing the homography matrix, and synthesizing the transformed image and the live-action image.
4. The virtual advertisement implanting method applied to bullet time of claim 2, wherein the step S12 further comprises:
s121, feature detection: performing feature detection on an input image, and detecting image feature points by using an AKAZE detection algorithm;
S122, feature matching: performing brute-force matching on the features of two adjacent images, and determining the matched feature points of the two images;
S123, inlier detection: the brute-force matches may contain outliers; erroneous matches are removed with a Cross-Ratio-Check method, i.e., brute-force matching is performed twice on the feature descriptors of the two images (once in each direction), and a match that satisfies the Ratio threshold in both sets of matching results is taken as an inlier.
5. The virtual advertisement implanting method applied to bullet time of claim 2, wherein the RANSAC algorithm in step S14 is:
a mathematical model of a plane is estimated iteratively from a point cloud containing outliers; after a plane is found in each iteration, the inliers belonging to that plane are removed from the point cloud; the RANSAC iteration is executed again on the remaining point cloud to search for another plane; execution of the RANSAC algorithm stops when the number of points in the remaining point cloud is below a threshold, yielding the set of all candidate planes in the three-dimensional point cloud.
6. The virtual advertisement implanting method applied to bullet time of claim 1, wherein the step S15 further comprises:
s151: selecting an image anchor point: selecting a specified view angle picture, drawing detected feature points in the view angle picture, and picking up N feature points on a certain plane; after the feature point is picked up, the three-dimensional coordinates of the feature point can be determined;
S152: ground plane determination: after the ground plane feature points are determined, their corresponding three-dimensional coordinates are substituted into all plane equations, and the loss is computed according to the following formula:
loss = Σ_{i=1}^{N} |A·x_i + B·y_i + C·z_i + D|
wherein N is the number of picked anchor points, (x_i, y_i, z_i) are the three-dimensional coordinates of the i-th anchor point, and (A, B, C, D) are the parameters of a plane in the set S; the plane with the minimum loss is selected as the designated plane.
7. The virtual advertisement implanting method applied to bullet time of claim 3, wherein in the step S21:
the plane equation is that of the scene plane determined in step S15; it is the general parametric equation of a plane, A·x + B·y + C·z + D = 0, whose unit normal vector is n_p = (A, B, C);
the user selects, through a UI (user interface), the image coordinate point where the advertisement is to be implanted in the image of the specified viewing angle; when the advertisement implantation position needs to be changed, the image point coordinates are picked up again.
8. The virtual advertisement implanting method applied to bullet time of claim 3, wherein the three-dimensional anchor point is computed from the following system of equations, in which R, K, (u, v), t and (A, B, C, D) are all known parameters:

s·[u, v, 1]^T = K·(R·P_p + t),   A·x_p + B·y_p + C·z_p + D = 0

wherein [R|t] are the extrinsic parameters of the current camera, K the intrinsic parameters, (u, v) the implantation position of the advertisement in the current view, (A, B, C, D) the scene plane equation parameters, s the projective depth, and P_p = (x_p, y_p, z_p) the three-dimensional anchor point to be solved for.
9. The virtual advertisement implanting method applied to bullet time of claim 3, wherein in the step S23:
the advertisement plane is a rectangle preset in the world coordinate system, lying in the xy plane; the center of the advertisement image is at the origin of the world coordinate system, with length 2 and height 2r, r being the aspect ratio of the advertisement image; its normal vector is n_a = (0, 0, 1), the four corner points of the advertisement plane are p_a1 = (1, -r, 0), p_a2 = (-1, -r, 0), p_a3 = (-1, r, 0) and p_a4 = (1, r, 0), and the anchor point of the advertisement plane is P_a = (0, 0, 0);
in the step S23, the rotation and translation needed to align the advertisement plane with the scene in three-dimensional space are computed; according to the scene plane unit normal vector n_p, the advertisement plane normal vector n_a, the scene plane anchor point P_p and the advertisement plane anchor point P_a, the rotation matrix aligning the advertisement plane to the scene plane is

R_a2p = I + [v]_× + [v]_×^2/(1 + c)

and the translation vector is t_a2p = p_p, wherein c = n_p·n_a, v = n_a×n_p, I is the identity matrix, and [v]_× is the antisymmetric matrix of v;
setting a scaling matrix s and a rotation matrix R around the z-axis of the modelzWherein
Figure FDA0002642535440000033
Figure FDA0002642535440000034
wherein (s_x, s_y) are the scaling factors of the advertisement plane along the x and y axes and α is the rotation angle about the z axis, adjusted by the user through the UI according to actual requirements; thus, the total rotation matrix is

R = R_a2p·R_z·S

and the translation vector is

t = t_a2p
after the advertisement plane is aligned with the scene plane, its corner coordinates are

p'_ai = R·p_ai + t, wherein i ∈ {1, 2, 3, 4};
step S23 is performed interactively by the user before the final virtual advertisement is generated, and once the proper size and angle of the advertisement placement are obtained, step S23 terminates.
10. The virtual advertisement implanting method applied to bullet time of claim 1, wherein in the step S24:
re-projecting the advertisement plane corner points means computing, given the intrinsic and extrinsic parameters of each camera and the three-dimensional coordinates of the advertisement plane corner points, the re-projected coordinates of those corner points in the image of every viewing angle;
calculating the homography matrix means computing the homography between the advertisement image and the image of each viewing angle from the projected image points of the advertisement plane corner points and the corner points of the advertisement image.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010844356.9A CN111986133A (en) 2020-08-20 2020-08-20 Virtual advertisement implanting method applied to bullet time


Publications (1)

Publication Number Publication Date
CN111986133A true CN111986133A (en) 2020-11-24

Family

ID=73443825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010844356.9A Pending CN111986133A (en) 2020-08-20 2020-08-20 Virtual advertisement implanting method applied to bullet time

Country Status (1)

Country Link
CN (1) CN111986133A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1144588A (en) * 1994-03-14 1997-03-05 美国赛特公司 A system for implanting image into video stream
WO2011121117A1 (en) * 2010-04-02 2011-10-06 Imec Virtual camera system
CN102982548A (en) * 2012-12-11 2013-03-20 清华大学 Multi-view stereoscopic video acquisition system and camera parameter calibrating method thereof
CN105976399A (en) * 2016-04-29 2016-09-28 北京航空航天大学 Moving object detection method based on SIFT (Scale Invariant Feature Transform) feature matching
WO2019096016A1 (en) * 2017-11-14 2019-05-23 深圳岚锋创视网络科技有限公司 Method for achieving bullet time capturing effect and panoramic camera
CN109842811A (en) * 2019-04-03 2019-06-04 腾讯科技(深圳)有限公司 A kind of method, apparatus and electronic equipment being implanted into pushed information in video
CN110599605A (en) * 2019-09-10 2019-12-20 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN111372122A (en) * 2020-02-27 2020-07-03 腾讯科技(深圳)有限公司 Media content implantation method, model training method and related device
CN111986296A (en) * 2020-08-20 2020-11-24 叠境数字科技(上海)有限公司 CG animation synthesis method for bullet time

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Feng Tao; Chen Zhimin: "Application analysis of the virtual tracking and positioning system at the athletics venues of the Rio 2016 Olympic Games: notes on using the Ncam virtual tracking and positioning system at the Rio Olympic athletics venues", Modern Television Technology (现代电视技术), no. 04, 15 April 2017 (2017-04-15) *
Li Shuoming; Chen Yue: "A multi-view 3D reconstruction method based on joint feature matching", Computer Systems & Applications (计算机系统应用), no. 10, 15 October 2016 (2016-10-15) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination