CN111986133B - Virtual advertisement implantation method applied to bullet time

Virtual advertisement implantation method applied to bullet time

Info

Publication number: CN111986133B
Authority: CN (China)
Prior art keywords: plane, advertisement, image, scene, points
Legal status: Active (granted)
Application number: CN202010844356.9A
Other languages: Chinese (zh)
Other versions: CN111986133A
Inventors: 杨文康 (Yang Wenkang), 张迎梁 (Zhang Yingliang)
Current Assignee: Plex VR Digital Technology Shanghai Co Ltd
Original Assignee: Plex VR Digital Technology Shanghai Co Ltd
Filing date: 2020-08-20
Publication date (granted): 2024-05-03
Application filed by Plex VR Digital Technology Shanghai Co Ltd
Priority to CN202010844356.9A; first published as CN111986133A, granted as CN111986133B

Classifications

    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06Q30/0276: Advertisement creation
    • G06T17/20: Three-dimensional [3D] modelling; finite element generation, e.g. wire-frame surface description, tessellation
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
    • G06V10/462: Salient features, e.g. scale-invariant feature transforms [SIFT]
    • G06V10/757: Matching configurations of points or features
    • G06T2207/10016: Video; image sequence
    • G06T2207/10021: Stereoscopic video; stereoscopic image sequence
    • G06T2207/10028: Range image; depth image; 3D point clouds
    • G06T2207/20221: Image fusion; image merging


Abstract

The invention discloses a virtual advertisement implantation method applied to bullet time, comprising the following steps. S1: scene-designated-plane detection: perform three-dimensional reconstruction of the scene, obtain a sparse three-dimensional point cloud of the scene, search for planes in the sparse point cloud, and let the user select the plane into which the advertisement is to be implanted. S2: designate the implantation position of the advertisement on that plane, the position being selected by the user on the image of a certain view angle through a UI, and calculate from it the corresponding three-dimensional point coordinates on the scene plane; finally, align the advertisement plane with the scene and generate the virtual advertisement. The invention is simple to operate and requires little manual intervention; it supports three-dimensional reconstruction of the scene; it supports interactive adjustment of the advertisement's position, size, and angle to meet personalized requirements; and it meets the real-time playback requirement at high resolution.

Description

Virtual advertisement implantation method applied to bullet time
Technical Field
The invention relates to the fields of three-dimensional reconstruction and image processing in computer vision and computer graphics, and in particular to a virtual advertisement implantation method applied to bullet time.
Background
Bullet time is a computer-assisted photographic special-effect technique, often applied in television, advertising, or computer games to achieve visual effects such as enhanced slow motion and frozen time, giving viewers or users a brand-new experience. Bullet time can be categorized into static bullet time and dynamic bullet time. In static bullet time, the camera system photographs the subject from different angles and presents pictures of the same instant, achieving the effect of frozen time. In dynamic bullet time, the camera system presents pictures from different angles at different moments, so a dynamic live-action scene can be watched from multiple angles.
Virtual advertising is an emerging digital-marketing tool of recent years that uses virtual-reality technology to implant a client's advertisement into a live-action environment. Virtual advertisement technology can make the static billboards in a scene come alive, enabling targeted delivery to markets in different regions and individual customization for different clients, thereby maximizing revenue.
Implanting virtual advertisements in bullet time therefore has great potential commercial value: viewers can experience the visual effects brought by bullet time, and secondary content synthesis can be performed on the captured video, fully realizing personalized customization of advertised products. The key point, and the difficulty, is ensuring that the implanted advertisement remains consistent with the live-action picture across the different view angles of bullet time.
Therefore, those skilled in the art are working to develop a virtual advertisement implantation method applied to bullet time that realizes secondary content synthesis of video under multiple view angles and combines virtual advertisements with the live action to generate accurate, realistic advertisement pictures in real time, while allowing the user to interactively adjust the position, size, and angle of the advertisement in the image to meet personalized requirements.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art: to realize secondary content synthesis of video under multiple view angles, combine virtual advertisements with real scenes, and generate accurate, realistic advertisement pictures in real time.
In order to achieve the above object, the present invention provides a virtual advertisement implantation method applied to bullet time, comprising the steps of:
S1: scene-designated-plane detection: performing three-dimensional reconstruction of the scene in step S1, obtaining a sparse three-dimensional point cloud of the scene, searching for planes in the sparse three-dimensional point cloud, and letting the user select the plane into which the advertisement is to be implanted;
S2: advertisement-plane alignment and virtual-advertisement generation: designating the implantation position of the advertisement on the plane, the position being selected by the user on the image of a certain view angle through a UI, and calculating from it the corresponding three-dimensional point coordinates on the scene plane; finally, aligning the advertisement plane with the scene and generating the virtual advertisement.
Further, step S1 further comprises:
S11, algorithm input: synchronously acquiring a group of images from a multi-view camera system, each camera acquiring one frame; calibrating the cameras with this group of images to obtain the intrinsic and extrinsic camera parameters; the camera parameters and the group of images serve as the input of the plane-detection algorithm;
S12, feature matching: performing feature matching on the group of images to obtain the matched feature points of two adjacent images;
S13, triangulation: performing triangulation using the intrinsic and extrinsic camera parameters and the corresponding image coordinates of the feature points, and calculating the three-dimensional coordinates of the feature points in the scene to obtain the sparse three-dimensional point cloud of the scene;
S14, plane fitting: fitting all planes present in the sparse three-dimensional point cloud using the RANSAC algorithm;
S15, selecting the designated plane: selecting, by the user through a UI, image anchor points of the advertisement plane to be implanted in the image of a specified view angle; and determining the parameter equation of the designated plane from the anchor points.
Further, step S2 further comprises:
S21, algorithm input: the advertisement image to be implanted, the intrinsic and extrinsic camera parameters calibrated in step S1, the scene plane equation fitted in step S1, the image anchor point designating where the advertisement is to be implanted, and the acquired video sequence into which the advertisement is to be implanted;
S22, calculating the three-dimensional coordinates of the advertisement implantation anchor point: calculating the three-dimensional point coordinates where the advertisement plane is to be placed according to the intrinsic and extrinsic camera parameters and the scene plane equation;
S23, aligning the advertisement plane with the scene plane: calculating the rotation matrix and translation vector required to align the advertisement plane with the scene plane according to the advertisement-plane anchor point, the advertisement-plane normal vector, the scene-plane normal vector, and the three-dimensional position of the advertisement implantation anchor point; and setting the transformation matrices required to adjust the size and angle of the advertisement;
S24, calculating the homography matrix: reprojecting the advertisement-plane corner points onto the image at each view angle according to the intrinsic and extrinsic camera parameters and calculating the corresponding image coordinates, i.e., the reprojected advertisement-plane corner points; and calculating the homography matrix from the reprojected coordinates of the advertisement-plane corner points and the corner coordinates of the advertisement image;
S25, image perspective transformation and synthesis: applying a perspective transformation to the Logo image using the homography matrix, and compositing the transformed image with the live-action image.
Further, step S12 further comprises:
S121, feature detection: performing feature detection on the input images, detecting the image feature points with the AKAZE detection algorithm;
S122, feature matching: brute-force matching the features of two adjacent images to determine the matched feature points of the two images;
S123, inlier detection: the brute-force-matched features may contain outliers; erroneous matches are removed with a Cross-Ratio-Check method, i.e., one brute-force matching pass is performed on the feature descriptors of the two images in each direction, and the matches that satisfy the ratio threshold in both sets of matching results are taken as inliers.
Further, the RANSAC algorithm in step S14 is:
estimating the mathematical model of a plane in an iterative manner from a group of point clouds containing outliers; in the model, after each iteration finds a plane, the inliers belonging to that plane are removed from the point cloud; the RANSAC iteration is executed again on the remaining point cloud to search for another plane; and execution of the RANSAC algorithm stops when the number of points in the remaining point cloud falls below a threshold, yielding the set of all possible planes in the three-dimensional point cloud.
Further, step S15 further comprises:
S151, selecting image anchor points: selecting a picture at the designated view angle, drawing the detected feature points in that picture, and picking up N feature points on a certain plane; once the feature points are picked up, their three-dimensional coordinates can be determined;
S152, ground-plane determination: once the ground-plane feature points are determined, their corresponding three-dimensional coordinates are substituted into all plane equations, and the loss is calculated as

$$\mathrm{loss} = \sum_{i=1}^{N} \lvert A x_i + B y_i + C z_i + D \rvert,$$

where N is the number of picked anchor points, (x_i, y_i, z_i) are the three-dimensional coordinates of the i-th anchor point, and (A, B, C, D) are the parameters of a plane in the set S; the plane with the smallest loss is selected as the designated plane.
Further, in step S21:
the plane equation is that of the scene plane determined in step S15; the equation is the general parametric plane equation A·x + B·y + C·z + D = 0, where the plane unit normal vector is n_p = (A, B, C);
the image anchor point for advertisement implantation is the image coordinate point, selected by the user through a UI in the designated-view-angle image, where the advertisement is to be implanted; when the advertisement implantation position needs to be changed, the image point coordinates are picked up again.
Further, the three-dimensional anchor point is calculated by the following formula, where R, K, (u, v), t, and (A, B, C, D) are known parameters:

$$\lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \, [\,R \mid t\,] \begin{pmatrix} x_p \\ y_p \\ z_p \\ 1 \end{pmatrix}, \qquad A x_p + B y_p + C z_p + D = 0,$$

where [R|t] are the extrinsic parameters of the current camera, K are the intrinsic parameters of the current camera, (u, v) is the implantation position of the advertisement in the current view angle, and (A, B, C, D) are the scene plane equation parameters; solving this system yields the three-dimensional anchor point P_p = (x_p, y_p, z_p).
Further, in step S23:
the advertisement plane is a rectangle preset on the xy-plane of the world coordinate system; the center of the advertisement image lies at the origin of the world coordinate system, its width is 2 and its height is 2r, where r is the aspect ratio of the advertisement image; its normal vector is n_a = (0, 0, 1), the four corner points of the advertisement plane are P_a1 = (1, -r, 0), P_a2 = (-1, -r, 0), P_a3 = (-1, r, 0), P_a4 = (1, r, 0), and the anchor point of the advertisement plane is P_a0 = (0, 0, 0);
in step S23, the rotation and translation required to align the advertisement plane with the scene in three-dimensional space need to be calculated; from the scene-plane unit normal vector n_p, the advertisement-plane normal vector n_a, the scene-plane anchor point P_p, and the advertisement-plane anchor point P_a, the rotation matrix aligning the advertisement with the scene plane is

$$R_{a2p} = I + [v]_{\times} + [v]_{\times}^{2} \, \frac{1}{1+c},$$

and the translation vector is t_{a2p} = P_p, where c = n_p · n_a, v = n_a × n_p, I is the identity matrix, and [v]_× is the antisymmetric matrix of v;
a scaling matrix S and a rotation matrix R_z about the model's z-axis are set, where

$$S = \begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad R_z = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

where (s_x, s_y) are the scaling factors of the advertisement plane along the x and y axes and α is the rotation angle about the z-axis, both adjusted by the user through the UI according to actual requirements; the total rotation matrix is thus R = R_{a2p} R_z S and the translation vector is t = t_{a2p};
after the advertisement plane is aligned with the scene plane, the corner coordinates of the advertisement plane are P′_ai = R P_ai + t_{a2p}, where i ∈ {1, 2, 3, 4};
step S23 is debugged by the user before the final virtual advertisement is generated, and execution of step S23 stops once a suitable scale and angle for the implanted advertisement are obtained.
Further, in step S24:
reprojecting the advertisement-plane corner points means calculating, with the intrinsic and extrinsic parameters of each camera and the three-dimensional coordinates of the advertisement-plane corner points known, the reprojected coordinates of the advertisement-plane corner points on the image at each view angle;
calculating the homography matrix means calculating the homography between the advertisement image and the image at each view angle from the projected image points of the advertisement-plane corners and the advertisement-image corners.
The present invention is directed to combining virtual-advertising technology with bullet time in a straightforward and efficient manner to achieve advertisement implantation in video at multiple view angles. The invention is characterized by:
1. simple operation and little manual intervention;
2. support for three-dimensional reconstruction of the scene;
3. support for interactive adjustment of the advertisement's position, size, and angle to meet personalized requirements;
4. meeting the real-time playback requirement at high resolution.
The conception, specific structure, and technical effects of the present invention are further described below with reference to the accompanying drawings, so that the objects, features, and effects of the invention can be fully understood.
Drawings
FIG. 1 is a flow chart of a virtual advertisement implantation method applied to bullet time according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart of scene designated-plane detection in a preferred embodiment of the present invention;
FIG. 3 is a flow chart of advertisement-plane alignment and virtual-advertisement generation in a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of a bullet-time camera array in a preferred embodiment of the present invention;
FIG. 5 illustrates image anchor-point pick-up in a preferred embodiment of the present invention;
FIG. 6 shows an advertisement Logo in a preferred embodiment of the present invention;
FIG. 7 is a schematic diagram of virtual advertisement implantation in a preferred embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention are described below with reference to the accompanying drawings so that the technical content becomes clearer and easier to understand. The present invention may be embodied in many different forms, and its protection scope is not limited to the embodiments mentioned herein.
In the drawings, structurally identical components are denoted by the same reference numerals, and components with similar structure or function are denoted by similar reference numerals. The size and thickness of each component in the drawings are shown arbitrarily, and the present invention does not limit them; for clarity of illustration, the thickness of components is exaggerated in some places.
As shown in fig. 1, the method of this patent comprises two stages, which include the following steps:
Stage 1: scene designated-plane detection
As shown in fig. 2, three-dimensional reconstruction of the scene is performed at this stage: a sparse three-dimensional point cloud of the scene is obtained, possible planes are searched for in the point cloud, and the user selects the plane into which the advertisement is to be implanted.
1. Algorithm input: a group of color (RGB) images is synchronously acquired from the multi-view camera system, each camera needing to acquire only one frame; the cameras are calibrated with this group of images to obtain the intrinsic and extrinsic camera parameters; the camera parameters and the group of images serve as the input of the plane-detection algorithm.
2. Feature matching: feature matching is performed on the group of images to obtain the matched feature points of two adjacent images.
3. Triangulation: triangulation is performed using the intrinsic and extrinsic camera parameters and the corresponding image coordinates of the feature points, and the three-dimensional coordinates of the feature points in the scene are calculated to obtain the sparse three-dimensional point cloud of the scene.
4. Plane fitting: all planes present in the three-dimensional point cloud are fitted with the RANSAC algorithm.
5. Selecting the designated plane: the user selects, through a UI, image anchor points of the advertisement plane to be implanted in the image of a specified view angle; the parametric equation of the designated plane is determined from the anchor points.
Stage 2: advertisement plane alignment and virtual advertisement generation
As shown in fig. 3, the steps of stage 2 are:
1. Algorithm input: the advertisement image to be implanted; the intrinsic and extrinsic camera parameters calibrated in stage 1; the plane equation fitted in stage 1 (i.e., the scene plane); the image anchor point designating where the advertisement is to be implanted (picked up by the user in the UI); and the captured video sequence requiring advertisement implantation.
2. Calculating the three-dimensional coordinates of the advertisement implantation anchor point: the coordinates of the three-dimensional point where the advertisement plane is to be placed are calculated from the intrinsic and extrinsic camera parameters and the scene plane equation.
3. Aligning the advertisement plane with the scene plane: the rotation matrix and translation vector required to align the advertisement plane with the scene plane are calculated from the advertisement-plane anchor point, the advertisement-plane normal vector, the scene-plane normal vector, and the three-dimensional position of the advertisement implantation anchor point; in addition, transformation matrices for adjusting the size and angle of the advertisement are set.
4. Homography matrix calculation: the advertisement-plane corner points are reprojected onto the image at each view angle according to the intrinsic and extrinsic camera parameters, and the corresponding image coordinates are calculated; the homography matrix is calculated from the reprojected coordinates of the advertisement-plane corner points and the corner coordinates of the advertisement image.
5. Image perspective transformation and synthesis: a perspective transformation is applied to the Logo image using the homography matrix, and the transformed image is composited with the live-action image.
This patent uses stereoscopic-vision technology to implant virtual advertisements in bullet time: camera intrinsic and extrinsic calibration is performed with the RGB images synchronously collected from the multi-camera system; the scene is reconstructed in three dimensions; the scene's three-dimensional point cloud and set of plane equations are computed, and a plane equation is selected for advertisement implantation; then, according to the image position designated by the user, the advertisement plane is aligned with the scene plane, the advertisement picture is transformed, and the virtual advertisement is generated. The method mainly comprises determining the three-dimensional plane of the scene, aligning the advertisement with the scene plane, and synthesizing the virtual advertisement.
Example 1
Stage 1: scene specific plane detection
This stage is a preprocessing stage: after the bullet-time camera system is built, a group of data must be synchronously acquired for the scene in advance and used to determine the plane parameter equation for advertisement implantation; as long as the same scene and camera poses remain unchanged, this stage is executed once and only once.
1. Construction, calibration and data acquisition of a multi-camera system
Building the multi-camera system: the method places no special requirement on the camera model, and the number of cameras can be chosen according to the scene size or the actual requirements of bullet time; the built cameras synchronously acquire a group of RGB images from different angles. This patent uses a system built from 40 IoI Industry cameras for data acquisition. Fig. 4 is a schematic diagram of a common bullet-time camera-array pattern.
Calibrating the multi-camera system: calibration means computing the intrinsic parameters (Intrinsic Parameters) and extrinsic parameters (Extrinsic Parameters) of the cameras. The camera parameters used in this patent are calibrated with the AGI software.
2. Feature matching
Feature detection: feature detection is performed on the input images; SIFT, SURF, ORB, or AKAZE algorithms can be selected, and this patent uses the AKAZE detection algorithm to detect the image feature points.
Feature matching: brute-force matching (Brute Force Matching) is performed on the features of two adjacent images to determine the matched feature points of the two images.
Inlier detection: the brute-force-matched features may contain outliers (Outlier); this patent removes erroneous matches with a Cross-Ratio-Check method, i.e., the feature descriptors of the two images are used in turn as QueryDescriptor and TrainDescriptor for one brute-force matching pass each, and matches satisfying the ratio threshold in both sets of matching results are taken as inliers (Inlier).
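For illustration, a minimal Python sketch of this matching stage, assuming OpenCV; the reading of Cross-Ratio-Check as a mutual (both-direction) ratio test, the function name match_features, and the 0.8 ratio are assumptions, not taken from the patent:

```python
# A minimal sketch of the matching stage, assuming OpenCV (cv2). The
# Cross-Ratio-Check is interpreted here as a ratio test run in both
# directions (a->b and b->a), keeping only mutually consistent pairs.
import cv2

def match_features(img_a, img_b, ratio=0.8):
    akaze = cv2.AKAZE_create()
    kp_a, desc_a = akaze.detectAndCompute(img_a, None)
    kp_b, desc_b = akaze.detectAndCompute(img_b, None)

    # AKAZE descriptors are binary, so Hamming distance is the right norm.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)

    def ratio_matches(query, train):
        # Brute-force kNN matching; keep matches passing the ratio threshold.
        return {m.queryIdx: m.trainIdx
                for m, n in bf.knnMatch(query, train, k=2)
                if m.distance < ratio * n.distance}

    fwd = ratio_matches(desc_a, desc_b)   # img_a as QueryDescriptor
    bwd = ratio_matches(desc_b, desc_a)   # img_b as QueryDescriptor

    # Inliers: pairs that survive the ratio test in both directions.
    pairs = [(i, j) for i, j in fwd.items() if bwd.get(j) == i]
    return ([kp_a[i].pt for i, _ in pairs],
            [kp_b[j].pt for _, j in pairs])
```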
3. Triangulation
Triangulation: according to the internal and external parameters of the camera and the matched coordinates of the characteristic points, the corresponding three-dimensional coordinates of the points can be determined; and calculating three-dimensional points of the features in all the images to form sparse point clouds of the scene.
4. Plane fitting
Random sample consensus (RANSAC): the mathematical model of a plane is estimated in an iterative manner from a set of point clouds containing outliers. In order to find all existing planes, the inliers belonging to a plane are removed from the point cloud each time that plane is found; the RANSAC iteration is then executed again on the remaining point cloud to search for another plane; execution of the RANSAC algorithm stops when the number of points in the remaining point cloud falls below a certain threshold (e.g., 10). In this step, the set S of all possible planes in the three-dimensional point cloud is obtained.
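A self-contained numpy sketch of this iterative RANSAC plane extraction; the distance threshold, iteration count, and fixed seed are illustrative assumptions:

```python
# Fit one plane from three random points, strip its inliers, and repeat
# until fewer than min_points remain, yielding the candidate plane set S.
import numpy as np

def fit_planes_ransac(points, dist_thresh=0.02, iters=1000, min_points=10):
    planes, pts = [], points.copy()
    rng = np.random.default_rng(0)
    while len(pts) >= min_points:
        best_in, best_plane = None, None
        for _ in range(iters):
            p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)       # normal from 3 samples
            norm = np.linalg.norm(n)
            if norm < 1e-9:                      # degenerate, collinear draw
                continue
            n /= norm
            d = -n @ p0                          # plane: n.x + d = 0
            inliers = np.abs(pts @ n + d) < dist_thresh
            if best_in is None or inliers.sum() > best_in.sum():
                best_in, best_plane = inliers, (n[0], n[1], n[2], d)
        if best_plane is None or best_in.sum() < min_points:
            break
        planes.append(best_plane)                # (A, B, C, D), unit normal
        pts = pts[~best_in]                      # remove inliers, iterate
    return planes
```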
5. Selecting a three-dimensional plane of a scene:
Selecting image anchor points: in this step, a picture at the designated view angle is selected, the detected feature points are drawn in that picture, and N feature points on a certain plane are picked up; this process is done by the user through the UI (typically three points suffice); once a feature point is picked up, its three-dimensional coordinates can be determined at the same time (by querying the triangulation result of step 3).
Determining the three-dimensional plane of the scene: once the plane feature points are determined, their three-dimensional coordinates are substituted into all plane equations in the set S, and the loss is calculated as

$$\mathrm{loss} = \sum_{i=1}^{N} \lvert A x_i + B y_i + C z_i + D \rvert,$$

where N is the number of picked anchor points, (x_i, y_i, z_i) are the three-dimensional coordinates of the i-th anchor point, and (A, B, C, D) are the parameters of a plane in the set S. The plane with the smallest loss is selected as the designated plane.
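A short sketch of this selection rule, assuming the picked anchor points' three-dimensional coordinates are stacked in an N x 3 array and S is a list of (A, B, C, D) tuples; the division by the normal's length is a safeguard added here for non-unit normals:

```python
# Evaluate the loss of every candidate plane in S against the N anchor
# points and return the plane with the smallest summed point-to-plane
# distance.
import numpy as np

def select_plane(anchors_3d, plane_set):
    losses = []
    for A, B, C, D in plane_set:
        n = np.array([A, B, C])
        losses.append(np.sum(np.abs(anchors_3d @ n + D)) / np.linalg.norm(n))
    return plane_set[int(np.argmin(losses))]
```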
Stage 2: advertisement plane alignment and virtual advertisement generation
In stage 1, the parametric equation of the scene plane into which the advertisement is implanted can be determined using feature matching, plane fitting, and user interaction; this process can be completed in the preprocessing stage. In stage 2, the placement position of the advertisement on the plane must be designated: it is selected by the user on the image of a certain view angle through the UI, and the corresponding three-dimensional point coordinates on the scene plane are calculated from it; finally, the advertisement plane is aligned with the scene and the virtual advertisement is generated.
1. Data preparation
Plane equation: the plane equation is a scene plane determined in the stage 1, and the equation is a general parameter equation A.x+B.y+C.z+D=0 of the plane, wherein plane unit normal vector n p = (A, B, C);
Pick up the location of advertisement placement: as shown in fig. 5, the user selects an image coordinate point to be advertised in a specified view angle image through the UI; when the advertisement implantation position needs to be changed, the coordinates of the image points are picked up again;
Advertising images, as shown in fig. 6.
2. Calculating three-dimensional anchor points for advertisement placement
With the intrinsic and extrinsic camera parameters, the plane equation, and the anchor-point image coordinates known, the corresponding three-dimensional coordinates P_p = (x_p, y_p, z_p) of the point on the plane can be calculated. The calculation formula is

$$\lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \, [\,R \mid t\,] \begin{pmatrix} x_p \\ y_p \\ z_p \\ 1 \end{pmatrix}, \qquad A x_p + B y_p + C z_p + D = 0,$$

where [R|t] are the extrinsic parameters of the current camera, K are the intrinsic parameters of the current camera, (u, v) is the implantation position of the advertisement in the current view angle, and (A, B, C, D) are the scene plane equation parameters.
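A minimal sketch of this back-projection, assuming world-to-camera extrinsics [R|t] so the camera center is -Rᵀt; the ray through pixel (u, v) is intersected with the plane A·x + B·y + C·z + D = 0 (function and argument names are illustrative):

```python
# Compute the 3D anchor P_p by intersecting the pixel's viewing ray with
# the scene plane.
import numpy as np

def anchor_3d(K, R, t, uv, plane):
    A, B, C, D = plane
    n = np.array([A, B, C])
    center = -R.T @ t                                  # camera center in world
    ray = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    s = -(n @ center + D) / (n @ ray)                  # n.(center + s*ray) + D = 0
    return center + s * ray                            # P_p = (x_p, y_p, z_p)
```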
3. The advertising plane is aligned with the scene plane
Advertisement plane: the advertisement plane is a rectangle which is preset on the xy axis in the world coordinate system; the center of the advertisement image is positioned at the origin of a world coordinate system, the length is 2, and the height is 2r (r is the aspect ratio of the advertisement image). Therefore, the normal vector thereof is n a = (0, 1), four corner points P a1=(1,-r,0),pa2=(-1,-r,0),pa3=(-1,r,0),pa4 = (1, r, 0) of the advertisement plane, and an anchor point P a0 = (0, 0) of the advertisement plane;
In this step, the rotation and translation matrices required to align the ad plane with the scene in three dimensions need to be calculated. The rotation matrix of the advertisement aligned with the scene plane is based on the unit normal vector n p of the scene plane, the advertisement plane normal vector n a, the plane anchor point P p and the advertisement plane anchor point P a Translation vector t a2p=pp, where c=n p·na,v=na×np, I is the identity matrix, [ v ] × is the antisymmetric matrix of v;
To facilitate the adjustment of the model by the user, a scaling matrix s and a rotation matrix R z around the z-axis of the model are required to be set, wherein Wherein (s x,sy) is the scaling factor of the advertising plane on the xy axis, and alpha is the rotation angle around the z axis, which are all adjusted by the user according to the actual requirement through the UI. Thus, the total rotation matrixTranslation vector/>
After the advertising plane is aligned with the scene plane, the angular point coordinates thereofWhere i ε {1,2,3,4};
This step is debugged by the user before the final virtual advertisement is generated, and after the appropriate dimensions and angles for the embedded advertisement are obtained, it is not necessary to continue to perform this step.
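A Python sketch of the alignment step under the conventions above (Rodrigues-style rotation between the two unit normals, then the user-controlled scale and in-plane rotation); the composition order R_a2p·R_z·S and the parameter names are assumptions:

```python
# Align the preset ad rectangle with the scene plane. Assumptions: n_a and
# n_p are unit normals with n_a != -n_p; p_p is the 3D implantation anchor
# P_p; r is the ad image aspect ratio; (sx, sy, alpha) come from the UI.
import numpy as np

def align_ad_plane(n_a, n_p, p_p, r, sx=1.0, sy=1.0, alpha=0.0):
    v = np.cross(n_a, n_p)                     # v = n_a x n_p
    c = float(n_p @ n_a)                       # c = n_p . n_a
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])        # [v]_x, antisymmetric matrix
    R_a2p = np.eye(3) + vx + vx @ vx / (1.0 + c)   # Rodrigues-style rotation
    S = np.diag([sx, sy, 1.0])                 # user scale on the xy axes
    Rz = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
                   [np.sin(alpha),  np.cos(alpha), 0.0],
                   [0.0, 0.0, 1.0]])           # in-plane rotation by alpha
    M = R_a2p @ Rz @ S                         # total linear transform
    corners = np.array([[1.0, -r, 0.0], [-1.0, -r, 0.0],
                        [-1.0, r, 0.0], [1.0, r, 0.0]])  # P_a1..P_a4
    return (M @ corners.T).T + p_p             # P'_ai = M P_ai + t_a2p
```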
4. Homography matrix computation
Reprojecting the advertisement-plane corner points: with the intrinsic and extrinsic parameters of each camera and the three-dimensional coordinates of the advertisement-plane corner points known, the reprojected coordinates of the corner points on the image at each view angle can be calculated;
Calculating the homography matrix: the homography between the advertisement image and each view-angle image is calculated from the projected image points of the advertisement-plane corners and the advertisement-image corners.
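A minimal sketch of this step for one camera view, assuming OpenCV: the four aligned corners are reprojected with P = K[R|t] and cv2.getPerspectiveTransform maps the advertisement image's corners onto them; the corner ordering is an assumption:

```python
# Build the 3x3 homography mapping the ad image into one view.
import cv2
import numpy as np

def view_homography(K, R, t, corners_3d, ad_w, ad_h):
    P = K @ np.hstack([R, t.reshape(3, 1)])            # 3x4 projection matrix
    X_h = np.hstack([corners_3d, np.ones((4, 1))])     # homogeneous corners
    uv_h = (P @ X_h.T).T
    uv = (uv_h[:, :2] / uv_h[:, 2:]).astype(np.float32)  # reprojected corners
    # Ad-image corners ordered to match P_a1..P_a4; with y up in world
    # coordinates, P_a1 = (1, -r, 0) is assumed to map to bottom-right.
    src = np.float32([[ad_w, ad_h], [0, ad_h], [0, 0], [ad_w, 0]])
    return cv2.getPerspectiveTransform(src, uv)
```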
5. Image perspective transformation and synthesis
As shown in fig. 7, a perspective transformation is applied to the advertisement image using the homography matrix, and the transformed advertisement image is superimposed onto the live-action image.
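A minimal compositing sketch, assuming OpenCV: the advertisement image is warped with the homography and overwrites the live-action pixels inside its warped footprint; the hard mask is illustrative (a deployment might feather or alpha-blend the edges):

```python
# Warp the ad image into the view and composite it over the frame.
import cv2
import numpy as np

def composite_ad(frame, ad_img, H):
    h, w = frame.shape[:2]
    warped = cv2.warpPerspective(ad_img, H, (w, h))    # ad into view geometry
    mask = cv2.warpPerspective(
        np.full(ad_img.shape[:2], 255, np.uint8), H, (w, h))
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]   # overwrite pixels in the ad footprint
    return out
```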
The preferred embodiments of the present invention have been described in detail above. It should be understood that a person of ordinary skill in the art could make numerous modifications and variations according to the concept of the invention without creative effort. Therefore, all technical solutions that a person skilled in the art can obtain through logical analysis, reasoning, or limited experimentation on the basis of the prior art and according to the inventive concept shall fall within the protection scope defined by the claims.

Claims (3)

1. A virtual advertisement implantation method applied to bullet time, comprising the steps of:
S1: scene-designated-plane detection: performing three-dimensional reconstruction of the scene in step S1, obtaining a sparse three-dimensional point cloud of the scene, searching for planes in the sparse three-dimensional point cloud, and letting the user select the plane into which the advertisement is to be implanted; step S1 further comprises:
S11, algorithm input: synchronously acquiring a group of images from a multi-view camera system, each camera acquiring one frame; calibrating the cameras with this group of images to obtain the intrinsic and extrinsic camera parameters; the camera parameters and the group of images serving as the input of the plane-detection algorithm;
S12, feature matching: performing feature matching on the group of images to obtain the matched feature points of two adjacent images;
S13, triangulation: performing triangulation using the intrinsic and extrinsic camera parameters and the corresponding image coordinates of the feature points, and calculating the three-dimensional coordinates of the feature points in the scene to obtain the sparse three-dimensional point cloud of the scene;
S14, plane fitting: fitting all planes present in the sparse three-dimensional point cloud using the RANSAC algorithm, wherein the RANSAC algorithm is:
estimating the mathematical model of a plane in an iterative manner from a group of point clouds containing outliers; in the mathematical model, after each iteration finds a plane, removing the inliers belonging to that plane from the point cloud; executing the RANSAC iteration again on the remaining point cloud to search for another plane; and stopping execution of the RANSAC algorithm when the number of points in the remaining point cloud falls below a threshold, thereby obtaining the set of all possible planes in the three-dimensional point cloud;
S15, selecting the designated plane: selecting, by the user through a UI, image anchor points of the advertisement plane to be implanted in the image of a specified view angle, and determining the parameter equation of the designated plane from the anchor points, wherein step S15 further comprises:
S151, selecting image anchor points: selecting a picture at the designated view angle, drawing the detected feature points in that picture, and picking up N feature points on a certain plane; once the feature points are picked up, their three-dimensional coordinates can be determined;
S152, ground-plane determination: once the plane feature points are determined, substituting their corresponding three-dimensional coordinates into all plane equations and calculating the loss as

$$\mathrm{loss} = \sum_{i=1}^{N} \lvert A x_i + B y_i + C z_i + D \rvert,$$

where N is the number of picked anchor points and (A, B, C, D) are the parameters of a plane equation in the set S; and selecting the plane with the smallest loss as the designated plane;
S2: advertisement-plane alignment and virtual-advertisement generation: designating the implantation position of the advertisement on the plane, the position being selected by the user on the image of a certain view angle through a UI, and calculating from it the corresponding three-dimensional point coordinates on the scene plane; and finally aligning the advertisement plane with the scene and generating the virtual advertisement, wherein step S2 further comprises:
S21, algorithm input, comprising the advertisement image to be implanted, the intrinsic and extrinsic camera parameters calibrated in step S1, the scene plane equation fitted in step S1, the image anchor point designating where the advertisement is to be implanted, and the acquired video sequence into which the advertisement is to be implanted, wherein in step S21:
the plane equation is that of the scene plane determined in step S15, the equation being the general parametric plane equation A·x + B·y + C·z + D = 0, where the plane unit normal vector is n_p = (A, B, C);
the image anchor point for advertisement implantation is the image coordinate point, selected by the user through a UI in the designated-view-angle image, where the advertisement is to be implanted; when the advertisement implantation position needs to be changed, the image point coordinates are picked up again;
S22, calculating the three-dimensional coordinates of the advertisement implantation anchor point: calculating the three-dimensional point coordinates where the advertisement plane is to be placed according to the intrinsic and extrinsic camera parameters and the scene plane equation;
the calculation formula being as follows, where R, K, (u, v), t, and (A, B, C, D) are known parameters:

$$\lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \, [\,R \mid t\,] \begin{pmatrix} x_p \\ y_p \\ z_p \\ 1 \end{pmatrix}, \qquad A x_p + B y_p + C z_p + D = 0,$$

where [R|t] are the extrinsic parameters of the current camera, K are the intrinsic parameters of the current camera, (u, v) is the implantation position of the advertisement in the current view angle, and (A, B, C, D) are the scene plane equation parameters, the solution giving the three-dimensional anchor point P_p = (x_p, y_p, z_p);
S23, aligning the advertisement plane with the scene plane: calculating the rotation matrix and translation vector required to align the advertisement plane with the scene plane according to the advertisement-plane anchor point, the advertisement-plane normal vector, the scene-plane normal vector, and the three-dimensional position of the advertisement implantation anchor point; and setting the transformation matrices required to adjust the size and angle of the advertisement;
the advertisement plane being a rectangle preset on the xy-plane of the world coordinate system; the center of the advertisement image lying at the origin of the world coordinate system, its width being 2 and its height 2r, where r is the aspect ratio of the advertisement image; its normal vector being n_a = (0, 0, 1), the four corner points of the advertisement plane being P_a1 = (1, -r, 0), P_a2 = (-1, -r, 0), P_a3 = (-1, r, 0), P_a4 = (1, r, 0), and the anchor point of the advertisement plane being P_a0 = (0, 0, 0);
in step S23, the rotation and translation required to align the advertisement plane with the scene in three-dimensional space needing to be calculated; from the scene-plane unit normal vector n_p, the advertisement-plane normal vector n_a, the scene-plane anchor point P_p, and the advertisement-plane anchor point P_a, the rotation matrix aligning the advertisement with the scene plane being

$$R_{a2p} = I + [v]_{\times} + [v]_{\times}^{2} \, \frac{1}{1+c},$$

and the translation vector being t_{a2p} = P_p, where c = n_p · n_a, v = n_a × n_p, I is the identity matrix, and [v]_× is the antisymmetric matrix of v;
a scaling matrix S and a rotation matrix R_z about the z-axis of the mathematical model being set, where

$$S = \begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad R_z = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

where (s_x, s_y) are the scaling factors of the advertisement plane along the x and y axes and α is the rotation angle about the z-axis, both adjusted by the user through the UI according to actual requirements; the total rotation matrix thus being R = R_{a2p} R_z S and the translation vector t = t_{a2p};
after the advertisement plane is aligned with the scene plane, the corner coordinates of the advertisement plane being P′_ai = R P_ai + t_{a2p}, where i ∈ {1, 2, 3, 4};
step S23 being debugged by the user before the final virtual advertisement is generated, and execution of step S23 stopping once a suitable scale and angle for the implanted advertisement are obtained;
S24, calculating the homography matrix: reprojecting the advertisement-plane corner points onto the image at each view angle according to the intrinsic and extrinsic camera parameters and calculating the corresponding image coordinates, i.e., the reprojected advertisement-plane corner points; and calculating the homography matrix from the reprojected coordinates of the advertisement-plane corner points and the corner coordinates of the advertisement image;
S25, image perspective transformation and synthesis: applying a perspective transformation to the Logo image using the homography matrix, and compositing the transformed image with the live-action image.
2. The virtual advertisement implantation method applied to bullet time according to claim 1, wherein step S12 further comprises:
S121, feature detection: performing feature detection on the input images, detecting the image feature points with the AKAZE detection algorithm;
S122, feature matching: brute-force matching the features of two adjacent images to determine the matched feature points of the two images;
S123, inlier detection: the brute-force-matched features possibly containing outliers; removing erroneous matches with a Cross-Ratio-Check method, i.e., performing one brute-force matching pass on the feature descriptors of the two images in each direction, and taking the matches that satisfy the ratio threshold in both sets of matching results as inliers.
3. The virtual advertisement implantation method applied to bullet time according to claim 1, wherein in step S24:
reprojecting the advertisement-plane corner points means calculating, with the intrinsic and extrinsic parameters of each camera and the three-dimensional coordinates of the advertisement-plane corner points known, the reprojected coordinates of the advertisement-plane corner points on the image at each view angle;
calculating the homography matrix means calculating the homography between the advertisement image and the image at each view angle from the projected image points of the advertisement-plane corners and the advertisement-image corners.
CN202010844356.9A 2020-08-20 2020-08-20 Virtual advertisement implantation method applied to bullet time Active CN111986133B (en)

Priority Applications (1)

Application: CN202010844356.9A; Priority date: 2020-08-20; Filing date: 2020-08-20; Title: Virtual advertisement implantation method applied to bullet time


Publications (2)

Publication Number Publication Date
CN111986133A CN111986133A (en) 2020-11-24
CN111986133B true CN111986133B (en) 2024-05-03

Family

ID=73443825

Family Applications (1)

Application: CN202010844356.9A; Status: Active; Priority date: 2020-08-20; Filing date: 2020-08-20; Title: Virtual advertisement implantation method applied to bullet time

Country Status (1)

Country: CN; Publication: CN111986133B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1144588A * 1994-03-14 1997-03-05 Scitex America Corp A system for implanting an image into a video stream
WO2011121117A1 * 2010-04-02 2011-10-06 Imec Virtual camera system
CN102982548A * 2012-12-11 2013-03-20 Tsinghua University Multi-view stereoscopic video acquisition system and camera parameter calibration method thereof
CN105976399A * 2016-04-29 2016-09-28 Beihang University Moving object detection method based on SIFT (Scale Invariant Feature Transform) feature matching
WO2019096016A1 * 2017-11-14 2019-05-23 Arashi Vision Inc. (Shenzhen) Method for achieving bullet-time capture effect and panoramic camera
CN109842811A * 2019-04-03 2019-06-04 Tencent Technology (Shenzhen) Co., Ltd. Method, apparatus and electronic device for implanting push information into video
CN110599605A * 2019-09-10 2019-12-20 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111372122A * 2020-02-27 2020-07-03 Tencent Technology (Shenzhen) Co., Ltd. Media content implantation method, model training method, and related apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986296B (en) * 2020-08-20 2024-05-03 叠境数字科技(上海)有限公司 CG animation synthesis method for bullet time


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Application analysis of the virtual tracking and positioning system for the athletics venue at the 2016 Rio Olympic Games: notes on using the Ncam virtual tracking and positioning system at the Rio Olympic athletics venue; Feng Tao; Chen Zhimin; Modern Television Technology; 2017-04-15 (04); full text *
Multi-view three-dimensional reconstruction method based on joint feature matching; Li Shuoming; Chen Yue; Computer Systems & Applications; 2016-10-15 (10); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant