CN112837375B - Method and system for camera positioning inside real space - Google Patents

Method and system for camera positioning inside real space

Info

Publication number
CN112837375B
Authority
CN
China
Prior art keywords
image
positioning
camera
shooting
display panel
Prior art date
Legal status
Active
Application number
CN202110287270.5A
Other languages
Chinese (zh)
Other versions
CN112837375A (en)
Inventor
高发宝 (Gao Fabao)
殷元江 (Yin Yuanjiang)
Current Assignee
Beijing Qiwei Visual Media Technology Co., Ltd.
Original Assignee
Beijing Qiwei Visual Media Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Qiwei Visual Media Technology Co., Ltd.
Priority to CN202110287270.5A
Publication of CN112837375A
Application granted
Publication of CN112837375B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • G06T 15/005 — 3D [Three Dimensional] image rendering; general purpose rendering architectures
    • G06T 19/006 — Manipulating 3D models or images for computer graphics; mixed reality
    • G06V 10/44 — Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/10052 — Image acquisition modality: images from lightfield camera
    • G06T 2207/30244 — Subject of image: camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)

Abstract

A method and system for camera positioning inside a real space, wherein the method comprises: setting a positioning target; shooting towards the positioning target at a first position to acquire a first shooting image of the positioning target; shooting towards the positioning target at a plurality of second positions to acquire a second shooting image group of the positioning target; extracting a first image feature and a second image feature group respectively; determining a mapping relation between shooting position and image features based on the relation between the position differences and the corresponding feature differences; acquiring a positioning shooting image of the positioning target; extracting positioning image features; and determining the position of the camera to be positioned inside the real space based on the mapping relation and the positioning image features. With this method, the actual position of the camera to be positioned can be obtained using only a few cameras, or even a single camera, which avoids positioning cameras with expensive devices such as OptiTrack and StarTracker as in the prior art and saves the cost of virtual shooting.

Description

Method and system for camera positioning inside real space
Technical Field
The present disclosure relates to virtual photography, and more particularly to a method and system for camera positioning inside a real space.
Background
With the development of virtual imaging technology, "virtual shooting" is appearing in more and more fields and plays an important role. In virtual shooting, every shot required by the director during movie shooting, game production, or live-broadcast-room shooting is performed in a virtual scene inside a computer. All the elements required for a shot, including scenery, characters, and lighting, are integrated into the computer, after which the director can "direct" the characters' performance and movement on the computer according to his own intent and obtain images from any angle. In short, any scene the director wants to shoot can be shot. All data entered into the computer originates entirely from the real world; that is, the virtual scene and virtual characters entered into the computer must be a "holographic" copy of the real world and the actors, equivalent to cloning a copy of the real world into the computer as a virtual world, thereby breaking down the boundary between "virtual" and "real".
In the process of virtual shooting, the specific position of the real camera in the real world needs to be acquired so that the positions of the virtual camera and the real camera can be matched to produce the correct display effect of the final virtual shot image. Existing cameras are all positioned in real space by installing external professional positioning equipment that performs the calculation; common positioning systems include OptiTrack, StarTracker, ART-Track, Rayleigh, and the like. Such equipment requires additional system cost and occupies system space.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
According to an aspect of the present disclosure, there is provided a method for camera positioning inside a real space, comprising: setting a positioning target on a boundary surface of the real space; shooting towards the positioning target at a first position opposite to the positioning target to acquire a first shooting image of the positioning target; shooting towards the positioning target at a plurality of second positions other than the first position to acquire a second shooting image group of the positioning target, the second shooting image group including an image shot at each of the second positions; extracting image features of the first shooting image and the second shooting image group respectively to obtain a first image feature and a second image feature group; determining a mapping relationship between a shooting position and the image features of an image of the positioning target shot at that position, based on the correspondence between the position difference of each second position from the first position and the feature difference of the corresponding second image feature from the first image feature; adjusting the shooting direction of a camera to be positioned towards the positioning target to acquire a positioning shooting image of the positioning target; extracting image features of the positioning shooting image to obtain positioning image features; and determining the position of the camera to be positioned inside the real space based on the mapping relationship and the positioning image features.
The method of the present disclosure first sets a positioning target on a boundary surface of the real space and then photographs the positioning target from different positions. From the several groups of shooting positions and the image features of the corresponding shot images, a mapping relationship between shooting position and image features is obtained, so that the position of the camera to be positioned can be determined from this mapping relationship and an image of the positioning target shot by that camera. With this method, the actual position of the camera to be positioned can be obtained using only a few cameras, or even a single camera, which avoids positioning cameras with expensive devices such as OptiTrack and StarTracker as in the prior art and saves the cost of virtual shooting. In addition, the method of the present disclosure requires no additional positioning equipment, further simplifying the virtual shooting system.
Preferably, the real space is a live broadcast room, and the system further comprises: a first LED display panel forming the floor of the live broadcast room, and a second LED display panel and a third LED display panel forming two adjacent side walls of the live broadcast room, each orthogonal to the first LED display panel; the first LED display panel, the second LED display panel, and the third LED display panel are further configured to display a background image. The method of the present disclosure is particularly suited to positioning a camera within a live broadcast room equipped with LED display panels: since an LED display panel has a display function of its own, the positioning target can be displayed directly as an image on the panel without setting up a positioning target in physical form, which simplifies the positioning method of the present disclosure.
According to another aspect of the present disclosure, there is provided a method for virtual shooting, including: creating a virtual scene in a virtual space; creating, in the virtual space, a virtual camera for shooting the virtual scene; arranging, inside the real space, a camera to be positioned for shooting a real scene in the real space; performing the above method for camera positioning inside a real space; adjusting the shooting position of the virtual camera in the virtual space to coincide with the position of the camera to be positioned inside the real space; and acquiring a virtual scene image shot by the virtual camera to serve as the background image of the real scene image shot by the camera to be positioned. By means of the above camera positioning method, this virtual shooting method can accurately obtain the position of the real camera and match the positions of the virtual camera and the real camera, so that in virtual shooting the virtual background picture and the picture of the real scene portion match exactly, greatly improving the realism of the virtually shot picture.
According to a third aspect of the present disclosure, embodiments of the present disclosure disclose a system for camera positioning inside a real space, comprising: a positioning target setting unit configured to set a positioning target on a boundary surface of the real space; a first photographing unit configured to photograph towards the positioning target at a first position facing the positioning target to acquire a first photographed image of the positioning target; a second photographing unit configured to photograph towards the positioning target at a plurality of second positions other than the first position to acquire a second photographed image group of the positioning target, the second photographed image group including an image photographed at each of the second positions; a feature extraction unit, connected with the first photographing unit and the second photographing unit, configured to extract image features of the first photographed image and the second photographed image group respectively to obtain a first image feature and a second image feature group; a calculation unit configured to determine a mapping relationship between a shooting position and the image features of an image of the positioning target shot at that position, based on the correspondence between the position difference of each second position from the first position and the feature difference of the corresponding second image feature from the first image feature; and a camera to be positioned, configured to face the positioning target to acquire a positioning photographed image of the positioning target; wherein the feature extraction unit is connected with the camera to be positioned and is further configured to extract image features of the positioning photographed image to obtain positioning image features; and the calculation unit is further configured to determine the position of the camera to be positioned inside the real space based on the mapping relationship and the positioning image features.
According to a fourth aspect of the present disclosure, embodiments of the present disclosure disclose a system for virtual photography, comprising: a virtual scene creation unit configured to create a virtual scene in a virtual space; a virtual camera created in the virtual space for photographing a virtual scene; the system for camera positioning inside a real space, wherein the camera to be positioned is further configured to shoot a real scene inside the real space; an adjusting unit configured to adjust a photographing position of the virtual camera in the virtual space to coincide with a position of the camera to be positioned inside the real space; and an image processing unit configured to acquire a virtual scene image photographed by the virtual camera as a background image of a real scene image photographed by the camera to be positioned.
According to a fifth aspect of the present disclosure, embodiments of the present disclosure disclose a computer device comprising: a processor; and a memory storing a computer program which, when executed by the processor, causes the processor to perform the above-described method for camera positioning inside a real space.
According to a sixth aspect of the present disclosure, embodiments of the present disclosure disclose a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the above-described method for camera positioning inside a real space.
These and other aspects of the disclosure will be apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 shows a schematic diagram of a method for camera positioning inside a real space according to an exemplary embodiment;
FIG. 2 shows a flowchart of a method for camera positioning inside a real space according to an exemplary embodiment;
FIG. 3 shows a flowchart of a method for camera positioning inside a real space according to an exemplary embodiment;
FIG. 4 shows a top view of a real space during use of a method for camera positioning inside the real space according to an exemplary embodiment;
FIG. 5 shows images of a positioning target photographed at the various positions in FIG. 4;
FIG. 6 shows a schematic diagram of locating a camera to be positioned using a method for camera positioning inside a real space according to an exemplary embodiment;
FIG. 7 shows a schematic diagram of a method for virtual shooting according to an exemplary embodiment;
FIG. 8 shows a block diagram of a system for camera positioning inside a real space according to an exemplary embodiment;
FIG. 9 shows a block diagram of a system for virtual shooting according to an exemplary embodiment;
FIG. 10 shows a block diagram of the structure of an exemplary electronic device to which the exemplary embodiments can be applied.
Detailed Description
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, an element may be one or more if its number is not expressly limited. As used herein, the term "plurality" means two or more, and the term "based on" should be interpreted as "based at least in part on". Furthermore, the terms "and/or" and "at least one of" encompass any and all possible combinations of the listed items.
Exemplary embodiments of the present disclosure are described in detail below with reference to the attached drawings.
Fig. 1 shows a schematic diagram of a method for camera positioning inside a real space according to an exemplary embodiment. The method generally comprises the steps of:
step S101, setting a positioning target on a boundary surface of a real space;
step S102, shooting towards a positioning target at a first position opposite to the positioning target to acquire a first shooting image of the positioning target;
step S103 of photographing toward the positioning target at a plurality of second positions other than the first position to acquire a second photographed image group of the positioning target, the second photographed image group including images photographed at each of the second positions;
step S104, respectively extracting image features of a first shooting image and a second shooting image group to obtain a first image feature and a second image feature group;
Step S105 of determining a mapping relationship between a photographing position and image features of an image of a positioning target photographed at the photographing position based on a positional difference between each second position and the first position and a corresponding relationship between feature differences between each second image feature and the first image feature;
Step S106, adjusting the shooting direction of the camera 150 to be positioned towards the positioning target to obtain a positioning shooting image of the positioning target;
Step S107, extracting image features of the positioning shooting image to obtain positioning image features; and
Step S108, determining the position of the camera 150 to be positioned inside the real space based on the mapping relation and the positioning image features.
It should be noted that the "camera" described herein may be a camera for capturing still images or a video camera for capturing moving images.
The real space may be a preset space of any shape in the real world, for example, a cubic space, a cuboid space, a spherical space, etc., and may be specifically determined according to shooting requirements. The shape of the boundary surface of the real space may also vary according to the shape of the real space. In the present embodiment, the real space is preferably a rectangular parallelepiped space, and therefore the boundary surface thereof is 6 rectangular planes surrounding the rectangular parallelepiped. In step S101, the positioning target may be set on any one of the above 6 rectangular planes. Of course, in other embodiments of the present disclosure, the boundary surface may have other shapes, which will not be described in detail herein.
The reference to "facing" in step S102 means: the connecting line between the device for shooting the positioning target and the midpoint of the positioning target is perpendicular to the boundary surface where the positioning target is located. If the boundary surface where the positioning target is located is a curved surface, the straight line is perpendicular to the tangent plane of the curved surface. Specifically, taking the cube real space as an example, the positioning object is disposed on one of the rectangular boundary surfaces, then the position facing the positioning object should be on a straight line perpendicular to the boundary surface on which the positioning object is located and passing through the center of the positioning object. At a position facing the positioning target, the first photographed image photographed toward the positioning target should be a front view of the positioning target. The first captured image may serve as a baseline reference map for subsequent positioning steps.
In step S103, the plurality of second positions differ from the first position. The second positions may include further "facing" positions (at other points on the same perpendicular line) as well as "non-facing" positions. The first position and the plurality of second positions may be known in advance, or their position information may be obtained by subsequent measurement (i.e., the spatial coordinates of the points at the first and second positions can be known). When shooting the positioning target, whether or not the shooting device is at a position facing the positioning target, it should shoot towards the positioning target so that the positioning target lies at the center of the shot image, which facilitates subsequent image feature extraction and image comparison.
In step S104, an image feature refers to an attribute of the captured positioning target within the overall shot image, for example: the pixel size of the positioning target in the shot image, the proportion of the whole image that the positioning target occupies, or the side lengths, area, and degree of distortion of the positioning target in the shot image.
Due to the difference in shooting positions, the resulting shot images of the positioning target tend to be different. In other words, there is a one-to-one correspondence between the photographing position and the photographed image of the positioning target, and in step S105, by comparing the positioning target images photographed at different positions, the one-to-one correspondence may be obtained by fitting calculation, thereby obtaining a mapping relationship between the photographing position and the image characteristics of the image of the positioning target photographed at the photographing position. According to the mapping relation, the position information of any shooting point in the real space can be obtained through shooting images of the positioning target. For example, in step S106, a point whose position is unknown may be arbitrarily selected, and then the positioning target is photographed to obtain a positioning photographed image. Then in step S107, the image features are extracted in the same manner as in step S104, resulting in a localization image feature. Finally, in step S108, the location information of the unknown point is obtained based on the mapping relationship obtained in S105 and the location image feature of the current location.
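As a simple illustration of why such a one-to-one correspondence can exist, consider the sketch below, which assumes an idealized pinhole camera; the model and all numeric values are illustrative assumptions and are not taken from the present disclosure.

    # Illustrative sanity check under an assumed pinhole model (Python): for a
    # target of physical width W viewed head-on at distance Z with focal length
    # f (in pixels), the imaged width is s = f * W / Z, so the ratio of imaged
    # widths at two positions recovers the change in shooting distance.
    f = 1000.0   # focal length in pixels (assumed value)
    W = 0.5      # physical width of the positioning target in metres (assumed)

    Z0 = 4.0              # known calibration distance to the target
    s0 = f * W / Z0       # imaged width measured at the known position

    Z1 = 3.0              # camera moved closer (unknown in practice)
    s1 = f * W / Z1       # imaged width observed at the new position

    Z_recovered = Z0 * s0 / s1   # larger imaged width => shorter distance
    assert abs(Z_recovered - Z1) < 1e-9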
Fig. 2 shows a flowchart of a method for camera positioning inside a real space according to an exemplary embodiment. The method of fig. 2 is substantially the same as the method of fig. 1, but some of the steps in the method of fig. 1 are refined, as shown in fig. 2, and the method comprises the steps of:
step S201, setting a positioning target on a boundary surface of a real space;
Step S202, acquiring a first shooting image through a first positioning camera;
step S203, a second shooting image group is acquired through a plurality of second positioning cameras;
step S204, extracting image features of a first shooting image and a second shooting image group respectively to obtain a first image feature and a second image feature group;
step S205 of determining a mapping relationship between a photographing position and image features of an image of a positioning target photographed at the photographing position based on a positional difference between each second position and the first position and a corresponding relationship between feature differences between each second image feature and the first image feature;
Step S206, adjusting the shooting direction of the camera 150 to be positioned towards the positioning target to obtain a positioning shooting image of the positioning target;
step S207, extracting image features of the positioning shooting image to obtain positioning image features; and
Step S208, determining the position of the camera 150 to be positioned inside the real space based on the mapping relation and the positioning image features.
To describe the method of the present disclosure in more detail, it is described below in connection with an embodiment having more details. In one embodiment according to the present disclosure, the real space is a cuboid live broadcast room. The live broadcast room has a floor and two side walls as shown in fig. 6. The floor and the two side walls are mutually perpendicular and form a corner of the live broadcast room. The floor and the two side walls are all composed of LED display panels, and the LED display panels can display background images different from the real world, giving the live broadcast room a virtual live-broadcast effect. Preferably, each LED display panel is further connected to a picture rendering server, and the picture rendering server controls the display of the LED display panel. Of course, in other embodiments of the present disclosure the real space need not be a live broadcast room but may be a movie shooting site, an animation production site, etc. In addition, where the real space is a live broadcast room, the room may also have more or fewer than 3 LED display panels, for example 4 display panels, or only 2 display panels (only the floor and one side wall); implementations of the present disclosure are not limited by any of the above factors.
In this embodiment, the positioning target may be a specific image displayed by an LED display panel, so that the positioning image can be displayed by means of the live broadcast room's own LED display panels without setting up a positioning target in physical form, which simplifies the positioning method of the present disclosure. In other embodiments without an LED display panel, the positioning target may instead be a physical object arranged on a wall or other boundary surface of the real space, for example a marker plate or a sticker. In this embodiment, the positioning target may be a rectangular checkerboard image; since the rectangle is a standard shape, making the positioning target rectangular facilitates the subsequent positioning calculation. The checkerboard image includes a plurality of alternating black and white grid cells, each of the same size. Using a checkerboard as the positioning target provides more positioning reference units: each grid cell can serve as a positioning reference, so that in the subsequent step of determining the mapping relationship, data can be collected from every grid cell, yielding more fitting data and a better fit. In one embodiment of the present disclosure, the image features may accordingly be chosen in a subsequent step to include per-cell features (e.g., the side lengths) of the checkerboard image in the shot image, increasing the richness of the fitting data.
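Purely as an illustration of how such per-cell features might be measured in practice, the following Python sketch uses OpenCV's checkerboard detector; the 7x7 interior-corner pattern size, the particular feature choices, and the helper name extract_features are assumptions, not part of the disclosure.

    # Minimal sketch: detect the checkerboard positioning target in a frame and
    # measure image features of the kind discussed above (edge lengths, area).
    import cv2
    import numpy as np

    PATTERN = (7, 7)  # interior corners per row/column (assumed board size)

    def extract_features(image_bgr):
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if not found:
            return None
        grid = corners.reshape(PATTERN[1], PATTERN[0], 2)
        # Lengths of the left and right edges of the detected corner grid;
        # their difference reflects the horizontal (X) viewing angle.
        left_len = float(np.linalg.norm(grid[-1, 0] - grid[0, 0]))
        right_len = float(np.linalg.norm(grid[-1, -1] - grid[0, -1]))
        # Area spanned by the four outer corners; it reflects distance (Y).
        quad = np.array([grid[0, 0], grid[0, -1], grid[-1, -1], grid[-1, 0]],
                        dtype=np.float32)
        area = float(cv2.contourArea(quad))
        return {"left": left_len, "right": right_len, "area": area}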
In this embodiment, the camera 150 to be positioned, whose position is unknown, may be positioned using a plurality of positioning cameras at different known positions. The plurality of positioning cameras comprise a first positioning camera at the first position and a plurality of second positioning cameras at the respective second positions. In step S202, the first positioning camera may be turned on alone to capture the positioning target, obtaining the first captured image. In step S203, the plurality of second positioning cameras may be turned on simultaneously to capture the positioning target, each second positioning camera obtaining one second captured image, the plurality of second captured images forming the second captured image group. The first positioning camera and the second positioning cameras use the same picture settings when shooting, so that the images obtained in each shot are of equal size.
Specifically, as shown in fig. 6, a rectangular coordinate system may be established for the live broadcast room (i.e., the real space). For convenience of explanation, assume the positioning target is displayed on the second LED display panel 112, though it could equally be displayed on the third LED display panel 113. To facilitate the subsequent positioning, the positioning target is preferably displayed at the midpoint of the second LED display panel 112. The X-Y plane represents the horizontal plane, with the direction parallel to the second LED display panel 112 defined as the X direction, the direction parallel to the third LED display panel 113 defined as the Y direction, and the vertical direction defined as the Z direction. For ease of positioning, the center of the first LED display panel 111 may be defined as the origin of the X axis, and the center of the second LED display panel 112 as the origin of the Y axis. In this embodiment, as shown in fig. 4, the first position is at the origin O of the X-Y plane, and there are preferably 4 second positions: the position X1 reached by moving from the origin O a first preset distance along the negative X axis, the position X2 reached by moving a first preset distance along the positive X axis, the position Y1 reached by moving a second preset distance along the positive Y axis, and the position Y2 reached by moving a second preset distance along the negative Y axis. The first and second preset distances may be determined according to the size of the live broadcast room. Fig. 5 shows the images of the positioning target photographed at the respective positions in fig. 4, each panel representing the image shot at a different position. As the figure shows, the features of the shot image differ because the shooting positions differ. Specifically, at point Y1 the shooting distance is short, so the positioning target appears large in the image; conversely, at Y2 it appears small. Because the shooting directions at X1 and X2 form an angle with the plane of the positioning target, the positioning target image is distorted to some degree (the lengths of its left and right edges differ from those in the checkerboard image of the first shot image).
Based on these characteristics of the shot images, the image features may be defined as the area of the checkerboard grid in the shot image and the lengths of the left and right edges of the checkerboard grid. The area corresponds to the shooting point's position on the Y axis and can therefore represent it; the lengths of the two side edges correspond to the shooting point's position on the X axis and can therefore represent it. From these two image features, the position of the camera 150 to be positioned in the X-Y plane can essentially be determined.
Specifically, the feature difference between each second shot image and the first shot image may be calculated relative to the image features of the first shot image. Taking the second image shot at position Y1 as an example, if the area of the positioning target in that image is 150% of its area in the first shot image, the feature difference may be recorded as 50%. Since the position information of the first position O and the second position Y1 is known, the position difference between the two positions is available, so the feature difference and the position difference form a pair of corresponding data. Given enough such correspondence data, a functional mapping of position difference to image feature difference can be fitted. In step S205, this fitting step may be completed using specialized mathematical software such as MATLAB or Maple. Similarly, a mapping between the position difference and the difference in the side lengths of the checkerboard grid can be fitted. In other embodiments of the present disclosure, the feature difference may be the difference between a side of the checkerboard grid in the first shot image and the corresponding side in a second shot image, or any other value usable as an image feature difference; in any case, implementations of the present disclosure are not limited by the specific type of image feature difference.
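A minimal sketch of this fitting-and-inversion step is given below, with NumPy standing in for the mathematical software; the correspondence values, the quadratic model order, and the helper name dy_from_area_diff are illustrative assumptions.

    # Sketch: fit the feature difference as a function of the position
    # difference, then invert the fit to locate an unknown shooting point.
    import numpy as np

    # Invented example pairs: dy are Y-axis position differences from O (m),
    # d_area the matching relative area differences (0.5 means "50% larger").
    dy = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    d_area = np.array([-0.31, -0.18, 0.0, 0.24, 0.52])

    a, b, c = np.polyfit(dy, d_area, 2)  # quadratic least-squares fit

    def dy_from_area_diff(d):
        # Solve a*x**2 + b*x + (c - d) = 0 and keep the real root that lies
        # inside the calibrated range of position differences.
        roots = np.roots([a, b, c - d])
        real = roots[np.abs(roots.imag) < 1e-9].real
        real = real[(real >= dy.min()) & (real <= dy.max())]
        return float(real[0]) if real.size else None

    # The X coordinate is handled analogously, fitting the difference in the
    # left/right edge lengths against the X-axis position difference.
    print(dy_from_area_diff(0.24))  # ~0.5 for these example values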
Since the vertical position of the camera 150 to be positioned is usually not a concern during live broadcasting (the cameras of a live broadcast room are generally placed at the same preset height), the above embodiment selects the first position and the plurality of second positions within a single X-Y (horizontal) plane, so only the X-Y coordinates can be determined, not the Z coordinate. In some other embodiments of the present disclosure, second positions may also be provided at different heights to enable positioning of the camera 150 to be positioned along the Z axis.
After the functional mapping of position to image features has been obtained, in step S208 the position information of any camera 150 to be positioned whose position is unknown can be obtained from the image features of its shot image, according to the mapping relationship.
Fig. 3 shows a flowchart illustrating a method for camera positioning inside a real space according to an exemplary embodiment. The method is a variation of the method of fig. 2, as shown in fig. 3, and comprises the steps of:
step S301, setting a positioning target on a boundary surface of a real space;
Step S302, setting a positioning camera at a first position and acquiring a first shooting image;
Step S303, setting the positioning cameras at a plurality of second positions in sequence, and acquiring a plurality of second shooting images in sequence to form a second shooting image group;
Step S304, extracting image features of a first shooting image and a second shooting image group respectively to obtain a first image feature and a second image feature group;
step S305 of determining a mapping relationship between a photographing position and image features of an image of a positioning target photographed at the photographing position based on a positional difference between each second position and the first position and a corresponding relationship between feature differences between each second image feature and the first image feature;
Step S306, the shooting direction of the camera 150 to be positioned is adjusted towards the positioning target to obtain a positioning shooting image of the positioning target;
Step S307, extracting image features of the positioning shooting image to obtain positioning image features; and
Step S308, determining the position of the camera 150 to be positioned inside the real space based on the mapping relation and the positioning image features.
The method of the present embodiment may implement positioning using only one positioning camera. Specifically, the positioning camera is sequentially moved and shot at the first position and the plurality of second positions, and the first shot image and the plurality of second shot images can be obtained by only one positioning camera, so that a plurality of cameras are not required to be used in the method of the embodiment, and the method and the system of the disclosure are simplified. The other steps of this embodiment are similar to the method shown in fig. 2 and will not be described in detail here.
Fig. 7 illustrates a flowchart showing a method for virtual photographing according to an exemplary embodiment. The method comprises the following steps:
Step S701, creating a virtual scene in a virtual space;
step S702, creating a virtual camera 220 for shooting a virtual scene in a virtual space;
step S703, setting the to-be-positioned camera 150 for shooting the real scene in the real space;
Step S704, the method for camera positioning inside the real space;
step S705, adjusting the shooting position of the virtual camera 220 in the virtual space to coincide with the position of the camera 150 to be positioned inside the real space;
step S706, acquiring a virtual scene image captured by the virtual camera 220 as a background image of a real scene image captured by the camera 150 to be positioned;
in step S707, the background image is displayed using the first LED display panel 111, the second LED display panel 112, and the third LED display panel 113.
In step S701, a virtual scene is created in a virtual space using a 3D image processing engine. Such 3D image processing engines include, but are not limited to, OpenGL, Unreal, Quake, and the like. The virtual scene will be used in the subsequent virtual shooting of images.
In step S702, the virtual camera 220 is not a real camera but a construct in the virtual space, capable of shooting the virtual scene within that space. In actual operation, the virtual camera 220 may be an application module built into the above-mentioned 3D engine, and its parameters may be adjusted directly in the 3D engine, for example: camera position, shooting angle, picture, resolution, and focal length.
In step S703, the camera 150 to be positioned is set inside the real space. The camera 150 to be positioned will be used to take a real scene of the real space. In a live broadcasting room, a plurality of cameras for shooting real scenes are generally arranged due to the requirements of different shooting angles, and the viewing angles of the plurality of cameras are switched through a controller connected with the plurality of cameras.
In step S704, the camera 150 to be positioned is positioned by applying the above camera positioning method, so as to obtain the position information of the real camera to be positioned in the real space, and the specific process is referred to the method in fig. 1, which is not described herein.
In step S705, the position parameters of the virtual camera 220 are adjusted in the 3D engine so that the position of the virtual camera 220 in the virtual space and the position of the to-be-positioned camera 150 in the real space coincide. Thus, the virtual camera 220 and the real camera to be positioned have a matching photographing position and photographing angle.
Step S706 is a step of the image synthesis process. To obtain a virtually shot image, the virtual scene image shot by the virtual camera 220 may be acquired as the background image of the real scene image shot by the camera 150 to be positioned. In other words, the virtual scene shot by the virtual camera 220 replaces the background portion of the real scene shot by the real camera, while the foreground portion of the real scene (e.g., a presenter or a live platform within the live broadcast room) is preserved, yielding a composited virtual shot image. The resulting effect is that the foreground of the real scene (presenter, live platform, etc.) appears in an environment whose background is the virtual scene; and because the positions of the virtual camera 220 and the real camera are already matched, the virtual background picture and the picture of the real scene portion match exactly, greatly improving the realism of the virtually shot picture.
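The compositing itself can be sketched as a simple masked replacement; the mask input and the function name below are assumptions, since the present disclosure does not specify how the foreground is separated from the background.

    # Schematic compositing sketch (Python/NumPy, assumed inputs): keep the
    # real foreground, replace everything else with the virtual render.
    import numpy as np

    def composite(real_frame, virtual_frame, foreground_mask):
        # real_frame, virtual_frame: HxWx3 arrays of equal shape.
        # foreground_mask: HxW boolean array, True where the real foreground
        # (presenter, live platform, ...) must be preserved; producing the
        # mask (chroma keying, segmentation, ...) is outside this sketch.
        out = virtual_frame.copy()
        out[foreground_mask] = real_frame[foreground_mask]
        return out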
In step S707, if the real space is a live room with an LED display panel, the virtual background image may be displayed through the LED panel, so as to achieve the effect of the virtual live room. Specifically, the virtual background image data may be sent from the computing device running the 3D engine to a rendering server for controlling the LED display panel to display, and the rendering server controls the LED display panel to display the background image after receiving the data.
The present disclosure also discloses a system 100 for camera positioning inside a real space, comprising: a positioning target setting unit 110, a first photographing unit 120, a second photographing unit 130, a feature extraction unit 140, a calculation unit 160, and a camera 150 to be positioned. The positioning target setting unit 110 is configured to set a positioning target on a boundary surface of the real space. The first photographing unit 120 is configured to photograph towards the positioning target at a first position facing the positioning target to acquire a first photographed image of the positioning target. The second photographing unit 130 is configured to photograph towards the positioning target at a plurality of second positions other than the first position to acquire a second photographed image group of the positioning target, the second photographed image group including an image photographed at each of the second positions. The feature extraction unit 140 is connected with the first photographing unit 120 and the second photographing unit 130, and is configured to extract image features of the first photographed image and the second photographed image group respectively to obtain a first image feature and a second image feature group. The calculation unit 160 is configured to determine a mapping relationship between a shooting position and the image features of an image of the positioning target shot at that position, based on the correspondence between the position difference of each second position from the first position and the feature difference of the corresponding second image feature from the first image feature. The camera 150 to be positioned is configured to face the positioning target to acquire a positioning photographed image of the positioning target. The feature extraction unit 140 is also connected with the camera 150 to be positioned and is further configured to extract image features of the positioning photographed image to obtain the positioning image features. The calculation unit 160 is further configured to determine the position of the camera 150 to be positioned inside the real space based on the mapping relationship and the positioning image features. Unless otherwise specified, the connections here generally refer to communication connections, i.e., connections over which data can be exchanged.
In one embodiment of the present disclosure, the first photographing unit 120 and the second photographing unit 130 include the same positioning camera. The positioning camera is configured to acquire a first photographed image at a first position; and sequentially moving to a plurality of second positions, and sequentially acquiring a plurality of second shooting images to form a second shooting image group. The movement of the positioning camera can be controlled by a cradle head for supporting the camera.
In another embodiment of the present disclosure, the first photographing unit 120 and the second photographing unit 130 include different plurality of positioning cameras including a first positioning camera at a first position and a plurality of second positioning cameras at a plurality of second positions, respectively; the first positioning camera is configured to acquire a first shooting image at a first position; the second positioning camera respectively acquires a plurality of second shooting images at a plurality of second positions to form a second shooting image group.
The real space may be a live broadcast room having a first LED display panel 111 constituting its floor, and a second LED display panel 112 and a third LED display panel 113 constituting two adjacent side walls of the live broadcast room, each orthogonal to the first LED display panel 111. The positioning target setting unit 110 then comprises the second LED display panel 112 or the third LED display panel 113, configured to display the positioning target as an image.
Here, the operations of the respective components in the system 100 for camera positioning inside the real space are similar to the operations of steps S101 to S108 described above, respectively, and are not described here again.
The present disclosure also discloses a system 200 for virtual photography, comprising: a virtual scene creation unit 210, a virtual camera 220, a camera to be positioned 150, a system 100 for camera positioning as described above, an adjustment unit 230 and an image processing unit 240. The virtual scene creation unit 210 is configured to create a virtual scene in a virtual space. The virtual camera 220 is created in a virtual space for photographing a virtual scene. The camera 150 to be positioned is disposed inside the real space for photographing a real scene inside the real space. The adjustment unit 230 is configured to adjust a photographing position of the virtual camera 220 in the virtual space to coincide with a position of the camera 150 to be positioned inside the real space. The image processing unit 240 is configured to acquire a virtual scene image captured by the virtual camera 220 as a background image of a real scene image captured by the camera 150 to be positioned.
The real space may be a live broadcast room, and the system 200 further includes: a first LED display panel 111 constituting the floor of the live broadcast room, and a second LED display panel 112 and a third LED display panel 113 constituting two adjacent side walls of the live broadcast room, each orthogonal to the first LED display panel 111; the first LED display panel 111, the second LED display panel 112, and the third LED display panel 113 are also configured to display a background image.
Here, the operations of the respective components in the system 200 for virtual shooting are similar to those of the steps S701 to S707 described above, respectively, and are not described here again.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
Referring to fig. 10, a block diagram of a structure of an electronic device 1000 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the device 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the device 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004. The computing unit 1001 may be the same unit as the calculation unit 160 in fig. 8, or may be a different unit.
Various components in the device 1000 are connected to the I/O interface 1005, including: an input unit 1006, an output unit 1007, a storage unit 1008, and a communication unit 1009. The input unit 1006 may be any type of device capable of inputting information to the device 1000; it may receive input numeric or character information, generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. The output unit 1007 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 1008 may include, but is not limited to, magnetic disks and optical disks. The communication unit 1009 allows the device 1000 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth(TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 1001 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, or microcontroller. The computing unit 1001 performs the methods and processes described above, for example the method for camera positioning inside a real space. For example, in some embodiments, the method for camera positioning inside a real space may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1000 via the ROM 1002 and/or the communication unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the method described above for camera positioning inside a real space may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the method for camera positioning inside a real space by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely illustrative embodiments or examples, and that the scope of the present disclosure is limited not by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents. Furthermore, the steps may be performed in an order different from that described in the present disclosure, and the various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements appearing after the present disclosure.

Claims (18)

1. A method for camera positioning inside a real space, comprising:
setting a positioning target on a boundary surface of the real space;
shooting toward the positioning target at a first position facing the positioning target to acquire a first shooting image of the positioning target;
shooting toward the positioning target at a plurality of second positions other than the first position to acquire a second shooting image group of the positioning target, the second shooting image group including an image shot at each of the second positions;
extracting image features of the first shooting image and of the second shooting image group respectively to obtain a first image feature and a second image feature group;
determining a mapping relationship between a shooting position and the image features of an image of the positioning target shot at that shooting position, based on the correspondence between the position difference of each second position from the first position and the feature difference of each corresponding second image feature from the first image feature;
adjusting the shooting direction of a camera to be positioned toward the positioning target to acquire a positioning shooting image of the positioning target;
extracting image features of the positioning shooting image to obtain positioning image features; and
determining the position of the camera to be positioned inside the real space based on the mapping relationship and the positioning image features.
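Purely for illustration (this sketch is not part of the claims, and the patent does not prescribe an implementation): assuming a checkerboard positioning target, OpenCV corner detection, and a linear least-squares model of the difference correspondence recited in claim 1, the mapping could be fitted and applied as follows. All function and variable names are invented.

```python
# Illustrative sketch only: a linear difference-based mapping between
# image features and camera positions. Assumes OpenCV (cv2) and NumPy.
import cv2
import numpy as np

def extract_features(image, pattern=(7, 6)):
    # Image features: the flattened pixel coordinates of the inner
    # checkerboard corners detected in the shot image.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        raise ValueError("positioning target not detected")
    return corners.reshape(-1)

def fit_mapping(first_feature, second_features, first_pos, second_positions):
    # Fit A so that (feature - first_feature) @ A approximates
    # (position - first_pos): the correspondence between feature
    # differences and position differences, modeled linearly.
    dF = np.stack([f - first_feature for f in second_features])
    dP = np.stack([np.asarray(p) - np.asarray(first_pos) for p in second_positions])
    A, *_ = np.linalg.lstsq(dF, dP, rcond=None)
    return A

def locate(A, first_feature, first_pos, positioning_feature):
    # Position of the camera to be positioned, from its positioning image.
    return np.asarray(first_pos) + (positioning_feature - first_feature) @ A
```

In practice the corner estimates would be refined (e.g., with cv2.cornerSubPix) and a perspective-aware model would likely replace the plain linear fit; the sketch only shows the difference-based structure of the mapping recited in the claim.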
2. The method of claim 1, wherein the interior of the real space includes a plurality of positioning cameras for positioning the camera to be positioned, the plurality of positioning cameras including a first positioning camera located at the first position and a plurality of second positioning cameras located at the plurality of second positions respectively, wherein shooting toward the positioning target at the first position facing the positioning target to acquire a first shooting image of the positioning target further comprises:
acquiring the first shooting image through the first positioning camera; and
shooting toward the positioning target at the plurality of second positions other than the first position to acquire a second shooting image group of the positioning target further comprises:
acquiring the second shooting image group through the plurality of second positioning cameras.
3. The method of claim 1, wherein the interior of the real space includes one positioning camera for positioning the camera to be positioned, and shooting toward the positioning target at the first position facing the positioning target to acquire a first shooting image of the positioning target further comprises:
setting the positioning camera at the first position and acquiring the first shooting image; and
shooting toward the positioning target at the plurality of second positions other than the first position to acquire a second shooting image group of the positioning target further comprises:
sequentially setting the positioning camera at the plurality of second positions, and sequentially acquiring a plurality of second shooting images to form the second shooting image group.
4. A method as claimed in any one of claims 1 to 3, wherein the real space is a live broadcast room having a first LED display panel constituting a bottom surface thereof, and a second LED display panel and a third LED display panel, each orthogonal to the first LED display panel, constituting two adjacent side walls of the live broadcast room, the positioning target being displayed as an image on the second LED display panel or the third LED display panel.
5. The method of claim 4, wherein the positioning target is a rectangular checkerboard image, the checkerboard image comprises a plurality of grid cells, and the image features include features of each grid cell in the shot image.
6. The method of claim 5, wherein the checkerboard image is displayed in the center of the second or third LED display panel.
7. The method of claim 6, wherein the plurality of second positions are at the same preset height as the first position, and the plurality of second positions comprise a plurality of positions on a first straight line and a plurality of positions on a second straight line, the first straight line passing through the point of the first position and running parallel to the second LED display panel, and the second straight line passing through the point of the first position and running parallel to the third LED display panel.
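As a hypothetical illustration of the claim-7 geometry, the second positions can be generated as horizontal offsets from the first position along two lines parallel to the two wall panels; the axis convention (second panel along the x-axis, third panel along the y-axis, z up) and the offset values are assumptions.

```python
import numpy as np

def second_positions(first_pos, offsets=(-0.5, 0.5, 1.0)):
    # Second positions at the same preset height as the first position:
    # one set along a line parallel to the second LED display panel,
    # one set along a line parallel to the third LED display panel.
    x0, y0, z0 = first_pos
    line1 = [(x0 + d, y0, z0) for d in offsets]  # parallel to second panel
    line2 = [(x0, y0 + d, z0) for d in offsets]  # parallel to third panel
    return np.array(line1 + line2)
```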
8. A method for virtual photography, comprising:
creating a virtual scene in a virtual space;
creating a virtual camera in the virtual space for shooting the virtual scene;
arranging, inside a real space, a camera to be positioned for shooting a real scene inside the real space;
positioning the camera to be positioned using the method for camera positioning inside a real space according to any one of claims 1 to 7;
adjusting the shooting position of the virtual camera in the virtual space to coincide with the position of the camera to be positioned inside the real space; and
acquiring a virtual scene image shot by the virtual camera to serve as a background image for the real scene image shot by the camera to be positioned.
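The virtual-photography loop of claim 8 might be sketched as below, reusing extract_features and locate from the earlier sketch; the engine-side calls (capture, set_position, render) are invented placeholders, not any real API.

```python
def virtual_background_frame(real_cam, virtual_cam, virtual_scene,
                             A, first_feature, first_pos):
    # Locate the real camera via the positioning target, mirror its
    # position onto the virtual camera, and render the background image.
    feature = extract_features(real_cam.capture())
    position = locate(A, first_feature, first_pos, feature)
    virtual_cam.set_position(position)   # keep both cameras consistent
    return virtual_cam.render(virtual_scene)
```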
9. The method of claim 8, wherein the real space is a live broadcast room having a first LED display panel constituting a bottom surface thereof, and a second LED display panel and a third LED display panel, each orthogonal to the first LED display panel, constituting two adjacent side walls of the live broadcast room, the method further comprising, after acquiring the virtual scene image shot by the virtual camera as the background image for virtual photography:
displaying the background image using the first LED display panel, the second LED display panel, and the third LED display panel.
10. A system for camera positioning inside a real space, comprising:
a positioning target setting unit configured to set a positioning target on a boundary surface of the real space;
a first shooting unit configured to shoot toward the positioning target at a first position facing the positioning target to acquire a first shooting image of the positioning target;
a second shooting unit configured to shoot toward the positioning target at a plurality of second positions other than the first position to acquire a second shooting image group of the positioning target, the second shooting image group including an image shot at each of the second positions;
a feature extraction unit connected to the first shooting unit and the second shooting unit and configured to extract image features of the first shooting image and of the second shooting image group respectively to obtain a first image feature and a second image feature group;
a calculation unit configured to determine a mapping relationship between a shooting position and the image features of an image of the positioning target shot at that shooting position, based on the correspondence between the position difference of each second position from the first position and the feature difference of each corresponding second image feature from the first image feature; and
a camera to be positioned, configured to face the positioning target to acquire a positioning shooting image of the positioning target; wherein
the feature extraction unit is connected to the camera to be positioned and is further configured to extract image features of the positioning shooting image to obtain positioning image features; and
the calculation unit is further configured to determine the position of the camera to be positioned inside the real space based on the mapping relationship and the positioning image features.
11. The system of claim 10, wherein
the first shooting unit comprises a first positioning camera located at the first position, the first positioning camera being configured to acquire the first shooting image; and
the second shooting unit comprises a plurality of second positioning cameras located at the plurality of second positions respectively, the plurality of second positioning cameras being configured to acquire the second shooting image group.
12. The system of claim 10, wherein
the first shooting unit and the second shooting unit comprise the same positioning camera; and
the positioning camera is configured to acquire the first shooting image at the first position, and to be moved sequentially to the plurality of second positions to sequentially acquire a plurality of second shooting images forming the second shooting image group.
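For the single-camera variant of claim 12, the sequential acquisition might look like the following sketch, with the mechanism that moves the positioning camera abstracted behind a hypothetical move_to method:

```python
def capture_image_group(camera, first_pos, second_positions):
    # Acquire the first shooting image, then move the same positioning
    # camera through the second positions to form the second image group.
    camera.move_to(first_pos)
    first_image = camera.shoot()
    second_images = []
    for pos in second_positions:
        camera.move_to(pos)
        second_images.append(camera.shoot())
    return first_image, second_images
```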
13. The system of claim 10, wherein the real space is a live broadcast room having a first LED display panel constituting a bottom surface thereof, and a second LED display panel and a third LED display panel, each orthogonal to the first LED display panel, constituting two adjacent side walls of the live broadcast room, and
the positioning target setting unit comprises the second LED display panel or the third LED display panel and is configured to display the positioning target as an image.
14. A system for virtual photography, comprising:
a virtual scene creation unit configured to create a virtual scene in a virtual space;
a virtual camera created in the virtual space for shooting the virtual scene;
the system for camera positioning inside a real space according to any one of claims 10 to 13, wherein the camera to be positioned is further configured to shoot a real scene inside the real space;
an adjustment unit configured to adjust the shooting position of the virtual camera in the virtual space to coincide with the position of the camera to be positioned inside the real space; and
an image processing unit configured to acquire a virtual scene image shot by the virtual camera as a background image for the real scene image shot by the camera to be positioned.
15. The system of claim 14, wherein the real space is a live broadcast room, the system further comprising: a first LED display panel constituting a bottom surface of the live broadcast room, and a second LED display panel and a third LED display panel, each orthogonal to the first LED display panel, constituting two adjacent side walls of the live broadcast room; wherein
the first LED display panel, the second LED display panel, and the third LED display panel are further configured to display the background image.
16. A computer device, comprising:
a memory, a processor, and a computer program stored on the memory,
wherein the processor is configured to execute the computer program to implement the steps of the method of any one of claims 1 to 9.
17. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the method of any of claims 1 to 9.
18. A computer program product comprising a computer program, wherein the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 9.
CN202110287270.5A 2021-03-17 2021-03-17 Method and system for camera positioning inside real space Active CN112837375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110287270.5A CN112837375B (en) 2021-03-17 2021-03-17 Method and system for camera positioning inside real space

Publications (2)

Publication Number Publication Date
CN112837375A (en) 2021-05-25
CN112837375B (en) 2024-04-30

Family

ID=75930334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110287270.5A Active CN112837375B (en) 2021-03-17 2021-03-17 Method and system for camera positioning inside real space

Country Status (1)

Country Link
CN (1) CN112837375B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4869430B1 (en) * 2010-09-24 2012-02-08 任天堂株式会社 Image processing program, image processing apparatus, image processing system, and image processing method
JP6425780B1 (en) * 2017-09-22 2018-11-21 キヤノン株式会社 Image processing system, image processing apparatus, image processing method and program
CN109547925A (en) * 2018-12-07 2019-03-29 纳恩博(北京)科技有限公司 Location updating method, the display methods of position and navigation routine, vehicle and system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109643465A (en) * 2016-06-20 2019-04-16 Cy游戏公司 System etc. for creating mixed reality environment
CN106780617A (en) * 2016-11-24 2017-05-31 北京小鸟看看科技有限公司 A kind of virtual reality system and its localization method
CN109840949A (en) * 2017-11-29 2019-06-04 深圳市掌网科技股份有限公司 Augmented reality image processing method and device based on optical alignment
JP2020204856A (en) * 2019-06-17 2020-12-24 株式会社バンダイナムコアミューズメント Image generation system and program
CN110675348A (en) * 2019-09-30 2020-01-10 杭州栖金科技有限公司 Augmented reality image display method and device and image processing equipment
CN111243025A (en) * 2020-01-16 2020-06-05 任志忠 Method for positioning target in real-time synthesis of movie and television virtual shooting
CN112116572A (en) * 2020-09-14 2020-12-22 景德镇瓷与链智能科技有限公司 Method for accurately positioning surface position image of object by camera
CN112312111A (en) * 2020-10-30 2021-02-02 北京字节跳动网络技术有限公司 Virtual image display method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A real camera interface enabling to shoot objects in virtual space";Keito Tanizaki等;《International Workshop on Advanced Image Technology 2021》;全文 *
"基于圆心真实图像坐标计算的高精度相机标定方法";卢晓冬等;《中国激光》;第47卷(第3期);全文 *

Similar Documents

Publication Publication Date Title
US11605214B2 (en) Method, device and storage medium for determining camera posture information
US11330172B2 (en) Panoramic image generating method and apparatus
CN108961423B (en) Virtual information processing method, device, equipment and storage medium
CN113426117B (en) Shooting parameter acquisition method and device for virtual camera, electronic equipment and storage medium
CN112882576B (en) AR interaction method and device, electronic equipment and storage medium
US11922568B2 (en) Finite aperture omni-directional stereo light transport
CN109688343A (en) The implementation method and device of augmented reality studio
CN116057577A (en) Map for augmented reality
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
US20030146922A1 (en) System and method for diminished reality
WO2022102476A1 (en) Three-dimensional point cloud densification device, three-dimensional point cloud densification method, and program
JP7262530B2 (en) Location information generation method, related device and computer program product
CN113436348B (en) Three-dimensional model processing method and device, electronic equipment and storage medium
WO2019196871A1 (en) Modeling method and related device
WO2021217403A1 (en) Method and apparatus for controlling movable platform, and device and storage medium
CN112837375B (en) Method and system for camera positioning inside real space
CN114520903B (en) Rendering display method, rendering display device, electronic equipment and storage medium
CN116485969A (en) Voxel object generation method, voxel object generation device and computer-readable storage medium
CN114782611A (en) Image processing method, image processing device, storage medium and electronic equipment
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN110827411B (en) Method, device, equipment and storage medium for displaying augmented reality model of self-adaptive environment
CN111131689B (en) Panoramic image restoration method and system
CN112843694A (en) Picture shooting method and device, storage medium and electronic equipment
JPWO2019244200A1 (en) Learning device, image generator, learning method, image generation method and program
US20230245364A1 (en) Method for Processing Video, Electronic Device, and Storage Medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant