CN117201931A - Camera parameter acquisition method, device, computer equipment and storage medium

Camera parameter acquisition method, device, computer equipment and storage medium

Info

Publication number: CN117201931A
Application number: CN202311061200.3A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 张黎敏, 韦胜钰, 蔡佳, 李诗仪
Applicant/Assignee: China Electronic Product Reliability and Environmental Testing Research Institute
Legal status: Pending
Classification: Studio Devices

Abstract

The application relates to the technical field of data acquisition, and in particular to a camera parameter acquisition method, device, computer equipment and storage medium. The method comprises the following steps: acquiring movement track information of a physical camera; judging, according to the movement track information, whether the physical camera shows signs of movement; if the physical camera shows signs of movement, separately acquiring basic parameters and real-time variation parameters of the physical camera; and determining real parameters of the virtual camera according to the basic parameters and the real-time variation parameters. With this method, the virtual camera and the physical camera can be kept as consistent as possible in their three-dimensional perspective relationship, reducing mismatches between the virtual camera and the real scene and thereby improving the immersion of virtual production works.

Description

Camera parameter acquisition method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of data acquisition technologies, and in particular, to a method and apparatus for acquiring parameters of a camera, a computer device, and a storage medium.
Background
With the development of photographic technology, virtual production has emerged. Virtual production (Virtual Production) is a new film-making mode that applies virtual reality technology to the shooting of films, television dramas and the like: the background is displayed synchronously by a real-time rendering engine on a computer, and a tracked camera composites the background and the performers instantly. This breaks the limitations of time and space, gives directors greater creative freedom, and offers audiences a better visual experience. A virtual production system mainly comprises four parts: an LED large-screen system, a shooting system, a target tracking system and a rendering system.
To improve the immersion of virtual production works, the target tracking system needs to acquire the real physical parameters of the physical camera and map them to the virtual camera; the virtual camera parameters must be set according to the parameters of the physical camera so that the three-dimensional perspective relationship between the three-dimensional virtual scene and the foreground remains fully consistent.
In scenes such as film and television shooting and games, actors need to move continuously, and a fixed-background scheme cannot meet such shooting requirements, so the scene mapped by the virtual camera becomes mismatched with the real scene.
Disclosure of Invention
Based on this, it is necessary to provide a camera parameter acquisition method, device, computer equipment and storage medium capable of keeping the virtual camera and the physical camera as consistent as possible in three-dimensional perspective relationship, reducing mismatches between the virtual camera and the real scene and thereby improving the immersion of virtual production works.
In a first aspect, the present application provides a camera parameter acquisition method. The method comprises the following steps:
acquiring movement track information of a physical camera;
judging, according to the movement track information, whether the physical camera shows signs of movement;
if the physical camera shows signs of movement, separately acquiring basic parameters and real-time variation parameters of the physical camera;
and determining real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
In one embodiment, the types of the basic parameters include image quality parameters, and acquiring the basic parameters of the physical camera includes:
acquiring a relative positional relationship between a test chart and the physical camera;
controlling the test chart to move to a preset position to be shot if it is determined, according to the relative positional relationship, that the test chart is not located at that position;
acquiring external light intensity information, and acquiring scene image information through the test chart;
and analyzing the scene image information according to the external light intensity information to obtain image quality parameters of the physical camera, the image quality parameters including resolution, color accuracy, brightness and dynamic range.
In one embodiment, the types of the basic parameters further include shutter rate, and the method further comprises:
acquiring actual scene image information;
obtaining a waveform diagram according to at least one of the actual scene image information and the scene image information acquired through the test chart;
and analyzing the time difference between two peaks in the waveform diagram to obtain the shutter rate.
In one embodiment, the types of the basic parameters further include frame rate, and the method further comprises:
acquiring video information over a first preset time period, the video information being a test frame set containing scenes and actions;
obtaining, according to the video information, the total number of frames in the test frame set and the sum of the time intervals between consecutive frames;
and obtaining the frame rate according to the total number of frames and the sum of the time intervals between consecutive frames.
In one embodiment, the types of the real-time variation parameters include pose parameters, and acquiring the real-time variation parameters of the physical camera includes:
acquiring an initial position and an initial direction of the physical camera;
acquiring, with the initial position and the initial direction as initial conditions, several pieces of continuous picture information over a second preset time period;
analyzing the several pieces of continuous picture information to obtain feature information of any pair of adjacent pictures in the pixel coordinate system;
obtaining, according to a first formula and the feature information, the feature information of any pair of adjacent pictures in the world coordinate system;
and obtaining pose parameters of the physical camera according to the feature information of any pair of adjacent pictures in the world coordinate system;
wherein the first formula is:

$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

where f is the focal length of the camera; f_x = f/d_x and f_y = f/d_y are the focal length expressed in pixels along the x and y directions, respectively; (u_0, v_0) are the pixel coordinates of the intersection of the camera optical axis with the image plane; R is the rotation matrix of the object; T is the translation matrix of the object; (u, v) are the two-dimensional coordinates of the imaged object in the pixel coordinate system; (X_w, Y_w, Z_w) are the three-dimensional coordinates of the object in the world coordinate system; and Z_c is a scale factor.
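As an illustrative numerical check of the first formula (the values below are assumed for illustration, not taken from the application): with $f_x = f_y = 1000$, $(u_0, v_0) = (640, 360)$, $R = I$, $T = 0$ and a world point $(X_w, Y_w, Z_w) = (0.5,\ 0.25,\ 2)$,

$$ Z_c = 2, \qquad u = \frac{f_x X_w}{Z_c} + u_0 = \frac{1000 \times 0.5}{2} + 640 = 890, \qquad v = \frac{f_y Y_w}{Z_c} + v_0 = \frac{1000 \times 0.25}{2} + 360 = 485, $$

so the point images at pixel (890, 485).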
In one embodiment, the types of the real-time variation parameters include an aperture value, a focal length value, a shooting distance and a depth-of-field parameter, and the method further comprises:
invoking, according to the several pieces of continuous picture information, the aperture value and the focal length value in the detailed information of the acquired pictures;
and obtaining the depth-of-field parameter of the physical camera according to the aperture value, the focal length value and a second formula;
wherein the second formula is:

$$ \mathrm{DOF} = \frac{2 f^{2} F \delta L^{2}}{f^{4} - F^{2} \delta^{2} L^{2}} $$

where f is the focal length of the camera, F is the aperture value, L is the shooting distance, δ = d/1730 is the diameter of the permissible circle of confusion, d is the diagonal length of the CCD chip, and DOF is the depth-of-field parameter.
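As an illustrative numerical check of the second formula (assumed values, not from the application): with f = 50 mm, F = 2.8, L = 3000 mm and a sensor diagonal d = 43.3 mm, so that δ = 43.3/1730 ≈ 0.025 mm,

$$ \mathrm{DOF} = \frac{2 \times 50^{2} \times 2.8 \times 0.025 \times 3000^{2}}{50^{4} - 2.8^{2} \times 0.025^{2} \times 3000^{2}} \approx \frac{3.15 \times 10^{9}}{6.21 \times 10^{6}} \approx 508\ \text{mm}, $$

i.e. about half a meter of acceptably sharp depth at a 3 m shooting distance.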
In a second aspect, the application further provides a camera parameter acquisition device. The device comprises:
an obtaining module for obtaining movement track information of a physical camera;
a judging module for judging, according to the movement track information, whether the physical camera shows signs of movement;
an acquisition module for separately acquiring basic parameters and real-time variation parameters of the physical camera if the physical camera shows signs of movement;
and a processing module for determining real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the following steps:
acquiring movement track information of a physical camera;
judging, according to the movement track information, whether the physical camera shows signs of movement;
if the physical camera shows signs of movement, separately acquiring basic parameters and real-time variation parameters of the physical camera;
and determining real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the following steps:
acquiring movement track information of a physical camera;
judging, according to the movement track information, whether the physical camera shows signs of movement;
if the physical camera shows signs of movement, separately acquiring basic parameters and real-time variation parameters of the physical camera;
and determining real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the following steps:
acquiring movement track information of a physical camera;
judging, according to the movement track information, whether the physical camera shows signs of movement;
if the physical camera shows signs of movement, separately acquiring basic parameters and real-time variation parameters of the physical camera;
and determining real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
According to the above camera parameter acquisition method, device, computer equipment and storage medium, the movement track information of the physical camera is acquired and used to judge whether the camera moves, so that the basic parameters and real-time variation parameters of the physical camera can be collected in real time. Based on these parameters, the real parameters of the virtual camera can be determined, keeping the virtual camera and the physical camera consistent in three-dimensional perspective. This avoids mismatches between the virtual camera and the real scene and improves the immersion of virtual production works. In scenes such as film and television shooting and games, this technique makes audiences or players feel more present, enhancing their participation and immersion.
Drawings
FIG. 1 is an application environment diagram of a camera parameter acquisition method in one embodiment;
FIG. 2 is a flow chart of a method of camera parameter acquisition in one embodiment;
FIG. 3 is a flow chart of step S206 in one embodiment;
FIG. 4 is a signal flow diagram of a camera parameter acquisition device in one embodiment;
FIG. 5 is a block diagram of a camera parameter acquisition device in one embodiment;
FIG. 6 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The camera parameter acquisition method provided by the embodiments of the application can be applied in the application environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104, or placed on a cloud or other network server. The server 104 acquires the movement track information of the physical camera; judges, according to the movement track information, whether the physical camera shows signs of movement; if so, separately acquires the basic parameters and real-time variation parameters of the physical camera; and determines the real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet of Things device or portable wearable device, where the Internet of Things device may be a smart speaker, smart television, smart air conditioner, smart vehicle device or the like, and the portable wearable device may be a smart watch, smart bracelet, headset or the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a camera parameter acquisition method is provided. The method is described here as applied to the server in FIG. 1, and includes the following steps:
Step S202: obtain the movement track information of the physical camera.
Specifically, the position and movement distance of the camera can be measured by installing positioning devices such as ultrasonic or infrared sensors, which provide real-time position information from which the camera's movement track can be determined. From the real-time images shot by the camera, its position and motion information can also be extracted by image processing: for example, an optical flow method can compute motion vectors on the image, from which the camera's movement track is deduced. An inertial measurement unit (IMU), a device integrating accelerometers, gyroscopes and other sensors, can measure the acceleration and angular velocity of an object; mounted on the camera, its output data can be analyzed to calculate the movement track. Alternatively, visual markers such as two-dimensional codes or AR markers can be placed in the actual scene; the camera determines its own position and attitude by recognizing these markers, and its movement track is calculated from the marker positions and shooting angles.
The above are common methods for obtaining the movement track information of a physical camera; an appropriate one can be selected according to specific requirements.
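As a minimal sketch of the optical-flow option above (assuming OpenCV; the function and its parameters are illustrative, not part of the application):

```python
import cv2
import numpy as np

def estimate_camera_shift(prev_gray, curr_gray):
    """Estimate the dominant 2D image shift between two consecutive frames
    with sparse Lucas-Kanade optical flow, as a proxy for camera motion."""
    # detect good features to track in the previous frame
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return np.zeros(2)
    # track them into the current frame
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_old = pts[status.flatten() == 1].reshape(-1, 2)
    good_new = nxt[status.flatten() == 1].reshape(-1, 2)
    if len(good_old) == 0:
        return np.zeros(2)
    # the median displacement is robust to actors moving in the scene
    return np.median(good_new - good_old, axis=0)
```

Accumulating these per-frame shifts over time gives a rough image-space trajectory of the camera.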
Step S204: judge, according to the movement track information, whether the physical camera shows signs of movement.
Specifically, whether the physical camera shows signs of movement can be judged from the movement track information in the following ways:
Speed judgment: calculate the distances between the positions of the physical camera at different time points, and compute the camera's average speed from the time intervals; if the speed exceeds a set threshold, the camera can be judged to show signs of movement. Acceleration judgment: calculate the change in the camera's speed at different time points from the movement track information; if the change exceeds a set threshold, the camera can be judged to show signs of movement. Distance judgment: calculate the total length of the physical camera's movement track; if the movement distance exceeds a set threshold, the camera can be judged to show signs of movement. Direction judgment: analyze the movement direction of the physical camera along its track; if the direction changes, the camera can be judged to show signs of movement.
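A minimal sketch combining these four checks (the thresholds, units and function name are illustrative assumptions, not values from the application):

```python
import numpy as np

def has_movement_signs(positions, timestamps,
                       speed_thr=0.05, accel_thr=0.10,
                       dist_thr=0.02, angle_thr=0.1):
    """positions: (N, 3) camera positions in meters; timestamps: (N,) seconds.
    Returns True if any of the speed / acceleration / distance / direction
    criteria indicates movement. Assumes N >= 2 samples."""
    p = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    steps = np.diff(p, axis=0)                    # per-interval displacements
    dt = np.diff(t)
    speeds = np.linalg.norm(steps, axis=1) / dt   # speed judgment
    accels = np.abs(np.diff(speeds)) / dt[1:]     # acceleration judgment
    total_dist = np.linalg.norm(steps, axis=1).sum()  # distance judgment
    # direction judgment: largest angle between consecutive displacements
    max_angle = 0.0
    if len(steps) >= 2:
        a, b = steps[:-1], steps[1:]
        cosang = np.einsum('ij,ij->i', a, b) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
        max_angle = float(np.arccos(np.clip(cosang, -1.0, 1.0)).max())
    return bool(speeds.max() > speed_thr
                or (accels.size and accels.max() > accel_thr)
                or total_dist > dist_thr
                or max_angle > angle_thr)
```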
Step S206: if the physical camera shows signs of movement, separately acquire the basic parameters and real-time variation parameters of the physical camera; if it shows no signs of movement, only the basic parameters need to be acquired.
Specifically, when the camera shows signs of movement, both the basic parameters and the real-time variation parameters of the physical camera need to be collected to obtain more comprehensive information: the state of a moving camera changes over time, and the real-time variation parameters provide more detailed motion information. When the camera shows no signs of movement, i.e. it is stationary, only the basic parameters of the physical camera need to be acquired, since these determine the camera's basic information.
In summary, the purpose of collecting the basic parameters and real-time variation parameters of the physical camera is to obtain information appropriate to the camera's movement state, for better analysis, application and decision-making. If the camera shows signs of movement, collecting both kinds of parameters provides more comprehensive and detailed information for subsequent analysis and application.
Step S208: determine the real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
Specifically, from the basic parameters and the real-time variation parameters, the real parameters of the virtual camera can be deduced; that is, a virtual camera matching the actual camera can be simulated. If the camera shows signs of movement, the basic parameters and the real-time variation parameters of the physical camera must be collected simultaneously to obtain its parameter information throughout the motion. The real-time variation parameters include pose (translation and rotation), aperture, focal length, shooting distance, depth of field and the like; these change as the camera moves and provide more comprehensive motion information. When the camera shows no signs of movement, i.e. it is stationary, only the basic parameters of the physical camera need to be acquired, such as resolution, color accuracy, brightness, dynamic range, distortion, shutter rate and frame rate; these basic parameters determine the camera's fixed state and do not change over time.
Thus, according to the movement state of the physical camera, its state and characteristics can be understood more comprehensively by collecting the corresponding parameter information: if the physical camera shows signs of movement, the basic parameters and real-time variation parameters are collected together; if not, only the basic parameters are collected. From these parameters, the real parameters of the virtual camera can be deduced and the physical camera thereby simulated.
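A minimal sketch of this mapping step (the data structure, field names and the virtual-camera methods are illustrative placeholders for whatever interface the rendering system actually exposes):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PhysicalCameraState:
    # basic parameters (sufficient while the camera is stationary)
    resolution: Tuple[int, int] = (1920, 1080)
    shutter_rate_s: float = 1 / 50
    frame_rate_fps: float = 25.0
    # real-time variation parameters (collected when movement signs exist)
    pose_R: List[List[float]] = field(
        default_factory=lambda: [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
    pose_T: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])
    aperture: float = 2.8
    focal_length_mm: float = 50.0

def update_virtual_camera(virtual_cam, state: PhysicalCameraState,
                          moving: bool) -> None:
    """Copy the physical parameters onto the rendering engine's virtual
    camera so the 3D perspective of background and foreground stays
    consistent. The set_* methods are hypothetical."""
    virtual_cam.set_intrinsics(state.resolution, state.focal_length_mm,
                               state.aperture)
    virtual_cam.set_frame_timing(state.frame_rate_fps, state.shutter_rate_s)
    if moving:  # pose only needs re-mapping when movement signs are detected
        virtual_cam.set_extrinsics(state.pose_R, state.pose_T)
```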
In this camera parameter acquisition method, the movement track information of the physical camera is obtained and used to judge whether the camera shows signs of movement. If it does, the basic parameters and real-time variation parameters of the physical camera are acquired separately, and from these parameters the real parameters of the virtual camera are determined. In this way the virtual camera and the physical camera can be kept consistent in three-dimensional perspective relationship, improving the immersion of virtual production works. With this technique, more realistic and vivid visual effects can be achieved in scenes such as film and television shooting and games, strengthening the user's sense of presence and participation.
In one embodiment, as shown in FIG. 3, the types of the basic parameters include image quality parameters, and acquiring the basic parameters of the physical camera includes:
Step S302: obtain the relative positional relationship between the test chart and the physical camera.
Specifically, the relative positional relationship between the test chart and the physical camera is obtained through camera calibration and feature matching. The steps are as follows: shoot multiple images of a calibration board of known dimensions and features at different positions and angles; using the camera's intrinsic parameters (focal length, optical center, etc.) and the feature points on the calibration board, compute the camera's extrinsic parameters (position and attitude) with a calibration algorithm. Place the test chart in the actual scene, making sure its feature points or feature regions lie in the camera's field of view, and shoot an image or video containing the test chart with the calibrated camera. Extract feature descriptions of the points or regions on the test chart with a computer vision algorithm, then search for the matching feature points or regions in the camera image. Finally, from the matched features, combined with the camera's calibration parameters, calculate the relative positional relationship between the test chart and the physical camera (a translation vector and a rotation matrix) through triangulation or other algorithms. It should be noted that the accuracy of camera calibration and feature matching affects the accuracy of the final relative positional relationship; in practice, a suitable calibration board and feature extraction algorithm should be selected, and appropriate precision evaluation and error analysis performed, to ensure an accurate and reliable result.
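A minimal sketch of the calibrate-then-locate flow described above, assuming OpenCV, a checkerboard calibration board, and known 3D coordinates for the chart's corner points (the board geometry and names are illustrative):

```python
import cv2
import numpy as np

def calibrate(images, board=(9, 6), square=0.025):
    """Estimate intrinsics K and distortion from checkerboard images
    (9x6 inner corners, 25 mm squares assumed; needs >= 1 usable image)."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist

def locate_chart(chart_pts_3d, chart_pts_2d, K, dist):
    """Solve for the chart-to-camera transform from matched 3D/2D points:
    the relative positional relationship (rotation matrix R, translation t)."""
    _, rvec, tvec = cv2.solvePnP(chart_pts_3d, chart_pts_2d, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```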
Step S304: if it is determined, according to the relative positional relationship, that the test chart is not located at the preset position to be shot, control the test chart to move to the position to be shot.
Specifically, judge from the known relative positional relationship and the preset position to be shot whether the current position of the test chart matches the position to be shot. If the test chart is not at the position to be shot, it must be moved there under control: the automatic test chart switching equipment can command the electrically controlled guide rail so that the test chart moves along the rail to the target position. By continuously detecting the test chart's position and its offset from the target, its movement can be adjusted in real time until it reaches the position to be shot, after which shooting or other test operations can be performed. It should be noted that, to control the movement of the test chart precisely, the movement accuracy and stability of the electrically controlled guide rail should be ensured, with appropriate calibration and adjustment for the actual situation; safety and stability must also be considered, to avoid damaging the test chart or the equipment during movement.
Step S306: acquire external light intensity information, and acquire scene image information through the test chart.
Specifically, the external light sources include reflective surface light sources and transmissive light sources. With a reflective surface light source, a light source installed in the test scene illuminates a reflective surface, and the intensity of the external light source is obtained by measuring the light intensity on that surface, using an illuminometer or a camera. A transmissive light source is one whose light propagates through a transparent medium, such as sunlight shining into a room through a window; its intensity can be measured with a light sensor or a camera.
The test chart is usually designed with specific patterns, colors or markers, and its image information is obtained by shooting it with the camera. Feature points, feature regions or patterns on the test chart are extracted with an image processing algorithm and matched against preset features to determine the position and attitude of the test chart in the image. From this position and attitude information, the relative positional relationship between the camera and the scene can be deduced, and the scene's structure, texture, illumination and other information analyzed further.
In summary, the external light intensity information can be obtained through the reflective surface and transmissive light sources, and the scene image information through the test chart. This information can be used in research and applications in illumination simulation, image processing, computer vision and related fields.
Step S308: analyze the scene image information according to the external light intensity information to obtain the image quality parameters of the physical camera, including resolution, color accuracy, brightness and dynamic range.
Specifically, analyzing the scene image information in light of the external light intensity information yields the image quality parameters of the physical camera: resolution, color accuracy, brightness and dynamic range. In detail:
Resolution refers to the level of detail an image can display, typically measured in pixels. When analyzing a scene image, resolution can be assessed from the sharpness of its details: sharp details indicate high resolution, while blurred details indicate low resolution.
Color accuracy refers to how accurately and faithfully the colors in an image are rendered. It can be evaluated from the color fidelity and saturation of the scene image: high fidelity with moderate saturation indicates good color accuracy, while distorted or oversaturated colors indicate poor color accuracy.
Brightness refers to how light or dark an image is. It can be evaluated from the brightness distribution and contrast of the scene image: a uniform brightness distribution with moderate contrast indicates good brightness, while an uneven distribution or excessively high or low contrast indicates poor brightness.
Dynamic range refers to the range of brightness an image can display. It can be evaluated from how well detail is retained in the bright and dark parts of the scene image: if both bright and dark details are shown clearly, the dynamic range is large; if highlight or shadow detail is lost, overexposed or crushed to black, the dynamic range is small.
By analyzing the influence of the external light intensity on the scene image, the relevant image quality parameters of the physical camera can be obtained. These parameters can be used to evaluate the camera's imaging performance and image quality, providing a reference for its optimal design and application.
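A minimal sketch of such measurements (the metric definitions below are simplified, illustrative proxies for the four parameters, not the application's exact procedure):

```python
import cv2
import numpy as np

def image_quality_metrics(img_bgr):
    """Rough proxies for detail sharpness, color saturation, brightness
    and dynamic range of a captured scene image."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return {
        # resolution proxy: variance of the Laplacian (higher = sharper)
        "sharpness": float(cv2.Laplacian(gray, cv2.CV_64F).var()),
        # color proxy: mean saturation channel
        "saturation": float(hsv[..., 1].mean()),
        # brightness: mean gray level and its spread (contrast)
        "brightness_mean": float(gray.mean()),
        "contrast": float(gray.std()),
        # dynamic range proxy: spread between 1st and 99th percentiles
        "dynamic_range": float(np.percentile(gray, 99) - np.percentile(gray, 1)),
    }
```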
In this embodiment, by measuring and calculating the positional relationship between the test chart and the physical camera, their relative positional relationship can be determined, providing an accurate reference for subsequent operations. If the test chart is not at the preset position to be shot, it is moved there precisely under control, ensuring the accuracy and consistency of subsequent image acquisition. Acquiring the external light intensity information and the scene image information obtained through the test chart captures both the lighting of the acquisition environment and the relevant image content. Then, by analyzing factors such as detail sharpness, color fidelity, brightness distribution and dynamic range in the image against the external light intensity information, the image quality parameters of the physical camera (resolution, color accuracy, brightness and dynamic range) are obtained, so that the camera's image quality can be evaluated and optimized, improving the quality and accuracy of image acquisition.
In one embodiment, the types of the basic parameters further include shutter rate, and the method further comprises:
acquiring actual scene image information; obtaining a waveform diagram according to at least one of the actual scene image information and the scene image information acquired through the test chart; and analyzing the time difference between two peaks in the waveform diagram to obtain the shutter rate.
Specifically, after the actual scene image information and the scene image information acquired through the test chart are obtained, the image quality and performance of the physical camera can be evaluated by comparing the differences between the two. Either item of information can be selected for drawing a waveform diagram, such as a luminance waveform or a color waveform; the waveform diagram shows the brightness or color distribution of different parts of the image and helps analyze and assess image quality. By analyzing the time difference between two peaks in the waveform diagram, the shutter rate information is obtained. The shutter rate is the speed at which the camera's shutter opens and closes; it affects the exposure time and the degree of motion blur in the image.
In this embodiment, the shutter rate of the physical camera can be determined by measuring and analyzing the time difference in the waveform diagram; the camera's image quality and performance can thereby be evaluated and optimized, its exposure and motion-capture capability under different scenes assessed, and its working behavior under different conditions understood through parameters such as the waveform diagram and the shutter rate.
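A minimal sketch of the peak-spacing analysis, assuming the waveform has already been sampled into an array at a known sampling rate (SciPy's find_peaks; the prominence setting and names are illustrative):

```python
import numpy as np
from scipy.signal import find_peaks

def shutter_rate_from_waveform(waveform, sample_rate_hz):
    """Estimate the shutter rate from the time difference between the
    first two prominent peaks of a sampled luminance waveform."""
    peaks, _ = find_peaks(waveform, prominence=np.ptp(waveform) * 0.2)
    if len(peaks) < 2:
        raise ValueError("need at least two peaks in the waveform")
    dt = (peaks[1] - peaks[0]) / sample_rate_hz  # time difference in seconds
    return 1.0 / dt                              # rate implied by the spacing
```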
In one embodiment, the types of the basic parameters further include frame rate, and the method further comprises:
acquiring video information over a first preset time period, the video information being a test frame set containing scenes and actions; obtaining, according to the video information, the total number of frames in the test frame set and the sum of the time intervals between consecutive frames; and obtaining the frame rate from the total number of frames and the sum of the time intervals between consecutive frames.
Specifically, first, by acquiring video information over a first preset time period (e.g., 120 s), a test frame set containing scenes and actions is obtained: these test frames are the images continuously captured during the period, to be used in subsequent analysis and processing. Next, the images in the test frame set are counted, giving the total number of test frames, i.e. the total number of images captured during the preset period. The time intervals between every two consecutive frames in the set are then calculated and summed; this gives the total time interval spanned by the consecutive frames. Finally, dividing the total number of frames by the sum of the time intervals yields the frame rate: the number of image frames played per unit time, commonly expressed in frames per second (fps). Calculating the ratio of the frame count to the time interval gives the average frame rate over the preset period.
In this embodiment, a test frame set containing scenes and actions is acquired by capturing and recording image frames during the preset time period, for subsequent analysis and processing. Counting the images in the set gives the total number of test frames, indicating how many images were captured in the period. Calculating and summing the intervals between consecutive frames gives the total time interval. Dividing the total frame count by that sum yields the average frame rate over the period, which is used to evaluate the smoothness and playback quality of the video. Analyzing the video over the preset period thus yields the frame count and frame rate, from which video quality and playback performance can be evaluated and subsequent optimization and processing performed.
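A minimal sketch of this computation, using OpenCV frame timestamps as a stand-in for the recording step (names are illustrative):

```python
import cv2

def measure_frame_rate(video_path):
    """Average frame rate = total frames / sum of inter-frame intervals."""
    cap = cv2.VideoCapture(video_path)
    timestamps_ms, frames = [], 0
    while True:
        ok, _ = cap.read()
        if not ok:
            break
        frames += 1
        timestamps_ms.append(cap.get(cv2.CAP_PROP_POS_MSEC))
    cap.release()
    # sum of time intervals between each pair of consecutive frames
    total_interval_s = sum(b - a for a, b in
                           zip(timestamps_ms, timestamps_ms[1:])) / 1000.0
    return frames / total_interval_s if total_interval_s > 0 else 0.0
```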
In one embodiment, the types of the real-time variation parameters include pose parameters, and acquiring the real-time variation parameters of the physical camera includes:
acquiring an initial position and an initial direction of the physical camera; acquiring, with the initial position and initial direction as initial conditions, several pieces of continuous picture information over a second preset time period; analyzing the continuous picture information to obtain feature information of any pair of adjacent pictures in the pixel coordinate system; obtaining, according to the first formula and the feature information, the feature information of any pair of adjacent pictures in the world coordinate system; and obtaining the pose parameters of the physical camera according to the feature information of any pair of adjacent pictures in the world coordinate system.
Specifically, the initial position and initial direction of the physical camera in the world coordinate system are obtained in some way (e.g., set manually or measured by sensors); these parameters serve as the initial conditions for subsequent pose estimation. During a second preset time period (e.g., 10 s), several images are acquired continuously, by the physical camera or from other sources, for subsequent feature extraction and pose estimation. Feature points or feature descriptors are extracted from the consecutive images through image processing and computer vision algorithms; this feature information may include the locations and descriptions of corners, edges, textures and the like. By matching the feature points and solving for their positions in the world coordinate system across any pair of adjacent pictures, the pose parameters of the physical camera at different time points, including the translation vector and rotation matrix, can be estimated; this process can be realized by three-dimensional reconstruction, visual odometry, extended Kalman filtering or other methods.
wherein the first formula is:

$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

where f is the focal length of the camera; f_x = f/d_x and f_y = f/d_y are the focal length expressed in pixels along the x and y directions, respectively; (u_0, v_0) are the pixel coordinates of the intersection of the camera optical axis with the image plane; R is the rotation matrix of the object; T is the translation matrix of the object; (u, v) are the two-dimensional coordinates of the imaged object in the pixel coordinate system; (X_w, Y_w, Z_w) are the three-dimensional coordinates of the object in the world coordinate system; and Z_c is a scale factor.
In this embodiment, by matching and solving the feature points in any pair of adjacent pictures, the pose parameters of the physical camera at different time points, including translation vectors and rotation matrices, can be estimated, realizing pose estimation of the physical camera. This makes it convenient to obtain the camera's real-time variation parameters and to determine the real parameters of the virtual camera, ensuring that the virtual camera and the physical camera remain fully consistent in three-dimensional perspective relationship.
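A minimal sketch of pose estimation between two adjacent pictures via feature matching (OpenCV ORB plus the essential matrix; this variant recovers the translation only up to scale, and the intrinsic matrix K is assumed known from calibration):

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate rotation R and unit-scale translation t between two
    adjacent frames from matched ORB feature points."""
    orb = cv2.ORB_create(nfeatures=1000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:300]
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # essential matrix with RANSAC, then decompose into R and t
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t
```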
In one embodiment, the types of the real-time variation parameters include an aperture value, a focal length value, a shooting distance and a depth-of-field parameter, and the method further comprises:
invoking, according to the several pieces of continuous picture information, the aperture value and the focal length value in the detailed information of the acquired pictures; and obtaining the depth-of-field parameter of the physical camera according to the aperture value, the focal length value and the second formula.
Specifically, the detailed information of the several consecutive pictures, including the aperture value and the focal length value, is acquired in some way (e.g., extracted from the images' EXIF data or obtained by calling the camera API), and the aperture and focal length values are extracted from each picture's detailed information. These values are typically numeric, either a specific number or a range. It should be noted that the specific implementation varies with the technology and equipment used: with a digital camera, the aperture and focal length values can be obtained by reading the image's EXIF data; with a camera API, the relevant function can be called to obtain them; with a traditional film camera, the aperture and focal length may need to be set manually and recorded for use in subsequent calculations.
wherein the second formula is:

$$ \mathrm{DOF} = \frac{2 f^{2} F \delta L^{2}}{f^{4} - F^{2} \delta^{2} L^{2}} $$

where f is the focal length of the camera, F is the aperture value, L is the shooting distance, δ = d/1730 is the diameter of the permissible circle of confusion, d is the diagonal length of the CCD chip, and DOF is the depth-of-field parameter.
In this embodiment, the aperture value and focal length value are retrieved from the several pieces of continuous picture information, and the depth-of-field parameter of the physical camera is calculated from them, giving the depth-of-field range of the image. This makes it convenient to obtain the camera's real-time variation parameters and to determine the real parameters of the virtual camera, ensuring that the virtual camera and the physical camera remain fully consistent in three-dimensional perspective relationship.
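A minimal sketch of the EXIF retrieval and the second formula, assuming a recent Pillow for EXIF access (the tag names follow the EXIF standard; the sensor diagonal is an assumed input):

```python
from PIL import Image, ExifTags

def aperture_and_focal_length(path):
    """Read FNumber and FocalLength from a picture's EXIF details."""
    exif = Image.open(path).getexif()
    sub = exif.get_ifd(ExifTags.IFD.Exif)  # camera settings live in this IFD
    tags = {ExifTags.TAGS.get(k): v for k, v in sub.items()}
    return float(tags["FNumber"]), float(tags["FocalLength"])

def depth_of_field(f_mm, F, L_mm, sensor_diag_mm):
    """Second formula: DOF = 2 f^2 F delta L^2 / (f^4 - F^2 delta^2 L^2)."""
    delta = sensor_diag_mm / 1730.0  # permissible circle of confusion diameter
    num = 2 * f_mm ** 2 * F * delta * L_mm ** 2
    den = f_mm ** 4 - F ** 2 * delta ** 2 * L_mm ** 2
    return num / den

# e.g. depth_of_field(50, 2.8, 3000, 43.3) is roughly 508 mm
```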
One embodiment of the application, in full detail, is as follows:
In scenes such as live broadcasts and press conferences, only the basic parameters of the physical camera need to be acquired. The specific acquisition flow is:
1. Select the basic parameters of the physical camera to be acquired, including resolution, color accuracy, brightness, dynamic range, distortion, shutter rate, frame rate and the like.
2. If resolution, color accuracy, brightness and dynamic range are to be acquired, input, according to the acquisition requirements of the different parameters, the distance between the test chart and the physical camera, the required light intensity and so on.
3. The corresponding test chart is automatically raised and moved to the preselected position under the control of a touch screen or computer program.
4. The system controls the physical camera to shoot the test chart, automatically analyzes the captured picture through the waveform analyzer, and saves the parameter values.
5. If the shutter rate is to be acquired, the system automatically analyzes the time difference between two peaks with the waveform analyzer and calculates the shutter rate from it.
6. If the frame rate is to be acquired, video is recorded for a period of time (more than 120 s) and then played back frame by frame, from which the frame rate value is calculated and saved.
Through the above steps, automatic acquisition of the basic parameters of the physical camera is realized, improving acquisition speed and accuracy.
For scenes such as film and television shooting and games, actors need to move continuously, and a fixed-background scheme cannot meet the shooting requirements; the physical camera must track its own spatial position information and feed it back to the virtual camera of the rendering system, and the virtual camera adjusts the angle of the background picture in real time to create a stronger sense of immersion. Therefore, besides the basic parameters of the physical camera, its real-time variation parameters must also be acquired, such as pose (translation and rotation), aperture, focal length, shooting distance and depth of field.
The real-time variation parameter acquisition flow of the physical camera is as follows:
1. Fix the physical camera at a designated place with a fixed orientation; the system automatically records the position and orientation of the physical camera at this moment and marks them as the initial data.
2. Select, in the module, the real-time variation parameters of the physical camera to be acquired, such as pose (translation and rotation), aperture, focal length, shooting distance and depth of field.
3. If the pose of the physical camera is to be acquired, the system controls the physical camera to shoot pictures, performs feature matching on adjacent pictures, calculates the pose data according to the first formula, and saves them.
4. If the aperture and focal length are to be acquired, the relevant parameter values in the EXIF detailed information of the captured pictures are retrieved and saved.
5. If the depth of field is to be acquired, then after the aperture and focal length values have been obtained, the depth-of-field data are calculated according to the second formula and saved.
6. If the shooting distance is to be acquired, the camera is linked with the positioning equipment, so that the shooting distance value is obtained directly and saved.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include several sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the application also provides a camera parameter acquisition device for implementing the camera parameter acquisition method described above. The implementation of the solution provided by this device is similar to that described for the method above, so for the specific limitations of the device embodiments below, reference may be made to the limitations of the camera parameter acquisition method above, which are not repeated here.
Another embodiment of the application, in full detail, is as follows:
Referring to FIG. 4, the physical camera parameter acquisition device in the application comprises the following parts:
(Physical) camera: used to shoot the test chart and actual scene images; the actual parameters of the physical camera are obtained by sending the actual scene images to the waveform analyzer and the video quality analysis software. It should be noted that, if the physical camera is a depth camera, the shooting distance parameter value can be obtained without using the positioning equipment.
Automatic test chart switching equipment: a fully automatic chart switching device combined with an electrically controlled guide rail, a reflective surface light source and a transmissive light source; it can move the test chart and the physical camera to designated positions according to a preset shooting distance, without restriction on the movement mode or the test chart size, improving the speed and accuracy of basic parameter acquisition for the physical camera.
Light source equipment: comprises reflective surface light source equipment and a transmissive light box, and provides the light source required by the automatic test chart switching equipment according to the environmental requirements of each parameter being acquired.
Positioning equipment: usually comprises ultrasonic sensors, infrared sensors and the like, and can directly calculate the shooting distance parameter value of the physical camera.
Waveform analyzer: by analyzing the picture obtained when the physical camera shoots the corresponding test chart, or a live-action picture, quantities such as frequency-amplitude change points, maximum gray level count and television line count can be read conveniently, and the corresponding parameter values obtained.
Video quality analysis software: responsible for controlling the physical camera, controlling the automatic test chart switching equipment, storing video and audio, analyzing images, playing back images, recording timestamps and so on; it integrates the acquisition methods of the various physical camera parameters, and visually presents and saves the parameter calculation results by receiving the original images or the analysis results from the waveform analyzer.
In one embodiment, as shown in FIG. 5, a camera parameter acquisition device is provided, comprising: an obtaining module 502, configured to obtain the movement track information of the physical camera;
a judging module 504, configured to judge, according to the movement track information, whether the physical camera shows signs of movement;
an acquisition module 506, configured to separately acquire the basic parameters and real-time variation parameters of the physical camera if the physical camera shows signs of movement;
and a processing module 508, configured to determine the real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
In one embodiment, the types of the basic parameters include image quality parameters, and for acquiring the basic parameters of the physical camera:
the obtaining module 502 is further configured to obtain the relative positional relationship between the test chart and the physical camera;
a control module is configured to control the test chart to move to the preset position to be shot if, according to the relative positional relationship, the test chart is not located at that position;
the obtaining module 502 is further configured to acquire external light intensity information, and to acquire scene image information through the test chart;
the processing module 508 is further configured to analyze the scene image information according to the external light intensity information to obtain the image quality parameters of the physical camera, including resolution, color accuracy, brightness and dynamic range.
In one embodiment, the types of the basic parameters further include shutter rate, and:
the obtaining module 502 is further configured to obtain actual scene image information;
the obtaining module 502 is further configured to obtain a waveform diagram according to at least one of the actual scene image information and the scene image information acquired through the test chart;
the processing module 508 is further configured to analyze the time difference between two peaks in the waveform diagram to obtain the shutter rate.
In one embodiment, the types of the basic parameters further include frame rate, and:
the obtaining module 502 is further configured to obtain video information over a first preset time period, the video information being a test frame set containing scenes and actions;
the obtaining module 502 is further configured to obtain, according to the video information, the total number of frames in the test frame set and the sum of the time intervals between consecutive frames;
the obtaining module 502 is further configured to obtain the frame rate according to the total number of frames and the sum of the time intervals between consecutive frames.
In one embodiment, the types of the real-time variation parameters include pose parameters, and for acquiring the real-time variation parameters of the physical camera:
the obtaining module 502 is further configured to obtain the initial position and initial direction of the physical camera;
the obtaining module 502 is further configured to obtain, with the initial position and initial direction as initial conditions, several pieces of continuous picture information over a second preset time period;
the processing module 508 is further configured to analyze the several pieces of continuous picture information, and the obtaining module 502 is further configured to obtain the feature information of any pair of adjacent pictures in the pixel coordinate system;
the obtaining module 502 is further configured to obtain, according to the first formula and the feature information, the feature information of any pair of adjacent pictures in the world coordinate system;
the obtaining module 502 is further configured to obtain the pose parameters of the physical camera according to the feature information of any pair of adjacent pictures in the world coordinate system;
wherein the first formula is:

$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

where f is the focal length of the camera; f_x = f/d_x and f_y = f/d_y are the focal length expressed in pixels along the x and y directions, respectively; (u_0, v_0) are the pixel coordinates of the intersection of the camera optical axis with the image plane; R is the rotation matrix of the object; T is the translation matrix of the object; (u, v) are the two-dimensional coordinates of the imaged object in the pixel coordinate system; (X_w, Y_w, Z_w) are the three-dimensional coordinates of the object in the world coordinate system; and Z_c is a scale factor.
In one embodiment, the types of the real-time variation parameters include an aperture value, a focal length value, a shooting distance and a depth-of-field parameter, and:
the obtaining module 502 is further configured to invoke, according to the several pieces of continuous picture information, the aperture value and the focal length value in the detailed information of the acquired pictures;
the obtaining module 502 is further configured to obtain the depth-of-field parameter of the physical camera according to the aperture value, the focal length value and the second formula;
wherein the second formula is:

$$ \mathrm{DOF} = \frac{2 f^{2} F \delta L^{2}}{f^{4} - F^{2} \delta^{2} L^{2}} $$

where f is the focal length of the camera, F is the aperture value, L is the shooting distance, δ = d/1730 is the diameter of the permissible circle of confusion, d is the diagonal length of the CCD chip, and DOF is the depth-of-field parameter.
The modules in the above camera parameter acquisition device can be realized in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware, may be independent of the processor in the computer device, or may be stored as software in the memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of camera parameter acquisition. The display unit of the computer device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 6 is merely a block diagram of a partial structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the following steps:
step S202, obtaining movement track information of an entity camera;
step S204, judging whether the entity camera has a moving sign according to the moving track information;
step S206, if the entity camera has a moving sign, acquiring the basic parameters and the real-time variation parameters of the entity camera, respectively;
step S208, determining the real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
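The four steps lend themselves to a compact control-flow sketch. The following Python is illustrative only; the threshold and the two collector callbacks are hypothetical placeholders, not identifiers from the application:

```python
import numpy as np

MOVE_THRESHOLD = 1e-3  # hypothetical displacement threshold between track samples (m)

def has_moving_sign(track):
    """Step S204: any displacement above the threshold between consecutive
    movement-track samples is treated as a sign that the camera moved."""
    steps = np.linalg.norm(np.diff(np.asarray(track, float), axis=0), axis=1)
    return bool(np.any(steps > MOVE_THRESHOLD))

def virtual_camera_params(track, collect_basic, collect_realtime):
    """Steps S202-S208: when the entity camera shows a moving sign, collect
    its basic and real-time variation parameters and merge them into the
    real parameters handed to the virtual camera."""
    if not has_moving_sign(track):                 # S204
        return None                                # no update required
    basic = collect_basic()                        # S206: image quality, shutter speed, frame rate
    realtime = collect_realtime()                  # S206: pose, aperture, focal length, DOF
    return {**basic, **realtime}                   # S208
```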
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, implements the following steps:
Step S202, obtaining movement track information of an entity camera;
step S204, judging whether the entity camera has a moving sign according to the moving track information;
step S206, if the entity camera has a moving sign, acquiring the basic parameters and the real-time variation parameters of the entity camera, respectively;
step S208, determining the real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
step S202, obtaining movement track information of an entity camera;
step S204, judging whether the entity camera has a moving sign according to the moving track information;
step S206, if the entity camera has a moving sign, acquiring the basic parameters and the real-time variation parameters of the entity camera, respectively;
step S208, determining the real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
It should be noted that the user information (including, but not limited to, user equipment information and user personal information) and the data (including, but not limited to, data for analysis, stored data, and displayed data) involved in the present application are all information and data authorized by the users or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will appreciate that all or part of the processes of the methods described above may be implemented by a computer program stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the processes of the embodiments of the methods described above. Any reference to the memory, database, or other medium used in the embodiments provided in the present application may include at least one of a non-volatile memory and a volatile memory. The non-volatile memory may include a read-only memory (ROM), a magnetic tape, a floppy disk, a flash memory, an optical memory, a high-density embedded non-volatile memory, a resistive random access memory (ReRAM), a magnetoresistive random access memory (MRAM), a ferroelectric random access memory (FRAM), a phase change memory (PCM), a graphene memory, and the like. The volatile memory may include a random access memory (RAM), an external cache memory, or the like. By way of illustration and not limitation, the RAM is available in a variety of forms, such as a static random access memory (SRAM) or a dynamic random access memory (DRAM). The databases referred to in the embodiments provided in the present application may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database and the like. The processor referred to in the embodiments provided in the present application may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic device, or a data processing logic device based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments represent only a few implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the application. It should be noted that several variations and improvements may be made by those of ordinary skill in the art without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A camera parameter acquisition method, the method comprising:
acquiring movement track information of an entity camera;
judging whether the entity camera has a moving sign or not according to the moving track information;
if the entity camera has a moving sign, acquiring basic parameters and real-time variation parameters of the entity camera, respectively;
And determining the real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
2. The method of claim 1, wherein the types of the basic parameters comprise an image quality parameter; and collecting the basic parameters of the entity camera comprises:
acquiring a relative position relation between a test chart card and the entity camera;
controlling the test chart card to move to a preset position to be shot when it is determined, according to the relative position relation, that the test chart card is not located at the position to be shot;
acquiring external light intensity information and scene image information through the test chart card;
and analyzing the scene image information according to the external light intensity information to obtain image quality parameters of the entity camera, wherein the image quality parameters comprise resolution, color accuracy, brightness, and dynamic range.
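By way of a rough sketch of the analysis step in claim 2, the brightness and dynamic range of a captured test-chart image could be estimated as below. This is a strongly simplified stand-in (resolution and color accuracy require dedicated chart analysis), and the lux normalisation is an assumption:

```python
import cv2
import numpy as np

def image_quality_metrics(chart_image_path, external_lux):
    """Estimate simple quality figures from a test-chart capture."""
    gray = cv2.cvtColor(cv2.imread(chart_image_path), cv2.COLOR_BGR2GRAY).astype(np.float64)
    brightness = gray.mean() / max(external_lux, 1e-6)   # mean level, lux-normalised
    lo, hi = np.percentile(gray, [1, 99])                # robust black and white points
    dynamic_range_db = 20.0 * np.log10(max(hi, 1.0) / max(lo, 1.0))
    return {"brightness": brightness, "dynamic_range_db": dynamic_range_db}
```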
3. The method of claim 2, wherein the types of the basic parameters further comprise a shutter speed; and the method further comprises:
acquiring actual scene image information;
acquiring a waveform schematic diagram according to at least one of the actual scene image information and the scene image information acquired through the test chart card;
and analyzing the time difference between two peaks according to the waveform schematic diagram to obtain the shutter speed.
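A minimal reading of claim 3 in Python, assuming a brightness waveform sampled while the shutter opens and closes (the waveform source and sampling rate are hypothetical):

```python
import numpy as np
from scipy.signal import find_peaks

def shutter_speed_from_waveform(waveform, sample_rate_hz):
    """Shutter speed as the time difference between the two dominant
    peaks of the waveform schematic diagram."""
    peaks, props = find_peaks(waveform, height=0)
    if len(peaks) < 2:
        raise ValueError("need at least two peaks in the waveform")
    top2 = np.sort(peaks[np.argsort(props["peak_heights"])[-2:]])  # two highest, time-ordered
    return (top2[1] - top2[0]) / sample_rate_hz                    # seconds
```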
4. The method of claim 2, wherein the types of the basic parameters further comprise a frame rate; and the method further comprises:
acquiring video information of a first preset time period, wherein the video information is a test frame set containing scenes and actions;
acquiring, according to the video information, the total frame number of the test frame set and the sum of the time intervals between consecutive frames;
and obtaining the frame rate according to the total frame number and the sum of the time intervals between consecutive frames.
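Claim 4's computation is a one-liner once per-frame timestamps are available; note that N frames span N-1 intervals, which the sketch below accounts for:

```python
def frame_rate(timestamps_s):
    """Frame rate from the total frame count and the sum of the
    time intervals between consecutive frames."""
    if len(timestamps_s) < 2:
        raise ValueError("need at least two frames")
    total_interval = sum(t1 - t0 for t0, t1 in zip(timestamps_s, timestamps_s[1:]))
    return (len(timestamps_s) - 1) / total_interval  # frames per second

print(frame_rate([0.0, 1 / 24, 2 / 24, 3 / 24]))     # ~24.0
```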
5. The method of claim 1, wherein the types of the real-time variation parameters comprise a pose parameter; and collecting the real-time variation parameters of the entity camera comprises:
acquiring an initial position and an initial direction of the entity camera;
taking the initial position and the initial direction as initial conditions, and acquiring a plurality of pieces of continuous picture information of a second preset time period;
analyzing a plurality of pieces of continuous picture information to obtain characteristic information of any adjacent picture under a pixel coordinate system;
according to a first formula and the characteristic information, the characteristic information of any adjacent picture under a world coordinate system is obtained;
Obtaining pose parameters of the entity camera according to the characteristic information of any adjacent picture under the world coordinate system;
wherein the first formula is:

Z_c [u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] [R | T] [X_w, Y_w, Z_w, 1]^T

where f is the focal length of the camera; f_x = f/d_x and f_y = f/d_y are the focal length expressed in pixels along the x and y directions, respectively; (u_0, v_0) are the pixel coordinates of the intersection of the camera optical axis with the image plane; R is the rotation matrix of the object; T is the translation matrix of the object; (u, v) are the two-dimensional coordinates of the imaged object in the pixel coordinate system; (X_w, Y_w, Z_w) are the three-dimensional coordinates of the object in the world coordinate system; and Z_c is a scale factor.
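For claim 5, the pose between adjacent pictures can be recovered from matched features plus the intrinsic matrix of the first formula. The OpenCV sketch below uses ORB matching and recoverPose as stand-ins for whatever feature pipeline the application intends:

```python
import cv2
import numpy as np

def pose_between_frames(img1, img2, K):
    """Relative rotation R and translation direction t of the camera
    between two adjacent frames, from matched ORB features."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    pts1 = np.float64([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float64([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```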
6. The method of claim 5, wherein the types of the real-time variation parameters comprise an aperture value, a focal length value, a shooting distance, and a depth of field parameter; and the method further comprises:
retrieving the aperture value and the focal length value from the detailed information of the acquired pictures according to the pieces of continuous picture information;
obtaining a depth of field parameter of the entity camera according to the aperture value, the focal length value and the second formula;
wherein the second formula is:

DOF = 2 f^2 F δ L^2 / (f^4 - F^2 δ^2 L^2)

where f is the focal length of the camera, F is the aperture value, L is the shooting distance, δ = d/1730 is the diameter of the permissible circle of confusion, d is the diagonal length of the CCD chip, and DOF is the depth of field parameter.
7. A camera parameter acquisition device, the device comprising:
the acquisition module is used for acquiring the movement track information of the entity camera;
the judging module is used for judging whether the entity camera has a moving sign according to the moving track information;
the acquisition module is used for respectively acquiring basic parameters and real-time change parameters of the entity camera if the entity camera has movement signs;
and the processing module is used for determining the real parameters of the virtual camera according to the basic parameters and the real-time variation parameters.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.