CN115272483A - Image generation method and device, electronic equipment and storage medium - Google Patents

Image generation method and device, electronic equipment and storage medium

Info

Publication number
CN115272483A
CN115272483A (application CN202210870660.XA)
Authority
CN
China
Prior art keywords
target
target object
shooting
image
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210870660.XA
Other languages
Chinese (zh)
Other versions
CN115272483B (en)
Inventor
张彭星
苏亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd filed Critical Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202210870660.XA priority Critical patent/CN115272483B/en
Publication of CN115272483A publication Critical patent/CN115272483A/en
Application granted granted Critical
Publication of CN115272483B publication Critical patent/CN115272483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Abstract

The embodiment of the present application provides an image generation method and apparatus, an electronic device, and a storage medium. The method includes the following steps: acquiring a target object to be photographed; acquiring a target shooting angle of view and a reference shooting direction corresponding to the target object according to the object type of the target object, where the object type has a correspondence with the shooting angle of view and the shooting direction; acquiring, according to the target object size of the target object, a target distance between a virtual camera and the reference shooting direction at the target shooting angle of view; determining a target shooting position for the target object according to the reference shooting direction and the target distance at the target shooting angle of view; and controlling the virtual camera to photograph the target object at the target shooting position and the target shooting angle of view to generate an object image corresponding to the target object. The embodiment of the present application enables batch shooting of model effect images, improves the efficiency of generating model effect images, and reduces labor cost.

Description

Image generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image generation method and apparatus, an electronic device, and a storage medium.
Background
As living standards rise, people's expectations for home decoration keep increasing. Some websites display home-furnishing model effect images to provide decoration references for users. At present, to upload model effect images of home models to such model websites, a user must manually place each home model in a specific scene (such as a bedroom or a shopping mall), repeatedly adjust the shooting position to capture the model effect image, and then upload it to the model website for display. These operations must be repeated for every home model; the procedure is cumbersome and greatly increases labor cost. Moreover, such manual processing is inefficient and, when there are a large number of home models, cannot meet the schedule requirements of a home model website.
Disclosure of Invention
The embodiment of the application provides an image generation method and device, electronic equipment and a storage medium, so that batch shooting of model effect images is realized, the generation efficiency of the model effect images is improved, and meanwhile, the labor cost is reduced. The specific scheme is as follows:
in a first aspect, an embodiment of the present application provides an image generation method, including:
acquiring a target object to be shot;
acquiring a target shooting visual angle and a reference shooting direction corresponding to the target object according to the object type of the target object, wherein the object type has a corresponding relation with the shooting visual angle and the shooting direction;
acquiring a target distance between the virtual camera and the reference shooting direction under the target shooting visual angle according to the target object size of the target object;
determining a target shooting position of the target object according to the reference shooting direction and the target distance under the target shooting visual angle;
and controlling the virtual camera to shoot the target object at the target shooting position at the target shooting visual angle to generate an object image corresponding to the target object.
Optionally, the acquiring a target object to be photographed includes:
acquiring the maximum object size of an initial object to be shot;
under the condition that the maximum object size is larger than a set value, carrying out equal-scale scaling processing on the initial object to obtain a size adjustment object corresponding to the initial object, and taking the size adjustment object as the target object; the maximum size of the size adjustment object is the set value;
and taking the initial object as the target object when the maximum object size is smaller than or equal to the set value.
Optionally, the obtaining a target distance between the virtual camera and the reference shooting direction at the target shooting angle of view according to the target object size of the target object includes:
acquiring the maximum object size of the target object, and taking the maximum object size as the target object size;
and acquiring a first distance of the virtual camera in the direction of the x axis, a second distance in the direction of the y axis and a third distance in the direction of the z axis from a tangent plane corresponding to the reference shooting direction under the target shooting visual angle according to the corresponding relation between the size of the object and the shooting visual angle and the distance.
Optionally, the determining a target shooting position of the target object according to the reference shooting direction and the target distance at the target shooting angle of view includes:
acquiring a three-dimensional coordinate of the virtual camera in a world coordinate system according to the reference shooting direction, the first distance, the second distance and the third distance, wherein the world coordinate system takes the center of the target object as an origin;
and determining the target shooting position of the target object according to the three-dimensional coordinates.
Optionally, the controlling the virtual camera to perform image capturing on the target object at the target capturing position and the target capturing view angle, and generating an object image corresponding to the target object includes:
controlling the virtual camera to move to the target shooting position, and adjusting the shooting visual angle of the virtual camera to the target shooting visual angle;
acquiring an intermediate image of the target object photographed by the virtual camera;
and deleting the image background information of the intermediate image to obtain an object image corresponding to the target object.
Optionally, after the controlling of the virtual camera to photograph the target object at the target shooting position and the target shooting angle of view to generate an object image corresponding to the target object, the method further includes:
establishing a corresponding relation between the object image and a preset object model, and displaying the object image in a preset webpage;
responding to a drag operation aiming at a target object image, and acquiring a target object model corresponding to the target object image and a display position corresponding to the target object model, wherein the target object image is an image in the object image;
displaying the target object model at the display location.
In a second aspect, an embodiment of the present application provides an image generating apparatus, including:
the target object acquisition module is used for acquiring a target object to be shot;
the visual angle direction acquisition module is used for acquiring a target shooting visual angle and a reference shooting direction corresponding to the target object according to the object type of the target object, wherein the object type has a corresponding relation with the shooting visual angle and the shooting direction;
the target distance acquisition module is used for acquiring a target distance between the virtual camera and the reference shooting direction under the target shooting visual angle according to the target object size of the target object;
the target position determining module is used for determining the target shooting position of the target object according to the reference shooting direction and the target distance under the target shooting visual angle;
and the object image generation module is used for controlling the virtual camera to shoot the target object at the target shooting position at the target shooting visual angle to generate an object image corresponding to the target object.
Optionally, the target object obtaining module includes:
an object size acquiring unit for acquiring a maximum object size of an initial object to be photographed;
a first target object obtaining unit, configured to, when the maximum object size is larger than a set value, perform scaling processing on the initial object to obtain a size adjustment object corresponding to the initial object, and use the size adjustment object as the target object; the maximum size of the size adjustment object is the set value;
a second target object acquisition unit configured to take the initial object as the target object in a case where the maximum object size is less than or equal to the set value.
Optionally, the target distance obtaining module includes:
a target object size acquisition unit configured to acquire a maximum object size of the target object and set the maximum object size as the target object size;
and the target distance acquisition unit is used for acquiring a first distance of the virtual camera in the direction of the x axis from a tangent plane corresponding to the reference shooting direction under the target shooting visual angle, a second distance in the direction of the y axis and a third distance in the direction of the z axis according to the corresponding relation between the size of the object and the shooting visual angle and the distance.
Optionally, the target position determination module includes:
a three-dimensional coordinate acquisition unit configured to acquire a three-dimensional coordinate of the virtual camera in a world coordinate system based on the reference shooting direction, the first distance, the second distance, and the third distance, the world coordinate system taking a center of the target object as an origin;
and the target position determining unit is used for determining the target shooting position of the target object according to the three-dimensional coordinates.
Optionally, the object image generation module includes:
the camera visual angle adjusting unit is used for controlling the virtual camera to move to the target shooting position and adjusting the shooting visual angle of the virtual camera to the target shooting visual angle;
an intermediate image acquisition unit configured to acquire an intermediate image of the target object photographed by the virtual camera;
and the object image acquisition unit is used for deleting the image background information of the intermediate image to obtain an object image corresponding to the target object.
Optionally, the apparatus further comprises:
the object image display module is used for establishing the corresponding relation between the object image and a preset object model and displaying the object image in a preset webpage;
a display position acquisition module, configured to acquire, in response to a drag operation for a target object image, a target object model corresponding to the target object image and a display position corresponding to the target object model, where the target object image is an image in the object image;
and the target object model display module is used for displaying the target object model at the display position.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method of any of the above.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method of any one of the above.
According to the solution provided by the embodiment of the present application, a target object to be photographed is acquired; a target shooting angle of view and a reference shooting direction corresponding to the target object are acquired according to the object type of the target object, where the object type has a correspondence with the shooting angle of view; a target distance between the virtual camera and the reference shooting direction at the target shooting angle of view is acquired according to the target object size of the target object; a target shooting position for the target object is determined according to the reference shooting direction and the target distance; and the virtual camera is controlled to photograph the target object at the target shooting position and the target shooting angle of view, generating an object image corresponding to the target object. Because the shooting position of the virtual camera is determined by combining the size of the target object with its target shooting angle of view, and the virtual camera is automatically controlled to photograph the target object at that position and angle, model effect images can be shot in batches. This improves the efficiency of generating model effect images and can meet the schedule requirements of a home model website. At the same time, no manual adjustment of the shooting position is needed, which reduces labor cost.
Drawings
Fig. 1 is a flowchart illustrating steps of an image generation method according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating steps of a target object obtaining method according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating steps of a method for obtaining a target distance according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating steps of a method for determining a target shooting position according to an embodiment of the present disclosure;
fig. 5 is a flowchart illustrating steps of a method for acquiring an image of an object according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating steps of a method for displaying an object model according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a flowchart illustrating steps of an image generation method provided in an embodiment of the present application is shown, and as shown in fig. 1, the image generation method may include the following steps:
step 101: and acquiring a target object to be shot.
The embodiment of the application can be applied to determining the shooting position of the virtual camera relative to the target object by combining the size and the shooting visual angle of the target object so as to control the virtual camera to shoot images at the corresponding shooting visual angle and shooting position.
The target object refers to an object to be subjected to image capturing, and in this example, the target object may be a home model, such as a shoe cabinet model, a bookshelf model, a refrigerator model, and the like, and specifically, a specific type of the target object may be determined according to business requirements, which is not limited in this embodiment.
In this example, the target object is illustrated by taking a home model as an example.
In a specific implementation, when model effect images of home models are to be uploaded to a home model website, the target object to be photographed can be acquired. In this example the size of the target object is limited: if the size of the home model exceeds a threshold, it must be scaled down, and the scaled model is taken as the target object. The screening process for the target object is described in detail below in conjunction with fig. 2.
Referring to fig. 2, a flowchart illustrating steps of a target object obtaining method provided in an embodiment of the present application is shown, and as shown in fig. 2, the target object obtaining method may include: step 201, step 202 and step 203.
Step 201: the maximum object size of an initial object to be photographed is acquired.
In this embodiment, the initial object refers to an object that needs to be subjected to model effect map shooting. In this example, the initial object may be a home model, such as a shoe cabinet model, a computer desk model, a washing machine model, and the like, and specifically, the specific type of the initial object may be determined according to business requirements, which is not limited in this embodiment.
The maximum object size refers to the largest dimension of the initial object to be photographed. For example, if the initial object is a shoe cabinet model whose length, width, and height are 1.2 meters, 0.6 meters, and 1.5 meters, respectively, the height value can be taken as the maximum object size of the shoe cabinet model.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
When the target object is screened, an initial object to be photographed may be acquired, and a maximum object size of the initial object may be acquired.
After acquiring the maximum object size of the initial object to be photographed, step 202 is performed, or step 203 is performed.
Step 202: under the condition that the maximum object size is larger than a set value, carrying out equal-scale scaling processing on the initial object to obtain a size adjustment object corresponding to the initial object, and taking the size adjustment object as the target object; the maximum size of the size-adjusted object is the set value.
Step 203: and taking the initial object as the target object when the maximum object size is smaller than or equal to the set value.
The set value refers to a size threshold value set in advance for determining the target object. In this example, the setting value may be 1 meter, 0.8 meter, and the like, and specifically, a specific value of the setting value may be determined according to a business requirement, which is not limited in this embodiment.
After the maximum object size of the initial object to be photographed is acquired, the magnitude relationship between the maximum object size and the set value may be compared.
When the maximum object size of the initial object is less than or equal to the set value, the initial object may be taken as the target object.
When the maximum object size of the initial object is larger than the set value, the initial object may be scaled proportionally to obtain a size adjustment object corresponding to the initial object, where the maximum size of the size adjustment object equals the set value; in this case, the size adjustment object is taken as the target object. Specifically, the initial object may be scaled to obtain a model with the same appearance as the initial object but a smaller size (equal to the set value), which serves as the target object model.
Because the set value is configured in advance, a home model that is too large to photograph can be proportionally scaled down to obtain the target object. This avoids the problem of the image frame failing to fully contain the target object because the object is too large, and thus improves the quality of the captured image.
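The size-screening step above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; the class and function names (`Model`, `prepare_target_object`) and the 1-meter limit are assumptions chosen from the examples in the text.

```python
from dataclasses import dataclass

MAX_SIZE = 1.0  # the "set value"; 1 meter is one of the example values in the text


@dataclass
class Model:
    length: float
    width: float
    height: float

    def max_dimension(self) -> float:
        # the "maximum object size": the largest of the three dimensions
        return max(self.length, self.width, self.height)


def prepare_target_object(initial: Model, limit: float = MAX_SIZE) -> Model:
    """Return the target object: the initial object as-is if small enough,
    otherwise a proportionally scaled copy whose largest dimension equals
    the set value."""
    m = initial.max_dimension()
    if m <= limit:
        return initial
    scale = limit / m  # equal-scale (uniform) factor preserves the model's proportions
    return Model(initial.length * scale,
                 initial.width * scale,
                 initial.height * scale)
```

For the shoe cabinet example (1.2 m x 0.6 m x 1.5 m) and a 1-meter set value, the scale factor is 1/1.5, yielding a 0.8 m x 0.4 m x 1.0 m target object.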
After the target object to be photographed is acquired, step 102 is executed.
Step 102: and acquiring a target shooting visual angle and a reference shooting direction corresponding to the target object according to the object type of the target object, wherein the object type has a corresponding relation with the shooting visual angle and the shooting direction.
The target shooting angle of view refers to the angle of view used to photograph the target object. In this example, there are three target shooting angles of view: the head-up (eye-level) shooting angle, the top-down shooting angle, and the bottom-up shooting angle.
The reference photographing direction is a standard photographing direction for photographing an image of a target object, and for example, when the target object is a shoe chest model, the reference photographing direction of the shoe chest model is a front side of the shoe chest model. When the target object is a suspended lamp model, the reference imaging direction of the suspended lamp model is toward the ground, for example.
In this embodiment, the correspondence between the object type and the shooting angle of view may be saved in advance. After the target object to be photographed is acquired, the target shooting angle of view corresponding to the target object can be acquired according to its object type. For example, when the object type of the target object is a shoe cabinet, a washing machine, or the like, the corresponding shooting angle is the head-up shooting angle. When the object type is a suspended luminaire, the corresponding shooting angle is the bottom-up shooting angle. When the object type is a bathtub, a wash basin, or the like, the corresponding shooting angle is the top-down shooting angle.
In the present embodiment, the correspondence between the object type and the shooting direction may also be saved in advance. After the target object to be photographed is acquired, the reference photographing direction corresponding to the target object may be acquired according to the object type of the target object. For example, when the type of the target object is shoe chest, washing machine, or the like, the corresponding reference shooting direction is the front shooting direction of the target object. When the type of the target object is a bathtub, a wash basin, or the like, the corresponding reference shooting direction is a shooting direction directly above the target object, or the like.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
In a specific implementation, the head-up, top-down, and bottom-up shooting angles may be preset fixed values. For example, when the shooting angle is the top-down angle, the camera rotation is X = 0°, Y = 346.285°, Z = -90°; when it is the bottom-up angle, the rotation is X = 0°, Y = 15.130494°, Z = -90°; and when it is the head-up angle, the rotation is X = 0°, Y = 0°, Z = -90°. It is understood that the camera angles in this example are camera rotations in the Unreal Engine coordinate system.
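The type-to-view lookup described in step 102 can be sketched as a pair of tables. The angle triples below are the fixed camera rotations quoted in the text; the object-type keys, string labels, and dictionary layout are illustrative assumptions, not part of the patent.

```python
# Object type -> (shooting angle of view, reference shooting direction).
# Keys and labels are hypothetical; the patent only gives examples.
VIEW_TABLE = {
    "shoe_cabinet":    ("head_up", "front"),
    "washing_machine": ("head_up", "front"),
    "ceiling_lamp":    ("bottom_up", "ground"),
    "bathtub":         ("top_down", "above"),
}

# Shooting angle of view -> fixed camera rotation (X, Y, Z) in degrees,
# as quoted in the text for the Unreal Engine coordinate system.
CAMERA_ANGLES = {
    "top_down":  (0.0, 346.285, -90.0),
    "bottom_up": (0.0, 15.130494, -90.0),
    "head_up":   (0.0, 0.0, -90.0),
}


def get_view(object_type: str):
    """Return (camera rotation, reference shooting direction) for a type."""
    view, direction = VIEW_TABLE[object_type]
    return CAMERA_ANGLES[view], direction
```

For instance, `get_view("bathtub")` yields the top-down rotation together with the "directly above" reference direction, matching the bathtub example in the text.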
After the target shooting angle and the reference shooting direction corresponding to the target object are acquired according to the object type of the target object, step 103 is executed.
Step 103: and acquiring a target distance between the virtual camera and the reference shooting direction under the target shooting visual angle according to the target object size of the target object.
The virtual camera refers to a camera for taking a subject image of a target subject. In the present example, the Virtual camera may be a VR (Virtual Reality) camera or the like provided within the shooting scene.
The target object size refers to the maximum object size of the target object.
After the target shooting angle of view and the reference shooting direction corresponding to the target object are acquired, the target object size of the target object can be obtained, and the target distance between the virtual camera and the reference shooting direction at the target shooting angle of view can be acquired according to that size. The target distance may consist of the distances along the x-axis, y-axis, and z-axis between the virtual camera and the tangent plane corresponding to the reference shooting direction. The process of acquiring the target distance between the virtual camera and the reference direction at the target shooting angle of view is described in detail below in conjunction with fig. 3.
Referring to fig. 3, a flowchart illustrating steps of a target distance obtaining method provided in an embodiment of the present application is shown, and as shown in fig. 3, the target distance obtaining method may include: step 301 and step 302.
Step 301: and acquiring the maximum object size of the target object, and taking the maximum object size as the target object size.
In this embodiment, after the target object is acquired, the maximum object size of the target object may be acquired, and the maximum object size may be set as the target object size. For example, when the target object is a refrigerator model, the refrigerator model is a cube model, and the length, width and height are: 20cm, 10cm and 40cm, in which case, the maximum object size of the refrigerator model is the height of the refrigerator model, and the height of the refrigerator model can be used as the target object size of the refrigerator model.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
After the target object size of the target object is acquired, step 302 is performed.
Step 302: and acquiring a first distance of the virtual camera in the direction of the x axis, a second distance in the direction of the y axis and a third distance in the direction of the z axis from a tangent plane corresponding to the reference shooting direction under the target shooting visual angle according to the corresponding relation between the size of the object and the shooting visual angle and the distance.
In this example, the correspondence between the subject size and the shooting angle of view and distance may be saved in advance.
The correspondence may be as shown in Tables 1, 2 and 3 below. (Tables 1-3 are provided as images in the original publication and are not reproduced in this text-only record.)
when the imaging angle of view is a top-view imaging angle of view and the object sizes are 5, 10, 20, 30, 40, and 50, respectively, the corresponding distances are shown in table 1.
When the photographing angle is a head-up photographing angle and the object sizes are 10, 20, 30, 40, 50, and 60, respectively, the corresponding distances are as shown in table 2.
When the photographing angle of view is a bottom view and the subject sizes are 5, 10, 20, 30, 40, and 50, respectively, the corresponding distances are shown in table 3.
After the target object size of the target object is obtained, the first distance in the x-axis direction, the second distance in the y-axis direction, and the third distance in the z-axis direction between the virtual camera and the tangent plane corresponding to the reference shooting direction at the target shooting angle of view may be obtained from this correspondence. Together, the first, second, and third distances constitute the target distance between the virtual camera and that tangent plane at the target shooting angle of view.
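The lookup of step 302 can be sketched as below. Because Tables 1-3 are not reproduced in this record, every distance value here is a placeholder, and the snap-to-nearest-size behavior is an assumption; only the lookup structure (shooting angle + object size -> three axis distances) follows the text.

```python
# (shooting angle of view, object size) -> (first, second, third) distances
# from the tangent plane of the reference shooting direction along the
# x, y, and z axes. All numeric values are placeholders, NOT from Tables 1-3.
DISTANCE_TABLE = {
    ("top_down", 5):  (0.0, 0.0, 12.0),
    ("top_down", 10): (0.0, 0.0, 20.0),
    ("head_up", 10):  (25.0, 0.0, 5.0),
    ("head_up", 20):  (45.0, 0.0, 9.0),
}


def get_target_distances(view: str, size: float):
    """Return (dx, dy, dz) for a shooting view and object size.

    The tables are defined at discrete sizes (5, 10, 20, ...), so an
    arbitrary size is snapped to the nearest tabulated size for that view.
    """
    sizes = [s for (v, s) in DISTANCE_TABLE if v == view]
    nearest = min(sizes, key=lambda s: abs(s - size))
    return DISTANCE_TABLE[(view, nearest)]
```

In a real system, linear interpolation between adjacent table rows would be a natural alternative to nearest-size snapping; the patent text does not specify which is used.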
After the target distance between the virtual camera and the reference shooting direction at the target shooting angle of view is acquired according to the target object size of the target object, step 104 is executed.
Step 104: determine the target shooting position of the target object according to the reference shooting direction and the target distance at the target shooting angle of view.
The target photographing position refers to a position where the virtual camera is located when the virtual camera is used to photograph an image of the target object.
After the target distance between the virtual camera and the reference shooting direction at the target shooting angle of view is acquired according to the target object size of the target object, the target shooting position of the target object can be determined from the reference shooting direction and the target distance. Specifically, the three-dimensional coordinates of the virtual camera in the world coordinate system may be calculated from the tangent plane corresponding to the reference shooting direction and the target distance, and the target shooting position of the target object may then be determined from these three-dimensional coordinates. This implementation is described in detail below in conjunction with fig. 4.
Referring to fig. 4, a flowchart illustrating steps of a target shooting position determining method provided in an embodiment of the present application is shown, and as shown in fig. 4, the target shooting position determining method may include: step 401 and step 402.
Step 401: acquire the three-dimensional coordinates of the virtual camera in a world coordinate system according to the reference shooting direction, the first distance, the second distance, and the third distance, where the world coordinate system takes the center of the target object as its origin.
In this embodiment, a world coordinate system may be constructed with the center of the target object as the origin. After the first distance in the x-axis direction, the second distance in the y-axis direction, and the third distance in the z-axis direction between the virtual camera and the tangent plane corresponding to the reference shooting direction at the target shooting angle of view are obtained, the three-dimensional coordinates of the virtual camera in the world coordinate system can be obtained according to the reference shooting direction, the first distance, the second distance, and the third distance. Specifically, the x-axis coordinate of the virtual camera may be obtained from the first distance between the camera and the tangent plane along the x-axis, the y-axis coordinate from the second distance along the y-axis, and the z-axis coordinate from the third distance along the z-axis. The x-axis coordinate, the y-axis coordinate, and the z-axis coordinate together form the three-dimensional coordinates of the virtual camera in the world coordinate system.
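Step 401 can be sketched minimally as follows, under the assumption that the reference shooting direction is encoded as a sign (+1, 0, or -1) per axis; the function name and this encoding are illustrative, not taken from the original.

```python
def camera_world_coords(direction_signs, first_dist, second_dist, third_dist):
    """Combine the per-axis signs of the reference shooting direction with the
    first/second/third distances to obtain the virtual camera's position in a
    world coordinate system whose origin is the center of the target object."""
    sx, sy, sz = direction_signs
    return (sx * first_dist, sy * second_dist, sz * third_dist)
```

For example, `camera_world_coords((1, 1, -1), 3.0, 4.0, 5.0)` places the camera at `(3.0, 4.0, -5.0)`, i.e. in front of, above, and behind the tangent plane along the respective axes.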
After acquiring the three-dimensional coordinates of the virtual camera within the world coordinate system, step 402 is performed.
Step 402: determine the target shooting position of the target object according to the three-dimensional coordinates.
After the three-dimensional coordinates of the virtual camera in the world coordinate system are acquired, the target shooting position corresponding to the target object can be determined according to the three-dimensional coordinates.
According to the embodiment of the application, because the correspondence between object size, shooting angle of view, and distance is stored in advance, the target shooting position corresponding to the target object can be obtained dynamically from the target object size. The user therefore does not need to adjust the camera's shooting position manually, which improves the shooting efficiency of model effect graphs.
After the target photographing position of the target object is determined according to the reference photographing direction and the target distance at the target photographing angle of view, step 105 is performed.
Step 105: control the virtual camera to shoot the target object at the target shooting position with the target shooting angle of view, and generate an object image corresponding to the target object.
After the target shooting position of the target object is determined according to the reference shooting direction and the target distance at the target shooting angle of view, the virtual camera can be controlled to shoot the target object at the target shooting position with the target shooting angle of view, so as to generate an object image corresponding to the target object, where the object image is a model effect graph. Specifically, the virtual camera may be controlled to move to the target shooting position and its shooting angle of view may be adjusted to the target shooting angle of view before an image of the target object is shot; this process is described in detail below with reference to fig. 5.
Referring to fig. 5, a flowchart illustrating steps of an object image acquiring method provided in an embodiment of the present application is shown, and as shown in fig. 5, the object image acquiring method may include: step 501, step 502 and step 503.
Step 501: control the virtual camera to move to the target shooting position, and adjust the shooting angle of view of the virtual camera to the target shooting angle of view.
In this embodiment, after the target shooting angle of view and the target shooting position corresponding to the target object are determined, the virtual camera may be controlled to move to the target shooting position, and its shooting angle of view may be adjusted to the target shooting angle of view.
In one specific implementation, the virtual camera may be a suspended VR camera; a camera movement control device controls the virtual camera to move to the target shooting position and adjusts its shooting angle of view to the target shooting angle of view.
In another specific implementation, the virtual camera may be a VR camera mounted on a robot that can adjust the camera's shooting position and shooting angle of view. After the target shooting position and the target shooting angle of view are obtained, the robot controls the virtual camera to move to the target shooting position and adjusts its shooting angle of view to the target shooting angle of view.
It should be understood that the above examples are only examples for better understanding of the schemes provided by the embodiments of the present application, and are not to be taken as the only limitation on the embodiments.
After controlling the virtual camera to move to the target shooting position and adjusting the shooting angle of view of the virtual camera to the target shooting angle of view, step 502 is executed.
Step 502: acquiring an intermediate image of the target object photographed by the virtual camera.
After controlling the virtual camera to move to the target shooting position and adjusting the shooting angle of the virtual camera to the target shooting angle, the virtual camera may be used to shoot an image of the target object to obtain an intermediate image corresponding to the target object.
After acquiring the intermediate image of the target object captured by the virtual camera, step 503 is performed.
Step 503: delete the image background information of the intermediate image to obtain an object image corresponding to the target object.
After the intermediate image of the target object shot by the virtual camera is acquired, the image background information of the intermediate image can be deleted to obtain the model effect graph, namely the object image, corresponding to the target object. In a specific implementation in Unreal Engine, the depth channel of the model effect graph may be set to 1, so that the output model effect graph automatically discards the base-map information (i.e., the image background information). Alternatively, after the intermediate image of the model effect graph is output, the intermediate image may be post-processed to delete its base-map information and obtain the model effect graph.
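A generic sketch of step 503 is shown below, assuming the renderer exposes a per-pixel depth buffer alongside the color image. This is not the Unreal Engine API; pixels are plain RGBA tuples purely for illustration.

```python
def strip_background(pixels, depths):
    """Delete the image background information of an intermediate render.
    `pixels` is a row-major list of RGBA tuples and `depths` a parallel list
    of per-pixel depth values; pixels at the far plane (depth >= 1.0), where
    nothing was rendered, are treated as background and made transparent."""
    result = []
    for (r, g, b, a), depth in zip(pixels, depths):
        if depth >= 1.0:
            result.append((0, 0, 0, 0))   # background: fully transparent
        else:
            result.append((r, g, b, a))   # foreground: the target object
    return result
```

A real implementation would operate on the engine's render target and write the result out as a PNG with an alpha channel, so the model effect graph composites cleanly onto any page background.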
According to the embodiment of the application, the shooting position and shooting angle of view of the virtual camera are adjusted automatically during the shooting of model effect graphs, without manual adjustment. This improves the generation efficiency of model effect graphs and meets the schedule requirements of a home-furnishing model website.
In a specific implementation, after the model effect graph is generated, it may be displayed on a model website, and a correspondence between the model effect graph and the three-dimensional model may be established so that a user can conveniently retrieve and view it. This is described in detail below in conjunction with fig. 6.
Referring to fig. 6, a flowchart illustrating steps of an object model display method provided in an embodiment of the present application is shown, and as shown in fig. 6, the object model display method may include: step 601, step 602 and step 603.
Step 601: establish a correspondence between the object image and a preset object model, and display the object image in a preset web page.
In this embodiment, after obtaining the object image of the target object, a corresponding relationship between the object image and the preset object model may be established, and the object image may be displayed in the preset web page. For example, the object image is a model effect graph of a home model, and after the model effect graph is obtained, a corresponding relationship between the model effect graph and the home model may be established, and the model effect graph may be displayed in a preset web page of a model website.
Step 602: in response to a drag operation on a target object image, acquire a target object model corresponding to the target object image and a display position corresponding to the target object model, where the target object image is one of the displayed object images.
The target object model refers to a three-dimensional object model corresponding to the target object image.
After the object images are displayed in the preset web page, a drag operation by a user on a target object image (one of the displayed object images) may be received. For example, when a user wants to furnish a room, furniture placement can be previewed on the website: if the user wants to place a washing machine at a certain position in the room being previewed, the user can drag the model effect graph corresponding to the washing machine to that position in the room.
After receiving a drag operation of a user on a target object image, a target object model corresponding to the target object image and a display position of the target object model may be obtained in response to the drag operation.
Step 603: display the target object model at the display position.
After the target object model and the display position corresponding to the target object model are acquired, the target object model can be displayed at the display position.
According to the embodiment of the application, by establishing the correspondence between object images and their three-dimensional object models, the three-dimensional object models can be displayed in a virtual scene, providing the user with a reference for decoration and furniture purchase and improving the user experience.
According to the image generation method provided by the embodiment of the application, a target object to be shot is obtained, and a target shooting angle of view and a reference shooting direction corresponding to the target object are obtained according to the object type of the target object, where the object type has a correspondence with the shooting angle of view and the shooting direction. A target distance between the virtual camera and the reference shooting direction at the target shooting angle of view is obtained according to the target object size of the target object, the target shooting position of the target object is determined according to the reference shooting direction and the target distance at the target shooting angle of view, and the virtual camera is controlled to shoot the target object at the target shooting position with the target shooting angle of view to generate an object image corresponding to the target object. Because the shooting position of the virtual camera is determined by combining the target object size with the target shooting angle of view, and the virtual camera is automatically controlled to shoot images of the target object at the target shooting position with the target shooting angle of view, model effect graphs can be shot in batches, their generation efficiency is improved, and the schedule requirements of a home-furnishing model website can be met. Meanwhile, the shooting position does not need to be adjusted manually, which reduces labor cost.
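The claimed pipeline can be condensed into a short end-to-end sketch. All names, the sign-based direction encoding, and the table contents are illustrative assumptions, and `shoot` stands in for the engine's capture call.

```python
def generate_object_image(object_type, object_size, view_by_type, distance_table, shoot):
    """Sketch of the full method: object type -> angle of view + reference
    direction (step 102); object size -> target distance (step 103);
    direction + distance -> target shooting position (step 104); then shoot
    the object at that position with that angle of view (step 105)."""
    angle_of_view, direction_signs = view_by_type[object_type]
    row = distance_table[angle_of_view]
    nearest = min(row, key=lambda s: abs(s - object_size))
    dx, dy, dz = row[nearest]
    position = (direction_signs[0] * dx,
                direction_signs[1] * dy,
                direction_signs[2] * dz)
    return shoot(position, angle_of_view)
```

Because every input is data-driven, a batch of model effect graphs is produced by simply looping this function over a list of objects, with no manual camera adjustment per object.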
Referring to fig. 7, which shows a schematic structural diagram of an image generating apparatus provided in an embodiment of the present application, as shown in fig. 7, the image generating apparatus 700 may include the following modules:
a target object obtaining module 710, configured to obtain a target object to be photographed;
a view angle direction obtaining module 720, configured to obtain a target shooting view angle and a reference shooting direction corresponding to the target object according to an object type of the target object, where the object type has a corresponding relationship with the shooting view angle and the shooting direction;
a target distance obtaining module 730, configured to obtain a target distance between the virtual camera and the reference shooting direction under the target shooting angle according to a target object size of the target object;
a target position determining module 740, configured to determine a target shooting position of the target object according to the reference shooting direction and the target distance at the target shooting angle;
and an object image generating module 750, configured to control the virtual camera to perform image capturing on the target object at the target capturing position and the target capturing view angle, so as to generate an object image corresponding to the target object.
Optionally, the target object obtaining module 710 includes:
an object size acquiring unit for acquiring a maximum object size of an initial object to be photographed;
a first target object obtaining unit, configured to, when the maximum object size is larger than a set value, perform an equal-scale scaling process on the initial object to obtain a size adjustment object corresponding to the initial object, and use the size adjustment object as the target object; the maximum size of the size adjustment object is the set value;
a second target object acquisition unit configured to take the initial object as the target object when the maximum object size is less than or equal to the set value.
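The behavior of the three units above can be sketched as a single function over the object's dimensions; the function name and the list-of-dimensions representation are assumptions.

```python
def normalize_object_size(dimensions, set_value):
    """If the largest dimension of the initial object exceeds the set value,
    scale all dimensions proportionally (equal-scale scaling) so the largest
    equals the set value; otherwise return the initial dimensions unchanged."""
    largest = max(dimensions)
    if largest <= set_value:
        return list(dimensions)
    factor = set_value / largest
    return [d * factor for d in dimensions]
```

Capping the maximum size this way keeps every target object within the range covered by the pre-stored size/angle-of-view/distance tables.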
Optionally, the target distance obtaining module 730 includes:
a target object size acquisition unit configured to acquire a maximum object size of the target object and take the maximum object size as the target object size;
and the target distance acquisition unit is used for acquiring, according to the corresponding relation between the object size and the shooting visual angle and the distance, a first distance in the x-axis direction, a second distance in the y-axis direction, and a third distance in the z-axis direction between the virtual camera and a tangent plane corresponding to the reference shooting direction under the target shooting visual angle.
Optionally, the target position determining module 740 includes:
a three-dimensional coordinate acquisition unit, configured to acquire a three-dimensional coordinate of the virtual camera in a world coordinate system according to the reference shooting direction, the first distance, the second distance, and the third distance, where the world coordinate system uses a center of the target object as an origin;
and the target position determining unit is used for determining the target shooting position of the target object according to the three-dimensional coordinates.
Optionally, the object image generation module 750 includes:
the camera visual angle adjusting unit is used for controlling the virtual camera to move to the target shooting position and adjusting the shooting visual angle of the virtual camera to the target shooting visual angle;
an intermediate image acquisition unit configured to acquire an intermediate image of the target object photographed by the virtual camera;
and the object image acquisition unit is used for deleting the image background information of the intermediate image to obtain an object image corresponding to the target object.
Optionally, the apparatus further comprises:
the object image display module is used for establishing the corresponding relation between the object image and a preset object model and displaying the object image in a preset webpage;
a display position acquisition module, configured to acquire, in response to a drag operation for a target object image, a target object model corresponding to the target object image and a display position corresponding to the target object model, where the target object image is an image in the object image;
and the target object model display module is used for displaying the target object model at the display position.
The image generation apparatus provided by the embodiment of the application acquires a target object to be shot, and acquires a target shooting angle of view and a reference shooting direction corresponding to the target object according to the object type of the target object, where the object type has a correspondence with the shooting angle of view and the shooting direction. It acquires a target distance between the virtual camera and the reference shooting direction at the target shooting angle of view according to the target object size of the target object, determines the target shooting position of the target object according to the reference shooting direction and the target distance at the target shooting angle of view, and controls the virtual camera to shoot the target object at the target shooting position with the target shooting angle of view to generate an object image corresponding to the target object. Because the shooting position of the virtual camera is determined by combining the target object size with the target shooting angle of view, and the virtual camera is automatically controlled to shoot images of the target object at the target shooting position with the target shooting angle of view, model effect graphs can be shot in batches, their generation efficiency is improved, and the schedule requirements of a home-furnishing model website can be met. Meanwhile, the shooting position does not need to be adjusted manually, which reduces labor cost.
Optionally, an embodiment of the present application further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements each process of the above image generation method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements each process of the embodiment of the image generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Those of ordinary skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed in the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. An image generation method, comprising:
acquiring a target object to be shot;
acquiring a target shooting visual angle and a reference shooting direction corresponding to the target object according to the object type of the target object, wherein the object type has a corresponding relation with the shooting visual angle and the shooting direction;
acquiring a target distance between the virtual camera and the reference shooting direction under the target shooting visual angle according to the target object size of the target object;
determining a target shooting position of the target object according to the reference shooting direction and the target distance under the target shooting visual angle;
and controlling the virtual camera to shoot the target object at the target shooting position at the target shooting visual angle to generate an object image corresponding to the target object.
2. The method of claim 1, wherein the acquiring a target object to be photographed comprises:
acquiring the maximum object size of an initial object to be shot;
under the condition that the maximum object size is larger than a set value, carrying out equal-scale scaling processing on the initial object to obtain a size adjustment object corresponding to the initial object, and taking the size adjustment object as the target object; the maximum size of the size adjustment object is the set value;
and taking the initial object as the target object when the maximum object size is smaller than or equal to the set value.
3. The method of claim 1, wherein the obtaining a target distance between the virtual camera and the reference capturing direction at the target capturing perspective according to a target object size of the target object comprises:
acquiring the maximum object size of the target object, and taking the maximum object size as the target object size;
and acquiring a first distance of the virtual camera in the direction of the x axis, a second distance in the direction of the y axis and a third distance in the direction of the z axis from a tangent plane corresponding to the reference shooting direction under the target shooting visual angle according to the corresponding relationship between the size of the object and the shooting visual angle and the distance.
4. The method according to claim 3, wherein the determining the target shooting position of the target object according to the reference shooting direction and the target distance at the target shooting angle of view comprises:
acquiring three-dimensional coordinates of the virtual camera in a world coordinate system according to the reference shooting direction, the first distance, the second distance and the third distance, wherein the world coordinate system takes the center of the target object as an origin;
and determining the target shooting position of the target object according to the three-dimensional coordinates.
5. The method of claim 1, wherein the controlling the virtual camera to image-capture the target object at the target capture position and the target capture perspective to generate an object image corresponding to the target object comprises:
controlling the virtual camera to move to the target shooting position, and adjusting the shooting visual angle of the virtual camera to the target shooting visual angle;
acquiring an intermediate image of the target object photographed by the virtual camera;
and deleting the image background information of the intermediate image to obtain an object image corresponding to the target object.
6. The method according to claim 1, further comprising, after the controlling the virtual camera to shoot the target object at the target shooting position at the target shooting visual angle to generate an object image corresponding to the target object:
establishing a corresponding relation between the object image and a preset object model, and displaying the object image in a preset webpage;
responding to a drag operation aiming at a target object image, and acquiring a target object model corresponding to the target object image and a display position corresponding to the target object model, wherein the target object image is an image in the object image;
displaying the target object model at the display location.
7. An image generation apparatus, characterized by comprising:
the target object acquisition module is used for acquiring a target object to be shot;
the visual angle direction acquisition module is used for acquiring a target shooting visual angle and a reference shooting direction corresponding to the target object according to the object type of the target object, wherein the object type has a corresponding relation with the shooting visual angle and the shooting direction;
the target distance acquisition module is used for acquiring a target distance between the virtual camera and the reference shooting direction under the target shooting visual angle according to the target object size of the target object;
the target position determining module is used for determining the target shooting position of the target object according to the reference shooting direction and the target distance under the target shooting visual angle;
and the object image generation module is used for controlling the virtual camera to shoot the target object at the target shooting position at the target shooting visual angle to generate an object image corresponding to the target object.
8. The apparatus according to claim 7, wherein the target object acquisition module comprises:
an object size acquisition unit, configured to acquire a maximum object size of an initial object to be shot;
a first target object acquisition unit, configured to, in a case where the maximum object size is larger than a set value, perform proportional scaling on the initial object to obtain a size adjustment object corresponding to the initial object, and take the size adjustment object as the target object, wherein a maximum object size of the size adjustment object is the set value;
and a second target object acquisition unit, configured to, in a case where the maximum object size is less than or equal to the set value, take the initial object as the target object.
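The size adjustment of claim 8 can be sketched as follows (illustration only; `SET_VALUE` is a hypothetical threshold, and the tuple-of-dimensions representation is a stand-in for a real 3D model's bounding box):

```python
# Illustrative sketch of claim 8's size adjustment: when the largest
# dimension of the initial object exceeds a set value, the object is
# scaled down proportionally so that its largest dimension equals
# the set value; otherwise the initial object is used as-is.

SET_VALUE = 1.0  # hypothetical maximum allowed object size

def get_target_object_size(initial_size):
    """Return the (x, y, z) size of the target object to be shot."""
    max_dim = max(initial_size)
    if max_dim > SET_VALUE:
        scale = SET_VALUE / max_dim           # single proportional factor
        return tuple(d * scale for d in initial_size)
    return initial_size                       # already within the set value
```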
9. The apparatus according to claim 7, wherein the target distance acquisition module comprises:
a target object size acquisition unit, configured to acquire a maximum object size of the target object and take the maximum object size as the target object size;
and a target distance acquisition unit, configured to acquire, according to a correspondence among object size, shooting angle of view, and distance, a first distance in the x-axis direction, a second distance in the y-axis direction, and a third distance in the z-axis direction of the virtual camera from a tangent plane corresponding to the reference shooting direction at the target shooting angle of view.
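For illustration, the claimed correspondence among object size, shooting angle of view, and per-axis distance might be realized as a lookup table; every value and name below is invented:

```python
# Illustrative sketch of claim 9: the first, second, and third
# distances are read from a predefined correspondence keyed by
# object size and shooting angle of view.

DISTANCE_TABLE = {
    # (size bucket, shooting angle of view) -> (x, y, z) distance from
    # the tangent plane of the reference shooting direction
    ("small", "oblique"): (0.5, 0.5, 2.0),
    ("large", "oblique"): (1.0, 1.0, 4.0),
}

def get_target_distance(max_object_size, view_angle, threshold=1.0):
    """Map the target object size and angle of view to per-axis distances."""
    bucket = "small" if max_object_size <= threshold else "large"
    return DISTANCE_TABLE[(bucket, view_angle)]
```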
10. The apparatus according to claim 9, wherein the target position determination module comprises:
a three-dimensional coordinate acquisition unit, configured to acquire three-dimensional coordinates of the virtual camera in a world coordinate system according to the reference shooting direction, the first distance, the second distance, and the third distance, wherein the world coordinate system takes the center of the target object as its origin;
and a target position determination unit, configured to determine the target shooting position for the target object according to the three-dimensional coordinates.
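One illustrative way to read claim 10 (the direction table is invented; the claims do not specify how the reference shooting direction maps to axis signs):

```python
# Illustrative sketch of claim 10: the camera's three-dimensional
# coordinates are formed in a world coordinate system whose origin is
# the center of the target object; the reference shooting direction
# fixes the sign of each per-axis distance.

DIRECTION_SIGNS = {
    "front":        (1, 1, 1),    # +x, +y (above), +z (in front of the object)
    "back":         (1, 1, -1),
    "left_oblique": (-1, 1, 1),
}

def camera_position(direction, first_dist, second_dist, third_dist):
    """Three-dimensional camera coordinates in the object-centered frame."""
    sx, sy, sz = DIRECTION_SIGNS[direction]
    return (sx * first_dist, sy * second_dist, sz * third_dist)
```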
11. The apparatus according to claim 7, wherein the object image generation module comprises:
a camera angle-of-view adjustment unit, configured to control the virtual camera to move to the target shooting position and adjust the shooting angle of view of the virtual camera to the target shooting angle of view;
an intermediate image acquisition unit, configured to acquire an intermediate image of the target object shot by the virtual camera;
and an object image acquisition unit, configured to delete image background information of the intermediate image to obtain the object image corresponding to the target object.
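For illustration, one common way to delete background information from a rendered intermediate image is to key out a known solid background color; the nested-list RGBA image below is a stand-in for a real render target, and the claims do not mandate this particular technique:

```python
# Illustrative sketch of claim 11's background deletion: pixels that
# match a known solid background color are made fully transparent,
# leaving only the target object in the object image.

BACKGROUND = (0, 255, 0, 255)  # hypothetical solid key color

def delete_background(pixels):
    """Replace background-colored RGBA pixels with transparent ones."""
    transparent = (0, 0, 0, 0)
    return [[transparent if px == BACKGROUND else px for px in row]
            for row in pixels]
```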
12. The apparatus according to claim 7, further comprising:
an object image display module, configured to establish a correspondence between the object image and a preset object model, and display the object image in a preset web page;
a display position acquisition module, configured to acquire, in response to a drag operation on a target object image, a target object model corresponding to the target object image and a display position corresponding to the target object model, wherein the target object image is an image among the object images;
and a target object model display module, configured to display the target object model at the display position.
13. An electronic device, characterized by comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 6.
14. A computer-readable storage medium, characterized in that a computer program is stored thereon, and the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN202210870660.XA 2022-07-22 2022-07-22 Image generation method and device, electronic equipment and storage medium Active CN115272483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210870660.XA CN115272483B (en) 2022-07-22 2022-07-22 Image generation method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115272483A true CN115272483A (en) 2022-11-01
CN115272483B CN115272483B (en) 2023-07-07

Family

ID=83769773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210870660.XA Active CN115272483B (en) 2022-07-22 2022-07-22 Image generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115272483B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008252711A (en) * 2007-03-30 2008-10-16 Nikon Corp Digital camera
JP2009117960A (en) * 2007-11-02 2009-05-28 Nikon Corp Digital camera
JP2014059646A (en) * 2012-09-14 2014-04-03 Toshiba Corp Object detection device and object detection method
KR101592112B1 (en) * 2014-08-13 2016-02-04 김기범 Method and apparatus for detecting the object using Digital High Speed Camera
CN107517372A (en) * 2017-08-17 2017-12-26 腾讯科技(深圳)有限公司 A kind of VR content imagings method, relevant device and system
CN108596116A (en) * 2018-04-27 2018-09-28 深圳市商汤科技有限公司 Distance measuring method, intelligent control method and device, electronic equipment and storage medium
CN108650431A (en) * 2018-05-14 2018-10-12 联想(北京)有限公司 A kind of filming control method, device and electronic equipment
US20190149807A1 (en) * 2016-05-10 2019-05-16 Sony Corporation Information processing apparatus, information processing system, and information processing method, and program
CN110097539A (en) * 2019-04-19 2019-08-06 贝壳技术有限公司 A kind of method and device intercepting picture in virtual three-dimensional model
CN110266952A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
JP2020102687A (en) * 2018-12-20 2020-07-02 キヤノン株式会社 Information processing apparatus, image processing apparatus, image processing method, and program
US20200379485A1 (en) * 2018-02-28 2020-12-03 SZ DJI Technology Co., Ltd. Method for positioning a movable platform, and related device and system
WO2021128747A1 (en) * 2019-12-23 2021-07-01 深圳市鸿合创新信息技术有限责任公司 Monitoring method, apparatus, and system, electronic device, and storage medium
WO2022000992A1 (en) * 2020-06-28 2022-01-06 百度在线网络技术(北京)有限公司 Photographing method and apparatus, electronic device, and storage medium
CN113908543A (en) * 2021-10-15 2022-01-11 北京果仁互动科技有限公司 Virtual camera control method and device and computer equipment
JP2022012398A (en) * 2020-07-01 2022-01-17 キヤノン株式会社 Information processor, information processing method, and program
CN114004890A (en) * 2021-11-04 2022-02-01 北京房江湖科技有限公司 Attitude determination method and apparatus, electronic device, and storage medium
WO2022041014A1 (en) * 2020-08-26 2022-03-03 深圳市大疆创新科技有限公司 Gimbal and control method and device therefor, photographing apparatus, system, and storage medium thereof
JP2022051312A (en) * 2020-09-18 2022-03-31 キヤノン株式会社 Image capturing control apparatus, image capturing control method, and program


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIANG, Zhaoli et al.: "Research on Monocular Vision Ranging Method Based on Imaging Size Change", Journal of Ordnance Equipment Engineering (《兵器装备工程学报》), vol. 41, no. 2, pages 148 - 151 *
TIAN, Yang et al.: "Error Analysis of Three-Dimensional Reconstruction of Space Targets and Camera Parameter Design Method", Journal of Astronautics (《宇航学报》), vol. 40, no. 8, pages 948 - 956 *
CHEN, Dahai et al.: "Algorithm for Measuring Target Distance and Size in Fixed-Camera Images", no. 8, pages 1 - 5 *

Also Published As

Publication number Publication date
CN115272483B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN106934777B (en) Scanning image acquisition method and device
KR101612727B1 (en) Method and electronic device for implementing refocusing
CN106161939B (en) Photo shooting method and terminal
CN108830892B (en) Face image processing method and device, electronic equipment and computer readable storage medium
CN105554372B (en) Shooting method and device
CN108961417B (en) Method and device for automatically generating space size in three-dimensional house model
CN101309389A (en) Method, apparatus and terminal synthesizing visual images
CN106454061B (en) Electronic device and image processing method
CN102158648A (en) Image capturing device and image processing method
CN111355884A (en) Monitoring method, device, system, electronic equipment and storage medium
CN111882674A (en) Virtual object adjusting method and device, electronic equipment and storage medium
KR101703013B1 (en) 3d scanner and 3d scanning method
US20180213199A1 (en) Shooting method and shooting device
CN111083371A (en) Shooting method and electronic equipment
CN113298946A (en) House three-dimensional reconstruction and ground identification method, device, equipment and storage medium
CN112261320A (en) Image processing method and related product
US11138743B2 (en) Method and apparatus for a synchronous motion of a human body model
US8159523B2 (en) Method for capturing convergent-type multi-view image
CN115272483B (en) Image generation method and device, electronic equipment and storage medium
CN106815237B (en) Search method, search device, user terminal and search server
CN113747011B (en) Auxiliary shooting method and device, electronic equipment and medium
CN106856558B (en) Send the 3D image monitoring and its monitoring method of function automatically with video camera
CN112154652A (en) Control method and control device of handheld cloud deck, handheld cloud deck and storage medium
CN107087114B (en) Shooting method and device
CN116600147B (en) Method and system for remote multi-person real-time cloud group photo

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant