CN118158378A - Volume shooting method, device, equipment and storage medium - Google Patents

Volume shooting method, device, equipment and storage medium

Info

Publication number
CN118158378A
Authority
CN
China
Prior art keywords
target
volume
shooting
identification object
bounding box
Prior art date
Legal status: Pending (assumption; not a legal conclusion)
Application number
CN202211551806.0A
Other languages
Chinese (zh)
Inventor
郭景昊
车广富
魏伟
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202211551806.0A
Publication of CN118158378A

Landscapes

  • Studio Devices (AREA)

Abstract

The embodiments of the invention disclose a volume shooting method, apparatus, device, and storage medium. The method includes: acquiring a target shooting scene captured by a shooting device, where the target shooting scene contains a target object; in response to a center position selection operation triggered by a user for the target object, determining a target volume center position and displaying a first identification object at the target volume center position; determining a plurality of target volume point locations based on the target volume center position and displaying a second identification object at each target volume point location; triggering acquisition of an original volume image corresponding to a second identification object when an occlusion relationship is detected between that second identification object and the first identification object; and performing pose normalization on the original volume image to determine a standard volume image shot at the target volume point location. With the technical solution of the embodiments of the invention, the hardware cost of volume shooting can be reduced while the volume shooting effect is ensured.

Description

Volume shooting method, device, equipment and storage medium
Technical Field
Embodiments of the present invention relate to computer technology, and in particular to a volume shooting method, apparatus, device, and storage medium.
Background
With the rapid development of computer technology, three-dimensional images of objects allow a user to observe an object from different angles, and this display mode is favored by more and more users.
Volume shooting is a special acquisition mode for acquiring three-dimensional images of objects. Currently, volume shooting typically captures objects from multiple angles using hundreds of high resolution cameras, so that video or pictures containing depth information can be taken.
However, in the process of implementing the present invention, the inventors found that at least the following problems exist in the prior art:
the existing volume shooting mode needs to utilize a large number of high-resolution cameras to shoot, so that the hardware cost of volume shooting is greatly increased.
Disclosure of Invention
The embodiments of the invention provide a volume shooting method, apparatus, device, and storage medium, so as to reduce the hardware cost of volume shooting while ensuring the volume shooting effect.
In a first aspect, an embodiment of the present invention provides a volume shooting method, including:
acquiring a target shooting scene captured by a shooting device, wherein the target shooting scene includes a target object to be volume-shot;
in response to a center position selection operation triggered by a user for the target object, determining the target volume center position selected by the user, and displaying a first identification object at the target volume center position;
determining a plurality of target volume point locations around the target object based on the target volume center position, and displaying a second identification object at each target volume point location;
triggering acquisition of an original volume image corresponding to a second identification object when an occlusion relationship is detected between that second identification object and the first identification object while the user adjusts the shooting angle of view of the shooting device;
and performing pose normalization on the acquired original volume image corresponding to the second identification object, and determining a standard volume image shot at the target volume point location.
In a second aspect, an embodiment of the present invention further provides a volume shooting device, including:
the target shooting scene acquisition module is used for acquiring a target shooting scene captured by the shooting device, wherein the target shooting scene includes a target object to be volume-shot;
the target volume center position selection module is used for determining, in response to a center position selection operation triggered by a user for the target object, the target volume center position selected by the user, and displaying a first identification object at the target volume center position;
the target volume point location determining module is used for determining a plurality of target volume point locations around the target object based on the target volume center position, and displaying a second identification object at each target volume point location;
the original volume image acquisition module is used for triggering acquisition of an original volume image corresponding to a second identification object when an occlusion relationship is detected between that second identification object and the first identification object while the user adjusts the shooting angle of view of the shooting device;
and the standard volume image determining module is used for performing pose normalization on the acquired original volume image corresponding to the second identification object, and determining a standard volume image shot at the target volume point location.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the volume shooting method provided by any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a volume shooting method as provided by any of the embodiments of the present invention.
One embodiment of the above invention has the following advantages or benefits:
A target shooting scene is acquired with a shooting device, the target shooting scene containing a target object to be volume-shot; a first identification object is displayed at the target volume center position selected by the user for the target object; a plurality of target volume point locations around the target object are determined based on the target volume center position; and a second identification object is displayed at each target volume point location. While the user adjusts the shooting angle of view of the shooting device, acquisition of the original volume image corresponding to a second identification object is triggered whenever an occlusion relationship arises between that second identification object and the first identification object, and pose normalization is performed on the acquired original volume image to obtain a standard volume image shot at the target volume point location. This shooting scheme can therefore perform volume shooting with far fewer shooting devices, for example with only one shooting device, which ensures the volume shooting effect while reducing the cost of volume shooting.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, a brief description will be given below of the drawings required for the embodiments or the prior art descriptions, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a volume shooting method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first identification object display according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a second identification object display according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a display update of a second identification object based on a target display mode according to an embodiment of the present invention;
FIG. 5 is a comparison diagram of an original volume image and a standard volume image according to an embodiment of the present invention;
FIG. 6 is a flowchart of another volume shooting method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a bounding box according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of second identification object display positions determined based on a bounding box according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a volume shooting device according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Fig. 1 is a flowchart of a volume shooting method according to an embodiment of the present invention. This embodiment is applicable to volume shooting of a target object to be shot in a target shooting scene, and is particularly applicable to the case where a mobile terminal equipped with a monocular camera and a shooting pose acquisition device performs volume shooting of the target object. The method may be performed by a volume shooting device, which may be implemented in software and/or hardware and integrated in an electronic apparatus. As shown in Fig. 1, the method specifically includes the following steps:
S110, acquiring a target shooting scene captured by a shooting device, where the target shooting scene includes a target object to be volume-shot.
The shooting device may be any device having a shooting function, for example, a camera. The camera may be a monocular camera, i.e., a visible-light camera that can be used to photograph an object. The target shooting scene refers to the picture, acquired by the shooting device, that is displayed on the shooting interface of the mobile terminal. Volume shooting is a special acquisition mode for acquiring three-dimensional images of an object; currently it typically captures an object from multiple angles with hundreds of high-resolution cameras so that video or pictures containing depth information can be obtained. The target object may be any object to be volume-shot.
Specifically, the user may tap, on the mobile terminal, the application used for volume shooting to open its operation interface, acquire a target shooting scene containing the target object with the shooting device of the mobile terminal, and display the target shooting scene in the display interface of the application. The user can then perform volume shooting with this application, so a small number of shooting devices, for example a single one, can replace hundreds of high-resolution cameras, which lowers the precision requirement on the shooting device and saves the cost of shooting equipment.
The method is applied to a client, the client is integrated in a mobile terminal, and a monocular camera and a shooting pose acquisition device are installed on the mobile terminal.
The shooting pose acquisition device may be a device that acquires the shooting pose of the monocular camera, for example an Inertial Measurement Unit (IMU). An IMU is a sensor for detecting and measuring acceleration and rotational movement.
Specifically, the method may be applied to a client, that is, the application software used to implement volume shooting; the client may be integrated in a mobile terminal such as a mobile phone or a tablet computer; and the mobile terminal may be equipped with a monocular camera and a shooting pose acquisition device. For example, a mobile phone equipped with a monocular camera and a shooting pose acquisition device can perform volume shooting by running such a client. Therefore, on a mobile terminal equipped with a monocular camera and a shooting pose acquisition device, volume shooting of an object can be realized by the client alone, the solution can be migrated flexibly between mobile terminals, the precision requirement on the camera is further reduced, and the cost of shooting equipment is saved.
S120, in response to a center position selection operation triggered by the user for the target object, determining the target volume center position selected by the user, and displaying the first identification object at the target volume center position.
The center position may refer to the geometric center of an object. The target volume center position refers to the geometric center of the volume shooting range; volume shooting is performed around this position. The first identification object is an object used to mark the target volume center position, for example a sphere, a cube, a hexahedron, or the like.
Specifically, the user may first select the center position of the target object by tapping the screen in the operation interface of the application, and adjust the selected center position by dragging with a finger. After the center position is selected, the user may tap the corresponding control in the operation interface to lock the position, so that the client can respond to the center position selection operation triggered by the user for the target object, determine the target volume center position selected by the user, and virtually place a first identification object, such as a yellow sphere, at the target volume center position, thereby clearly indicating to the user the center of the volume shooting. Fig. 2 shows a schematic diagram of the first identification object display; referring to Fig. 2, the target object is a cylinder, and the first identification object is a sphere displayed at the volume shooting center of the target object.
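For illustration only (not part of the disclosed embodiments), the following Python sketch shows one plausible way a screen tap could be mapped to a 3D center position: a ray is cast from the tapped pixel through the camera and intersected with the detected support plane described later in this document. The function name, parameters, and the pinhole/ray-plane model are assumptions made for this sketch.

```python
import numpy as np

def tap_to_center(tap_px, intrinsics, cam_pose, plane_point, plane_normal):
    """Illustrative only: map a screen tap to a 3D point on the support plane.

    tap_px       -- (u, v) pixel coordinates of the tap
    intrinsics   -- 3x3 camera matrix K
    cam_pose     -- 4x4 camera-to-world transform
    plane_point  -- any 3D point on the detected plane
    plane_normal -- unit normal of the detected plane
    """
    u, v = tap_px
    # Ray direction in camera coordinates (pinhole model).
    ray_cam = np.linalg.inv(intrinsics) @ np.array([u, v, 1.0])
    # Rotate into world coordinates; the camera origin is the pose translation.
    ray_world = cam_pose[:3, :3] @ ray_cam
    origin = cam_pose[:3, 3]
    # Ray-plane intersection: origin + t * ray lies on the plane.
    denom = ray_world @ plane_normal
    if abs(denom) < 1e-6:
        return None  # ray parallel to the plane, no usable hit
    t = ((plane_point - origin) @ plane_normal) / denom
    return origin + t * ray_world if t > 0 else None
```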
S130, determining a plurality of target volume point locations around the target object based on the target volume center position, and displaying a second identification object at each target volume point location.
The target volume point location may be a volume shooting point location generated based on the target volume center position, or generated after the user adjusts parameters, such as the distance from the target volume point location to the target volume center position. The second identification object is an object used to mark a target volume point location, for example a sphere, a cube, a hexahedron, or the like. The first identification object may have the same shape as the second identification object but a different color, to make the two easy to distinguish.
Specifically, after the target volume center position selected by the user is determined and the first identification object is displayed there, a plurality of target volume point locations distributed around the target object may be determined based on the target volume center position, and a second identification object may be virtually placed at each target volume point location, so that the target object can be volume-shot from a plurality of different perspectives. For example, a volume point sphere may be defined with the target volume center position as its center and a preset distance as its radius, each discretely distributed volume point on that sphere is taken as a target volume point location, and a second identification object, such as a blue sphere, is displayed at each of them. Fig. 3 shows a schematic diagram of the second identification object display, where only some of the second identification objects are drawn. Referring to Fig. 3, a plurality of target volume point locations are distributed around the target object in the display interface, and a corresponding second identification object is displayed at each of them; the second identification object may be a sphere whose displayed volume is larger than that of the first identification object.
Illustratively, S130 may include: determining a target sphere display position with the target volume center position as the sphere center, and displaying a volume point sphere at the target sphere display position, where a plurality of volume point locations lie on the volume point sphere and a second identification object is displayed at each volume point location; synchronously adjusting the displayed size of the volume point sphere in response to a size adjustment operation triggered by the user on the volume point sphere; and in response to an adjustment end operation triggered by the user, determining each target volume point location around the target object based on the currently displayed target volume point sphere.
The target sphere display position is the position at which the volume point sphere is displayed. The volume point sphere is a virtual sphere made up of a plurality of volume point locations, all of which lie on the sphere and are at the same distance from the sphere center, i.e., the target volume center position. A second identification object may be displayed at each volume point location. The target volume point sphere is the volume point sphere whose displayed size has been confirmed by the user.
Specifically, after the target volume center position selected by the user is determined and the first identification object is displayed there, an initial distance from the target volume center position to the volume point locations is taken as the sphere radius, the target sphere display position for the initial display of the volume point sphere is determined, and the volume point sphere is displayed at that position, with a second identification object shown at each volume point location. The user may adjust the size of the volume point sphere, for example by modifying the value in a size adjustment control of the display interface, so that the client synchronously adjusts the displayed size of the sphere in response to this operation. After the adjustment is finished, the user may tap an end-adjustment control on the screen, so that the client, in response to the adjustment end operation, determines each volume point location on the adjusted, currently displayed target volume point sphere as a target volume point location around the target object.
S140, while the user adjusts the shooting angle of view of the shooting device, triggering acquisition of the original volume image corresponding to a second identification object when an occlusion relationship is detected between that second identification object and the first identification object.
The original volume image is the image automatically captured by the shooting device when an occlusion relationship exists between a second identification object and the first identification object.
Specifically, while the user adjusts the shooting angle of view of the shooting device by changing the angle and position of the mobile terminal, when an occlusion relationship between a certain second identification object and the first identification object is detected, for example when that second identification object occludes part or all of the first identification object within the current angle of view, acquisition of the original volume image corresponding to that second identification object is triggered. It should be noted that, while the user adjusts the shooting angle of view, the display positions of the first identification object and the second identification objects remain fixed and only the line of sight changes, so every second identification object can in turn form an occlusion relationship with the first identification object and trigger the image acquisition operation for its target volume point location.
It should also be noted that, while the shooting angle of view is being adjusted, acquisition of the corresponding original volume image is triggered whenever an occlusion relationship arises between a second identification object and the first identification object, so that one original volume image can be acquired at every second identification object, i.e., all second identification objects trigger image acquisition. Alternatively, the image acquisition operation may stop once the number of acquired original volume images reaches a preset number threshold, in which case only some of the second identification objects trigger image acquisition and the rest do not; this can be set according to service requirements and the actual scene.
For example, if there is only one shooting device, the user can trigger acquisition of the original volume images corresponding to the second identification objects simply by adjusting the shooting angle of view of that device. If there are at least two shooting devices, the user can select, from all of them, the shooting device whose angle of view is currently to be adjusted, and then trigger acquisition of the original volume images corresponding to the second identification objects by adjusting the angle of view of the selected device.
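For illustration only, the following sketch shows one way to detect which not-yet-captured point location the camera is currently aligned with, i.e., which second identification object would occlude the first identification object from the current viewpoint. The angular threshold, parameter names, and the collinearity test are assumptions for this sketch, not features of the invention.

```python
import numpy as np

def aligned_point(cam_pos, center, point_locs, captured, max_angle_deg=2.0):
    """Illustrative sketch: return the index of the first not-yet-captured
    volume point location that lies (approximately) between the camera and the
    volume center, so that it would occlude the first identification object."""
    to_center = center - cam_pos
    cam_dist = np.linalg.norm(to_center)
    to_center /= cam_dist
    cos_limit = np.cos(np.radians(max_angle_deg))
    for idx, p in enumerate(point_locs):
        if captured[idx]:
            continue                              # already shot, skip
        p_to_center = center - p
        p_dist = np.linalg.norm(p_to_center)
        cos_a = np.dot(to_center, p_to_center / p_dist)
        # Nearly collinear, and the point location sits in front of the camera.
        if cos_a > cos_limit and cam_dist > p_dist:
            return idx                            # trigger capture for this one
    return None
```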
Illustratively, after "triggering acquisition of the original volumetric image corresponding to the second identification object" in S140, the method may further include: and if the acquired original volume image corresponding to the second identification object meets the preset image quality condition, displaying and updating the second identification object based on a target display mode.
The preset image quality condition refers to the image quality that a successfully captured image must satisfy, for example that the pose error of the shooting device is smaller than a certain threshold, or that the picture is clear and undistorted. The target display mode is a display mode used to indicate that the shooting of a point location has been completed, for example changing the color of the second identification object (such as from blue to red), fading its color, or shrinking its volume. When a second identification object has occluded the first identification object and its image has been captured, its display mode can be changed so that second identification objects that have already been shot can be distinguished from those that have not.
Specifically, after acquisition of the original volume image corresponding to a second identification object with an occlusion relationship has been triggered, if that image does not meet the preset image quality condition, the point location needs to be shot again and the display mode of the second identification object is left unchanged. If the acquired original volume image does meet the preset image quality condition, the second identification object is displayed in the target display mode, for example its color is changed, or the already-shot second identification object is simply removed and only the unshot ones remain displayed. In this way the second identification objects whose original volume images have been captured can be distinguished at a glance from those that have not, which avoids repeated shooting and improves shooting efficiency. Fig. 4 is a schematic diagram of a display update of a second identification object based on the target display mode; the second identification objects whose display mode has been updated are shown in Fig. 4, so the second identification objects whose original volume images have not yet been acquired can be identified, repeated acquisition of the same second identification object is avoided, and acquisition efficiency is improved.
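For illustration only, a minimal sketch of such a quality check is shown below, combining a pose-error threshold with a simple blur metric. The thresholds, the Laplacian-variance sharpness measure, and the function name are assumptions for this sketch rather than values or criteria taken from the patent.

```python
import cv2

def passes_quality_check(image_bgr, pose_error, max_pose_error=0.05,
                         min_sharpness=100.0):
    """Illustrative sketch of a preset image quality condition: the pose error
    reported by the tracking system must be small and the frame must not be
    blurred. Both thresholds are assumptions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # blur metric
    return pose_error < max_pose_error and sharpness > min_sharpness

# If the check passes, the corresponding second identification object could be
# re-rendered in the "captured" style (e.g. recolored) or hidden; otherwise the
# point location keeps its original style and can be shot again.
```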
S150, performing pose normalization on the acquired original volume image corresponding to the second identification object, and determining the standard volume image shot at the target volume point location.
The pose refers to the position and orientation of the shooting device, which can be characterized by a rotation matrix and a translation vector. The standard volume image is the image obtained from the original volume image after pose normalization. The data acquired for each target volume point location may include, but is not limited to, the original volume image and the actual shooting pose of the shooting device.
Specifically, when the original volume image corresponding to a second identification object is acquired, the actual shooting pose corresponding to that image can be acquired as well. However, there is usually some deviation between the actual shooting pose and the standard shooting pose of the volume point location, so pose normalization needs to be performed on the acquired original volume image: the image is rotated and translated back to the standard pose, and the standard volume image shot at each target volume point location is determined. Fig. 5 shows a comparison of original volume images with standard volume images. Referring to Fig. 5, images 1, 2, 3, 4 on the left are original volume images of a paper cup taken as the target object, and images 1', 2', 3', 4' on the right are the corresponding standard volume images. After the standard volume images of all target volume point locations have been obtained, a three-dimensional image of the target object can be generated from them, or two-dimensional feature matching can be performed across them to realize three-dimensional reconstruction of the target object, and the collected standard volume images can also be assembled directly into a volumetric video for playback.
Illustratively, S150 may include: for the original volume image corresponding to each second identification object, determining a pose conversion relationship between the actual shooting pose and the standard shooting pose according to the actual shooting pose corresponding to the original volume image and the standard shooting pose corresponding to the target volume point location where the second identification object is located; and performing pose conversion on the original volume image based on the pose conversion relationship to obtain the standard volume image shot at that target volume point location.
Specifically, for the original volume image corresponding to each second identification object, a relationship between the original volume image and the ARKit virtual world coordinate system first needs to be established by remapping (remap). The actual shooting pose corresponding to the original volume image can then be expressed as a transform T_act, the standard shooting pose corresponding to the target volume point location of the second identification object can be expressed as a transform T_std, and the pose conversion relationship between the actual shooting pose and the standard shooting pose can be determined through a formula of the form ΔT = T_std · T_act⁻¹. Based on this pose conversion relationship, the original volume image is pose-converted, for example rotated or translated, to obtain the standard volume image shot at the target volume point location, which lowers the requirements on volume shooting.
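For illustration only, the sketch below shows one way the pose normalization step might be approximated when the deviation between the actual and standard poses is dominated by rotation, in which case a single homography warp re-renders the frame as if taken from the standard pose. The rotation-only assumption, the function name, and the parameters are assumptions for this sketch, not the patent's exact procedure.

```python
import numpy as np
import cv2

def normalize_to_standard_pose(image, K, T_actual, T_standard):
    """Illustrative sketch: warp a captured frame toward the standard shooting
    pose of its volume point location, assuming a rotation-dominated deviation.

    K          -- 3x3 camera intrinsics
    T_actual   -- 4x4 camera-to-world pose at capture time
    T_standard -- 4x4 camera-to-world standard pose of the point location
    """
    # Relative transform taking actual-camera coordinates to standard-camera
    # coordinates: X_std = inv(T_standard) @ T_actual @ X_act.
    T_rel = np.linalg.inv(T_standard) @ T_actual
    R_rel = T_rel[:3, :3]
    # Rotation-only image warp: H = K * R_rel * K^-1.
    H = K @ R_rel @ np.linalg.inv(K)
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```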
According to the technical solution of this embodiment, a target shooting scene is acquired with a shooting device, the target shooting scene containing a target object to be volume-shot; a first identification object is displayed at the target volume center position selected by the user for the target object; a plurality of target volume point locations around the target object are determined based on the target volume center position; and a second identification object is displayed at each target volume point location. While the user adjusts the shooting angle of view of the shooting device, acquisition of the original volume image corresponding to a second identification object is triggered whenever an occlusion relationship arises between that second identification object and the first identification object, and pose normalization is performed on the acquired original volume image to obtain a standard volume image shot at the target volume point location. This shooting scheme can therefore perform volume shooting with far fewer shooting devices, for example with only one shooting device, which ensures the volume shooting effect while reducing the cost of volume shooting.
Fig. 6 is a flowchart of another volume shooting method according to an embodiment of the present invention. On the basis of the above embodiments, this embodiment further optimizes the step of "determining a plurality of target volume point locations around the target object based on the target volume center position, and displaying a second identification object at each target volume point location", as well as the step of "triggering acquisition of the original volume image corresponding to a second identification object when an occlusion relationship is detected between that second identification object and the first identification object while the user adjusts the shooting angle of view of the shooting device". Explanations of terms identical or corresponding to those in the above embodiments are not repeated here.
Referring to fig. 6, another volume shooting method provided in this embodiment specifically includes the following steps:
S610, acquiring a target shooting scene captured by the shooting device, where the target shooting scene includes a target object to be volume-shot.
S620, in response to a center position selection operation triggered by the user for the target object, determining the target volume center position selected by the user, and displaying the first identification object at the target volume center position.
S630, determining a target bounding box display position by taking the target volume center position as a bounding box center, and displaying the bounding box at the target bounding box display position.
The bounding box may be a box that is virtually set to enclose the target object. In order to be able to view the first identification object through the bounding box, the bounding box may be a transparent bounding box. The target bounding box display position may refer to a position at which the bounding box is displayed centered around the first identification object of the target object.
Specifically, after the target volume center position selected by the user, i.e., the display position of the first identification object, has been determined, the target volume center position may be taken as the center of the bounding box, the target bounding box display position may be determined based on an initial display size of the bounding box, and the virtually placed bounding box may be displayed at that position, so that the first identification object and the bounding box are shown simultaneously. Alternatively, after the first identification object has been displayed at the target volume center position, the target volume center position is taken as the center of the bounding box, the target bounding box display position is determined based on the initial display size, and the bounding box is displayed there for the first time, so that the bounding box is shown after the first identification object.
Illustratively, "determining a target bounding box display position by taking the target volume center position as the bounding box center" in S630 may include: determining a target plane where the bottom surface of the target object is located, and determining the target bounding box display position by taking the target volume center position as the center of the bounding box and the target plane as the plane where the bottom surface of the bounding box is located.
The target plane may refer to a plane on which the bottom surface of the target object is located, that is, a platform plane on which the target object is placed. For example, the target object is placed on a desktop, and then the target plane may be the desktop.
Specifically, using the plane recognition algorithm of the tracking system in the mobile terminal, the client can determine the target plane where the bottom surface of the target object is located, and determine the target bounding box display position by taking the target volume center position as the bounding box center and the target plane as the plane of the bounding box bottom surface. The bottom surface of the bounding box is thus fixed to the plane, while the bounding box can still translate on the target plane (its bottom surface cannot leave the target plane), which reduces the adjustments needed when the bounding box is set and improves the efficiency of setting up the bounding box. Fig. 7 shows a schematic diagram of a bounding box. Referring to Fig. 7, the target plane may be the plane on which the target object is placed, such as a table top; the target object can be observed through the bounding box, and the bounding box can enclose the target object.
It should be noted that the mobile terminal may use ARKit or ARCore as the tracking system, determine the target plane where the bottom surface of the target object is located through the plane recognition algorithm of ARKit or ARCore, and perform tracking calculation. The mobile terminal may also use other SLAM algorithms for tracking. For example, if the mobile terminal is a mobile phone, ARKit, as the most convenient and commonly used option on mobile phones, can be used as the tracking algorithm.
S640, in response to a size adjustment operation triggered by the user on the bounding box, synchronously adjusting the display size of the bounding box so that it matches the size of the target object.
The display size of the bounding box refers to its length, width, and height, and the target object size refers to the length, width, and height of the target object. When the bounding box size changes, its center changes as well; the first identification object may be located at the bounding box center and therefore moves synchronously with it.
Specifically, the user can stretch or shrink the bounding box along one of its edges with a finger on the display interface to adjust its size, and the client synchronously adjusts the display size of the bounding box in response to this size adjustment operation, so that the display size of the bounding box matches the size of the target object, i.e., the bounding box encloses the target object. When an original volume image is then shot, the presence of the complete bounding box in the image guarantees that the complete target object is present in the original volume image, and it also provides prior information for subsequent image processing, namely the approximate position of the shot target object in the two-dimensional image.
Illustratively, "synchronously adjusting the display size of the bounding box" in S640 may include: synchronously adjusting the display size of the bounding box along the target plane.
The bounding box center is the target volume center position, so the two are linked: the target volume center position changes as the bounding box center changes. Specifically, when the user touches the bounding box on the screen and stretches it in the direction perpendicular to the target plane, the client responds to this size adjustment operation and synchronously adjusts the display size of the bounding box along that direction; the bounding box center changes with the adjusted display size, and the target volume center position therefore changes synchronously as well.
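For illustration only, the sketch below models a bounding box whose bottom face is locked to the target plane, so that changing its height lifts its center (and with it the target volume center position) while the base stays on the plane. The class, field, and method names are assumptions for this sketch.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BoundingBox:
    """Illustrative sketch: an axis-aligned bounding box whose bottom face is
    fixed on the target plane, assumed here to be the plane z = plane_z."""
    base_center_xy: np.ndarray   # (x, y) of the bottom-face center on the plane
    plane_z: float               # height of the target plane
    size: np.ndarray             # (length, width, height)

    @property
    def center(self):
        # The target volume center position tracks the box center, so raising
        # the box height lifts the center while the base stays on the plane.
        x, y = self.base_center_xy
        return np.array([x, y, self.plane_z + self.size[2] / 2.0])

    def resize(self, new_size):
        self.size = np.asarray(new_size, dtype=float)

    def translate_on_plane(self, dx, dy):
        # The box may slide on the plane, but its bottom face never leaves it.
        self.base_center_xy = self.base_center_xy + np.array([dx, dy])
```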
S650, in response to the adjustment end operation triggered by the user, determining a plurality of target volume point locations around the target object based on the currently displayed target bounding box, and displaying a second identification object at each target volume point location.
The currently displayed target bounding box may refer to a target bounding box displayed in the display interface after the size adjustment is finished.
Specifically, after the size adjustment is completed, the user may tap an end-adjustment control on the screen, so that the client, in response to the adjustment end operation, determines a plurality of target volume point locations around the target object based on the adjusted, currently displayed target bounding box and the target volume center position, and displays a second identification object at each target volume point location. Fig. 8 shows a schematic diagram of second identification object display positions determined based on the bounding box, where only some of the second identification objects are drawn. Referring to Fig. 8, a plurality of target volume point locations determined from the bounding box position and size exist in the display interface, and a corresponding second identification object is displayed at each of them; the second identification object may be a sphere whose displayed volume is larger than that of the first identification object.
Illustratively, "determining a plurality of target volume points around the target object based on the currently displayed target bounding box, and displaying the second identification object at each target volume point" in S650 may include: determining a target sphere display position and a target sphere display size based on the currently displayed target bounding box; and displaying the target volume point sphere of the target sphere display size at the target sphere display position.
The target sphere display size changes with the bounding box size in a certain proportion; for example, the target sphere display size may be the sphere radius. The target volume point sphere is composed of a plurality of target volume point locations, and a second identification object is displayed at each target volume point location.
Specifically, the target volume center position is determined based on the target bounding box currently displayed after the adjustment is finished, and the target sphere display position is determined with the target volume center position as the sphere center; the target sphere display size is determined in a certain proportion based on the size of the currently displayed target bounding box; and the first identification object is displayed at the target volume center position corresponding to the currently displayed target bounding box, while a target volume point sphere of the target sphere display size is displayed at the target sphere display position.
It should be noted that the displayed volume point sphere may be a full sphere, so that the target object is shot from all angles both above and below the target plane. The displayed volume point sphere may also be a hemisphere, as shown in Fig. 8, so that the target object is shot only from angles above the target plane; if the bottom surface of the target object also needs to be shot, the target object can be placed with its bottom surface facing upward and volume shooting can be performed again.
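For illustration only, the sketch below derives the point-location sphere from the currently displayed bounding box and keeps only the point locations above the target plane (the hemisphere case). It reuses the volume_points_on_sphere helper and the BoundingBox sketch from the earlier illustrations; the proportionality constant is an assumption.

```python
import numpy as np

def hemisphere_points(box, n_points=60, radius_factor=1.5):
    """Illustrative sketch: size the volume point sphere in proportion to the
    bounding box and drop point locations below the target plane."""
    radius = radius_factor * 0.5 * np.linalg.norm(box.size)   # scales with box
    pts = volume_points_on_sphere(box.center, radius, n_points)
    return pts[pts[:, 2] >= box.plane_z]   # keep only locations above the plane
```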
S660, while the user adjusts the shooting angle of view of the shooting device, triggering acquisition of the original volume image corresponding to a second identification object when it is detected that the overlapping area between that second identification object and the first identification object is greater than or equal to a preset area and that the position of the target bounding box within the current shooting angle of view meets a preset shooting condition.
The preset area is the preset minimum occlusion area that permits shooting; it can be used to judge whether the shooting device is at the shooting position corresponding to the target volume point location, i.e., whether the shooting device lies on the extension of the line connecting the second identification object and the first identification object. The preset shooting condition may mean that the target bounding box is entirely within the shooting range, or that the target bounding box is located at the center of the shot image.
Specifically, while the user adjusts the shooting angle of view of the shooting device by changing the angle and position of the mobile terminal, when it is detected that the overlapping area between a second identification object and the first identification object is greater than or equal to the preset area and that the position of the target bounding box within the current shooting angle of view meets the preset shooting condition, acquisition of the original volume image corresponding to that second identification object is triggered. This ensures that the complete target object is present in the original volume image, which further guarantees the volume shooting effect while reducing the volume shooting cost. If at least one of the two conditions is not met, the original volume image corresponding to the second identification object is not acquired.
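For illustration only, the sketch below checks the two trigger conditions in image space: the projected second identification object must overlap the projected first identification object by at least a preset area, and every projected bounding-box corner must lie inside the frame. The circle model for the projected spheres, the thresholds, and the function names are assumptions for this sketch.

```python
import numpy as np

def circle_overlap_area(c1, r1, c2, r2):
    """Intersection area of two image-space circles (projected spheres)."""
    d = np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float))
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return np.pi * min(r1, r2) ** 2
    a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    a3 = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                       * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

def should_trigger(first_circle, second_circle, box_corners_px,
                   frame_w, frame_h, min_overlap):
    """Illustrative sketch of the trigger condition: each *_circle argument is a
    (center_px, radius_px) pair for a projected identification object, and
    box_corners_px holds the eight projected bounding-box corners."""
    overlap = circle_overlap_area(*first_circle, *second_circle)
    corners = np.asarray(box_corners_px)
    box_in_view = (np.all(corners[:, 0] >= 0) and np.all(corners[:, 0] < frame_w)
                   and np.all(corners[:, 1] >= 0) and np.all(corners[:, 1] < frame_h))
    return overlap >= min_overlap and box_in_view
```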
S670, performing pose normalization on the acquired original volume image corresponding to the second identification object, and determining the standard volume image shot at the target volume point location.
According to the technical solution of this embodiment, the size of the bounding box is adjusted so that the target volume center position is adjusted and the target object is enclosed; that is, the display size of the bounding box matches the size of the target object and the bounding box wraps around the target object. When an original volume image is shot, the presence of the complete bounding box in the image therefore guarantees that the complete target object is present in the original volume image, which further ensures the volume shooting effect while reducing the volume shooting cost.
The following is an embodiment of a volume photographing device provided by an embodiment of the present invention, which belongs to the same inventive concept as the volume photographing method of each embodiment, and reference may be made to the embodiment of the volume photographing method for details that are not described in detail in the embodiment of the volume photographing device.
Fig. 9 is a schematic structural diagram of a volume photographing device according to an embodiment of the present invention, where the embodiment is applicable to a case of performing volume photographing on a target object to be volume photographed in a target photographing scene, and is particularly applicable to a case where a mobile terminal having a photographing device and a photographing pose acquisition device performs volume photographing on the target object to be volume photographed in the target photographing scene. As shown in fig. 9, the apparatus specifically includes: the system comprises a target shooting scene acquisition module 910, a target volume center position selection module 920, a target volume point location determination module 930, an original volume image acquisition module 940 and a standard volume image determination module 950.
The target shooting scene acquisition module 910 is configured to acquire a target shooting scene acquired by the shooting device, where the target shooting scene includes a target object to be shot in volume; the target volume center position selection module 920 is configured to determine a target volume center position selected by a user in response to a center position selection operation triggered by the user for the target object, and display a first identification object at the target volume center position; a target volume point location determining module 930, configured to determine a plurality of target volume points around the target object based on the target volume center position, and display a second identification object at each target volume point; the original volume image acquisition module 940 is configured to trigger acquisition of an original volume image corresponding to a second identification object when an occlusion relationship between the second identification object and the first identification object is detected during adjustment of a shooting angle of the shooting device by a user; the standard volume image determining module 950 is configured to perform pose normalization processing on the acquired original volume image corresponding to the second identification object, and determine a standard volume image captured at the target volume point.
According to the technical solution of this embodiment, a target shooting scene is acquired with a shooting device, the target shooting scene containing a target object to be volume-shot; a first identification object is displayed at the target volume center position selected by the user for the target object; a plurality of target volume point locations around the target object are determined based on the target volume center position; and a second identification object is displayed at each target volume point location. While the user adjusts the shooting angle of view of the shooting device, acquisition of the original volume image corresponding to a second identification object is triggered whenever an occlusion relationship arises between that second identification object and the first identification object, and pose normalization is performed on the acquired original volume image to obtain a standard volume image shot at the target volume point location. This shooting scheme can therefore perform volume shooting with far fewer shooting devices, for example with only one shooting device, which ensures the volume shooting effect while reducing the cost of volume shooting.
Optionally, the target volume point location determination module 930 may include:
The target sphere display position determining submodule is used for determining a target sphere display position by taking the target volume center position as a sphere center and displaying a volume point sphere at the target sphere display position, wherein a plurality of volume points are positioned on the volume point sphere, and each volume point displays a second identification object;
the sphere display size adjustment sub-module is used for synchronously adjusting the sphere display size of the volume point sphere in response to the size adjustment operation triggered by the user on the volume point sphere;
And the first target volume point position determining sub-module is used for responding to the adjustment ending operation triggered by the user and determining each target volume point position around the target object based on the currently displayed target volume point position spherical surface.
Optionally, the target volume point location determination module 930 may include:
The bounding box display position determining submodule is used for determining a target bounding box display position by taking the target volume center position as a bounding box center and displaying the bounding box at the target bounding box display position;
The bounding box display size adjusting sub-module is used for synchronously adjusting the bounding box display size in response to the size adjusting operation triggered by the user on the bounding box so as to enable the bounding box display size to be matched with the target object size;
and the second target volume point position determining sub-module is used for responding to the adjustment ending operation triggered by the user, determining a plurality of target volume points around the target object based on the currently displayed target bounding box, and displaying a second identification object at each target volume point position.
Optionally, the bounding box exhibition position determination submodule is specifically configured to: determining a target plane where the bottom surface of the target object is located, taking the central position of the target volume as the center of the bounding box, taking the target plane as the plane where the bottom surface of the bounding box is located, and determining the display position of the target bounding box;
The bounding box display size adjustment submodule is specifically used for: the bounding box display sizes are adjusted synchronously along the target plane.
Optionally, the second target volume point location determination submodule is specifically configured to: determining a target sphere display position and a target sphere display size based on the currently displayed target bounding box; and displaying a target volume point location sphere of a target sphere display size at the target sphere display position, wherein the target volume point location sphere is composed of a plurality of target volume points, and a second identification object is displayed at each target volume point location.
Optionally, the raw volumetric image acquisition module 940 is specifically configured to: in the process of adjusting the shooting visual angle of the shooting device by a user, when detecting that the overlapping area between one second identification object and the first identification object is larger than or equal to the preset area and the position of a target bounding box in the current shooting visual angle meets the preset shooting condition, triggering to acquire an original volume image corresponding to the second identification object.
Optionally, the apparatus further comprises:
And the second identification object display updating module is used for displaying and updating the second identification object based on a target display mode if the acquired original volume image corresponding to the second identification object meets the preset image quality condition after triggering and acquiring the original volume image corresponding to the second identification object, wherein the target display mode is a display mode used for representing completion of point location shooting.
Optionally, the standard volumetric image determination module 950 is specifically configured to: aiming at the original volume image corresponding to each second identification object, determining a pose conversion relation between the actual shooting pose and the standard shooting pose according to the actual shooting pose corresponding to the original volume image and the standard shooting pose corresponding to the target volume point where the second identification object is located; and performing pose conversion on the original volume image based on the pose conversion relation to obtain a standard volume image shot at the target volume point.
Optionally, the shooting device is a monocular camera; the device is integrated in the client, and the client is integrated in the mobile terminal, and a monocular camera and a shooting pose acquisition device are installed on the mobile terminal.
The volume shooting device provided by the embodiment of the invention can execute the volume shooting method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the volume shooting method.
It should be noted that, in the embodiment of the volume shooting device, each unit and module included are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. Fig. 10 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 10 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 10, the electronic device 12 is in the form of a general purpose computing device. Components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that connects the various system components (including the system memory 28 and the processing unit 16).
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 10, commonly referred to as a "hard disk drive"). Although not shown in fig. 10, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. The system memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the electronic device 12, and/or any devices (e.g., network card, modem, etc.) that enable the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 over the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the steps of the volume shooting method provided in this embodiment, the method comprising:
acquiring a target shooting scene acquired by a shooting device, wherein the target shooting scene comprises a target object to be shot in volume;
Responding to a central position selection operation triggered by a user aiming at a target object, determining a target volume central position selected by the user, and displaying a first identification object at the target volume central position;
Determining a plurality of target volume points around the target object based on the target volume center position, and displaying a second identification object at each target volume point;
Triggering and collecting an original volume image corresponding to a second identification object when detecting that the second identification object and the first identification object have a shielding relation in the process of adjusting the shooting visual angle of the shooting device by a user;
and carrying out pose standardization processing on the acquired original volume image corresponding to the second identification object, and determining a standard volume image shot at the target volume point position.
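Read together, the five steps amount to a simple capture loop. The sketch below ties them to the earlier illustrative helpers (bounding_box_to_sphere, fibonacci_sphere_points, passes_quality_check, standardize_view); the session object and every one of its methods are hypothetical stand-ins for the client, the monocular camera, and the shooting pose acquisition device, not an API from the original disclosure.

```python
def capture_volume(session, n_points=60):
    """Illustrative capture loop; `session` and all of its methods are hypothetical."""
    center = session.wait_for_center_selection()              # first identification object
    box_min, box_max = session.wait_for_bounding_box(center)  # user-adjusted bounding box
    sphere_c, sphere_r = bounding_box_to_sphere(box_min, box_max)
    points = fibonacci_sphere_points(sphere_c, sphere_r, n_points)
    session.show_point_markers(points)                        # second identification objects
    standard_images = {}
    while len(standard_images) < len(points):
        frame, K, R_w2c = session.current_frame()             # user moves the phone freely
        for i in range(len(points)):
            if i in standard_images:
                continue
            if session.marker_occludes_center(i) and passes_quality_check(frame):
                R_std = session.standard_rotation_for_point(i)
                standard_images[i] = standardize_view(frame, K, R_w2c, R_std)
                session.mark_point_done(i)                    # switch the marker to its 'done' display mode
    return standard_images
```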
Of course, those skilled in the art will understand that the processor may also implement the technical solution of the volume shooting method provided in any embodiment of the present invention.
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the volume shooting method provided by any embodiment of the present invention, the method comprising:
acquiring a target shooting scene acquired by a shooting device, wherein the target shooting scene comprises a target object to be shot in volume;
Responding to a central position selection operation triggered by a user aiming at a target object, determining a target volume central position selected by the user, and displaying a first identification object at the target volume central position;
Determining a plurality of target volume points around the target object based on the target volume center position, and displaying a second identification object at each target volume point;
Triggering and collecting an original volume image corresponding to a second identification object when detecting that the second identification object and the first identification object have a shielding relation in the process of adjusting the shooting visual angle of the shooting device by a user;
and carrying out pose standardization processing on the acquired original volume image corresponding to the second identification object, and determining a standard volume image shot at the target volume point position.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium may be, for example, but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, or combinations thereof, including object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It will be appreciated by those of ordinary skill in the art that the modules or steps of the invention described above may be implemented by a general purpose computing device; they may be centralized on a single computing device or distributed over a network of computing devices. Alternatively, they may be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, or they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (12)

1. A method of capturing a volume, comprising:
acquiring a target shooting scene acquired by a shooting device, wherein the target shooting scene comprises a target object to be shot in volume;
Responding to a central position selection operation triggered by a user aiming at the target object, determining a target volume central position selected by the user, and displaying a first identification object at the target volume central position;
Determining a plurality of target volume points around the target object based on the target volume center position, and displaying a second identification object at each target volume point;
Triggering and collecting an original volume image corresponding to a second identification object when detecting that a shielding relation exists between the second identification object and the first identification object in the process of adjusting the shooting visual angle of the shooting device by a user;
and carrying out pose standardization processing on the acquired original volume image corresponding to the second identification object, and determining a standard volume image shot at the target volume point position.
2. The method of claim 1, wherein the determining a plurality of target volume points around the target object based on the target volume center position and displaying a second identification object at each target volume point comprises:
Determining a target spherical surface display position by taking the target volume center position as a spherical surface center, and displaying a volume point spherical surface at the target spherical surface display position, wherein a plurality of volume points are positioned on the volume point spherical surface, and a second identification object is displayed at each volume point;
Responding to the size adjustment operation triggered by a user on the volume point spherical surface, and synchronously adjusting the spherical surface display size of the volume point spherical surface;
And responding to the adjustment ending operation triggered by the user, and determining each target volume point around the target object based on the currently displayed target volume point spherical surface.
3. The method of claim 1, wherein the determining a plurality of target volume points around the target object based on the target volume center position and displaying a second identification object at each target volume point comprises:
Taking the central position of the target volume as the center of the bounding box, determining the display position of the target bounding box, and displaying the bounding box at the display position of the target bounding box;
In response to a size adjustment operation triggered by a user on the bounding box, synchronously adjusting the bounding box display size so as to enable the bounding box display size to be matched with the target object size;
And responding to the adjustment ending operation triggered by the user, determining a plurality of target volume points around the target object based on the currently displayed target bounding box, and displaying a second identification object at each target volume point.
4. The method of claim 3, wherein the determining a target bounding box display position by taking the target volume center position as the bounding box center comprises:
Determining a target plane where the bottom surface of the target object is located, taking the central position of the target volume as the center of the bounding box, and determining the display position of the target bounding box by taking the target plane as the plane where the bottom surface of the bounding box is located;
and the synchronously adjusting the bounding box display size comprises:
And synchronously adjusting the display size of the bounding box along the target plane.
5. The method of claim 3, wherein the determining a plurality of target volume points around the target object based on the currently displayed target bounding box and displaying a second identification object at each target volume point comprises:
Determining a target sphere display position and a target sphere display size based on the currently displayed target bounding box;
And displaying the target volume point location sphere of the target sphere display size at the target sphere display position, wherein the target volume point location sphere consists of a plurality of target volume points, and each target volume point location displays a second identification object.
6. The method according to claim 3, wherein triggering acquisition of the original volume image corresponding to a second identification object when an occlusion relationship between the second identification object and the first identification object is detected during the process of adjusting the shooting angle of view of the shooting device by the user comprises:
And triggering and collecting an original volume image corresponding to a second identification object when the overlapping area between the second identification object and the first identification object is detected to be larger than or equal to a preset area and the position of the target bounding box in the current shooting view angle meets the preset shooting condition in the process of adjusting the shooting view angle of the shooting device by a user.
7. The method of claim 1, further comprising, after triggering acquisition of the original volume image corresponding to the second identification object:
And if the acquired original volume image corresponding to the second identification object meets the preset image quality condition, displaying and updating the second identification object based on a target display mode, wherein the target display mode is a display mode for representing completion of point shooting.
8. The method according to claim 1, wherein the performing pose standardization processing on the acquired original volume image corresponding to the second identification object to determine a standard volume image shot at the target volume point comprises:
for the original volume image corresponding to each second identification object, determining a pose conversion relation between the actual shooting pose and the standard shooting pose according to the actual shooting pose corresponding to the original volume image and the standard shooting pose corresponding to a target volume point where the second identification object is located;
And performing pose conversion on the original volume image based on the pose conversion relation to obtain a standard volume image shot at the target volume point.
9. The method of any one of claims 1-8, wherein the shooting device is a monocular camera; the method is applied to a client, the client is integrated in a mobile terminal, and the monocular camera and a shooting pose acquisition device are installed on the mobile terminal.
10. A volume shooting apparatus, comprising:
The target shooting scene acquisition module is used for acquiring a target shooting scene acquired by the shooting device, wherein the target shooting scene comprises a target object to be shot in volume;
the target volume center position selection module is used for responding to center position selection operation triggered by a user aiming at the target object, determining the target volume center position selected by the user and displaying a first identification object at the target volume center position;
the target volume point position determining module is used for determining a plurality of target volume point positions around the target object based on the target volume center position, and displaying a second identification object at each target volume point position;
The original volume image acquisition module is used for triggering and acquiring an original volume image corresponding to a second identification object when detecting that the second identification object and the first identification object have a shielding relation in the process of adjusting the shooting visual angle of the shooting device by a user;
and the standard volume image determining module is used for carrying out pose standardization processing on the acquired original volume image corresponding to the second identification object and determining the standard volume image shot at the target volume point position.
11. An electronic device, the electronic device comprising:
One or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the volume shooting method of any one of claims 1-9.
12. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the volume shooting method as claimed in any one of claims 1-9.
CN202211551806.0A 2022-12-05 2022-12-05 Volume shooting method, device, equipment and storage medium Pending CN118158378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211551806.0A CN118158378A (en) 2022-12-05 2022-12-05 Volume shooting method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211551806.0A CN118158378A (en) 2022-12-05 2022-12-05 Volume shooting method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118158378A (en) 2024-06-07

Family

ID=91289175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211551806.0A Pending CN118158378A (en) 2022-12-05 2022-12-05 Volume shooting method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118158378A (en)

Similar Documents

Publication Publication Date Title
US10212337B2 (en) Camera augmented reality based activity history tracking
WO2019242262A1 (en) Augmented reality-based remote guidance method and device, terminal, and storage medium
US11748906B2 (en) Gaze point calculation method, apparatus and device
US9437045B2 (en) Real-time mobile capture and application of photographic images as textures in three-dimensional models
WO2019238114A1 (en) Three-dimensional dynamic model reconstruction method, apparatus and device, and storage medium
CN108347657B (en) Method and device for displaying bullet screen information
CN107646109B (en) Managing feature data for environment mapping on an electronic device
US11044398B2 (en) Panoramic light field capture, processing, and display
WO2021097600A1 (en) Inter-air interaction method and apparatus, and device
US11275248B2 (en) Head mounted display apparatus, virtual reality display system and driving method thereof
US11812154B2 (en) Method, apparatus and system for video processing
US11922568B2 (en) Finite aperture omni-directional stereo light transport
CN113272871A (en) Camera calibration method and system
CN115631291B (en) Real-time relighting method and apparatus, device, and medium for augmented reality
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
TWM482797U (en) Augmented-reality system capable of displaying three-dimensional image
CN113559501B (en) Virtual unit selection method and device in game, storage medium and electronic equipment
CN112073640B (en) Panoramic information acquisition pose acquisition method, device and system
CN111176425A (en) Multi-screen operation method and electronic system using same
WO2024055531A1 (en) Illuminometer value identification method, electronic device, and storage medium
KR102176805B1 (en) System and method for providing virtual reality contents indicated view direction
CN118158378A (en) Volume shooting method, device, equipment and storage medium
CN112328150B (en) Automatic screenshot method, device and equipment, and storage medium
CN108171802B (en) Panoramic augmented reality implementation method realized by combining cloud and terminal
CN115004683A (en) Imaging apparatus, imaging method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination