WO2022021965A1 - Method and apparatus for adjusting a virtual object, electronic device, computer storage medium, and program - Google Patents

Method and apparatus for adjusting a virtual object, electronic device, computer storage medium, and program Download PDF

Info

Publication number
WO2022021965A1
WO2022021965A1 (Application PCT/CN2021/089437)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
acquisition unit
image acquisition
target virtual
screen
Prior art date
Application number
PCT/CN2021/089437
Other languages
English (en)
French (fr)
Inventor
Hou Xinru (侯欣如)
Original Assignee
Beijing SenseTime Technology Development Co., Ltd. (北京市商汤科技开发有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SenseTime Technology Development Co., Ltd.
Priority to JP2021570926A (published as JP2022545598A)
Publication of WO2022021965A1 publication Critical patent/WO2022021965A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the present disclosure relates to the technical field of computer vision, and in particular, to a method, apparatus, electronic device, computer storage medium, and program for adjusting a virtual object.
  • AR (Augmented Reality)
  • an augmented reality picture can usually be generated by combining the real scene image captured by the terminal device and the virtual object, and the displayed pose of the virtual object in the augmented reality picture can be edited in advance on the editing end.
  • the pre-edited display pose may deviate from the real scene environment, resulting in the display pose of the virtual object in the augmented reality picture not meeting the requirements, so the display pose needs to be further adjusted.
  • with this adjustment method, the display pose of the virtual object still needs to be manually adjusted at the editing end, and the adjustment process is inefficient.
  • the present disclosure proposes an adjustment scheme for virtual objects.
  • a method for adjusting a virtual object including:
  • after the selection operation on the target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, the displayed pose of the target virtual object on the screen is kept unchanged during the movement of the image acquisition unit, and at least part of the augmented reality picture displayed on the screen is updated;
  • the augmented reality picture after the movement of the image acquisition unit is displayed on the screen of the terminal device.
  • the adjustment process of the display pose of the target virtual object can be completed by moving the image acquisition unit after the user selects the target virtual object, and there is no need to manually adjust the parameters of the display pose at the background editing end, thereby improving the efficiency of the display pose adjustment operation.
  • the current augmented reality picture can be presented in real time, so that the display pose of the target virtual object in the augmented reality picture can be adjusted intuitively, making the adjusted display pose more in line with the user's personalized needs.
  • the selection operation includes a touch operation on the target virtual object on the screen.
  • the adjustment method further includes: acquiring relative pose data of the image acquisition unit and the target virtual object in a world coordinate system;
  • during the movement of the image acquisition unit, the displayed pose of the target virtual object on the screen can be kept unchanged, thereby enabling the pose data of the target virtual object in the world coordinate system to be automatically adjusted based on the pose change of the image acquisition unit in the world coordinate system.
  • the acquiring relative pose data of the image acquiring unit and the target virtual object in the world coordinate system includes:
  • the relative pose data is determined based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.
  • the acquiring the current pose data of the image acquiring unit in the world coordinate system includes:
  • the current pose data of the image acquisition unit in the world coordinate system is determined.
  • the current pose data of the image acquisition unit in the world coordinate system can be quickly obtained through the real scene image captured by the image acquisition unit.
  • the updating at least part of the augmented reality picture displayed on the screen includes:
  • At least part of the augmented reality picture displayed on the screen is updated based on the real scene image collected during the movement of the image acquisition unit.
  • the augmented reality picture displayed on the screen is updated using the real scene image collected during the movement of the image acquisition unit, so that the relative pose between the target virtual object and other physical objects in the current augmented reality picture can be displayed intuitively, so as to better adjust the display pose of the target virtual object in the current augmented reality picture.
  • the updating at least part of the augmented reality picture displayed on the screen includes:
  • At least part of the augmented reality picture displayed on the screen is updated based on the first displayed pose data corresponding to the other virtual objects and the real scene image collected during the movement of the image acquisition unit.
  • the first displayed pose data corresponding to the other virtual objects can be determined from the current pose data of the image acquisition unit and combined with the real scene images collected during the movement of the image acquisition unit to simultaneously update the other virtual objects and the real scene image displayed on the screen, so that the relative poses of the target virtual object with respect to other physical objects and other virtual objects in the current augmented reality picture can be displayed intuitively, so as to better adjust the display pose of the target virtual object in the current augmented reality picture.
  • the adjustment method further includes:
  • the adjusted second pose data of the target virtual object in the world coordinate system is saved.
  • the adjustment method further includes:
  • an augmented reality picture including the target virtual object is displayed on the screen of the terminal device.
  • the adjusted second pose data of the target virtual object can be saved, so that when the augmented reality picture is presented again, the current pose data of the image acquisition unit and the saved second pose data can be obtained, and the target virtual object can be directly presented in the current augmented reality picture according to the adjusted second pose data without repeated adjustment, improving user experience.
  • an apparatus for adjusting a virtual object including:
  • a first display part configured to display an augmented reality picture including virtual objects on the screen of the terminal device
  • the adjustment part is configured to, after the selection operation on the target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, keep the displayed pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit, and update at least part of the augmented reality picture displayed on the screen;
  • the second display part is configured to display, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit based on the updated at least part of the augmented reality picture and the displayed pose of the target virtual object on the screen.
  • the selection operation includes a touch operation on the target virtual object on the screen.
  • the adjustment device further includes an acquisition part configured to acquire relative pose data of the image acquisition unit and the target virtual object in the world coordinate system;
  • the adjustment part is specifically configured to: if it is detected that the pose of the image acquisition unit moves, keep the relative pose data between the image acquisition unit and the target virtual object unchanged during the movement of the image acquisition unit, so that the displayed pose of the target virtual object on the screen remains unchanged.
  • the obtaining section is configured to:
  • the relative pose data is determined based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.
  • the obtaining section is configured to:
  • the current pose data of the image acquisition unit in the world coordinate system is determined.
  • the adjustment portion is configured to:
  • At least part of the augmented reality picture displayed on the screen is updated based on the real scene image collected during the movement of the image acquisition unit.
  • the adjustment portion is configured to:
  • At least part of the augmented reality picture displayed on the screen is updated based on the first displayed pose data corresponding to the other virtual objects and the real scene image collected during the movement of the image acquisition unit.
  • the adjusting device further includes a saving part configured to:
  • the adjusted second pose data of the target virtual object in the world coordinate system is saved.
  • the adjustment device further includes a third display part, and the third display part is configured to:
  • an augmented reality picture including the target virtual object is displayed on the screen of the terminal device.
  • an electronic device comprising:
  • a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device operates, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the above-described method for adjusting the virtual object is performed.
  • a computer-readable storage medium is provided, and a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to execute the above-described method for adjusting a virtual object.
  • a computer program including computer-readable code, wherein, when the computer-readable code is executed in an electronic device, a processor in the electronic device performs the above-described method for adjusting the virtual object.
  • a computer program product comprising a computer-readable storage medium storing program code, wherein, when the program code is executed in an electronic device, a processor in the electronic device executes the above-described method for adjusting the virtual object.
  • FIG. 1 shows a schematic flowchart 1 of a method for adjusting a virtual object provided by an embodiment of the present disclosure
  • Fig. 2a shows a schematic diagram of an augmented reality screen including a target virtual object provided by an embodiment of the present disclosure
  • Fig. 2b shows a schematic diagram of an augmented reality screen provided by an embodiment of the present disclosure after a selection operation for a target virtual object is detected;
  • Fig. 2c shows a schematic diagram of an augmented reality screen adjusted for a target virtual object provided by an embodiment of the present disclosure
  • FIG. 2d shows a second schematic flowchart of a method for adjusting a virtual object provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic flowchart of a method for determining relative pose data provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic flow chart 1 of a method for determining current pose data of an image acquisition unit provided by an embodiment of the present disclosure
  • FIG. 5 shows a second schematic flowchart of a method for determining current pose data of an image acquisition unit provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic flowchart of a method for updating an augmented reality screen provided by an embodiment of the present disclosure
  • FIG. 7 shows a schematic flowchart of a method for displaying an augmented reality screen provided by an embodiment of the present disclosure
  • FIG. 8 shows a schematic structural diagram of an apparatus for adjusting a virtual object provided by an embodiment of the present disclosure
  • FIG. 9 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • "at least one" herein refers to any one of a plurality or any combination of at least two of a plurality; for example, including at least one of a, b, and c may mean including any one or more elements selected from the set consisting of a, b, and c.
  • AR technology is a technology that superimposes virtual objects on the real world and allows interaction between them. It can be applied to AR devices, through which augmented reality pictures containing virtual objects can be viewed.
  • for example, the real scene is an exhibition hall that contains physical objects such as walls, tables, and windowsills, and the virtual object is a virtual vase.
  • the physical objects in the real scene may change at any time.
  • the physical table is moved.
  • the display pose of the virtual vase presented using the pre-edited relative pose data of the table and the virtual vase may no longer place the virtual vase on the physical table.
  • if the augmented reality effect of the virtual vase on the physical table is still desired, the adjustment parameter values still need to be manually re-entered on the editing side to adjust the displayed pose data of the virtual object based on the changed pose data of the physical object, and this adjustment process is less efficient.
  • the present disclosure provides an adjustment scheme for a virtual object.
  • the adjustment process for the displayed pose of the target virtual object can be completed by moving the image acquisition unit after the user selects the target virtual object, without the need to manually adjust the parameters of the display pose at the background editing end, thereby improving the operation efficiency of the display pose adjustment.
  • the current augmented reality picture can be displayed in real time, so that the display pose of the target virtual object in the augmented reality picture can be adjusted intuitively, making the adjusted display pose more in line with the user's personalized needs.
  • the execution subject of the method for adjusting a virtual object provided by the embodiments of the present disclosure may be a terminal device, and the terminal device may be an AR device with an AR function, for example, AR glasses, tablet computers, smart phones, smart wearable devices, and other devices with display functions and data processing capabilities, which are not limited in the embodiments of the present disclosure.
  • the method for adjusting the virtual object may be implemented by the processor calling computer-readable instructions stored in the memory.
  • the adjusting method includes steps S101-S103:
  • the terminal device is an AR device with AR function, which may include smart phones, tablet computers, AR glasses, etc.
  • the terminal device may have a built-in image acquisition unit or an external image acquisition unit. After the image acquisition unit captures the real scene image, the current pose data of the image acquisition unit can be determined based on the real scene image, and an augmented reality picture including the target virtual object is displayed on the screen of the terminal device according to the current pose data.
  • the augmented reality picture may include multiple virtual objects. The virtual objects refer to virtual information generated by computer simulation and may be virtual three-dimensional objects, such as the virtual vase mentioned above, or virtual planar objects, for example, virtual pointing arrows, virtual characters, and virtual pictures.
  • the target virtual object is a virtual object, selected from the multiple virtual objects, whose pose data is to be adjusted; specifically, the pose data of the target virtual object in the world coordinate system corresponding to the real scene is adjusted, which will be described in detail later.
  • the augmented reality picture displayed on the screen of the mobile phone may contain multiple virtual objects, and after a selection operation on one of the virtual objects is detected, that virtual object may be used as the target virtual object whose pose data is to be adjusted.
  • the selection operation may include a touch operation on the target virtual object on the screen.
  • the touch operation may include a long-press operation, a double-click operation, or a single-click operation.
  • when the touch operation is a long-press operation, the end of the long-press operation indicates that the selection operation on the target virtual object has ended; when the touch operation is a double-click operation, detecting the next double-click on the target virtual object indicates that the selection operation is completed; when the touch operation is a single-click operation, the selection operation is determined to be completed when a click on the target virtual object is detected, or the selection operation is determined to be completed after a set duration has elapsed.
  • the embodiments of the present disclosure take a long-press operation as an example of the touch operation.
  • the long-press operation on the target virtual object displayed on the screen of the terminal device may refer to the long-press operation on the display area where the target virtual object is located on the screen.
  • the display area where the target virtual object is located is pressed on the screen for a preset duration, thereby triggering an adjustment process for the target virtual object.
  • the movement of the pose of the image acquisition unit includes at least one of a change in the position of the image acquisition unit in the world coordinate system and a change in the posture.
  • during the movement of the image acquisition unit, the display pose of the target virtual object on the screen is always kept unchanged, that is, the relative pose of the target virtual object and the image acquisition unit remains unchanged, so that when the pose of the image acquisition unit moves in the world coordinate system, the pose data of the target virtual object in the world coordinate system moves accordingly.
  • for example, when the selection operation on the target virtual object is detected, the target virtual object is located in the upper left corner of the screen and is displayed at a preset angle to the center of the screen.
  • the target virtual object is always presented at the upper left corner of the screen and at a preset angle to the center of the screen, that is, during the movement of the image acquisition unit, the displayed pose of the target virtual object on the screen remains unchanged.
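  • the invariance described above can be sketched as a fixed camera-to-object transform: when the object is selected, the transform from the image acquisition unit to the target virtual object is frozen, and each new camera pose is composed with it to obtain the object's new world pose. The following Python sketch uses hypothetical, translation-only poses for brevity and is an illustration of the idea, not the patent's implementation.

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_inv_rigid(t):
    """Invert a rigid 4x4 transform (rotation + translation)."""
    r = [[t[j][i] for j in range(3)] for i in range(3)]      # R^T
    p = [t[i][3] for i in range(3)]
    q = [-sum(r[i][k] * p[k] for k in range(3)) for i in range(3)]
    out = [[r[i][j] for j in range(3)] + [q[i]] for i in range(3)]
    out.append([0.0, 0.0, 0.0, 1.0])
    return out

def translation(x, y, z):
    """Hypothetical pose: pure translation, identity rotation."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# Initial camera and object poses in the world coordinate system.
T_world_cam = translation(0.0, 0.0, 0.0)
T_world_obj = translation(1.0, 0.0, 2.0)

# When the object is selected, freeze the camera-to-object transform.
T_cam_obj = mat_mul(mat_inv_rigid(T_world_cam), T_world_obj)

# The camera moves; reapplying the frozen relative transform to the new
# camera pose keeps the object's on-screen pose unchanged.
T_world_cam_new = translation(0.5, 0.0, -0.3)
T_world_obj_new = mat_mul(T_world_cam_new, T_cam_obj)

print([row[3] for row in T_world_obj_new[:3]])  # → [1.5, 0.0, 1.7]
```

  • the object's world position moves by exactly the camera's offset, which is the behavior the embodiment describes.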
  • the current pose data of the image acquisition unit in the world coordinate system will change with the movement of the image acquisition unit, and the real scene image acquired by the image acquisition unit will also change accordingly.
  • if the augmented reality picture contains virtual objects other than the target virtual object, the displayed poses of those other virtual objects on the screen will also change.
  • the augmented reality images displayed are updated in real time.
  • an augmented reality image may be generated based on at least part of the updated augmented reality image and the displayed pose of the target virtual object on the screen.
  • the real scene image captured by the image acquisition unit before updating includes the ground.
  • the augmented reality picture before the update can include the ground and the virtual vase on the ground.
  • the physical table will appear in the updated augmented reality picture.
  • since the display pose of the virtual vase on the screen remains unchanged, after the image acquisition unit moves, the augmented reality picture displayed on the screen can include the physical table and the virtual vase. Displaying the virtual vase on the physical table achieves the purpose of adjusting the display pose of the virtual vase in the world coordinate system, such as adjusting a display pose initially located on the ground to a display pose on the physical table.
  • the adjustment process of the display pose of the target virtual object can be completed by moving the image acquisition unit after the user selects the target virtual object, and there is no need to manually adjust the parameters of the display pose at the background editing end, thereby improving the efficiency of the display pose adjustment.
  • the current augmented reality picture can be displayed in real time, so that the display pose of the target virtual object in the augmented reality picture can be adjusted intuitively, so as to make the adjusted display pose more in line with User's personalized needs.
  • the indoor room contains physical objects such as sofas and chairs.
  • the augmented reality image displayed on the screen of the terminal device is shown in Figure 2a, and the augmented reality image contains virtual objects.
  • "Tang Sancai horse" 21 and "decorative lamp" 22, as well as the physical sofa and chair; the "Tang Sancai horse" 21 is located above the physical chair in the augmented reality picture, closer to the chair and farther from the "decorative lamp" 22.
  • after the long-press operation on the target virtual object "Tang Sancai horse" 21 is detected, and movement of the pose of the image acquisition unit of the terminal device is detected, the displayed pose of the "Tang Sancai horse" 21 on the screen remains unchanged throughout the movement of the image acquisition unit. To prompt the user that the target virtual object whose pose data is to be adjusted has been selected, the display effects of the selected target virtual object and unselected virtual objects can be distinguished, for example by specially processing the contour of the target virtual object. As shown in FIG. 2b, a white line 23 is added to the contour of the selected target virtual object "Tang Sancai horse" 21.
  • after adjustment of the pose data of the target virtual object "Tang Sancai horse" 21 begins, as the image acquisition unit moves, for example shifts to the upper left, the pose data of the "Tang Sancai horse" 21 in the world coordinate system corresponding to the real scene is adjusted in real time, so that the "Tang Sancai horse" 21 also shifts to the upper left in the real scene, and at least part of the augmented reality picture is updated, for example the displayed pose of the physical sofa in the augmented reality picture, so as to obtain the augmented reality picture shown in FIG. 2c. As can be seen from FIG. 2c, the target virtual object "Tang Sancai horse" 21 also moves to the upper left, that is, closer to the "decorative lamp" 22.
  • FIG. 2d shows a second schematic flowchart of the method for adjusting a virtual object provided by an embodiment of the present disclosure.
  • the method for keeping the displayed pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit may include:
  • the relative pose data of the image acquisition unit and the target virtual object in the world coordinate system may include relative position data and relative attitude data of the image acquisition unit and the target virtual object in the world coordinate system.
  • the world coordinate system can be constructed in advance in the real scene where the terminal device is located.
  • the real scene is an exhibition hall
  • a preset position point of the exhibition hall can be used as the origin, and three mutually perpendicular directions can be selected as the X-axis, Y-axis, and Z-axis of the world coordinate system, so that the world coordinate system used to represent the relative pose data between the image acquisition unit and the target virtual object is obtained.
  • after the relative pose data of the image acquisition unit and the target virtual object in the world coordinate system is acquired, when it is detected that the pose of the image acquisition unit in the world coordinate system moves, the pose data of the target virtual object in the world coordinate system can be moved simultaneously during the movement so as to keep the relative pose data of the image acquisition unit and the target virtual object unchanged; with the relative position data and relative attitude data of the target virtual object and the image acquisition unit unchanged, the displayed pose of the target virtual object on the screen can be kept unchanged.
  • the moving process of the image acquisition unit can ensure that the displayed pose of the target virtual object on the screen does not change. Then, the pose data of the target virtual object in the world coordinate system can be automatically adjusted based on the pose change of the image acquisition unit in the world coordinate system.
  • FIG. 3 shows a schematic flowchart of the method for determining relative pose data provided by an embodiment of the present disclosure, as shown in FIG. 3 .
  • the method for acquiring the relative pose data of the image acquisition unit and the target virtual object in the world coordinate system may include the following S301-S302:
  • S302 Determine relative pose data based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.
  • the current pose data of the image acquisition unit in the world coordinate system may be acquired through the real scene image captured in real time by the image acquisition unit; the current pose data of the image acquisition unit may also be acquired in other ways.
  • the current pose data of the image acquisition unit can be determined by combining the initial pose data of the image acquisition unit in the pre-established world coordinate system and the motion data collected by the inertial measurement unit in real time.
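  • the combination of an initial pose with inertial-measurement data can be roughly illustrated by dead reckoning. The sketch below integrates a simplified planar pose from hypothetical accelerometer and gyroscope samples; a real AR system would integrate full 3D rotation, subtract gravity, and typically fuse the result with visual tracking.

```python
import math

def integrate_imu(pose, imu_samples, dt):
    """Dead-reckon a planar pose (x, y, heading, speed) from IMU samples.

    Each sample is (forward_accel, yaw_rate). This is a toy model: it
    integrates acceleration into speed and yaw rate into heading.
    """
    x, y, heading, speed = pose
    for accel, yaw_rate in imu_samples:
        speed += accel * dt
        heading += yaw_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return (x, y, heading, speed)

# Start at the origin facing +X, at rest; constant forward acceleration.
start = (0.0, 0.0, 0.0, 0.0)
samples = [(1.0, 0.0)] * 10          # 10 samples of 1 m/s^2, no turning
x, y, heading, speed = integrate_imu(start, samples, dt=0.1)
print(round(speed, 3), round(x, 3))
```

  • after one second of integration the estimated speed is about 1 m/s and the position has advanced along the X-axis, which is the kind of incremental pose update an inertial measurement unit provides between image frames.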
  • the inertial measurement unit may include a gyroscope, an accelerometer, and the like.
  • the first pose data of the target virtual object in the world coordinate system before the adjustment can be determined based on the initial pose data of the target virtual object in the three-dimensional scene model representing the real scene.
  • the first pose data in the world coordinate system before the target virtual object is adjusted may be after the last adjustment on the pose data of the target virtual object in the world coordinate system Saved pose data.
  • the 3D scene model representing the real scene can be constructed based on a large number of pre-collected real scene images. After the initial pose data of the virtual object in the 3D scene model is determined in advance based on the 3D scene model, the 3D scene The scene model and the real scene are aligned, and the first pose data of the virtual object in the world coordinate system corresponding to the real scene can be obtained.
  • the relative position data of the image acquisition unit and the target virtual object may be determined based on the current position coordinates of the image acquisition unit in the world coordinate system and the first position coordinates of the target virtual object in the world coordinate system, and the relative attitude data of the image acquisition unit and the target virtual object may be determined based on the current attitude data of the image acquisition unit in the world coordinate system and the first attitude data of the target virtual object in the world coordinate system; the relative position data and the relative attitude data together constitute the relative pose data of the image acquisition unit and the target virtual object.
  • the current position coordinates of the image acquisition unit in the world coordinate system can be represented by the current position coordinates of the center point of the image acquisition unit in the world coordinate system, and the first position coordinates of the target virtual object in the world coordinate system can be represented by the first position coordinates of the center point of the target virtual object in the world coordinate system.
  • for example, the first position coordinates of the center point A of the target virtual object are (x_A, y_A, z_A), where x_A, y_A, and z_A represent the coordinate values of the center point A along the X-axis, Y-axis, and Z-axis directions in the world coordinate system, respectively; the current position coordinates of the center point P of the image acquisition unit are (x_P, y_P, z_P), where x_P, y_P, and z_P represent the coordinate values of the center point P along the X-axis, Y-axis, and Z-axis directions in the world coordinate system, respectively.
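  • with center-point coordinates as above, the relative position data reduces to a per-axis coordinate difference. A minimal sketch with hypothetical coordinates:

```python
# Hypothetical center points: A is the target virtual object, P is the
# image acquisition unit, both in the world coordinate system.
A = (2.0, 1.5, 0.8)   # (x_A, y_A, z_A)
P = (0.5, 1.0, 1.6)   # (x_P, y_P, z_P)

# Relative position data: the offset of the object from the camera,
# one component per world coordinate axis.
relative_position = tuple(a - p for a, p in zip(A, P))
print(relative_position)  # → (1.5, 0.5, -0.8)
```

  • keeping this offset constant while the camera moves is what keeps the object's position fixed relative to the camera.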
  • the current attitude data of the image acquisition unit in the world coordinate system can be represented by the current angles between the preset positive direction of the image acquisition unit and the coordinate axes of the world coordinate system. For example, for a camera, the positive direction can be the direction perpendicular to the camera center point and facing away from the camera. Similarly, the first attitude data of the target virtual object in the world coordinate system can be represented by the first angles between the preset positive direction of the target virtual object and the coordinate axes of the world coordinate system.
  • for example, the positive direction of the "Tang Sancai horse" can be the direction perpendicular to the center point of the cross section of the "Tang Sancai horse" and facing away from the "Tang Sancai horse".
  • the image acquisition unit is determined based on the current angle between the positive direction of the image acquisition unit and each coordinate axis of the world coordinate system, and the positive direction of the target virtual object and the first angle between each coordinate axis of the world coordinate system. and the relative pose data of the target virtual object.
  • the relative pose data of the target virtual object and the image acquisition unit unchanged, so that the current pose of the image acquisition unit in the world coordinate system can be adjusted according to the current pose of the image acquisition unit.
  • the data adjusts the display pose of the target virtual object in the world coordinate system.
  • the display size of the target virtual object remains unchanged.
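As a minimal sketch of the computation described above, assuming a pose is represented as a center-point position plus per-axis direction angles (the function and variable names here are illustrative, not from the patent):

```python
def relative_pose(obj_pos, obj_angles, cam_pos, cam_angles):
    """Relative pose of the image acquisition unit w.r.t. the target
    virtual object in the world frame.

    obj_pos, cam_pos: (x, y, z) center-point coordinates A and P.
    obj_angles, cam_angles: angles between each body's preset positive
    direction and the world X/Y/Z axes.
    """
    # Relative position: vector from the object's center A to the camera's center P.
    rel_pos = tuple(p - a for a, p in zip(obj_pos, cam_pos))
    # Relative attitude: per-axis angle differences between the two positive directions.
    rel_att = tuple(c - o for o, c in zip(obj_angles, cam_angles))
    return rel_pos, rel_att
```

Keeping both tuples fixed while the camera moves is what keeps the object's on-screen display pose unchanged.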
In one implementation, for acquiring the current pose data of the image acquisition unit in the world coordinate system, FIG. 4 shows a first schematic flowchart of the method for determining the current pose data of the image acquisition unit provided by the embodiments of the present disclosure. As shown in FIG. 4, the method may include the following S401 to S402:

S401: Acquire a real scene image captured by the image acquisition unit;

S402: Determine, based on the real scene image, the current pose data of the image acquisition unit in the world coordinate system.

After the image acquisition unit enters the real scene, it can capture real scene images corresponding to the real scene in real time. When the current pose data of the image acquisition unit differs, the captured real scene images also differ; therefore, the current pose data of the image acquisition unit can be determined based on the real scene images captured in real time. In this way, the current pose data of the image acquisition unit in the world coordinate system can be quickly obtained from the real scene image captured by the image acquisition unit.
In one implementation, when determining the current pose data of the image acquisition unit in the world coordinate system based on the real scene image, FIG. 5 shows a second schematic flowchart of the method for determining the current pose data of the image acquisition unit provided by the embodiments of the present disclosure. As shown in FIG. 5, the method may include the following S4021 to S4022:

S4021: Detect the real scene image, and determine target object information contained in the real scene image and shooting pose data corresponding to the target object information.

Here, the real scene image can be detected based on a pre-trained neural network to determine the target objects contained in the real scene image. In one implementation, the target object information may include the position information of a photographed physical object in the real scene image; the shooting pose data corresponding to each physical object in the real scene at different positions in the real scene image can be stored in advance.

S4022: Determine the current pose data of the image acquisition unit based on the shooting pose data corresponding to the target object information.

When the target object information obtained by detecting the real scene image contains the position information of one target object, the current pose data of the image acquisition unit can be determined based on the shooting pose data corresponding to the position information of that target object. When it contains the position information of multiple target objects, the shooting pose data corresponding to each of the multiple target objects can jointly determine the current pose data of the image acquisition unit, for example by averaging the shooting pose data corresponding to the multiple target objects to obtain the current pose data of the image acquisition unit.
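The single-object and multi-object cases above can be sketched as follows, assuming each stored shooting pose is a 6-tuple of position plus per-axis angles (the representation and names are illustrative assumptions, not from the patent):

```python
def fuse_shooting_poses(poses):
    """Estimate the camera's current pose from the stored shooting poses
    looked up for each detected target object's position information.

    poses: list of 6-tuples (x, y, z, ax, ay, az).
    With one detected object its stored pose is used directly; with
    several, the stored poses are averaged component-wise.
    """
    if not poses:
        raise ValueError("no target objects detected")
    if len(poses) == 1:
        return poses[0]
    n = len(poses)
    return tuple(sum(p[i] for p in poses) / n for i in range(6))
```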
In another implementation, when determining the current pose data of the image acquisition unit in the world coordinate system based on the real scene image, the method further includes: determining the current pose data of the image acquisition unit based on the real scene image and the motion data collected by the inertial measurement unit of the image acquisition unit. In this way, the current pose data of the image acquisition unit can first be estimated based on the real scene image and then combined with the motion data collected by the inertial measurement unit associated with the image acquisition unit to obtain the current pose data. By jointly determining the current pose data from the real scene image captured by the image acquisition unit and the motion data collected by the inertial measurement unit, the motion data can be used to adjust the pose data estimated from the image, yielding current pose data with higher accuracy.
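One simple way to realize the image/IMU combination described above is a fixed-weight blend of the two pose estimates; this is only an illustrative stand-in for the fusion step, which the patent leaves unspecified:

```python
def fuse_with_imu(vision_pose, imu_pose, alpha=0.8):
    """Blend the pose estimated from the real scene image with the pose
    propagated from IMU motion data (a minimal complementary filter).

    vision_pose, imu_pose: 6-tuples (x, y, z, ax, ay, az).
    alpha: weight given to the vision-based estimate.
    """
    return tuple(alpha * v + (1 - alpha) * m
                 for v, m in zip(vision_pose, imu_pose))
```

In practice a Kalman-style filter would weight the two sources by their uncertainties rather than by a constant.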
For the above S102, in one implementation, updating at least part of the augmented reality picture displayed on the screen may specifically include: updating at least part of the augmented reality picture displayed on the screen based on the real scene images collected during the movement of the image acquisition unit.

It can be understood that as the image acquisition unit moves, its current pose data in the world coordinate system changes continuously, so the real scene image captured by the image acquisition unit also changes accordingly. For example, the real scene image initially captured by the image acquisition unit shows the front of a physical table; after the image acquisition unit moves, the currently captured real scene image contains the side of the physical table. By updating the augmented reality picture based on the real scene images collected by the image acquisition unit, the front of the table displayed before the update can be updated to the side of the table.

In this way, by acquiring the real scene images collected during the movement of the image acquisition unit and updating the augmented reality picture displayed on the screen, the relative pose between the target virtual object and other physical objects in the current augmented reality picture can be displayed intuitively, so that the display pose of the target virtual object in the current augmented reality picture can be better adjusted.
For the above S102, in another implementation, FIG. 6 shows a schematic flowchart of the method for updating the augmented reality picture provided by the embodiments of the present disclosure. As shown in FIG. 6, the method may include the following S1021 to S1022:

S1021: Determine, based on the current pose data of the image acquisition unit, first display pose data of other virtual objects when displayed on the screen of the terminal device;

S1022: Update at least part of the augmented reality picture displayed on the screen based on the first display pose data corresponding to the other virtual objects and the real scene images collected during the movement of the image acquisition unit.

When the augmented reality picture displayed on the screen contains multiple virtual objects, the display poses on the screen of virtual objects other than the target virtual object also change with the movement of the image acquisition unit. Here, the first display pose data of the other virtual objects when displayed on the screen of the terminal device can be determined based on the current pose data of the image acquisition unit in the world coordinate system and the pose data of the other virtual objects in the world coordinate system. Then, by superimposing the first display pose data corresponding to the other virtual objects onto the real scene images collected during the movement of the image acquisition unit, the part of the augmented reality picture that needs to be updated can be determined, which may include the display poses on the screen of the other virtual objects that need updating and the display poses of real objects on the screen.

In this way, when the augmented reality picture contains other virtual objects, the first display pose data corresponding to them can be determined from the current pose data of the image acquisition unit, and, combined with the real scene images collected during the movement of the image acquisition unit, the other virtual objects and the real scene image displayed on the screen can be updated simultaneously, so that the relative poses between the target virtual object and both the physical objects and the other virtual objects in the current augmented reality picture can be displayed intuitively, allowing the display pose of the target virtual object in the current augmented reality picture to be better adjusted.
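The mapping from a virtual object's world pose and the camera's current pose to an on-screen display position can be sketched with a pinhole projection. The intrinsics (fx, fy, cx, cy) and the axis-aligned-camera simplification below are assumptions for illustration, not taken from the patent:

```python
def world_to_screen(point_w, cam_pos, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project a virtual object's world-frame center point onto the screen.

    Simplification: the camera is assumed axis-aligned with the world
    frame, so only its position matters; a pinhole model with intrinsics
    fx, fy, cx, cy then gives the on-screen display position.
    """
    # Express the point in the camera frame (translation only, per the assumption).
    xc = point_w[0] - cam_pos[0]
    yc = point_w[1] - cam_pos[1]
    zc = point_w[2] - cam_pos[2]
    if zc <= 0:
        return None  # behind the camera: not displayed
    u = fx * xc / zc + cx
    v = fy * yc / zc + cy
    return (u, v)
```

A full implementation would first rotate the point by the camera's attitude before projecting.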
In one implementation, the adjustment method provided by the embodiments of the present disclosure further includes: in response to the end of the selection operation on the target virtual object, saving the adjusted second pose data of the target virtual object in the world coordinate system. In one implementation, after it is detected that the long-press operation acting on the target virtual object has stopped, the adjusted second pose data of the target virtual object in the world coordinate system can be saved, or the second pose data can be sent to a server, so that other terminal devices can display the AR scene based on the second pose data corresponding to the target virtual object.
In one implementation, FIG. 7 shows a schematic flowchart of the method for displaying the augmented reality picture provided by the embodiments of the present disclosure. As shown in FIG. 7, the adjustment method provided by the embodiments of the present disclosure further includes the following S701 to S703:

S701: Acquire the current pose data of the image acquisition unit;

S702: Determine second display pose data of the target virtual object on the screen of the terminal device based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object;

S703: Display, based on the second display pose data, an augmented reality picture including the target virtual object on the screen of the terminal device.

After the selection operation on the target virtual object ends, the adjustment process for the target virtual object will not be triggered again. As the image acquisition unit moves, its current pose data can be acquired; then, based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object, the second display pose data of the target virtual object on the screen of the terminal device can be determined, and the augmented reality picture including the target virtual object displayed on the screen can be generated according to the second display pose data and the real scene image captured by the image acquisition unit.

In this scenario, as the image acquisition unit moves, the display pose of the target virtual object on the screen changes accordingly, and its display size also changes accordingly. For example, when the image acquisition unit approaches a virtual vase placed on a physical table, a gradually enlarging physical table and virtual vase can be seen in the augmented reality picture; conversely, when it moves away, a gradually shrinking physical table and virtual vase can be seen in the augmented reality picture.

In the embodiments of the present disclosure, the adjusted second pose data of the target virtual object can be saved after the selection operation ends, so that when the augmented reality picture is presented again later, the target virtual object can be presented directly in the current augmented reality picture according to the adjusted second pose data, using the current pose data of the image acquisition unit and the saved second pose data, without repeated adjustment, which improves the user experience.
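A minimal sketch of the save-and-restore step described above, with a plain dictionary standing in for local storage or the server-side record (the key scheme and JSON encoding are illustrative assumptions, not from the patent):

```python
import json

def save_second_pose(store, object_id, pose):
    """Persist the adjusted second pose data (world frame) when the
    selection operation ends, keyed by the virtual object's id."""
    store[object_id] = json.dumps(pose)

def load_second_pose(store, object_id):
    """Restore the saved second pose so the object can be rendered at its
    adjusted pose on later presentations, without readjustment."""
    raw = store.get(object_id)
    return json.loads(raw) if raw is not None else None
```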
Those skilled in the art can understand that in the above methods of the specific implementations, the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.

Based on the same technical concept, the embodiments of the present disclosure also provide an apparatus for adjusting a virtual object corresponding to the method for adjusting a virtual object. Since the principle by which the apparatus solves the problem is similar to that of the above adjustment method, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not described again.
Referring to FIG. 8, which is a schematic structural diagram of an apparatus for adjusting a virtual object provided by the embodiments of the present disclosure, the apparatus 800 for adjusting a virtual object includes:

a first display part 801, configured to display an augmented reality picture including virtual objects on the screen of a terminal device;

an adjustment part 802, configured to, after a selection operation on a target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, keep the display pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit, and update at least part of the augmented reality picture displayed on the screen;

a second display part 803, configured to display, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the display pose of the target virtual object on the screen.

In one implementation, the selection operation includes a touch operation on the target virtual object on the screen.

In one implementation, the adjustment apparatus further includes an acquisition part 804, configured to acquire relative pose data between the image acquisition unit and the target virtual object in the world coordinate system; the adjustment part 802 is configured to, if it is detected that the pose of the image acquisition unit moves, keep the relative pose data between the image acquisition unit and the target virtual object unchanged during the movement of the image acquisition unit, so as to keep the display pose of the target virtual object on the screen unchanged.

In some embodiments, the acquisition part 804 is configured to acquire the current pose data of the image acquisition unit in the world coordinate system and the first pose data of the target virtual object in the world coordinate system before adjustment, and to determine the relative pose data based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.

In some embodiments, the acquisition part 804 is configured to acquire a real scene image captured by the image acquisition unit, and to determine, based on the real scene image, the current pose data of the image acquisition unit in the world coordinate system.

In some embodiments, the adjustment part 802 is configured to update at least part of the augmented reality picture displayed on the screen based on the real scene images collected during the movement of the image acquisition unit.

In some embodiments, the adjustment part 802 is configured to determine, based on the current pose data of the image acquisition unit, first display pose data of other virtual objects when displayed on the screen of the terminal device, and to update at least part of the augmented reality picture displayed on the screen based on the first display pose data corresponding to the other virtual objects and the real scene images collected during the movement of the image acquisition unit.

In some embodiments, the adjustment apparatus further includes a saving part 805, configured to, in response to the end of the selection operation on the target virtual object, save the adjusted second pose data of the target virtual object in the world coordinate system.

In some embodiments, the adjustment apparatus further includes a third display part 806, configured to acquire the current pose data of the image acquisition unit; determine, based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object, second display pose data of the target virtual object on the screen of the terminal device; and display, based on the second display pose data, an augmented reality picture including the target virtual object on the screen of the terminal device.
Corresponding to the adjustment method described above, an embodiment of the present disclosure further provides an electronic device 900. FIG. 9 shows a schematic structural diagram of the electronic device provided by the embodiments of the present disclosure, which includes a processor 91, a memory 92 and a bus 93. The memory 92 stores machine-readable instructions executable by the processor 91. When the electronic device runs, the processor 91 and the memory 92 communicate through the bus 93, so that the processor 91 executes the following instructions: display an augmented reality picture including virtual objects on the screen of a terminal device; after a selection operation on a target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, keep the display pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit, and update at least part of the augmented reality picture displayed on the screen; and display, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the display pose of the target virtual object on the screen.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the adjustment method described in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the adjustment method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the steps of the adjustment method described in the above method embodiments. For details, reference may be made to the foregoing method embodiments, which are not described again here.
  • Embodiments of the present disclosure also provide a computer program, which implements any one of the methods in the foregoing embodiments when the computer program is executed by a processor.
The computer program product can be implemented by hardware, software or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
Based on such an understanding, the technical solutions of the present disclosure, in essence or as the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk.
In the embodiments of the present disclosure, an augmented reality picture including virtual objects is displayed on the screen of a terminal device; after a selection operation on a target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, the display pose of the target virtual object on the screen is kept unchanged during the movement of the image acquisition unit, and at least part of the augmented reality picture displayed on the screen is updated; based on the updated at least part of the augmented reality picture and the display pose of the target virtual object on the screen, the augmented reality picture after the movement of the image acquisition unit is displayed on the screen of the terminal device, which effectively improves the operation efficiency of display pose adjustment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and apparatus for adjusting a virtual object, an electronic device, a computer storage medium and a program. The adjustment method includes: displaying an augmented reality picture including virtual objects on the screen of a terminal device; after a selection operation on a target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, keeping the display pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit, and updating at least part of the augmented reality picture displayed on the screen; and displaying, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the display pose of the target virtual object on the screen.

Description

Method and apparatus for adjusting a virtual object, electronic device, computer storage medium and program

Cross-reference to related applications

The present disclosure is based on, and claims priority to, the Chinese patent application with application number 202010750615.1, filed on July 30, 2020 and entitled "Method and apparatus for adjusting a virtual object, electronic device and storage medium", the entire contents of which are incorporated herein by reference.

Technical field

The present disclosure relates to the technical field of computer vision, and in particular to a method, apparatus, electronic device, computer storage medium and program for adjusting a virtual object.

Background

In recent years, with the continuous development of artificial intelligence, the application scenarios of augmented reality (Augmented Reality, AR) technology have gradually expanded. AR technology superimposes simulated physical information (visual information, sound, touch, etc.) onto the real world, so that the real environment and virtual objects are presented in the same picture or space in real time.

In the field of AR technology, an augmented reality picture can usually be generated by combining real scene images captured by a terminal device with virtual objects, and the display pose of a virtual object in the augmented reality picture can be edited in advance at an editing side. However, during actual presentation, the display pose edited in advance may deviate from the real scene environment, so that the display pose of the virtual object in the augmented reality picture does not meet requirements and needs further adjustment. Current adjustment approaches still require manually adjusting the display of the virtual object at the editing side, and this adjustment process is inefficient.
Summary

The present disclosure proposes a solution for adjusting a virtual object.

According to an aspect of the present disclosure, there is provided a method for adjusting a virtual object, including:

displaying an augmented reality picture including virtual objects on the screen of a terminal device;

after a selection operation on a target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, keeping the display pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit, and updating at least part of the augmented reality picture displayed on the screen;

displaying, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the display pose of the target virtual object on the screen.

In this way, the adjustment of the display pose of the target virtual object can be completed by moving the image acquisition unit after the user selects the target virtual object, without manually adjusting display pose parameters at a back-end editing side, thereby improving the operation efficiency of display pose adjustment. In addition, during the adjustment, the current augmented reality picture can be presented in real time, so that the display pose of the target virtual object in the augmented reality picture can be adjusted intuitively, making the adjusted display pose better meet the user's personalized needs.

In some embodiments of the present disclosure, the selection operation includes a touch operation on the target virtual object on the screen.

In some embodiments of the present disclosure, the adjustment method further includes: acquiring relative pose data between the image acquisition unit and the target virtual object in a world coordinate system;

and keeping the display pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit when it is detected that the pose of the image acquisition unit of the terminal device moves includes:

when it is detected that the pose of the image acquisition unit moves, keeping the relative pose data between the image acquisition unit and the target virtual object unchanged during the movement of the image acquisition unit, so as to keep the display pose of the target virtual object on the screen unchanged.

In this way, by keeping the relative pose data between the image acquisition unit and the target virtual object in the world coordinate system unchanged, the display pose of the target virtual object on the screen is guaranteed to remain unchanged while the image acquisition unit moves, and the pose data of the target virtual object in the world coordinate system can then be adjusted automatically based on the pose change of the image acquisition unit in the world coordinate system.
In some embodiments of the present disclosure, acquiring the relative pose data between the image acquisition unit and the target virtual object in the world coordinate system includes:

acquiring current pose data of the image acquisition unit in the world coordinate system, and first pose data of the target virtual object in the world coordinate system before adjustment;

determining the relative pose data based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.

In some embodiments of the present disclosure, acquiring the current pose data of the image acquisition unit in the world coordinate system includes:

acquiring a real scene image captured by the image acquisition unit;

determining, based on the real scene image, the current pose data of the image acquisition unit in the world coordinate system.

In this way, the current pose data of the image acquisition unit in the world coordinate system can be quickly obtained from the real scene image captured by the image acquisition unit.

In some embodiments of the present disclosure, updating at least part of the augmented reality picture displayed on the screen includes:

updating at least part of the augmented reality picture displayed on the screen based on the real scene images collected during the movement of the image acquisition unit.

In this way, by acquiring the real scene images collected during the movement of the image acquisition unit and updating the augmented reality picture displayed on the screen, the relative pose between the target virtual object and other physical objects in the current augmented reality picture can be displayed intuitively, so that the display pose of the target virtual object in the current augmented reality picture can be better adjusted.

In some embodiments of the present disclosure, updating at least part of the augmented reality picture displayed on the screen includes:

determining, based on the current pose data of the image acquisition unit, first display pose data of other virtual objects when displayed on the screen of the terminal device;

updating at least part of the augmented reality picture displayed on the screen based on the first display pose data corresponding to the other virtual objects and the real scene images collected during the movement of the image acquisition unit.

In this way, when the augmented reality picture contains other virtual objects, the first display pose data corresponding to the other virtual objects can be determined from the current pose data of the image acquisition unit, and, combined with the real scene images collected during the movement of the image acquisition unit, the other virtual objects and the real scene image displayed on the screen are updated simultaneously, so that the relative poses between the target virtual object and both the physical objects and the other virtual objects in the current augmented reality picture can be displayed intuitively, allowing the display pose of the target virtual object in the current augmented reality picture to be better adjusted.

In some embodiments of the present disclosure, the adjustment method further includes:

in response to the end of the selection operation on the target virtual object, saving adjusted second pose data of the target virtual object in the world coordinate system.

In some embodiments of the present disclosure, the adjustment method further includes:

acquiring the current pose data of the image acquisition unit;

determining second display pose data of the target virtual object on the screen of the terminal device based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object;

displaying, based on the second display pose data, an augmented reality picture including the target virtual object on the screen of the terminal device.

In this way, the adjusted second pose data of the target virtual object can be saved after the selection operation ends, so that when the augmented reality picture is presented again later, the target virtual object can be presented directly in the current augmented reality picture according to the adjusted second pose data, using the current pose data of the image acquisition unit and the saved second pose data, without repeated adjustment, improving the user experience.
According to an aspect of the present disclosure, there is provided an apparatus for adjusting a virtual object, including:

a first display part, configured to display an augmented reality picture including virtual objects on the screen of a terminal device;

an adjustment part, configured to, after a selection operation on a target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, keep the display pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit, and update at least part of the augmented reality picture displayed on the screen;

a second display part, configured to display, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the display pose of the target virtual object on the screen.

In some embodiments of the present disclosure, the selection operation includes a touch operation on the target virtual object on the screen.

In some embodiments of the present disclosure, the adjustment apparatus further includes an acquisition part, configured to acquire relative pose data between the image acquisition unit and the target virtual object in a world coordinate system;

the adjustment part is specifically configured to: if it is detected that the pose of the image acquisition unit moves, keep the relative pose data between the image acquisition unit and the target virtual object unchanged during the movement of the image acquisition unit, so as to keep the display pose of the target virtual object on the screen unchanged.

In some embodiments of the present disclosure, the acquisition part is configured to:

acquire current pose data of the image acquisition unit in the world coordinate system, and first pose data of the target virtual object in the world coordinate system before adjustment;

determine the relative pose data based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.

In some embodiments of the present disclosure, the acquisition part is configured to:

acquire a real scene image captured by the image acquisition unit;

determine, based on the real scene image, the current pose data of the image acquisition unit in the world coordinate system.

In some embodiments of the present disclosure, the adjustment part is configured to:

update at least part of the augmented reality picture displayed on the screen based on the real scene images collected during the movement of the image acquisition unit.

In some embodiments of the present disclosure, the adjustment part is configured to:

determine, based on the current pose data of the image acquisition unit, first display pose data of other virtual objects when displayed on the screen of the terminal device;

update at least part of the augmented reality picture displayed on the screen based on the first display pose data corresponding to the other virtual objects and the real scene images collected during the movement of the image acquisition unit.

In some embodiments of the present disclosure, the adjustment apparatus further includes a saving part, configured to:

in response to the end of the selection operation on the target virtual object, save adjusted second pose data of the target virtual object in the world coordinate system.

In some embodiments of the present disclosure, the adjustment apparatus further includes a third display part, configured to:

acquire the current pose data of the image acquisition unit;

determine second display pose data of the target virtual object on the screen of the terminal device based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object;

display, based on the second display pose data, an augmented reality picture including the target virtual object on the screen of the terminal device.
According to an aspect of the present disclosure, there is provided an electronic device, including:

a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the method for adjusting a virtual object described above is executed.

According to an aspect of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the method for adjusting a virtual object described above is executed.

According to an aspect of the present disclosure, there is provided a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method for adjusting a virtual object described above.

According to an aspect of the present disclosure, there is provided a computer program product including a computer-readable storage medium storing program code; when the program code runs in an electronic device, a processor in the electronic device executes the method for adjusting a virtual object described above.
Brief description of the drawings

The accompanying drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure.

FIG. 1 shows a first schematic flowchart of the method for adjusting a virtual object provided by the embodiments of the present disclosure;

FIG. 2a shows a schematic diagram of an augmented reality picture containing a target virtual object provided by the embodiments of the present disclosure;

FIG. 2b shows a schematic diagram of the augmented reality picture after a selection operation on the target virtual object is detected, provided by the embodiments of the present disclosure;

FIG. 2c shows a schematic diagram of the augmented reality picture after adjustment of the target virtual object, provided by the embodiments of the present disclosure;

FIG. 2d shows a second schematic flowchart of the method for adjusting a virtual object provided by the embodiments of the present disclosure;

FIG. 3 shows a schematic flowchart of the method for determining relative pose data provided by the embodiments of the present disclosure;

FIG. 4 shows a first schematic flowchart of the method for determining the current pose data of the image acquisition unit provided by the embodiments of the present disclosure;

FIG. 5 shows a second schematic flowchart of the method for determining the current pose data of the image acquisition unit provided by the embodiments of the present disclosure;

FIG. 6 shows a schematic flowchart of the method for updating the augmented reality picture provided by the embodiments of the present disclosure;

FIG. 7 shows a schematic flowchart of the method for displaying the augmented reality picture provided by the embodiments of the present disclosure;

FIG. 8 shows a schematic structural diagram of the apparatus for adjusting a virtual object provided by the embodiments of the present disclosure;

FIG. 9 shows a schematic structural diagram of the electronic device provided by the embodiments of the present disclosure.
Detailed description

To make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and shown in the drawings here may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.

It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.

The term "at least one" herein means any one of multiple items, or any combination of at least two of multiple items; for example, including at least one of a, b and c may mean including any one or more elements selected from the set consisting of a, b and c.
AR technology can superimpose virtual objects on the real world and enable interaction between them, and can be applied in AR devices, through which an augmented reality picture containing virtual objects can be viewed. For example, the real scene is an exhibition hall containing physical objects such as walls, tables and windowsills, and the virtual object is a virtual vase; parameter values can be manually entered in advance at a back-end editing side to edit the relative pose data of the virtual object and a real-scene object, such as a table, in the world coordinate system corresponding to the real scene. In this way, the display pose of the virtual vase on the physical table can be edited in advance.

However, the physical objects in the real scene may change at any time; for example, the physical table may be moved. In that case, the display pose of the virtual vase presented using the pre-edited relative pose data of the table and the virtual vase may no longer place it on the physical table. To still achieve the augmented reality effect of the virtual vase sitting on the physical table, adjusted parameter values would need to be re-entered manually at the editing side based on the changed pose data of the physical object to adjust the display pose data of the virtual object, and this adjustment process is inefficient.

Based on the above research, the present disclosure provides a solution for adjusting a virtual object: the adjustment of the display pose of the target virtual object can be completed by moving the image acquisition unit after the user selects the target virtual object, without manually adjusting display pose parameters at a back-end editing side, thereby improving the operation efficiency of display pose adjustment. In addition, during the adjustment, the current augmented reality picture can be presented in real time, so that the display pose of the target virtual object in the augmented reality picture can be adjusted intuitively, making the adjusted display pose better meet the user's personalized needs.

To facilitate understanding of this embodiment, a method for adjusting a virtual object disclosed in the embodiments of the present disclosure is first introduced in detail. The execution subject of the method may be a terminal device, which may be an AR device with AR functions, for example AR glasses, a tablet computer, a smartphone, a smart wearable device or another device with display functions and data processing capabilities, which is not limited in the embodiments of the present disclosure. In some possible implementations, the method for adjusting a virtual object can be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to FIG. 1, which is a flowchart of the method for adjusting a virtual object provided by the embodiments of the present disclosure, the adjustment method includes steps S101 to S103:

S101: Display an augmented reality picture including virtual objects on the screen of a terminal device.

In the embodiments of the present disclosure, the terminal device is an AR device with AR functions, which may include a smartphone, a tablet computer, AR glasses and the like. The terminal device may have a built-in image acquisition unit or be connected to an external one. After the image acquisition unit collects a real scene image, the current pose data of the image acquisition unit can be determined based on the real scene image, and an augmented reality picture containing the target virtual object can be displayed on the screen of the terminal device according to the current pose data.

In the embodiments of the present disclosure, the augmented reality picture may contain multiple virtual objects. A virtual object refers to virtual information generated by computer simulation, and may be a virtual three-dimensional object, such as the virtual vase mentioned above, or a virtual planar object, such as a virtual indication arrow, virtual text or a virtual picture.

S102: After a selection operation on a target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, keep the display pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit, and update at least part of the augmented reality picture displayed on the screen.

In the embodiments of the present disclosure, the target virtual object is the virtual object, selected from the multiple virtual objects, whose pose data is to be adjusted; specifically, what is adjusted is the pose data of the target virtual object in the world coordinate system corresponding to the real scene, as described in detail later.

Here, taking a mobile phone as the terminal device as an example, the augmented reality picture displayed on the phone screen may contain multiple virtual objects; after a selection operation on one of the virtual objects is detected, that virtual object can be adjusted as the target virtual object whose pose data is to be adjusted.

The selection operation may include a touch operation on the target virtual object on the screen. For example, the touch operation may include a long-press operation, a double-click operation or a single-click operation. Correspondingly, when the touch operation is a long-press operation, the end of the long press indicates the end of the selection operation on the target virtual object. When the touch operation is a double-click operation, the selection operation ends when the next double-click on the target virtual object is detected, or when a single click on the target virtual object is detected, or after a set duration has elapsed. Similarly, when the touch operation is a single click, the selection operation can be determined to end when the next single click on the target virtual object is detected, when a double click on the target virtual object is detected, or after a set duration has elapsed.
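The selection-operation lifecycle above (long-press variant: selection begins once the press has lasted a preset duration, and ends when the finger lifts) can be sketched as a small state holder; the class and method names are illustrative, not from the patent:

```python
class SelectionState:
    """Tracks whether a target virtual object is currently selected
    for pose adjustment (long-press variant)."""

    def __init__(self, hold_s=0.5):
        self.hold_s = hold_s      # preset press duration that triggers selection
        self.selected = None      # id of the selected virtual object, if any

    def on_press_progress(self, object_id, held_s):
        """Called while the finger stays on the object's display region."""
        if self.selected is None and held_s >= self.hold_s:
            self.selected = object_id  # long press recognized: adjustment starts
        return self.selected

    def on_release(self):
        """Finger lifted: the selection operation ends (the adjusted
        second pose would be saved at this point)."""
        ended = self.selected
        self.selected = None
        return ended
```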
The embodiments of the present disclosure take a long-press operation as an example of the touch operation. The long-press operation on the target virtual object displayed on the screen of the terminal device may refer to a long press on the display region of the target virtual object on the screen, for example pressing the display region of the target virtual object with a finger for a preset duration, thereby triggering the adjustment process for the target virtual object.

In one implementation of the embodiments of the present disclosure, the movement of the pose of the image acquisition unit includes at least one of a change in the position and a change in the attitude of the image acquisition unit in the world coordinate system. During the movement of the image acquisition unit, the display pose of the target virtual object on the screen is always kept unchanged, that is, the relative pose between the target virtual object and the image acquisition unit remains unchanged; in this way, when the pose of the image acquisition unit in the world coordinate system moves, the pose data of the target virtual object in the world coordinate system moves along with it.

In the embodiments of the present disclosure, for example, when the selection operation on the target virtual object is detected, the target virtual object is located in the upper-left region of the screen and displayed at a preset angle to the screen center; when the image acquisition unit moves, the target virtual object is always presented in the upper-left corner of the screen at the preset angle to the screen center, that is, during the movement of the image acquisition unit, the display pose of the target virtual object on the screen always remains unchanged.

In the embodiments of the present disclosure, the current pose data of the image acquisition unit in the world coordinate system changes as the image acquisition unit moves, and the real scene image collected by the image acquisition unit changes accordingly. In addition, when the augmented reality picture contains virtual objects other than the target virtual object, the display poses of the other virtual objects on the screen also change with the movement of the image acquisition unit. Therefore, as the image acquisition unit moves in real time, the augmented reality picture displayed on the screen is updated in real time.

S103: Display, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the display pose of the target virtual object on the screen.

In the embodiments of the present disclosure, as the image acquisition unit moves in real time, the augmented reality picture can be generated based on the updated at least part of the augmented reality picture and the display pose of the target virtual object on the screen.

In one implementation of the embodiments of the present disclosure, the real scene image captured by the image acquisition unit before the update contains the ground. Taking a virtual vase as the target virtual object, the augmented reality picture before the update may contain the ground and a virtual vase on the ground. As the image acquisition unit moves, when the captured real scene image contains a physical table above the ground, the physical table appears in the updated augmented reality picture. Since the display pose of the virtual vase on the screen does not change, after the image acquisition unit moves, the augmented reality picture displayed on the screen may contain the physical table and the virtual vase on the physical table. In this way, the display pose of the virtual vase in the world coordinate system is adjusted, for example from the initial display pose on the ground to a display pose on the physical table.

In the embodiments of the present disclosure, the adjustment of the display pose of the target virtual object can be completed by moving the image acquisition unit after the user selects the target virtual object, without manually adjusting display pose parameters at a back-end editing side, thereby improving the operation efficiency of display pose adjustment. In addition, during the adjustment, the current augmented reality picture can be presented in real time, so that the display pose of the target virtual object in the augmented reality picture can be adjusted intuitively, making the adjusted display pose better meet the user's personalized needs.
The above adjustment process is described below with reference to an application scenario:

Here, taking an indoor room as the real scene, the room contains physical objects such as a sofa and a chair. The augmented reality picture displayed on the screen of the terminal device is shown in FIG. 2a: the picture contains the virtual objects "Tang Sancai horse" 21 and "decorative lamp" 22, as well as the physical sofa and chair; in the augmented reality picture, the "Tang Sancai horse" is located above the physical chair, in a region closer to the chair and farther from the "decorative lamp" 22.

After a long-press operation on the target virtual object "Tang Sancai horse" 21 is detected, and when it is detected that the pose of the image acquisition unit of the terminal device moves, the display pose of the "Tang Sancai horse" 21 on the screen remains unchanged as the image acquisition unit moves. To prompt the user that the target virtual object to be adjusted has been selected, the display effects of the selected target virtual object and the unselected virtual objects can also be distinguished; for example, the outline of the target virtual object can be specially processed, as shown in FIG. 2b, where a white line 23 is added to the outline of the selected target virtual object "Tang Sancai horse" 21.

After the adjustment of the pose data of the target virtual object "Tang Sancai horse" 21 starts, as the image acquisition unit moves, for example shifts toward the upper left, the pose data of the "Tang Sancai horse" 21 in the world coordinate system corresponding to the real scene can be adjusted in real time, so that the "Tang Sancai horse" 21 also shifts toward the upper left in the real scene. At the same time, at least part of the augmented reality picture is adjusted, for example the display pose of the physical sofa in the picture, yielding the augmented reality picture shown in FIG. 2c, in which the target virtual object "Tang Sancai horse" 21 has also moved toward the upper left, i.e., closer to the "decorative lamp" 22.
The above S101 to S103 are described below with reference to an embodiment.

In one implementation, FIG. 2d shows a second schematic flowchart of the method for adjusting a virtual object provided by the embodiments of the present disclosure. As shown in FIG. 2d, after a selection operation on the target virtual object is detected, if it is detected that the pose of the image acquisition unit of the terminal device moves, the method for keeping the display pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit may include:

S201: After a selection operation on the target virtual object is detected, acquire relative pose data between the image acquisition unit and the target virtual object in the world coordinate system;

and keeping the display pose of the target virtual object on the screen unchanged during the movement of the image acquisition unit if it is detected that the pose of the image acquisition unit of the terminal device moves may include:

S202: If it is detected that the pose of the image acquisition unit moves, keep the relative pose data between the image acquisition unit and the target virtual object unchanged during the movement of the image acquisition unit, so as to keep the display pose of the target virtual object on the screen unchanged.

In the embodiments of the present disclosure, the relative pose data between the image acquisition unit and the target virtual object in the world coordinate system may contain relative position data and relative attitude data of the image acquisition unit and the target virtual object in the world coordinate system.

The world coordinate system can be constructed in advance in the real scene where the terminal device is located. For example, if the real scene is an exhibition hall, a preset position point of the exhibition hall can be taken as the origin, and three mutually perpendicular directions can be selected as the X-axis, Y-axis and Z-axis of the world coordinate system, thereby obtaining the world coordinate system used to represent the relative pose data of the image acquisition unit and the target virtual object.

After the relative pose data between the image acquisition unit and the target virtual object in the world coordinate system is acquired, when it is detected that the pose of the image acquisition unit moves in the world coordinate system, the pose data of the target virtual object in the world coordinate system can be moved simultaneously, keeping the relative pose data of the image acquisition unit and the target virtual object unchanged during the movement. When both the relative position data and the relative attitude data between the target virtual object and the image acquisition unit remain unchanged, the display pose of the target virtual object on the screen can be kept unchanged.
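A minimal sketch of keeping the relative pose fixed while the camera moves, with a pose simplified to a 6-tuple of position plus per-axis angles (this additive composition is an illustrative simplification of a full rigid-body transform, not the patent's own formulation):

```python
def relative_offset(cam_pose, obj_pose):
    """Offset between object and camera, captured at the moment the
    selection operation is detected. Poses: (x, y, z, ax, ay, az)."""
    return tuple(o - c for c, o in zip(cam_pose, obj_pose))

def follow_camera(cam_pose, rel_pose):
    """New world pose of the target virtual object while it is selected:
    the camera's current pose composed with the fixed relative offset,
    so the object's on-screen display pose never changes as the camera
    moves."""
    return tuple(c + r for c, r in zip(cam_pose, rel_pose))
```

Releasing the selection then freezes the object at its latest world pose (the second pose data).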
In the embodiments of the present disclosure, by keeping the relative pose data between the image acquisition unit and the target virtual object in the world coordinate system unchanged, the display pose of the target virtual object on the screen is guaranteed to remain unchanged while the image acquisition unit moves, and the pose data of the target virtual object in the world coordinate system can then be adjusted automatically based on the pose change of the image acquisition unit in the world coordinate system.
For the above-mentioned acquisition of the relative pose data between the image acquisition unit and the target virtual object in the world coordinate system, FIG. 3 shows a schematic flowchart of the method for determining relative pose data provided by the embodiments of the present disclosure. As shown in FIG. 3, the method may include the following S301 to S302:

S301: Acquire current pose data of the image acquisition unit in the world coordinate system, and first pose data of the target virtual object in the world coordinate system before adjustment;

S302: Determine the relative pose data based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.

In the embodiments of the present disclosure, the current pose data of the image acquisition unit in the world coordinate system can be acquired from the real scene images captured in real time by the image acquisition unit; it can also be acquired in other ways. For example, for an image acquisition unit provided with an inertial measurement unit, the current pose data can be determined by combining the initial pose data of the image acquisition unit in the pre-established world coordinate system with the motion data collected in real time by the inertial measurement unit, where the inertial measurement unit may contain a gyroscope, an accelerometer and the like.

When the target virtual object is adjusted for the first time, the first pose data of the target virtual object in the world coordinate system before adjustment can be determined according to the initial pose data of the target virtual object in a three-dimensional scene model representing the real scene. When the target virtual object is not being adjusted for the first time, the first pose data before adjustment may be the pose data saved after the previous adjustment of the pose data of the target virtual object in the world coordinate system.

In some implementations, the three-dimensional scene model representing the real scene can be constructed based on a large number of real scene images collected in advance. After the initial pose data of the virtual object in the three-dimensional scene model is determined in advance based on the model, the three-dimensional scene model is aligned with the real scene to obtain the first pose data of the virtual object in the world coordinate system corresponding to the real scene.

In the embodiments of the present disclosure, when determining the relative pose data of the image acquisition unit and the target virtual object in the world coordinate system, the relative position data can be determined based on the current position coordinates of the image acquisition unit in the world coordinate system and the first position coordinates of the target virtual object in that world coordinate system, and the relative attitude data can be determined based on the current attitude data of the image acquisition unit and the first attitude data of the target virtual object in that world coordinate system; the relative position data and the relative attitude data together constitute the relative pose data of the image acquisition unit and the target virtual object.

In one implementation of the embodiments of the present disclosure, the current position coordinates of the image acquisition unit in the world coordinate system can be represented by the coordinates of its center point, and likewise the first position coordinates of the target virtual object can be represented by the coordinates of its center point. For example, the first position coordinates of the center point A of the target virtual object are (x_A, y_A, z_A), where x_A, y_A and z_A are the coordinate values of the center point A along the X-axis, Y-axis and Z-axis directions of the world coordinate system, respectively; the current position coordinates of the center point P of the image acquisition unit are (x_P, y_P, z_P), where x_P, y_P and z_P are the coordinate values of the center point P along the X-axis, Y-axis and Z-axis directions of the world coordinate system, respectively. The relative position coordinates of the target virtual object and the image acquisition unit can then be represented by the vector AP = (x_P - x_A, y_P - y_A, z_P - z_A).

In addition, the current attitude data of the image acquisition unit in the world coordinate system can be represented by the current angles between a preset positive direction of the image acquisition unit and the coordinate axes of the world coordinate system. Taking the camera of a mobile phone as the image acquisition unit, the positive direction of the camera can be the direction perpendicular to the camera plane at its center point and facing away from the camera. Similarly, the first attitude data of the target virtual object in the world coordinate system can be represented by the first angles between a preset positive direction of the target virtual object and the coordinate axes of the world coordinate system. Taking the "Tang Sancai horse" above as the target virtual object, its positive direction can be the direction perpendicular to the center point of its cross section and facing away from the "Tang Sancai horse". In this way, the relative attitude data of the image acquisition unit and the target virtual object is determined based on the current angles between the positive direction of the image acquisition unit and the coordinate axes of the world coordinate system, and the first angles between the positive direction of the target virtual object and the coordinate axes of the world coordinate system.

During the adjustment of the display pose of the target virtual object in the world coordinate system, the relative pose data between the target virtual object and the image acquisition unit is kept unchanged, so that the display pose of the target virtual object in the world coordinate system can be adjusted according to the current pose data of the image acquisition unit in the world coordinate system. In addition, during the adjustment, the display size of the target virtual object remains unchanged.
In one implementation, for the above-mentioned acquisition of the current pose data of the image acquisition unit in the world coordinate system, FIG. 4 shows a first schematic flowchart of the method for determining the current pose data of the image acquisition unit provided by the embodiments of the present disclosure. As shown in FIG. 4, the method may include the following S401 to S402:

S401: Acquire a real scene image captured by the image acquisition unit;

S402: Determine, based on the real scene image, the current pose data of the image acquisition unit in the world coordinate system.

After the image acquisition unit enters the real scene, it can capture real scene images corresponding to the real scene in real time. When the current pose data of the image acquisition unit differs, the captured real scene images also differ; therefore, the current pose data of the image acquisition unit can be determined based on the real scene images captured in real time.

In the embodiments of the present disclosure, the current pose data of the image acquisition unit in the world coordinate system can be quickly obtained from the real scene image captured by the image acquisition unit.
In one implementation, when determining the current pose data of the image acquisition unit in the world coordinate system based on the real scene image, FIG. 5 shows a second schematic flowchart of the method for determining the current pose data of the image acquisition unit provided by the embodiments of the present disclosure. As shown in FIG. 5, the method may include the following S4021 to S4022:

S4021: Detect the real scene image, and determine target object information contained in the real scene image and shooting pose data corresponding to the target object information.

Here, the real scene image can be detected based on a pre-trained neural network to determine the target objects contained in the real scene image.

In one implementation of the embodiments of the present disclosure, the target object information may include the position information of a photographed physical object in the real scene image; the shooting pose data corresponding to each physical object in the real scene at different positions in the real scene image can be stored in advance.

S4022: Determine the current pose data of the image acquisition unit based on the shooting pose data corresponding to the target object information.

In one implementation of the embodiments of the present disclosure, when the target object information obtained by detecting the real scene image contains the position information of one target object, the current pose data of the image acquisition unit can be determined based on the shooting pose data corresponding to the position information of that target object; when it contains the position information of multiple target objects, the shooting pose data corresponding to the position information of each of the multiple target objects can jointly determine the current pose data of the image acquisition unit, for example by averaging the shooting pose data corresponding to the multiple target objects to obtain the current pose data of the image acquisition unit.

In another implementation, when determining the current pose data of the image acquisition unit in the world coordinate system based on the real scene image, the method further includes:

determining the current pose data of the image acquisition unit based on the real scene image and the motion data collected by the inertial measurement unit of the image acquisition unit.

In this way, the current pose data of the image acquisition unit can first be estimated based on the real scene image and then combined with the motion data collected by the inertial measurement unit associated with the image acquisition unit to obtain the current pose data of the image acquisition unit.

In the embodiments of the present disclosure, it is proposed that the real scene image captured by the image acquisition unit and the motion data collected by the inertial measurement unit jointly determine the current pose data of the image acquisition unit; in this way, the motion data collected by the inertial measurement unit can be used to adjust the pose data estimated from the real scene image, obtaining current pose data with higher accuracy.
For the above S102, in one implementation, updating at least part of the augmented reality picture displayed on the screen may specifically include:

updating at least part of the augmented reality picture displayed on the screen based on the real scene images collected during the movement of the image acquisition unit.

It can be understood that as the image acquisition unit moves, its current pose data in the world coordinate system changes continuously, so the real scene image captured by the image acquisition unit also changes accordingly. For example, the real scene image initially captured by the image acquisition unit shows the front of a physical table; after the image acquisition unit moves, the currently captured real scene image contains the side of the physical table. By updating the augmented reality picture based on the real scene images collected by the image acquisition unit, the front of the table displayed before the update can be updated to the side of the table.

In the embodiments of the present disclosure, by acquiring the real scene images collected during the movement of the image acquisition unit and updating the augmented reality picture displayed on the screen, the relative pose between the target virtual object and other physical objects in the current augmented reality picture can be displayed intuitively, so that the display pose of the target virtual object in the current augmented reality picture can be better adjusted.
For the above S102, in another implementation of updating at least part of the augmented reality picture displayed on the screen, FIG. 6 shows a schematic flowchart of the method for updating the augmented reality picture provided by the embodiments of the present disclosure. As shown in FIG. 6, the method may include the following S1021 to S1022:

S1021: Determine, based on the current pose data of the image acquisition unit, first display pose data of other virtual objects when displayed on the screen of the terminal device;

S1022: Update at least part of the augmented reality picture displayed on the screen based on the first display pose data corresponding to the other virtual objects and the real scene images collected during the movement of the image acquisition unit.

When the augmented reality picture displayed on the screen of the terminal device contains multiple virtual objects, as the image acquisition unit moves, the display poses on the screen of virtual objects other than the target virtual object also change. Here, the first display pose data of the other virtual objects when displayed on the screen of the terminal device can be determined based on the current pose data of the image acquisition unit in the world coordinate system and the pose data of the other virtual objects in the world coordinate system.

Then, by superimposing the first display pose data corresponding to the virtual objects onto the real scene images collected during the movement of the image acquisition unit, the part of the augmented reality picture that needs to be updated can be determined, which may include the display poses on the screen of the other virtual objects that need updating and the display poses of real objects on the screen.

In the embodiments of the present disclosure, when the augmented reality picture contains other virtual objects, the first display pose data corresponding to the other virtual objects can be determined from the current pose data of the image acquisition unit; in addition, combined with the real scene images collected during the movement of the image acquisition unit, the other virtual objects and the real scene image displayed on the screen are updated simultaneously, so that the relative poses between the target virtual object and both the physical objects and the other virtual objects in the current augmented reality picture can be displayed intuitively, allowing the display pose of the target virtual object in the current augmented reality picture to be better adjusted.
In one implementation, the adjustment method provided by the embodiments of the present disclosure further includes:
in response to the end of the selection operation on the target virtual object, saving the adjusted second pose data of the target virtual object in the world coordinate system.
In one implementation of the embodiments of the present disclosure, after detecting that the long-press operation acting on the target virtual object has stopped, the adjusted second pose data of the target virtual object in the world coordinate system may be saved, or the second pose data may be sent to a server, so that other terminal devices can present the AR scene based on the second pose data corresponding to the target virtual object.
In one implementation, Fig. 7 shows a schematic flowchart of the method for displaying the augmented reality picture provided by the embodiments of the present disclosure. As shown in Fig. 7, the adjustment method provided by the embodiments of the present disclosure further includes the following S701 to S703:
S701: acquire the current pose data of the image acquisition unit;
S702: determine the second display pose data of the target virtual object on the screen of the terminal device based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object;
S703: display, based on the second display pose data, the augmented reality picture including the target virtual object on the screen of the terminal device.
After the selection operation on the target virtual object ends, the adjustment process for the target virtual object is no longer triggered. As the image acquisition unit moves, its current pose data may be acquired; then, based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object, the second display pose data of the target virtual object on the screen of the terminal device can be determined, and the augmented reality picture including the target virtual object displayed on the screen can be generated according to the second display pose data and the real-scene image captured by the image acquisition unit.
In this scenario, as the image acquisition unit moves, the displayed pose of the target virtual object on the screen changes, and so does its displayed size. For example, when the image acquisition unit approaches a virtual vase placed on a physical table, the physical table and the virtual vase can be seen gradually enlarging in the augmented reality picture; conversely, when the image acquisition unit moves away from the virtual vase placed on the physical table, the physical table and the virtual vase can be seen gradually shrinking in the augmented reality picture.
In the embodiments of the present disclosure, the adjusted second pose data of the target virtual object can be saved after the selection operation ends. When the augmented reality picture is presented again later, with the current pose data of the image acquisition unit and the saved second pose data, the target virtual object can be presented in the current augmented reality picture directly according to the adjusted second pose data, without repeated adjustment, improving the user experience.
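The size change described in the vase example follows directly from perspective projection: under a pinhole model, on-screen size scales inversely with depth from the image acquisition unit. A one-line sketch (the function name and parameters are illustrative assumptions):

```python
def display_scale(base_size, base_depth, current_depth):
    """On-screen size of an object under a pinhole camera model.

    base_size: size in pixels observed at depth base_depth.
    current_depth: the object's current depth from the camera.
    Halving the depth doubles the displayed size, so both the virtual
    vase and the real table enlarge together as the camera approaches.
    """
    return base_size * (base_depth / current_depth)
```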
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, the embodiments of the present disclosure further provide a virtual object adjustment apparatus corresponding to the virtual object adjustment method. Since the apparatus in the embodiments of the present disclosure solves the problem on a principle similar to the above adjustment method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated parts are not described again.
Referring to Fig. 8, a schematic structural diagram of a virtual object adjustment apparatus provided by the embodiments of the present disclosure, the virtual object adjustment apparatus 800 includes:
a first display part 801, configured to display an augmented reality picture including virtual objects on the screen of a terminal device;
an adjustment part 802, configured to, after a selection operation on a target virtual object is detected, if movement of the pose of the image acquisition unit of the terminal device is detected, keep the displayed pose of the target virtual object on the screen unchanged while the image acquisition unit moves, and update at least part of the augmented reality picture displayed on the screen;
a second display part 803, configured to display, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the displayed pose of the target virtual object on the screen.
In one implementation, the selection operation includes a touch operation on the target virtual object on the screen.
In one implementation, the adjustment apparatus further includes an acquisition part 804,
the acquisition part 804 being configured to acquire the relative pose data between the image acquisition unit and the target virtual object in the world coordinate system;
the adjustment part 802 being configured to, if movement of the pose of the image acquisition unit is detected, keep the relative pose data between the image acquisition unit and the target virtual object unchanged while the image acquisition unit moves, so as to keep the displayed pose of the target virtual object on the screen unchanged.
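Holding the camera-to-object relative pose fixed while the camera moves is equivalent to recomputing the object's world pose as the composition of the camera's current pose with the fixed relative transform. A sketch of that composition, assuming camera-to-world rotation matrices and translation vectors (names and conventions are illustrative, not from the disclosure):

```python
import numpy as np

def follow_camera(cam_rotation, cam_position, rel_rotation, rel_position):
    """Recompute the target virtual object's world pose so that its pose
    relative to the image acquisition unit, and hence its displayed pose
    on the screen, stays unchanged while the camera moves.

    cam_rotation, cam_position: camera-to-world rotation (3x3) and
    translation of the image acquisition unit's current pose.
    rel_rotation, rel_position: the fixed camera-to-object relative pose.
    Computes T_world_obj = T_world_cam * T_cam_obj.
    """
    obj_rotation = cam_rotation @ rel_rotation
    obj_position = cam_rotation @ rel_position + cam_position
    return obj_rotation, obj_position
```

Calling this on every pose update during the selection operation pins the object to the camera, matching the behavior the adjustment part 802 is configured to implement.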
In some embodiments, the acquisition part 804 is configured to acquire the current pose data of the image acquisition unit in the world coordinate system and the first pose data of the target virtual object in the world coordinate system before adjustment, and to determine the relative pose data based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.
In some embodiments, the acquisition part 804 is configured to acquire the real-scene image captured by the image acquisition unit,
and to determine, based on the real-scene image, the current pose data of the image acquisition unit in the world coordinate system.
In some embodiments, the adjustment part 802 is configured to update at least part of the augmented reality picture displayed on the screen based on the real-scene images collected while the image acquisition unit moves.
In some embodiments, the adjustment part 802 is configured to determine, based on the current pose data of the image acquisition unit, the first display pose data of the other virtual objects when displayed on the screen of the terminal device, and to update at least part of the augmented reality picture displayed on the screen based on the first display pose data corresponding to the other virtual objects and the real-scene images collected while the image acquisition unit moves.
In some embodiments, the adjustment apparatus further includes a saving part 805,
the saving part 805 being configured to, in response to the end of the selection operation on the target virtual object, save the adjusted second pose data of the target virtual object in the world coordinate system.
In some embodiments, the adjustment apparatus further includes a third display part 806,
the third display part 806 being configured to acquire the current pose data of the image acquisition unit; determine the second display pose data of the target virtual object on the screen of the terminal device based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object; and display, based on the second display pose data, the augmented reality picture including the target virtual object on the screen of the terminal device.
For descriptions of the processing flow of each part in the apparatus and the interaction flow between the parts, reference may be made to the relevant descriptions in the above method embodiments, which are not detailed here.
Corresponding to the virtual object adjustment method in Fig. 1, the embodiments of the present disclosure further provide an electronic device 900. As shown in Fig. 9, a schematic structural diagram of the electronic device provided by the embodiments of the present disclosure, the electronic device includes:
a processor 91, a memory 92, and a bus 93. The memory 92 is used to store execution instructions and includes an internal memory 921 and an external memory 922. The internal memory 921, also called internal storage, is used to temporarily store operation data in the processor 91 and data exchanged with the external memory 922 such as a hard disk; the processor 91 exchanges data with the external memory 922 via the internal memory 921. When the electronic device 900 runs, the processor 91 communicates with the memory 92 via the bus 93, causing the processor 91 to execute the following instructions: displaying an augmented reality picture including virtual objects on the screen of a terminal device; after a selection operation on a target virtual object is detected, if movement of the pose of the image acquisition unit of the terminal device is detected, keeping the displayed pose of the target virtual object on the screen unchanged while the image acquisition unit moves, and updating at least part of the augmented reality picture displayed on the screen; and displaying, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the displayed pose of the target virtual object on the screen.
The embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the adjustment method described in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the adjustment method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to perform the steps of the adjustment method described in the above method embodiments. For details, reference may be made to the above method embodiments, which are not repeated here.
The embodiments of the present disclosure further provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems and apparatuses described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other ways of division in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with this technical field can still, within the technical scope disclosed by the present disclosure, modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Industrial Applicability
In the embodiments of the present disclosure, by displaying an augmented reality picture including virtual objects on the screen of a terminal device; after a selection operation on a target virtual object is detected, if movement of the pose of the image acquisition unit of the terminal device is detected, keeping the displayed pose of the target virtual object on the screen unchanged while the image acquisition unit moves and updating at least part of the augmented reality picture displayed on the screen; and displaying, on the screen of the terminal device, the augmented reality picture after the movement of the image acquisition unit based on the updated at least part of the augmented reality picture and the displayed pose of the target virtual object on the screen, the operational efficiency of adjusting the displayed pose is effectively improved.

Claims (22)

  1. A method for adjusting a virtual object, comprising:
    displaying an augmented reality picture including virtual objects on a screen of a terminal device;
    after a selection operation on a target virtual object is detected, if movement of a pose of an image acquisition unit of the terminal device is detected, keeping a displayed pose of the target virtual object on the screen unchanged while the image acquisition unit moves, and updating at least part of the augmented reality picture displayed on the screen;
    displaying, on the screen of the terminal device, an augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the displayed pose of the target virtual object on the screen.
  2. The adjustment method according to claim 1, wherein the selection operation comprises a touch operation on the target virtual object on the screen.
  3. The adjustment method according to claim 1 or 2, wherein the adjustment method further comprises: acquiring relative pose data between the image acquisition unit and the target virtual object in a world coordinate system;
    the keeping the displayed pose of the target virtual object on the screen unchanged while the image acquisition unit moves, if movement of the pose of the image acquisition unit of the terminal device is detected, comprises:
    if movement of the pose of the image acquisition unit is detected, keeping the relative pose data between the image acquisition unit and the target virtual object unchanged while the image acquisition unit moves, so as to keep the displayed pose of the target virtual object on the screen unchanged.
  4. The adjustment method according to claim 3, wherein the acquiring the relative pose data between the image acquisition unit and the target virtual object in the world coordinate system comprises:
    acquiring current pose data of the image acquisition unit in the world coordinate system, and first pose data of the target virtual object in the world coordinate system before adjustment;
    determining the relative pose data based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.
  5. The adjustment method according to claim 4, wherein the acquiring the current pose data of the image acquisition unit in the world coordinate system comprises:
    acquiring a real-scene image captured by the image acquisition unit;
    determining, based on the real-scene image, the current pose data of the image acquisition unit in the world coordinate system.
  6. The adjustment method according to any one of claims 1 to 5, wherein the updating at least part of the augmented reality picture displayed on the screen comprises:
    updating the at least part of the augmented reality picture displayed on the screen based on the real-scene images collected while the image acquisition unit moves.
  7. The adjustment method according to any one of claims 1 to 5, wherein the updating at least part of the augmented reality picture displayed on the screen comprises:
    determining, based on the current pose data of the image acquisition unit, first display pose data of other virtual objects when displayed on the screen of the terminal device;
    updating the at least part of the augmented reality picture displayed on the screen based on the first display pose data corresponding to the other virtual objects and the real-scene images collected while the image acquisition unit moves.
  8. The adjustment method according to any one of claims 1 to 7, wherein the adjustment method further comprises:
    in response to an end of the selection operation on the target virtual object, saving adjusted second pose data of the target virtual object in the world coordinate system.
  9. The adjustment method according to claim 8, wherein the adjustment method further comprises:
    acquiring the current pose data of the image acquisition unit;
    determining second display pose data of the target virtual object on the screen of the terminal device based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object;
    displaying, based on the second display pose data, an augmented reality picture including the target virtual object on the screen of the terminal device.
  10. An apparatus for adjusting a virtual object, comprising:
    a first display part, configured to display an augmented reality picture including virtual objects on a screen of a terminal device;
    an adjustment part, configured to, after a selection operation on a target virtual object is detected, if movement of a pose of an image acquisition unit of the terminal device is detected, keep a displayed pose of the target virtual object on the screen unchanged while the image acquisition unit moves, and update at least part of the augmented reality picture displayed on the screen;
    a second display part, configured to display, on the screen of the terminal device, an augmented reality picture after the movement of the image acquisition unit, based on the updated at least part of the augmented reality picture and the displayed pose of the target virtual object on the screen.
  11. The adjustment apparatus according to claim 10, wherein the selection operation comprises a touch operation on the target virtual object on the screen.
  12. The adjustment apparatus according to claim 10 or 11, wherein the adjustment apparatus further comprises an acquisition part,
    the acquisition part being configured to acquire relative pose data between the image acquisition unit and the target virtual object in a world coordinate system;
    the adjustment part being configured to, if movement of the pose of the image acquisition unit is detected, keep the relative pose data between the image acquisition unit and the target virtual object unchanged while the image acquisition unit moves, so as to keep the displayed pose of the target virtual object on the screen unchanged.
  13. The adjustment apparatus according to claim 12, wherein
    the acquisition part is configured to acquire current pose data of the image acquisition unit in the world coordinate system, and first pose data of the target virtual object in the world coordinate system before adjustment; and to determine the relative pose data based on the current pose data of the image acquisition unit and the first pose data of the target virtual object.
  14. The adjustment apparatus according to claim 13, wherein
    the acquisition part is further configured to acquire a real-scene image captured by the image acquisition unit, and to determine, based on the real-scene image, the current pose data of the image acquisition unit in the world coordinate system.
  15. The adjustment apparatus according to any one of claims 10 to 14, wherein
    the adjustment part is configured to update the at least part of the augmented reality picture displayed on the screen based on the real-scene images collected while the image acquisition unit moves.
  16. The adjustment apparatus according to any one of claims 10 to 14, wherein
    the adjustment part is configured to determine, based on the current pose data of the image acquisition unit, first display pose data of other virtual objects when displayed on the screen of the terminal device;
    and to update the at least part of the augmented reality picture displayed on the screen based on the first display pose data corresponding to the other virtual objects and the real-scene images collected while the image acquisition unit moves.
  17. The adjustment apparatus according to any one of claims 10 to 16, wherein the adjustment apparatus further comprises a saving part,
    the saving part being configured to, in response to an end of the selection operation on the target virtual object, save adjusted second pose data of the target virtual object in the world coordinate system.
  18. The adjustment apparatus according to claim 17, wherein the adjustment apparatus further comprises a third display part,
    the third display part being configured to acquire the current pose data of the image acquisition unit; determine second display pose data of the target virtual object on the screen of the terminal device based on the current pose data of the image acquisition unit and the second pose data corresponding to the target virtual object; and display, based on the second display pose data, an augmented reality picture including the target virtual object on the screen of the terminal device.
  19. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the adjustment method according to any one of claims 1 to 9.
  20. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when run by a processor, performs the steps of the adjustment method according to any one of claims 1 to 9.
  21. A computer program comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device performs the adjustment method according to any one of claims 1 to 9.
  22. A computer program product comprising a computer-readable storage medium storing program code, wherein when the program code runs in an electronic device, the electronic device performs the adjustment method according to any one of claims 1 to 9.
PCT/CN2021/089437 2020-07-30 2021-04-23 虚拟对象的调整方法、装置、电子设备、计算机存储介质及程序 WO2022021965A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021570926A JP2022545598A (ja) 2020-07-30 2021-04-23 仮想対象の調整方法、装置、電子機器、コンピュータ記憶媒体及びプログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010750615.1 2020-07-30
CN202010750615.1A CN111882674A (zh) 2020-07-30 2020-07-30 虚拟对象的调整方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022021965A1 true WO2022021965A1 (zh) 2022-02-03

Family

ID=73205674

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089437 WO2022021965A1 (zh) 2020-07-30 2021-04-23 虚拟对象的调整方法、装置、电子设备、计算机存储介质及程序

Country Status (4)

Country Link
JP (1) JP2022545598A (zh)
CN (1) CN111882674A (zh)
TW (1) TW202205060A (zh)
WO (1) WO2022021965A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882674A (zh) * 2020-07-30 2020-11-03 北京市商汤科技开发有限公司 虚拟对象的调整方法、装置、电子设备及存储介质
CN114385002B (zh) * 2021-12-07 2023-05-12 达闼机器人股份有限公司 智能设备控制方法、装置、服务器和存储介质
CN114299263A (zh) * 2021-12-31 2022-04-08 北京绵白糖智能科技有限公司 增强现实ar场景的展示方法及装置
CN114445600A (zh) * 2022-01-28 2022-05-06 北京字跳网络技术有限公司 一种特效道具的展示方法、装置、设备及存储介质
CN114612637A (zh) * 2022-03-15 2022-06-10 北京字跳网络技术有限公司 一种场景画面显示方法、装置、计算机设备及存储介质
CN117643725A (zh) * 2022-08-12 2024-03-05 腾讯科技(深圳)有限公司 形象处理方法、装置、电子设备、存储介质及程序产品

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108553889A (zh) * 2018-03-29 2018-09-21 广州汉智网络科技有限公司 虚拟模型交互方法及装置
CN109002162A (zh) * 2018-06-21 2018-12-14 北京字节跳动网络技术有限公司 场景切换方法、装置、终端和计算机存储介质
CN109782901A (zh) * 2018-12-06 2019-05-21 网易(杭州)网络有限公司 增强现实交互方法、装置、计算机设备及存储介质
CN110941337A (zh) * 2019-11-25 2020-03-31 深圳传音控股股份有限公司 虚拟形象的控制方法、终端设备及计算机可读存储介质
CN111882674A (zh) * 2020-07-30 2020-11-03 北京市商汤科技开发有限公司 虚拟对象的调整方法、装置、电子设备及存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9911235B2 (en) * 2014-11-14 2018-03-06 Qualcomm Incorporated Spatial interaction in augmented reality
CN108553888A (zh) * 2018-03-29 2018-09-21 广州汉智网络科技有限公司 增强现实交互方法及装置
CN110124305B (zh) * 2019-05-15 2023-05-12 网易(杭州)网络有限公司 虚拟场景调整方法、装置、存储介质与移动终端


Also Published As

Publication number Publication date
JP2022545598A (ja) 2022-10-28
CN111882674A (zh) 2020-11-03
TW202205060A (zh) 2022-02-01


Legal Events

Date Code Title Description
ENP: Entry into the national phase (Ref document number: 2021570926; Country of ref document: JP; Kind code of ref document: A)
121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21851300; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122: Ep: pct application non-entry in european phase (Ref document number: 21851300; Country of ref document: EP; Kind code of ref document: A1)