WO2020207191A1 - Method, apparatus, and terminal device for determining the area where a virtual object is occluded - Google Patents

Method, apparatus, and terminal device for determining the area where a virtual object is occluded Download PDF

Info

Publication number
WO2020207191A1
WO2020207191A1 PCT/CN2020/079282 CN2020079282W WO2020207191A1 WO 2020207191 A1 WO2020207191 A1 WO 2020207191A1 CN 2020079282 W CN2020079282 W CN 2020079282W WO 2020207191 A1 WO2020207191 A1 WO 2020207191A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
current frame
area
dimensional
model
Prior art date
Application number
PCT/CN2020/079282
Other languages
English (en)
French (fr)
Inventor
王宇鹭
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to EP20787305.0A priority Critical patent/EP3951721A4/en
Publication of WO2020207191A1 publication Critical patent/WO2020207191A1/zh
Priority to US17/499,856 priority patent/US11842438B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application belongs to the field of software technology, and in particular relates to a method, a device, a terminal device, and a computer-readable storage medium for determining the area where a virtual object is occluded.
  • Augmented reality (AR) is a technology that superimposes computer-generated virtual objects or system prompts onto real scenes, effectively extending and augmenting those scenes and allowing users to interact with them.
  • Occlusion consistency requires that virtual objects can occlude the background and can also be occluded by foreground objects, so that the occlusion relationship between the virtual and the real is correct. Only when the front-to-back relationship of virtual objects relative to the real world is handled correctly can users correctly perceive the hierarchical relationship between virtual and real objects in the synthesized space. A wrong virtual-real occlusion relationship easily produces a sense of spatial dislocation and prevents a convincing beyond-reality experience. As the AR research field has matured, two main solutions to the AR occlusion problem have emerged.
  • The first solution is an occlusion method based on 3D model reconstruction. It requires building three-dimensional models of the real objects that may occlude virtual objects in advance, registering those models with the real objects, and then comparing the depth values of the virtual object model and the real object models. Based on the comparison, only the unoccluded part of the virtual object is rendered; the part occluded by the real object is not rendered.
  • The second solution is an occlusion method based on depth calculation. It uses stereo parallax to compute the depth of the real scene in real time, then judges the spatial relationship between the virtual object and the real scene according to the viewpoint position, the superimposed position of the virtual object, and the computed scene depth, and renders the occlusion accordingly.
  • The first method requires reconstructing three-dimensional models of real objects in advance, which involves a huge workload. The second method computes scene depth by stereo vision, which is also computationally expensive, and the depth values must be recalculated whenever the scene changes. In short, existing methods require a large amount of computation to determine the occluded part of a virtual object.
  • the embodiments of the present application provide a method, a device, a terminal device, and a computer-readable storage medium for determining the occluded area of a virtual object, so as to solve the problem in the prior art that it is difficult to quickly determine the occluded area of a virtual object.
  • The first aspect of the embodiments of the present application provides a method for determining the occluded area of a virtual object, including: if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to a preset matching-pair threshold, determining the area where the virtual object is occluded in the next frame of the current frame according to the area where the virtual object is occluded in the current frame.
  • A second aspect of the embodiments of the present application provides an apparatus for determining the area where a virtual object is occluded, including:
  • a scene three-dimensional map construction unit, configured to construct a three-dimensional map of the scene of the current frame according to the feature points of the current frame and the corresponding depth information;
  • a virtual object display unit, configured to display a designated virtual object at the position corresponding to a click operation if the user's click operation on the three-dimensional map of the scene is detected;
  • a three-dimensional scene model construction unit, configured to construct a three-dimensional scene model according to the feature point information of the current frame;
  • a depth value comparison unit, configured to compare the depth values of the three-dimensional scene model and the model of the virtual object;
  • a current-frame occluded area determining unit, configured to determine the area where the virtual object is occluded in the current frame according to the comparison result;
  • a next-frame occluded area determining unit, configured to: if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to the preset matching-pair threshold, determine the area where the virtual object is occluded in the next frame of the current frame according to the area where the virtual object is occluded in the current frame.
  • A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the computer program, the steps of the method described in the first aspect are implemented.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the method described in the first aspect are implemented.
  • In addition, a computer program product is provided; when the computer program product runs on a terminal device, the terminal device executes the method described in the first aspect.
  • Since the three-dimensional map of the scene of the current frame is constructed from the feature points of the current frame and the corresponding depth information, rather than from all the information of the current frame, the amount of image data involved in the construction is reduced, which improves the speed of constructing the three-dimensional map of the scene of the current frame.
  • Moreover, if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to the preset matching-pair threshold, the area where the virtual object is occluded in the next frame is determined from the area where it is occluded in the current frame. There is therefore no need to process all the image data of the next frame, which reduces the data involved in the calculation and greatly improves the speed of computing the occluded area of the virtual object in the next frame.
  • FIG. 1 is a flowchart of a first method for determining an occluded area of a virtual object according to an embodiment of the present application;
  • FIG. 2 is a flowchart of a second method for determining an occluded area of a virtual object according to an embodiment of the present application;
  • FIG. 3 is a flowchart of a third method for determining an occluded area of a virtual object according to an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of a device for determining an area where a virtual object is occluded according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • As used herein, the term "if" can be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting".
  • Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" can be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
  • The terminal devices described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers with touch-sensitive surfaces (for example, touch-screen displays and/or touch pads). It should also be understood that in some embodiments the device is not a portable communication device but a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or touch pad).
  • terminal devices including displays and touch-sensitive surfaces are described.
  • the terminal device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
  • The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk burning application, a spreadsheet application, a game application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
  • The various applications that can be executed on the terminal device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within a given application. In this way, the common physical architecture of the terminal (for example, the touch-sensitive surface) can support various applications.
  • FIG. 1 shows a flowchart of a first method for determining an occluded area of a virtual object according to an embodiment of the present application. The method is applied to a terminal device equipped with a camera for acquiring image data and a camera for acquiring depth information; for example, the camera that acquires image data may be an RGB camera, and the camera that acquires depth information may be a time-of-flight (TOF) camera.
  • Step S11 constructing a three-dimensional map of the scene of the current frame according to the feature points of the current frame and the corresponding depth information;
  • Specifically, the feature points of the current frame are extracted, the pose of the current frame relative to the reference frame is estimated from the extracted feature points, and the pose is then registered with the depth map obtained from the TOF camera to generate a three-dimensional map of the scene registered with the current frame.
  • In practice, a reference frame is first selected and a reference-frame coordinate system is established. For example, after the first frame of image data is captured, the first frame is used as the reference frame, and the coordinate system established from the first frame is the reference-frame coordinate system. During subsequent shooting, all captured image frames are converted into the reference-frame coordinate system.
  • Specifically, the current frame is first converted into the coordinate system of the previous frame, and then, using the saved rotation relationship between the previous frame and the reference frame, the current frame is converted into the reference-frame coordinate system.
  • For example, the first frame is the reference frame, the second frame has already been converted into the reference coordinate system of the first frame, and the corresponding rotation relationship has been saved. When the current frame is the third frame, only two conversions are required: the third frame is converted into the coordinate system of the second frame, and then, according to the saved rotation relationship from the second frame to the first frame, the third frame is converted into the reference coordinate system of the first frame. Similarly, when the current frame is the fourth frame, the fourth frame is converted into the coordinate system of the third frame and then, according to the saved rotation relationship from the third frame to the first frame, into the reference coordinate system of the first frame; a minimal sketch of this chaining is given below.
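  • The following sketch (not part of the patent text; all names are illustrative assumptions) shows this chaining with homogeneous transforms: each new frame only needs its pose relative to the previous frame plus the saved previous-frame-to-reference transform.

```python
# Illustrative sketch: chaining per-frame poses so that each new frame is registered to the
# reference-frame coordinate system without re-estimating the full transform each time.
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_to_reference(T_ref_prev, T_prev_curr):
    """
    T_ref_prev: saved transform taking previous-frame coordinates into the reference frame.
    T_prev_curr: transform of the current frame relative to the previous frame,
                 estimated from matched feature points.
    Returns the transform taking current-frame coordinates into the reference frame
    (this result is saved for use when the next frame arrives).
    """
    return T_ref_prev @ T_prev_curr

# Example: the third frame is registered via the saved second-to-first transform.
T_1_2 = to_homogeneous(np.eye(3), np.array([0.10, 0.0, 0.0]))   # saved: frame 2 -> frame 1 (reference)
T_2_3 = to_homogeneous(np.eye(3), np.array([0.05, 0.0, 0.02]))  # estimated: frame 3 -> frame 2
T_1_3 = chain_to_reference(T_1_2, T_2_3)                        # frame 3 -> reference frame
```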
  • Of course, in practice the first frame is not necessarily selected as the reference frame; instead, the frame with the largest disparity relative to the current frame may be selected from the collected image frames and used as the reference frame.
  • When the current frame is the first frame, the three-dimensional map of the scene of the first frame is generated from the position information and the depth information of the feature points of the first frame.
  • In order to present a three-dimensional map of the scene that contains more feature points, after multiple sequential frames have been obtained, the three-dimensional map of the scene can be constructed from the feature points of those sequential frames and the corresponding depth information.
  • the depth map includes depth information, such as depth value, as well as position information.
  • the three-dimensional map of the scene is a dense point cloud map.
  • The feature points extracted from the current frame may be ORB (Oriented FAST and Rotated BRIEF) feature points, produced by an algorithm for fast feature point extraction and description. They may also be SIFT (Scale-Invariant Feature Transform) feature points or another type, which is not limited here; a minimal extraction sketch follows.
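  • A minimal sketch of ORB extraction with OpenCV, given only as an illustration; the patent does not prescribe a particular library or parameter set.

```python
# Illustrative ORB feature extraction for the current frame.
import cv2

def extract_orb_features(frame_bgr, n_features=500):
    """Detect ORB keypoints and compute their descriptors on a grayscale copy of the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors  # 2D positions plus binary descriptors for matching/tracking
```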
  • Because the pose is estimated from the extracted feature points, that is, estimated in software, it may contain a certain error. Data obtained from hardware can therefore be used to correct the estimated pose: specifically, the extracted feature points are combined with the data obtained by the inertial measurement unit (IMU) in a tightly coupled nonlinear optimization to obtain the corrected pose.
  • Further, a three-dimensional map of the scene of the current frame is constructed only when the current frame is a key frame. Specifically, the current frame is compared with the previous frame; if the pixel difference (i.e., the disparity) between the current frame and the previous frame is less than or equal to a preset threshold, the current frame is judged not to be a key frame and is discarded. Otherwise, the current frame is judged to be a key frame, its feature points are extracted, the pose of the current frame (i.e., the current key frame) relative to the previous key frame is estimated from those feature points, and the corrected pose is registered with the depth map to generate the three-dimensional map of the scene of the current frame (i.e., the current key frame); a minimal key-frame check is sketched below.
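  • A hedged sketch of the key-frame test, assuming a mean-absolute-difference measure for the "pixel difference"; the patent only requires a comparison against a preset threshold.

```python
# Illustrative key-frame decision: keep the current frame only when it differs enough
# from the previous frame. The MAD measure and threshold value are assumptions.
import cv2
import numpy as np

def is_key_frame(prev_gray, curr_gray, diff_threshold=8.0):
    """Return True if the current frame differs enough from the previous frame to be a key frame."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    mean_diff = float(np.mean(diff))
    return mean_diff > diff_threshold  # <= threshold: the frame is not a key frame and is discarded
```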
  • Step S12 if a click operation issued by the user on the three-dimensional map of the scene is detected, display a designated virtual object at a position corresponding to the click operation;
  • Specifically, the constructed three-dimensional map of the scene is displayed, and if the user is detected to click on a certain position of the map, anchor (Anchors) coordinates are generated at that position. Once the anchor coordinates are generated, the virtual object effectively has a fixed position in the real world; it will not drift afterwards and can fit well into the real scene.
  • the virtual object is presented by a predefined virtual object model, and the model of the virtual object is a three-dimensional model.
  • Further, it is detected whether the position corresponding to the click operation is a plane; if it is not a plane, the user is prompted that the current position is not a plane. Furthermore, the plane information in the displayed three-dimensional map of the scene can be detected, and the user is advised to click on the detected plane so that the virtual object is displayed on a plane and better matches the real situation.
  • Step S13 constructing a three-dimensional scene model according to the feature point information of the current frame
  • the feature point information includes two-dimensional coordinate information, feature descriptors, and so on.
  • the constructed three-dimensional scene model is sparse, which is beneficial to reduce the amount of calculation required when constructing the model.
  • Step S14 comparing the depth value of the three-dimensional scene model and the model of the virtual object
  • Specifically, the depth values of the three-dimensional scene model and of the virtual object model are obtained, and then the depth values of the two models that belong to the same viewing angle are compared.
  • Step S15 Determine the area of the virtual object that is occluded in the current frame according to the comparison result
  • If the depth value of the virtual object is less than the depth value of the three-dimensional scene model, the virtual object lies in front of the scene model, i.e. it is not occluded; otherwise, it is occluded. Further, only the unoccluded area of the virtual object is rendered, which reduces the area to be rendered and improves the rendering speed; a per-pixel sketch of this comparison is given below.
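  • An illustrative per-pixel version of this comparison; the array names and the sentinel value for "no geometry" are assumptions, not from the patent.

```python
# Compare the rendered depth of the virtual object model against the depth of the
# three-dimensional scene model along the same viewing rays.
import numpy as np

def occlusion_mask(virtual_depth, scene_depth, invalid=np.inf):
    """
    virtual_depth, scene_depth: HxW arrays of depths along the same viewing rays.
    Returns a boolean mask that is True where the virtual object is occluded
    (its depth is not smaller than the real scene's depth).
    """
    has_virtual = virtual_depth != invalid
    has_scene = scene_depth != invalid
    occluded = has_virtual & has_scene & (virtual_depth >= scene_depth)
    return occluded  # render only the pixels where has_virtual is True and occluded is False
```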
  • Step S16 if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to the preset matching-pair threshold, determine the area where the virtual object is occluded in the next frame of the current frame according to the area where the virtual object is occluded in the current frame.
  • For example, if 50 feature points are detected in the occluded area of the virtual object in the current frame, these 50 feature points are tracked in the next frame. If all 50 can still be tracked in the next frame, the number of matched pairs is 50; if only 20 are tracked, the number of matched pairs is 20.
  • If the number of matched pairs is greater than or equal to the preset matching-pair threshold, the difference between the next frame and the current frame is small. In this case, the area where the virtual object is occluded in the next frame is determined directly from the area where it is occluded in the current frame.
  • Conversely, if the number of feature-point pairs matched between the next frame and the current frame is less than the preset matching-pair threshold, the difference between the two frames is large. In that case, still deriving the occluded area of the next frame from the current frame may be inaccurate; to keep the result accurate, the occluded area must be recalculated from the comparison of the depth values of a new three-dimensional scene model and the virtual object model.
  • That is, a new three-dimensional scene model is constructed from the feature point information of the next frame, the depth values of the new scene model and of the virtual object model are compared, and the area where the virtual object is occluded in the next frame is determined from the comparison result; a minimal decision sketch follows.
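  • A sketch of this decision rule; Lucas-Kanade optical flow is used here purely as an example tracker, since the patent only requires a count of matched feature-point pairs, and the threshold value is an assumption.

```python
# Track the feature points of the occluded area into the next frame and count survivors.
import cv2
import numpy as np

def reuse_occluded_area(prev_gray, next_gray, occluded_pts, match_threshold=30):
    """
    occluded_pts: Nx1x2 float32 array of feature-point locations inside the occluded area
    of the current frame. Returns True if the occluded area of the next frame may be derived
    from the current frame, False if it must be recomputed from a new scene model.
    """
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, occluded_pts, None)
    matched_pairs = int(np.count_nonzero(status))
    return matched_pairs >= match_threshold
```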
  • In this embodiment, the three-dimensional map of the scene of the current frame is constructed according to the feature points of the current frame and the corresponding depth information; if a user's click operation on the three-dimensional map of the scene is detected, the designated virtual object is displayed at the position corresponding to the click operation; a three-dimensional scene model is then constructed according to the feature point information of the current frame, the depth values of the three-dimensional scene model and of the model of the virtual object are compared, and the area where the virtual object is occluded in the current frame is determined according to the comparison result; finally, the area where the virtual object is occluded in the next frame of the current frame is determined according to the area where the virtual object is occluded in the current frame.
  • Since the three-dimensional map of the scene of the current frame is constructed from the feature points of the current frame and the corresponding depth information, rather than from all the information of the current frame, the amount of image data involved in the construction is reduced, which improves the speed of constructing the three-dimensional map of the scene of the current frame.
  • Moreover, if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to the preset matching-pair threshold, the area where the virtual object is occluded in the next frame of the current frame is determined from the area where it is occluded in the current frame. There is therefore no need to process all the image data of the next frame, which reduces the data involved in the calculation and greatly improves the speed of computing the occluded area of the virtual object in the next frame.
  • FIG. 2 shows a flowchart of a second method for determining an occluded area of a virtual object according to an embodiment of the present application, wherein steps S21 to S25 are the same as steps S11 to S15 in FIG. 1 and will not be repeated here:
  • Step S21 constructing a three-dimensional map of the scene of the current frame according to the feature points of the current frame and the corresponding depth information;
  • Step S22 if a click operation issued by the user on the three-dimensional map of the scene is detected, display a designated virtual object at a position corresponding to the click operation;
  • Step S23 constructing a three-dimensional scene model according to the feature point information of the current frame
  • Step S24 comparing the depth value of the three-dimensional scene model and the model of the virtual object
  • Step S25 Determine, according to the comparison result, the area where the virtual object is occluded in the current frame;
  • Step S26 if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to the preset matching-pair threshold, obtain an area enlargement value and, according to the area enlargement value and the area where the virtual object is occluded in the current frame, determine an enlarged area in the next frame of the current frame, the area enlargement value being greater than or equal to 1;
  • The area enlargement value is usually set to be greater than 1, while ensuring that the enlarged area is smaller than the entire image area of the next frame. Specifically, the two-dimensional contour of the occluded area of the virtual object in the current frame is determined from that occluded area, the corresponding region is located in the next frame of the current frame (hereinafter referred to as the next frame), and the located region is then enlarged according to the area enlargement value to obtain the enlarged area; a bounding-box sketch of this step follows.
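  • A hedged sketch of the enlargement step, simplifying the located contour to its bounding box; the box shape and the scaling about its centre are assumptions, since the patent does not specify the region shape.

```python
# Enlarge the region located in the next frame by the area enlargement value,
# keeping the result inside the image.
import numpy as np

def enlarge_region(contour_xy, scale, image_w, image_h):
    """contour_xy: Nx2 array of 2D contour points; scale: area enlargement value (>= 1)."""
    x_min, y_min = contour_xy.min(axis=0)
    x_max, y_max = contour_xy.max(axis=0)
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) / 2.0 * scale
    half_h = (y_max - y_min) / 2.0 * scale
    x0 = max(0, int(cx - half_w))
    y0 = max(0, int(cy - half_h))
    x1 = min(image_w - 1, int(cx + half_w))
    y1 = min(image_h - 1, int(cy + half_h))
    return x0, y0, x1, y1  # enlarged region, clipped so it stays inside the next frame
```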
  • Step S27 constructing a new three-dimensional scene model according to the feature point information of the enlarged area
  • Specifically, feature points are extracted from the enlarged area determined in the next frame, and a new sparse three-dimensional scene model is constructed from the feature point information corresponding to the extracted feature points. The specific construction process is similar to step S13 and will not be repeated here.
  • Step S28 comparing the depth value of the new three-dimensional scene model with the model of the virtual object
  • The specific comparison process of this step is similar to step S14 and will not be repeated here.
  • Step S29 Determine the occluded area of the virtual object in the frame next to the current frame according to the comparison result.
  • In this embodiment, the enlarged area determined in the next frame is greater than or equal to the area where the virtual object is occluded in the current frame, which ensures that all occluded areas of the virtual object in the next frame can be determined.
  • Meanwhile, keeping the enlarged area smaller than the entire image area of the next frame ensures that the amount of image data involved in the calculation is less than that of the whole frame, which greatly reduces the amount of computation and increases the speed at which the occluded area of the virtual object in the next frame is determined.
  • In some embodiments, before step S26 the method further includes: obtaining the rotation angle of the camera from the current frame to the next frame of the current frame and determining the area enlargement value. Specifically, the rotation angle of the camera from the current frame to the next frame is obtained, the virtual object and the three-dimensional scene model are projected onto the same plane according to the rotation angle to obtain the two-dimensional contour of the scene corresponding to the three-dimensional scene model and the two-dimensional contour of the object corresponding to the model of the virtual object, the intersection of the scene contour and the object contour is determined, and the area enlargement value is determined according to the intersection area and the area where the virtual object is occluded in the current frame.
  • the rotation angle of the RGB camera from the current frame to the next frame can be obtained through the IMU data.
  • Specifically, a two-dimensional projection area can be determined from the area where the virtual object is occluded in the current frame; the intersection area is then compared with this projection area to obtain a ratio, and the area enlargement value is determined from that ratio, for example by rounding the ratio up and adding 1 and using the sum as the area enlargement value. The area enlargement value can also be set in other ways, which is not limited here; one possible reading of this rule is sketched below.
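  • One possible reading of the rounding rule above; this is an assumption, since the wording allows variants.

```python
# Area enlargement value from the ratio of the intersection area to the projected
# occluded area: round the ratio up and add 1.
import math

def area_enlargement_value(intersection_area, projected_occluded_area):
    if projected_occluded_area <= 0:
        return 1.0  # degenerate case: no occluded projection, keep the minimum allowed value
    ratio = intersection_area / projected_occluded_area
    return math.ceil(ratio) + 1
```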
  • In other embodiments, obtaining the rotation angle of the camera from the current frame to the next frame of the current frame and determining the area enlargement value is specifically: obtaining the rotation angle of the camera from the current frame to the next frame of the current frame; if the obtained rotation angle is greater than or equal to a preset angle threshold, increasing the default area enlargement value by a certain amount and using the increased value as the final area enlargement value; otherwise, decreasing the default area enlargement value by a certain amount and using the decreased value as the final area enlargement value.
  • When the camera rotates, the occluded area of the virtual object changes accordingly, so taking the camera's rotation angle into account makes the determined area enlargement value more accurate; a minimal sketch of this adjustment follows.
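  • A minimal sketch of the threshold-based adjustment; the default value, step size, and angle threshold are illustrative assumptions.

```python
# Nudge a default enlargement value up or down depending on how far the camera rotated.
def adjust_enlargement_by_rotation(rotation_deg, angle_threshold_deg=5.0,
                                   default_value=1.5, step=0.5):
    if rotation_deg >= angle_threshold_deg:
        return default_value + step        # large rotation: search a wider region
    return max(1.0, default_value - step)  # small rotation: a tighter region suffices
```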
  • step S13 is specifically:
  • A1 Determine the three-dimensional coordinates of the feature points of the current frame according to the feature point information of the current frame and a pre-calibrated external parameter matrix
  • the external parameter matrix refers to a matrix including the positional relationship between the RGB camera and the TOF camera.
  • A2 Construct a three-dimensional scene model according to the three-dimensional coordinates of the characteristic points.
  • Specifically, the three-dimensional coordinates are triangulated (for example, by Poisson reconstruction) to construct a sparse three-dimensional scene model from the perspective of the current frame.
  • Since the external parameter matrix is calibrated in advance according to the positional relationship between the RGB camera and the TOF camera, once the feature point information is acquired it can be combined with the external parameter matrix to quickly determine the three-dimensional coordinates of the feature points, which improves the speed of constructing the three-dimensional scene model; a back-projection sketch is given below.
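  • A hedged sketch of step A1: feature-point pixels are lifted to 3D by sampling a depth map assumed to be already aligned to the RGB view via the pre-calibrated external parameter matrix, then back-projecting through the RGB intrinsics. The exact pipeline is not spelled out in the patent, so the alignment assumption and all names here are illustrative.

```python
# Back-project feature-point pixels to 3D points in the RGB camera frame.
import numpy as np

def backproject_feature_points(pts_uv, depth_aligned, K_rgb):
    """
    pts_uv: Nx2 pixel coordinates of feature points in the RGB image.
    depth_aligned: HxW depth map assumed already warped into the RGB view using the
                   pre-calibrated external parameter matrix between the RGB and TOF cameras.
    K_rgb: 3x3 intrinsic matrix of the RGB camera.
    Returns an Mx3 array of 3D points; feature points without a valid depth are skipped.
    """
    fx, fy = K_rgb[0, 0], K_rgb[1, 1]
    cx, cy = K_rgb[0, 2], K_rgb[1, 2]
    pts_3d = []
    for u, v in pts_uv:
        z = float(depth_aligned[int(round(v)), int(round(u))])
        if z <= 0:  # no valid depth sample for this feature point
            continue
        pts_3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
    return np.asarray(pts_3d)
```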
  • In order to improve the accuracy of the constructed three-dimensional scene model, before step S13 (or step S23) the method further includes determining whether the number of feature points in the current frame is less than a preset feature-point count threshold. Accordingly, constructing the three-dimensional scene model from the three-dimensional coordinates of the feature points specifically includes: if the number of feature points in the current frame is less than the preset threshold, acquiring the depth information of the depth map of the current frame, extracting depth feature point data from that depth information, and constructing the three-dimensional scene model from the three-dimensional coordinates of the feature points together with the depth feature point data. That is, dense depth information is obtained from the depth map produced by the TOF camera, depth feature point data are extracted from it, and the three-dimensional scene model is then built from the feature-point coordinates and the depth feature point data. Because the depth feature point data supplement the model when few feature points are extracted, the accuracy of the constructed three-dimensional scene model is improved; a minimal sketch of this fallback is shown below.
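  • A minimal sketch of this fallback, with the uniform sampling stride and the feature-count threshold as illustrative assumptions.

```python
# When too few feature points were extracted, supplement the sparse 3D feature points
# with points sampled from the dense depth map before building the scene model.
import numpy as np

def supplement_with_depth_points(feature_pts_3d, depth_aligned, K_rgb,
                                 min_feature_count=100, stride=16):
    """Return the point set used to build the 3D scene model."""
    if len(feature_pts_3d) >= min_feature_count:
        return feature_pts_3d  # enough sparse feature points, no supplementation needed
    fx, fy = K_rgb[0, 0], K_rgb[1, 1]
    cx, cy = K_rgb[0, 2], K_rgb[1, 2]
    extra = []
    h, w = depth_aligned.shape
    for v in range(0, h, stride):
        for u in range(0, w, stride):
            z = float(depth_aligned[v, u])
            if z > 0:
                extra.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
    if not extra:
        return feature_pts_3d
    return np.vstack([np.asarray(feature_pts_3d).reshape(-1, 3), np.asarray(extra)])
```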
  • FIG. 3 shows a flowchart of a third method for determining an occluded area of a virtual object according to an embodiment of the present application, wherein steps S31 to S34 are the same as steps S11 to S14 in FIG. 1 and will not be repeated here:
  • Step S31 constructing a three-dimensional map of the scene of the current frame according to the feature points of the current frame and the corresponding depth information;
  • Step S32 if a click operation issued by the user on the three-dimensional map of the scene is detected, display a designated virtual object at a position corresponding to the click operation;
  • Step S33 constructing a three-dimensional scene model according to the feature point information of the current frame
  • Step S34 comparing the depth value of the three-dimensional scene model and the model of the virtual object
  • Step S35 if all the depth values of the virtual object model are less than the depth values of the three-dimensional scene model, then it is determined that the virtual object is not occluded in the current frame;
  • Step S36 if the depth values of the virtual object model are not all less than the depth values of the three-dimensional scene model, project both the three-dimensional scene model and the virtual object model onto the same projection plane to obtain the two-dimensional contour of the scene corresponding to the three-dimensional scene model and the two-dimensional contour of the object corresponding to the model of the virtual object;
  • Step S37 if the two-dimensional contour of the object lies entirely within the two-dimensional contour of the scene, it is determined that the virtual object is completely occluded in the current frame; if the two-dimensional contour of the object partially intersects the two-dimensional contour of the scene, it is determined that the virtual object area corresponding to the intersecting area is the area where the virtual object is occluded in the current frame.
  • That is, the intersection area (a two-dimensional area) is obtained from the two-dimensional contour of the object and the two-dimensional contour of the scene, the area corresponding to this intersection (a three-dimensional area) is found on the virtual object, and the found area is taken as the occluded area of the virtual object in the current frame.
  • Since the two-dimensional data are obtained by projecting onto the same projection plane, and computing with two-dimensional data requires less work than computing with three-dimensional data, the above method improves the speed of determining the area where the virtual object is occluded; a contour-intersection sketch follows.
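  • A hedged 2D sketch of steps S36/S37 once the projected contours are available; Shapely is used here only as a convenient polygon library, since the patent names none.

```python
# Reduce the occlusion test to polygon containment/intersection in the projection plane.
from shapely.geometry import Polygon

def occlusion_from_contours(object_contour_xy, scene_contour_xy):
    """
    object_contour_xy / scene_contour_xy: lists of (x, y) vertices of the projected 2D contours.
    Returns (state, intersection_polygon), where state is 'fully_occluded',
    'partially_occluded', or 'not_occluded'.
    """
    obj = Polygon(object_contour_xy)
    scene = Polygon(scene_contour_xy)
    if obj.within(scene):
        return "fully_occluded", obj
    inter = obj.intersection(scene)
    if not inter.is_empty and inter.area > 0:
        # the virtual-object region corresponding to this 2D intersection is the occluded area
        return "partially_occluded", inter
    return "not_occluded", None
```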
  • Step S38 if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to the preset matching-pair threshold, determine the area where the virtual object is occluded in the next frame of the current frame according to the area where the virtual object is occluded in the current frame.
  • Step S38 can also be further refined in the same way as in FIG. 2, which will not be repeated here.
  • In some embodiments, before step S36 the method further includes: establishing a projection plane perpendicular to a connecting line, the connecting line being the line between the center of the virtual object model and the camera viewpoint. Specifically, the camera viewpoint is the viewpoint of the RGB camera, for example the point corresponding to the coordinates (0, 0).
  • In this case, step S36 specifically projects both the three-dimensional scene model and the virtual object model onto the established projection plane according to the internal parameter matrix of the camera, to obtain the corresponding two-dimensional contours.
  • The internal parameter matrix of the camera refers to the internal parameter matrix of the RGB camera, which includes the focal length, the principal point, and so on.
  • Because the projection plane is established in this way, it can be determined quickly without searching for other parameters, which increases the speed of determining the projection plane; a minimal construction of such a plane is sketched below.
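  • An illustrative construction of the projection plane, assuming it passes through the model centre; the patent only requires perpendicularity to the connecting line, so the placement is an assumption.

```python
# Plane perpendicular to the line from the camera viewpoint to the virtual object model centre.
import numpy as np

def projection_plane(model_center, camera_viewpoint=np.zeros(3)):
    """Return (unit normal, d) of the plane {x : n.x = d} perpendicular to the viewpoint-centre line."""
    center = np.asarray(model_center, dtype=float)
    direction = center - np.asarray(camera_viewpoint, dtype=float)
    n = direction / np.linalg.norm(direction)
    d = float(n @ center)  # plane placed through the model centre (an assumption)
    return n, d
```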
  • FIG. 4 shows a schematic structural diagram of a device for determining an occluded area of a virtual object provided by an embodiment of the present application. The device is applied to a terminal device that is provided with a camera for acquiring image data and a camera for acquiring depth information; the camera that acquires image data may be an RGB camera, and the camera that acquires depth information may be a time-of-flight (TOF) camera. The following description uses an RGB camera and a TOF camera as examples; for ease of description, only the parts related to the embodiments of the present application are shown.
  • The device 4 for determining the occluded area of the virtual object includes: a scene three-dimensional map construction unit 41, a virtual object display unit 42, a three-dimensional scene model construction unit 43, a depth value comparison unit 44, a current-frame occluded area determination unit 45, and a next-frame occluded area determination unit 46. Among them:
  • the scene three-dimensional map construction unit 41 is configured to construct the scene three-dimensional map of the current frame according to the feature points of the current frame and corresponding depth information;
  • Specifically, after the current frame of the image is acquired by the RGB camera, the feature points of the current frame are extracted, the pose of the current frame relative to the reference frame is estimated from the extracted feature points, and the pose is then registered with the depth map obtained from the TOF camera to generate a three-dimensional map of the scene registered with the current frame.
  • In practice, a reference frame is first selected and a reference-frame coordinate system is established. For example, after the first frame of image data is captured, the first frame is used as the reference frame, and the coordinate system established from the first frame is the reference-frame coordinate system. During subsequent shooting, all captured image frames are converted into the reference-frame coordinate system.
  • Specifically, the current frame is first converted into the coordinate system of the previous frame, and then, using the saved rotation relationship between the previous frame and the reference frame, the current frame is converted into the reference-frame coordinate system.
  • Of course, in practice the first frame is not necessarily selected as the reference frame; instead, the frame with the largest disparity relative to the current frame may be selected from the collected image frames and used as the reference frame. When the current frame is the first frame, the three-dimensional map of the scene of the first frame is generated from the position information and the depth information of the feature points of the first frame.
  • the depth map includes depth information, such as depth value, as well as position information.
  • the three-dimensional map of the scene is a dense point cloud map.
  • the extracted feature point type of the current frame may be an ORB feature point or a SIFT feature point, etc., which is not limited here.
  • Because the pose is estimated from the extracted feature points, that is, estimated in software, it may contain a certain error. Data obtained from hardware can therefore be used to correct the estimated pose: the extracted feature points are combined with the data obtained by the IMU in a tightly coupled nonlinear optimization to obtain the corrected pose.
  • a three-dimensional map of the scene of the current frame is constructed only when the current frame is a key frame.
  • the virtual object display unit 42 is configured to display a designated virtual object at a position corresponding to the click operation if a click operation issued by the user on the three-dimensional map of the scene is detected;
  • the device 4 for determining the occluded area of the virtual object further includes:
  • a plane detection unit, used to detect whether the position corresponding to the click operation is a plane and, if it is not a plane, to prompt the user that the current position is not a plane. Further, the plane information in the displayed three-dimensional map of the scene (for example, the plane of a table) can be detected, and the user is advised to click on the detected plane so that the virtual object is displayed on a plane and better matches the real situation.
  • the three-dimensional scene model construction unit 43 is configured to construct a three-dimensional scene model according to the feature point information of the current frame;
  • the feature point information includes two-dimensional coordinate information, feature descriptors, and so on.
  • the depth value comparing unit 44 is configured to compare the depth value of the three-dimensional scene model and the model of the virtual object;
  • the occluded area determining unit 45 of the current frame is configured to determine the occluded area of the virtual object in the current frame according to the comparison result;
  • The occluded area determining unit 46 of the next frame is configured to: if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to the preset matching-pair threshold, determine the area where the virtual object is occluded in the next frame of the current frame according to the area where the virtual object is occluded in the current frame.
  • the device 4 for determining the occluded area of the virtual object further includes:
  • The new scene occlusion area determining unit is configured to: if the number of feature-point pairs matched between the feature points of the next frame of the current frame and the feature points of the current frame is less than the preset matching-pair threshold, construct a new three-dimensional scene model according to the feature point information of the next frame, compare the depth values of the new three-dimensional scene model and of the model of the virtual object, and determine the area where the virtual object is occluded in the next frame according to the comparison result.
  • Since the three-dimensional map of the scene of the current frame is constructed from the feature points of the current frame and the corresponding depth information rather than from all the information of the current frame, the amount of image data involved in the construction process is reduced, thereby increasing the speed of constructing the three-dimensional map of the scene of the current frame.
  • Moreover, if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to the preset matching-pair threshold, the area where the virtual object is occluded in the next frame of the current frame is determined from the area where it is occluded in the current frame; there is therefore no need to process all the image data of the next frame, which reduces the data involved in the calculation and greatly improves the speed of computing the occluded area of the virtual object in the next frame.
  • the occluded area determining unit 46 in the next frame includes:
  • an area enlargement value acquisition module, configured to acquire the area enlargement value and determine the enlarged area in the next frame of the current frame according to the area enlargement value and the area where the virtual object is occluded in the current frame, the area enlargement value being greater than or equal to 1;
  • The area enlargement value is usually set to be greater than 1, while ensuring that the enlarged area is smaller than the entire image area of the next frame.
  • a new 3D scene model construction module which is used to construct a new 3D scene model according to the feature point information of the enlarged area;
  • the depth value comparison module of the new 3D scene model is used to compare the depth value of the new 3D scene model with the model of the virtual object;
  • the occluded area determination module in the next frame is configured to determine the occluded area of the virtual object in the next frame of the current frame according to the comparison result.
  • In this embodiment, the enlarged area determined in the next frame is greater than or equal to the area where the virtual object is occluded in the current frame, which ensures that all occluded areas of the virtual object in the next frame can be determined.
  • Meanwhile, keeping the enlarged area smaller than the entire image area of the next frame ensures that the amount of image data involved in the calculation is less than that of the whole frame, which greatly reduces the amount of computation and increases the speed at which the occluded area of the virtual object in the next frame is determined.
  • the device 4 for determining the occluded area of the virtual object further includes:
  • the area enlargement value determining unit is used to obtain the rotation angle of the camera from the current frame to the next frame of the current frame, and determine the area enlargement value.
  • In some embodiments, the area enlargement value determining unit is specifically configured to: obtain the rotation angle of the camera from the current frame to the next frame of the current frame, project the virtual object and the three-dimensional scene model onto the same plane according to the rotation angle to obtain the two-dimensional contour of the scene corresponding to the three-dimensional scene model and the two-dimensional contour of the object corresponding to the model of the virtual object, determine the intersection of the scene contour and the object contour, and determine the area enlargement value according to the intersection area and the area where the virtual object is occluded in the current frame.
  • the rotation angle of the RGB camera from the current frame to the next frame can be obtained through the IMU data.
  • Specifically, a two-dimensional projection area can be determined from the area where the virtual object is occluded in the current frame; the intersection area is then compared with this projection area to obtain a ratio, and the area enlargement value is determined from that ratio, for example by rounding the ratio up and adding 1 and using the sum as the area enlargement value. The area enlargement value can also be set in other ways, which is not limited here.
  • In other embodiments, the area enlargement value determining unit is specifically configured to: obtain the rotation angle of the camera from the current frame to the next frame of the current frame; if the obtained rotation angle is greater than or equal to a preset angle threshold, increase the default area enlargement value by a certain amount and use the increased value as the final area enlargement value; otherwise, decrease the default area enlargement value by a certain amount and use the decreased value as the final area enlargement value.
  • The three-dimensional scene model construction unit 43 is specifically configured to:
  • Since the depth feature point data are combined to construct the three-dimensional scene model when few feature points are extracted, the accuracy of the constructed three-dimensional scene model is improved.
  • the occluded area determining unit 45 of the current frame includes:
  • the first area determining module is configured to determine that the virtual object is not occluded in the current frame if all the depth values of the virtual object model are less than the depth values of the three-dimensional scene model;
  • the second area determining module is configured to, if the depth values of the virtual object model are not all less than the depth values of the three-dimensional scene model, project both the three-dimensional scene model and the virtual object model onto the same projection plane to obtain the two-dimensional contour of the scene corresponding to the three-dimensional scene model and the two-dimensional contour of the object corresponding to the model of the virtual object;
  • the third area determining module is configured to: if the two-dimensional contour of the object lies entirely within the two-dimensional contour of the scene, determine that the virtual object is completely occluded in the current frame; and if the two-dimensional contour of the object partially intersects the two-dimensional contour of the scene, determine that the virtual object area corresponding to the intersecting area is the area where the virtual object is occluded in the current frame.
  • Since the two-dimensional data are obtained by projecting onto the same projection plane, and computing with two-dimensional data requires less work than computing with three-dimensional data, the above method improves the speed of determining the area where the virtual object is occluded.
  • the device 4 for determining the occluded area of the virtual object further includes:
  • a projection plane establishing unit for establishing a projection plane perpendicular to a connecting line, the connecting line being the line connecting the center of the model of the virtual object and the camera viewpoint;
  • the second area determining module is specifically configured to:
  • The internal parameter matrix of the camera refers to the internal parameter matrix of the RGB camera, which includes the focal length, the principal point, and so on.
  • the projection plane can be quickly determined without looking for other parameters, thereby increasing the speed of determining the projection plane.
  • Fig. 5 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 5 of this embodiment includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and running on the processor 50.
  • the processor 50 implements the steps in the foregoing method embodiments when the computer program 52 is executed, such as steps S11 to S16 shown in FIG. 1.
  • the processor 50 executes the computer program 52, the functions of the modules/units in the foregoing device embodiments, such as the functions of the modules 41 to 46 shown in FIG. 4, are realized.
  • The computer program 52 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 51 and executed by the processor 50 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 52 in the terminal device 5.
  • the computer program 52 can be divided into a scene 3D map construction unit, a virtual object display unit, a 3D scene model construction unit, a depth value comparison unit, a current frame occluded area determination unit, and a next frame occluded area determination unit.
  • the specific functions of each unit are as follows:
  • a scene three-dimensional map construction unit configured to construct a scene three-dimensional map of the current frame according to feature points of the current frame and corresponding depth information
  • the virtual object display unit is configured to display a designated virtual object at a position corresponding to the click operation if a user's click operation on the three-dimensional map of the scene is detected;
  • a three-dimensional scene model construction unit configured to construct a three-dimensional scene model according to the feature point information of the current frame
  • a depth value comparison unit for comparing the depth value of the three-dimensional scene model and the model of the virtual object
  • the occluded area determining unit of the current frame is configured to determine the occluded area of the virtual object in the current frame according to the comparison result;
  • the occluded area determining unit for the next frame is configured to: if the number of feature-point pairs matched, in the next frame of the current frame, for the occluded area of the virtual object in the current frame is greater than or equal to the preset matching-pair threshold, determine the area where the virtual object is occluded in the next frame of the current frame according to the area where the virtual object is occluded in the current frame.
  • the terminal device 5 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 50 and a memory 51.
  • FIG. 5 is only an example of the terminal device 5 and does not constitute a limitation on the terminal device 5; it may include more or fewer components than shown in the figure, a combination of certain components, or different components.
  • the terminal device may also include input and output devices, network access devices, buses, etc.
  • the so-called processor 50 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5.
  • the memory 51 may also be an external storage device of the terminal device 5, for example, a plug-in hard disk equipped on the terminal device 5, a smart memory card (Smart Media Card, SMC), and a Secure Digital (SD) Card, Flash Card, etc. Further, the memory 51 may also include both an internal storage unit of the terminal device 5 and an external storage device.
  • the memory 51 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 51 can also be used to temporarily store data that has been output or will be output.
  • The disclosed apparatus/terminal device and method may be implemented in other ways.
  • The apparatus/terminal device embodiments described above are merely illustrative.
  • For example, the division into modules or units is only a division by logical function; in actual implementation there may be other ways of division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated module/unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, all or part of the processes in the methods of the above embodiments of this application may also be implemented by instructing the relevant hardware through a computer program.
  • The computer program may be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of the foregoing method embodiments.
  • The computer program includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.
  • The content contained in the computer-readable medium may be appropriately added to or removed in accordance with the requirements of legislation and patent practice in the relevant jurisdiction.
  • For example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method, an apparatus and a terminal device for determining an occluded area of a virtual object, applicable to the field of software technology. The method comprises: constructing a scene three-dimensional map of the current frame according to feature points of the current frame and corresponding depth information (S11); if a click operation issued by a user on the scene three-dimensional map is detected, displaying a designated virtual object at the position corresponding to the click operation (S12); constructing a three-dimensional scene model according to the feature point information of the current frame (S13); comparing depth values of the three-dimensional scene model and of the model of the virtual object (S14); determining, according to the comparison result, the area of the virtual object that is occluded in the current frame (S15); and determining, according to the area of the virtual object occluded in the current frame, the area of the virtual object occluded in the next frame (S16). The method can greatly increase the speed of identifying the areas in which a virtual object is occluded across sequential frames.

Description

虚拟物体被遮挡的区域确定方法、装置及终端设备
技术领域
本申请属于软件技术领域,尤其涉及虚拟物体被遮挡的区域确定方法、装置、终端设备及计算机可读存储介质。
背景技术
增强现实(Augmented Reality,AR)是一种能够把计算机产生的虚拟物体或系统提示信息叠加到真实场景中,实现对真实场景有效扩充和增强现实的新技术,并能支持用户与其进行交互。
在使用AR技术的过程中，涉及到遮挡一致性的问题。遮挡一致性要求虚拟物体能够遮挡背景，也能被前景物体遮挡，具备正确的虚实遮挡关系。只有正确地处理虚拟物体在真实世界中的前后位置关系，才能使用户在实时的合成空间中正确感知虚实物体的层次关系。错误的虚实遮挡关系，容易造成感官上的空间位置错乱，不能达到超越现实的感官体验。随着AR研究领域细化发展，在解决AR虚实遮挡方面，目前主要有2种解决方法。
第一种方法:三维建模遮挡法。基于模型重建的遮挡方法需要预先对可能与虚拟物体发生遮挡关系的真实物体进行三维建模,用三维模型覆盖真实物体,然后比较虚实物体模型的深度值,根据比较结果,只渲染虚拟物体未被真实物体遮挡的部分,不渲染虚拟物体被遮挡的部分。
第二种方法:基于深度计算的遮挡方法。基于深度计算的遮挡方法需要使用立体视差实时计算真实场景的深度信息,然后根据视点位置、虚拟物体的叠加位置以及求得的场景深度信息,判断虚拟物体与真实场景的空间位置关系,进行相应的遮挡渲染,从而实现虚实遮挡。
由于第一种方法需要预先重建真实物体的三维模型,因此需要巨大的工作量。而第二种方法,由于需要利用立体视觉方法计算场景的深度,因此计算量也大,且在场景变化时还需要重新计算深度值。
综上可知,现有方法需要较大的计算量才能确定出虚拟物体的被遮挡部分。
申请内容
有鉴于此,本申请实施例提供了虚拟物体被遮挡的区域确定方法、装置、终端设备及计算机可读存储介质,以解决现有技术中难以快速确定出虚拟物体被遮挡的区域的问题。 本申请实施例的第一方面提供了一种虚拟物体被遮挡的区域确定方法,包括:
根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
根据所述当前帧的特征点信息构建三维场景模型;
比较所述三维场景模型与所述虚拟物体的模型的深度值;
根据比较结果确定所述虚拟物体在当前帧被遮挡的区域;
若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域。
本申请实施例的第二方面提供了一种虚拟物体被遮挡的区域确定装置,包括:
场景三维地图构建单元,用于根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
虚拟物体显示单元,用于若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
三维场景模型构建单元,用于根据所述当前帧的特征点信息构建三维场景模型;
深度值比较单元,用于比较所述三维场景模型与所述虚拟物体的模型的深度值;
当前帧被遮挡的区域确定单元,用于根据比较结果确定所述虚拟物体在当前帧被遮挡的区域;
下一帧被遮挡的区域确定单元,用于若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域。
本申请实施例的第三方面提供了一种终端设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如第一方面所述方法的步骤。
本申请实施例的第四方面提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如第一方面所述方法的步骤。
第五方面,提供了一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行上述第一方面中所述的方法。
本申请实施例与现有技术相比存在的有益效果是:
由于根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图,而不是根据当前帧的所有信息构建该当前帧的场景三维地图,因此,减少了构建过程涉及的图 像数据量,从而提高当前帧的场景三维地图的构建速度。并且,由于虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值时,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域,因此,无需参考该下一帧的所有图像数据,减少了参与计算的图像数据,从而极大提高了计算虚拟物体在下一帧被遮挡的区域的速度。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例或示范性技术描述中所需要使用的附图作简单地介绍。
图1是本申请实施例提供的第一种虚拟物体被遮挡的区域确定方法的流程图;
图2是本申请实施例提供的第二种虚拟物体被遮挡的区域确定方法的流程图;
图3是本申请实施例提供的第三种虚拟物体被遮挡的区域确定方法的流程图;
图4是本申请实施例提供的一种虚拟物体被遮挡的区域确定装置的结构示意图;
图5是本申请实施例提供的终端设备的示意图。
具体实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。
为了说明本申请所述的技术方案,下面通过具体实施例来进行说明。
应当理解,当在本说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在此本申请说明书中所使用的术语仅仅是出于描述特定实施例的目的而并不意在限制本申请。如在本申请说明书和所附权利要求书中所使用的那样,除非上下文清楚地指明其它情况,否则单数形式的“一”、“一个”及“该”意在包括复数形式。
还应当进一步理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应 于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。具体实现中,本申请实施例中描述的终端设备包括但不限于诸如具有触摸敏感表面(例如,触摸屏显示器和/或触摸板)的移动电话、膝上型计算机或平板计算机之类的其它便携式设备。还应当理解的是,在某些实施例中,上述设备并非便携式通信设备,而是具有触摸敏感表面(例如,触摸屏显示器和/或触摸板)的台式计算机。
在接下来的讨论中,描述了包括显示器和触摸敏感表面的终端设备。然而,应当理解的是,终端设备可以包括诸如物理键盘、鼠标和/或控制杆的一个或多个其它物理用户接口设备。
终端设备支持各种应用程序,例如以下中的一个或多个:绘图应用程序、演示应用程序、文字处理应用程序、网站创建应用程序、盘刻录应用程序、电子表格应用程序、游戏应用程序、电话应用程序、视频会议应用程序、电子邮件应用程序、即时消息收发应用程序、锻炼支持应用程序、照片管理应用程序、数码相机应用程序、数字摄影机应用程序、web浏览应用程序、数字音乐播放器应用程序和/或数字视频播放器应用程序。
可以在终端设备上执行的各种应用程序可以使用诸如触摸敏感表面的至少一个公共物理用户接口设备。可以在应用程序之间和/或相应应用程序内调整和/或改变触摸敏感表面的一个或多个功能以及终端上显示的相应信息。这样,终端的公共物理架构(例如,触摸敏感表面)可以支持具有对用户而言直观且透明的用户界面的各种应用程序。
实施例:
图1示出了本申请实施例提供的第一种虚拟物体被遮挡的区域确定方法的流程图,该方法应用于终端设备中,该终端设备同时设置了获取图像数据的相机以及获取深度信息的相机,该获取图像数据的相机可以为RGB相机,该获取深度信息的相机可以为飞行时间(Time of flight,TOF)相机,下面以RGB相机和TOF相机为例进行阐述,详述如下:
步骤S11,根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
该步骤中,在通过RGB相机获取图像的当前帧后,提取该当前帧的特征点,根据提取的特征点估算该当前帧相对参考帧的位姿,再配准该位姿与根据TOF获取的深度图,生成配准了当前帧的场景三维地图。在配准之前,先选取参考帧,建立参考帧坐标系,如在拍摄到第一帧图像数据后,将该第一帧作为参考帧,对应地,根据该第一帧建立的坐标系为参考帧坐标系,后续拍摄过程再将拍摄的图像数据帧都转换到该参考帧坐标系下,比如,将当前帧转换到上一帧的坐标系,再根据已经保存的该上一帧与参考帧的旋转关系,将当前帧转换到参考帧坐标系下。例如,假设第一帧为参考帧,且第二帧已经转换到第一帧参考坐标系中,则保存对应的旋转关系,在当前帧为第三帧时,只需要两步转换:将第三帧 转换到第二帧的坐标系,再根据保存的第二帧转换到第一帧的旋转关系,将第三帧转换到第一帧参考坐标系中。同理,在当前帧为第四帧时,将第四帧转换到第三帧的坐标系,再根据保存的第三帧转换到第一帧的旋转关系,将第四帧转换到第一帧参考坐标系中。当然,若在建立参考帧坐标系之前拍摄到多帧图像数据,此时,不一定选取第一帧作为参考帧,而是从采集的多帧图像数据中,选取与当前帧的视差最大的帧作为参考帧。当然,若该当前帧为第一帧,则根据第一帧的特征点的位置信息以及该第一帧的特征点的深度信息生成该第一帧的场景三维地图。
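The chained coordinate transforms just described (current frame → previous frame → reference frame, using the stored previous-to-reference rotation) can be sketched as follows. This is a minimal illustration, not the application's reference implementation; poses are assumed to be 4×4 homogeneous matrices, and the identity values are placeholders.

```python
# Sketch: composing frame-to-frame poses back to the reference-frame coordinate system.
import numpy as np


def to_reference(T_curr_to_prev: np.ndarray, T_prev_to_ref: np.ndarray) -> np.ndarray:
    """Compose 4x4 homogeneous transforms: current frame -> previous frame -> reference frame."""
    return T_prev_to_ref @ T_curr_to_prev


# Example: frame 3 is first taken into frame 2's coordinate system, then into the
# reference frame (frame 1) using the stored frame-2-to-frame-1 transform.
T3_to_2 = np.eye(4)   # placeholder pose estimated from matched feature points
T2_to_1 = np.eye(4)   # placeholder transform stored when frame 2 was processed
T3_to_1 = to_reference(T3_to_2, T2_to_1)
```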
需要指出的是,在实际应用中,为了呈现包括更多特征点的场景三维地图,则可在获取多个序列帧后,再根据该多个序列帧的特征点以及对应的深度信息构建该序列帧的场景三维地图。
其中,深度图包括深度信息,如深度值,也包括位置信息等。此外,为了能够显示地图的更多细节,该场景三维地图为稠密点云地图。
其中，提取的当前帧的特征点类型可以为ORB(Oriented FAST and Rotated BRIEF)特征点，该ORB是一种快速特征点提取和描述的算法。此外，提取的当前帧的特征点类型也可以为尺度不变特征变换(Scale Invariant Feature Transform,SIFT)特征点等，此处不做限定。
在一些实施例中,由于位姿是通过提取的特征点估算,即通过软件方式估算,因此可能存在一定的误差,此时,为了提高得到的位姿的准确性,可结合硬件方式获取的数据对估算的位姿进行修正。比如,结合惯性测量单元(Inertial measurement unit,IMU)获取的数据对提取的特征点进行紧耦合非线性优化,得到修改后的位姿。
在一些实施例中,为了减少计算量,只有在当前帧为关键帧时,才构建该当前帧的场景三维地图。
具体地,将当前帧与上一帧比较,若当前帧与上一帧的像素差(即视差)小于或等于预设阈值,则判定该当前帧不是关键帧,丢弃该当前帧;否则,判定该当前帧为关键帧,提取当前帧的特征点,根据当前帧(即当前的关键帧)的特征点估算该当前帧相对于上一帧(即上一帧关键帧)的位姿,将得到的位姿修正后与深度图配准,生成当前帧(即当前的关键帧)的场景三维地图。
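A minimal sketch of the key-frame test described above, under the assumption that the inter-frame difference is measured as the mean absolute grayscale difference and that the threshold value is chosen freely; neither the measure nor the value is prescribed by the application.

```python
# Sketch: keep the current frame as a key frame only if it differs enough from the previous one.
import numpy as np

PARALLAX_THRESHOLD = 8.0  # assumed threshold on the mean absolute pixel difference


def is_keyframe(curr_gray: np.ndarray, prev_gray: np.ndarray) -> bool:
    diff = np.mean(np.abs(curr_gray.astype(np.float32) - prev_gray.astype(np.float32)))
    return diff > PARALLAX_THRESHOLD
```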
步骤S12,若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
具体地,显示构建的场景三维地图,若检测到用户点击该场景三维地图的某个位置,则在该位置上生成一个锚点(Anchors)坐标。生成锚点坐标后,相当于虚拟物体在现实世 界有了固定位置,此后,虚拟物体就不会发生偏移,可以很好的贴合在现实世界中。其中,虚拟物体通过预先定义的虚拟物体的模型展现,且该虚拟物体的模型是三维模型。
在一些实施例中,为了使得显示的虚拟物体更符合实际情况,则检测所述点击操作对应的位置是否为一个平面,若不是一个平面,则提示用户当前位置不是平面。进一步地,检测显示的场景三维地图中的平面信息(该平面信息可以为桌子的平面等),建议用户在检测到的平面信息发出点击操作,以在平面上显示虚拟物体,从而使得显示的虚拟物体更符合实际情况。
步骤S13,根据所述当前帧的特征点信息构建三维场景模型;
其中,特征点信息包括二维坐标信息,特征描述子等。
该步骤中,若采取少于预设个数阈值的特征点信息构建三维场景模型,则构建的三维场景模型是稀疏的,从而有利于减少构建模型时所需的计算量。
步骤S14,比较所述三维场景模型与所述虚拟物体的模型的深度值;
具体地,分别获取该三维场景模型的深度值和虚拟物体的模型的深度值,再分别比较属于同一视角上的三维场景模型上的深度值和虚拟物体的模型的深度值。
步骤S15,根据比较结果确定所述虚拟物体在当前帧被遮挡的区域;
具体地,若虚拟物体的深度值小于三维场景模型的深度值,表明该虚拟物体在该三维场景模型之前,即该虚拟物体没有被遮挡,相反,表明该虚拟物体被遮挡。进一步地,只对虚拟物体没有被遮挡的区域进行渲染,减少渲染的区域,提高渲染速度。
步骤S16,若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域。
具体地,假设虚拟物体在当前帧被遮挡的区域中检测到50个特征点,则在下一帧跟踪这50个特征点,若在下一帧仍能跟踪到50个特征点,则被匹配的对数为50对,若在下一帧只跟踪到20个特征点,则被匹配的对数为20对。
本实施例中,若被匹配成功的对数大于或等于预设匹配对数阈值,表明该下一帧与当前帧的差异较小,此时,直接根据虚拟物体在当前帧被遮挡的区域确定虚拟物体在下一帧被遮挡的区域。
在一些实施例中,若所述当前帧的下一帧的特征点与所述当前帧的特征点的匹配对数小于预设匹配对数阈值,表明该下一帧与当前帧的差异较大,此时,若仍采用虚拟物体在当前帧被遮挡的区域计算该虚拟物体在下一帧被遮挡的区域很可能是不准确的,因此,为了保证虚拟物体在下一帧被遮挡的区域的计算准确性,需要根据新的三维场景模型的深度 值与虚拟物体的深度值的比较结果重新计算。具体地,根据下一帧的特征点信息构建新的三维场景模型,比较所述新的三维场景模型与所述虚拟物体的模型的深度值,根据比较结果确定所述虚拟物体在下一帧被遮挡的区域。
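One possible way (an assumption for illustration, not the application's prescribed tooling) to obtain the number of matched feature-point pairs discussed above is to detect ORB features inside the occluded area of the current frame and inside the corresponding region of the next frame, then count brute-force Hamming matches against the preset threshold.

```python
# Sketch: count matched ORB feature pairs between two image patches.
import cv2

MATCH_PAIR_THRESHOLD = 30  # assumed value of the preset matching-pair threshold


def enough_matches(curr_patch, next_patch, threshold: int = MATCH_PAIR_THRESHOLD) -> bool:
    orb = cv2.ORB_create()
    _, des_curr = orb.detectAndCompute(curr_patch, None)
    _, des_next = orb.detectAndCompute(next_patch, None)
    if des_curr is None or des_next is None:
        return False  # too few features detected in either patch
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_curr, des_next)
    return len(matches) >= threshold
```

If this test fails, the three-dimensional scene model is rebuilt from the next frame's feature points as described above; if it succeeds, the occluded area of the current frame is carried over to the next frame.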
本申请实施例中,根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图,若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体,再根据所述当前帧的特征点信息构建三维场景模型,比较所述三维场景模型与所述虚拟物体的模型的深度值,根据比较结果确定所述虚拟物体在当前帧被遮挡的区域,最后根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在下一帧被遮挡的区域。由于根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图,而不是根据当前帧的所有信息构建该当前帧的场景三维地图,因此,减少了构建过程涉及的图像数据量,从而提高当前帧的场景三维地图的构建速度。并且,由于在虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值时,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域,因此,无需参考该下一帧的所有图像数据,减少了参与计算的图像数据,从而极大提高了计算虚拟物体在下一帧被遮挡的区域的速度。
图2示出了本申请实施例提供的第二种虚拟物体被遮挡的区域确定方法的流程图,其中,步骤S21~步骤S25与图1中的步骤S11~步骤S15相同,此处不再赘述:
步骤S21,根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
步骤S22,若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
步骤S23,根据所述当前帧的特征点信息构建三维场景模型;
步骤S24,比较所述三维场景模型与所述虚拟物体的模型的深度值;
步骤S25,根据比较结果确定所述虚拟物体在当前帧被遮挡的区域;
步骤S26,若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,获取区域放大值,根据所述区域放大值以及所述虚拟物体在当前帧被遮挡的区域,在所述当前帧的下一帧确定放大后的区域,所述区域放大值大于或等于1;
其中,为了获得更准确的遮挡区域,通常设置区域放大值的取值大于1,且保证放大后的区域小于该下一帧的整个图像区域。
该步骤中,根据虚拟物体在当前帧被遮挡的区域确定该虚拟物体在当前帧被遮挡的二 维轮廓,在该当前帧的下一帧(后续简称下一帧)定位与该二维轮廓对应的区域,再结合区域放大值放大定位后的区域,得到放大后的区域。
步骤S27,根据所述放大后的区域的特征点信息构建新的三维场景模型;
该步骤中，在下一帧确定的放大后的区域中提取特征点，再根据提取的特征点对应的特征点信息构建新的稀疏三维场景模型，具体的构建过程与步骤S13类似，此处不再赘述。
步骤S28,比较所述新的三维场景模型与所述虚拟物体的模型的深度值;
该步骤具体的比较过程与步骤S14类似,此处不再赘述。
步骤S29,根据比较结果确定所述虚拟物体在所述当前帧的下一帧的被遮挡的区域。
本实施例中,由于区域放大值大于或等于1,因此,在下一帧确定的放大后的区域大于或等于虚拟物体在当前帧被遮挡的区域,从而能够尽量保证确定出虚拟物体在下一帧被遮挡的所有区域。此外,设置放大后的区域小于该下一帧的整个图像区域能够保证参与计算的图像数据量小于整帧的图像数据量,从而极大地减少了运算量,提高虚拟物体在下一帧被遮挡的区域的确定速度。
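A minimal sketch of the enlargement step, assuming the occluded area is summarized by the bounding box of its two-dimensional contour, that the region enlargement value k ≥ 1 scales this box about its centre, and that the result is clipped so it never exceeds the next frame's image area; the bounding-box representation is an assumption made for illustration.

```python
# Sketch: enlarge the occluded area's bounding box by factor k and clip it to the image.
def enlarge_region(bbox, k: float, img_w: int, img_h: int):
    x, y, w, h = bbox                       # bounding box of the occluded 2D contour
    cx, cy = x + w / 2.0, y + h / 2.0       # box centre
    new_w, new_h = w * k, h * k             # k >= 1, so the box never shrinks
    x0 = max(0, int(cx - new_w / 2.0))
    y0 = max(0, int(cy - new_h / 2.0))
    x1 = min(img_w, int(cx + new_w / 2.0))
    y1 = min(img_h, int(cy + new_h / 2.0))
    return x0, y0, x1 - x0, y1 - y0         # clipped, enlarged region in the next frame
```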
在一些实施例中,在所述步骤S26之前,包括:
获取相机从当前帧到所述当前帧的下一帧的旋转角度,确定区域放大值。
可选地,所述获取相机从当前帧到所述当前帧的下一帧的旋转角度,确定区域放大值具体为:获取相机从当前帧到所述当前帧的下一帧的旋转角度,将所述虚拟物体和所述三维场景模型根据所述旋转角度投影到同一平面,得到所述三维场景模型对应的场景二维轮廓和所述虚拟物体的模型对应的物体二维轮廓,确定所述场景二维轮廓和所述物体二维轮廓的相交区域,根据所述相交区域以及所述虚拟物体在当前帧被遮挡的区域确定区域放大值。
其中,可通过IMU数据获取RGB相机从当前帧到下一帧的旋转角度。
本实施例中，可根据虚拟物体在当前帧被遮挡的区域确定一个二维投影区域，再将该相交区域与二维投影区域作比，得到一比值，根据该比值确定区域放大值，比如，将该比值向上取整后与1的和作为区域放大值，当然，也可以设定为其他方式，此处不作限定。
在一些实施例中,为了提高区域放大值的确定速度,则:
可选地,所述获取相机从当前帧到所述当前帧的下一帧的旋转角度,确定区域放大值具体为:获取相机从当前帧到所述当前帧的下一帧的旋转角度,若获取的旋转角度大于或等于预置角度阈值,则在默认的区域放大值增加一个数值,并将增加后得到的数值作为最终的区域放大值;反之,则在默认的区域放大值减少一个数值,并将减少后得到的数值作为最终的区域放大值。
本实施例中,由于相机旋转时,虚拟物体的被遮挡的区域也会相应改变,因此,结合相机的旋转角度能够使得确定出的区域放大值更准确。
在一些实施例中,所述步骤S13(或步骤S23)具体为:
A1、根据所述当前帧的特征点信息以及预先标定的外参矩阵确定所述当前帧的特征点的三维坐标;
其中,外参矩阵是指包括RGB相机和TOF相机之间的位置关系的矩阵。
A2、根据所述特征点的三维坐标构建三维场景模型。
可选地,将三维坐标进行三角网格化(如poisson重建)来构建当前帧所在视角的稀疏三维场景模型。
本实施例中，由于外参矩阵是根据RGB相机和TOF相机之间的位置关系预先标定的，因此，在获取特征点信息之后就能够结合该外参矩阵快速确定出特征点的三维坐标，进而能够提高三维场景模型的构建速度。
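The mapping from feature point information and the pre-calibrated extrinsic matrix to three-dimensional coordinates can be sketched as follows: the pixel coordinates and depth are first lifted to a 3D point with the depth camera intrinsics, then transformed into the RGB camera frame with the extrinsic rotation R and translation t. All numeric values here are placeholders, and the exact calibration convention is an assumption.

```python
# Sketch: lift a feature point (u, v) with depth d into the RGB camera's 3D frame.
import numpy as np

K_depth = np.array([[580.0, 0.0, 320.0],
                    [0.0, 580.0, 240.0],
                    [0.0,   0.0,   1.0]])   # assumed TOF camera intrinsics
R = np.eye(3)                                # assumed rotation (RGB <- TOF), from calibration
t = np.zeros(3)                              # assumed translation (RGB <- TOF), from calibration


def feature_point_to_3d(u: float, v: float, d: float) -> np.ndarray:
    p_depth = d * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))  # point in the TOF frame
    return R @ p_depth + t                                          # point in the RGB frame


point = feature_point_to_3d(400.0, 260.0, 1.2)  # e.g. a feature point about 1.2 m away
```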
在一些实施例中,为了提高构建的三维场景模型的准确性,在所述步骤S13(或步骤S23)之前,包括:
判断所述当前帧的特征点的数量是否少于预设特征点数量阈值;
对应地,所述根据所述特征点的三维坐标构建三维场景模型具体为:
若所述当前帧的特征点的数量少于预设特征点数量阈值,则获取所述当前帧的深度图的深度信息;从所述深度信息提取深度特征点数据;根据所述特征点的三维坐标以及所述深度特征点数据构建三维场景模型。
本实施例中,若从当前帧提取的特征点的数量少于预设特征点数量阈值,表明当前帧包括纹理信息比较弱的区域,比如包含白墙区域,此时,通过TOF相机构建的深度图中获取稠密的深度信息,再从该稠密的深度信息中提取深度特征点数据,最后将特征点的三维坐标和深度特征点数据构建三维场景模型。由于在提取的特征点较少的情况下结合深度特征点数据构建三维场景模型,因此,提高了构建的三维场景模型的精确性。
图3示出了本申请实施例提供的第三种虚拟物体被遮挡的区域确定方法的流程图,其中,步骤S31~步骤S34与图1中的步骤S11~步骤S14相同,此处不再赘述:
步骤S31,根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
步骤S32,若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
步骤S33,根据所述当前帧的特征点信息构建三维场景模型;
步骤S34,比较所述三维场景模型与所述虚拟物体的模型的深度值;
步骤S35,若虚拟物体的模型的所有深度值都小于所述三维场景模型的深度值,则判定所述虚拟物体在当前帧没有被遮挡的区域;
步骤S36,若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则将所述三维场景模型与所述虚拟物体的模型都投影到同一投影平面上,得到所述三维场景模型对应的场景二维轮廓和所述虚拟物体的模型对应的物体二维轮廓;
步骤S37,若所述物体二维轮廓全部位于所述场景二维轮廓内,判定所述虚拟物体在当前帧全部被遮挡,若所述物体二维轮廓与所述场景二维轮廓部分相交,判定相交区域对应的虚拟物体区域为所述虚拟物体在当前帧被遮挡的区域。
该步骤中,根据物体二维轮廓与所述场景二维轮廓得到相交区域(二维区域),在虚拟物体上查找到与该相交区域对应的区域(三维区域),并将查找到的区域作为虚拟物体在当前帧被遮挡的区域。
本实施例中,由于将三维场景模型与虚拟物体的模型都投影到同一投影平面来确定虚拟物体被遮挡的区域,而投影到同一投影平面得到的是二维数据,且使用二维数据进行运算的计算量小于使用三维数据进行运算的计算量,因此,上述方法能够提高确定虚拟物体被遮挡的区域的速度。
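One way to realize the two-dimensional contour test above is with general polygon operations; the use of the shapely library here is an assumption for illustration, not a requirement of the application. Full containment of the object contour in the scene contour means the virtual object is fully occluded in the current frame, while a partial overlap yields the intersection region that is then mapped back onto the virtual object.

```python
# Sketch: classify the occlusion state from the two projected 2D contours.
from shapely.geometry import Polygon


def occlusion_from_contours(object_contour, scene_contour):
    """Contours are sequences of (x, y) vertices in the common projection plane."""
    obj = Polygon(object_contour)     # projected outline of the virtual object's model
    scene = Polygon(scene_contour)    # projected outline of the 3D scene model
    if scene.contains(obj):
        return "fully_occluded", obj
    inter = obj.intersection(scene)
    if inter.is_empty:
        return "not_occluded", None
    return "partially_occluded", inter  # 2D region to map back onto the virtual object
```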
步骤S38,若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域。
需要指出的是,该步骤S38也可以进一步细化为与图2相同的方案,此处不再赘述。
在一些实施例中,为了便于快速确定投影平面,在所述步骤S36之前,包括:
建立与连接线垂直的投影平面,所述连接线为所述虚拟物体的模型的中心与相机视点的连线;具体地,该相机视点为RGB相机的视点,如在RGB相机中坐标为(0,0)对应的点。
对应地,所述步骤S36具体为:
若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则根据相机的内参矩阵,将所述三维场景模型与所述虚拟物体的模型都投影到所述投影平面,得到所述三维场景模型对应的场景二维轮廓和所述虚拟物体的模型对应的物体二维轮廓。
其中,相机的内参矩阵是指RGB相机的内参矩阵,该内参矩阵包括焦距,主点的实际距离等。
本实施例中,由于在相机确定之后就能够确定相机的内参矩阵以及视点,因此无需寻 找其他参数就能快速确定出投影平面,从而提高投影平面的确定速度。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
图4示出了本申请实施例提供的一种虚拟物体被遮挡的区域确定装置的结构示意图,该虚拟物体被遮挡的区域确定装置应用于终端设备中,该终端设备同时设置了获取图像数据的相机以及获取深度信息的相机,该获取图像数据的相机可以为RGB相机,该获取深度信息的相机可以为飞行时间TOF相机,下面以RGB相机和TOF相机为例进行阐述,为了便于说明,仅示出了与本申请实施例相关的部分。
该虚拟物体被遮挡的区域确定装置4包括:场景三维地图构建单元41、虚拟物体显示单元42、三维场景模型构建单元43、深度值比较单元44、当前帧被遮挡的区域确定单元45、下一帧被遮挡的区域确定单元46。其中:
场景三维地图构建单元41,用于根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
具体地,在通过RGB相机获取图像的当前帧后,提取该当前帧的特征点,根据提取的特征点估算该当前帧相对参考帧的位姿,再配准该位姿与根据TOF获取的深度图,生成配准了当前帧的场景三维地图。在配准之前,先选取参考帧,建立参考帧坐标系,如在拍摄到第一帧图像数据后,将该第一帧作为参考帧,对应地,根据该第一帧建立的坐标系为参考帧坐标系,后续拍摄过程再将拍摄的图像数据帧都转换到该参考帧坐标系下,比如,将当前帧转换到上一帧的坐标系,再根据已经保存的该上一帧与参考帧的旋转关系,将当前帧转换到参考帧坐标系下。当然,若在建立参考帧坐标系之前拍摄到多帧图像数据,此时,不一定选取第一帧作为参考帧,而是从采集的多帧图像数据中,选取与当前帧的视差最大的帧作为参考帧。当然,若该当前帧为第一帧,则根据第一帧的特征点的位置信息以及该第一帧的特征点的深度信息生成该第一帧的场景三维地图。
其中,深度图包括深度信息,如深度值,也包括位置信息等。此外,为了能够显示地图的更多细节,该场景三维地图为稠密点云地图。
其中,提取的当前帧的特征点类型可以为ORB特征点,也可以为SIFT特征点等,此处不做限定。
在一些实施例中,由于位姿是通过提取的特征点估算,即通过软件方式估算,因此可能存在一定的误差,此时,为了提高得到的位姿的准确性,可结合硬件方式获取的数据对估算的位姿进行修正。比如,结合IMU获取的数据对提取的特征点进行紧耦合非线性优化, 得到修改后的位姿。
在一些实施例中,为了减少计算量,只有在当前帧为关键帧时,才构建该当前帧的场景三维地图。
虚拟物体显示单元42,用于若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
在一些实施例中,为了使得显示的虚拟物体更符合实际情况,该虚拟物体被遮挡的区域确定装置4还包括:
平面检测单元,用于检测所述点击操作对应的位置是否为一个平面,若不是一个平面,则提示用户当前位置不是平面。进一步地,检测显示的场景三维地图中的平面信息(该平面信息可以为桌子的平面等),建议用户在检测到的平面信息发出点击操作,以在平面上显示虚拟物体,从而使得显示的虚拟物体更符合实际情况。
三维场景模型构建单元43,用于根据所述当前帧的特征点信息构建三维场景模型;
其中,特征点信息包括二维坐标信息,特征描述子等。
深度值比较单元44,用于比较所述三维场景模型与所述虚拟物体的模型的深度值;
当前帧被遮挡的区域确定单元45,用于根据比较结果确定所述虚拟物体在当前帧被遮挡的区域;
下一帧被遮挡的区域确定单元46,用于若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域。
在一些实施例中,该虚拟物体被遮挡的区域确定装置4还包括:
新场景遮挡区域确定单元,用于若所述当前帧的下一帧的特征点与所述当前帧的特征点的匹配对数小于预设匹配对数阈值,根据下一帧的特征点信息构建新的三维场景模型,比较所述新的三维场景模型与所述虚拟物体的模型的深度值,根据比较结果确定所述虚拟物体在下一帧被遮挡的区域。
本申请实施例中,由于根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图,而不是根据当前帧的所有信息构建该当前帧的场景三维地图,因此,减少了构建过程涉及的图像数据量,从而提高当前帧的场景三维地图的构建速度。并且,由于虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值时,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域,因此,无需参考该下一帧的所有图像数据,减少了参与计算的图像数据,从而极大提高了计算虚拟物体在下一帧被遮挡的区域的速度。
在一些实施例中,所述下一帧被遮挡的区域确定单元46包括:
区域放大值获取模块,用于获取区域放大值,根据所述区域放大值以及所述虚拟物体在当前帧被遮挡的区域,在所述当前帧的下一帧确定放大后的区域,所述区域放大值大于或等于1;
其中,为了获得更准确的遮挡区域,通常设置区域放大值的取值大于1,且保证放大后的区域小于该下一帧的整个图像区域。
新的三维场景模型构建模块,用于根据所述放大后的区域的特征点信息构建新的三维场景模型;
新的三维场景模型的深度值比较模块,用于比较所述新的三维场景模型与所述虚拟物体的模型的深度值;
下一帧被遮挡的区域确定模块,用于根据比较结果确定所述虚拟物体在所述当前帧的下一帧的被遮挡的区域。
本实施例中,由于区域放大值大于或等于1,因此,在下一帧确定的放大后的区域大于或等于虚拟物体在当前帧被遮挡的区域,从而能够尽量保证确定出虚拟物体在下一帧被遮挡的所有区域。此外,设置放大后的区域小于该下一帧的整个图像区域能够保证参与计算的图像数据量小于整帧的图像数据量,从而极大地减少了运算量,提高虚拟物体在下一帧被遮挡的区域的确定速度。
在一些实施例中,该虚拟物体被遮挡的区域确定装置4还包括:
区域放大值确定单元,用于获取相机从当前帧到所述当前帧的下一帧的旋转角度,确定区域放大值。
可选地,所述区域放大值确定单元具体用于:获取相机从当前帧到所述当前帧的下一帧的旋转角度,将所述虚拟物体和所述三维场景模型根据所述旋转角度投影到同一平面,得到所述三维场景模型对应的场景二维轮廓和所述虚拟物体的模型对应的物体二维轮廓,确定所述场景二维轮廓和所述物体二维轮廓的相交区域,根据所述相交区域以及所述虚拟物体在当前帧被遮挡的区域确定区域放大值。
其中,可通过IMU数据获取RGB相机从当前帧到下一帧的旋转角度。
本实施例中，可根据虚拟物体在当前帧被遮挡的区域确定一个二维投影区域，再将该相交区域与二维投影区域作比，得到一比值，根据该比值确定区域放大值，比如，将该比值向上取整后与1的和作为区域放大值，当然，也可以设定为其他方式，此处不作限定。
可选地,所述区域放大值确定单元具体用于:获取相机从当前帧到所述当前帧的下一帧的旋转角度,若获取的旋转角度大于或等于预置角度阈值,则在默认的区域放大值增加 一个数值,并将增加后得到的数值作为最终的区域放大值;反之,则在默认的区域放大值减少一个数值,并将减少后得到的数值作为最终的区域放大值。
在一些实施例中,所述三维场景模型构建单元43具体用于:
根据所述当前帧的特征点信息以及预先标定的外参矩阵确定所述当前帧的特征点的三维坐标;根据所述特征点的三维坐标构建三维场景模型。
在一些实施例中,所述三维场景模型构建单元43具体用于:
根据所述当前帧的特征点信息以及预先标定的外参矩阵确定所述当前帧的特征点的三维坐标;
判断所述当前帧的特征点的数量是否少于预设特征点数量阈值;
若所述当前帧的特征点的数量少于预设特征点数量阈值,则获取所述当前帧的深度图的深度信息;
从所述深度信息提取深度特征点数据;
根据所述特征点的三维坐标以及所述深度特征点数据构建三维场景模型。
本实施例中,由于在提取的特征点较少的情况下结合深度特征点数据构建三维场景模型,因此,提高了构建的三维场景模型的精确性。
在一些实施例中,所述当前帧被遮挡的区域确定单元45包括:
第一区域确定模块,用于若虚拟物体的模型的所有深度值都小于所述三维场景模型的深度值,则判定所述虚拟物体在当前帧没有被遮挡的区域;
第二区域确定模块,用于若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则将所述三维场景模型与所述虚拟物体的模型都投影到同一投影平面上,得到所述三维场景模型对应的场景二维轮廓和所述虚拟物体的模型对应的物体二维轮廓;
第三区域确定模块,用于若所述物体二维轮廓全部位于所述场景二维轮廓内,判定所述虚拟物体在当前帧全部被遮挡,若所述物体二维轮廓与所述场景二维轮廓部分相交,判定相交区域对应的虚拟物体区域为所述虚拟物体在当前帧被遮挡的区域。
本实施例中,由于将三维场景模型与虚拟物体的模型都投影到同一投影平面来确定虚拟物体被遮挡的区域,而投影到同一投影平面得到的是二维数据,且使用二维数据进行运算的计算量小于使用三维数据进行运算的计算量,因此,上述方法能够提高确定虚拟物体被遮挡的区域的速度。
在一些实施例中,为了便于快速确定投影平面,该虚拟物体被遮挡的区域确定装置4还包括:
投影平面建立单元,用于建立与连接线垂直的投影平面,所述连接线为所述虚拟物体 的模型的中心与相机视点的连线;
对应地,所述第二区域确定模块具体用于:
若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则根据相机的内参矩阵,将所述三维场景模型与所述虚拟物体的模型都投影到所述投影平面。
其中,相机的内参矩阵是指RGB相机的内参矩阵,该内参矩阵包括焦距,主点的实际距离等。
本实施例中,由于在相机确定之后就能够确定相机的内参矩阵以及视点,因此无需寻找其他参数就能快速确定出投影平面,从而提高投影平面的确定速度。
图5是本申请一实施例提供的终端设备的示意图。如图5所示,该实施例的终端设备5包括:处理器50、存储器51以及存储在所述存储器51中并可在所述处理器50上运行的计算机程序52。所述处理器50执行所述计算机程序52时实现上述各个方法实施例中的步骤,例如图1所示的步骤S11至S16。或者,所述处理器50执行所述计算机程序52时实现上述各装置实施例中各模块/单元的功能,例如图4所示模块41至46的功能。
示例性的,所述计算机程序52可以被分割成一个或多个模块/单元,所述一个或者多个模块/单元被存储在所述存储器51中,并由所述处理器50执行,以完成本申请。所述一个或多个模块/单元可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述所述计算机程序52在所述终端设备5中的执行过程。例如,所述计算机程序52可以被分割成场景三维地图构建单元、虚拟物体显示单元、三维场景模型构建单元、深度值比较单元、当前帧被遮挡的区域确定单元、下一帧被遮挡的区域确定单元,各单元具体功能如下:
场景三维地图构建单元,用于根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
虚拟物体显示单元,用于若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
三维场景模型构建单元,用于根据所述当前帧的特征点信息构建三维场景模型;
深度值比较单元,用于比较所述三维场景模型与所述虚拟物体的模型的深度值;
当前帧被遮挡的区域确定单元,用于根据比较结果确定所述虚拟物体在当前帧被遮挡的区域;
下一帧被遮挡的区域确定单元,用于若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,根据所述虚拟物体在 当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域。
所述终端设备5可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。所述终端设备可包括,但不仅限于,处理器50、存储器51。本领域技术人员可以理解,图5仅仅是终端设备5的示例,并不构成对终端设备5的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述终端设备还可以包括输入输出设备、网络接入设备、总线等。
所称处理器50可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
所述存储器51可以是所述终端设备5的内部存储单元,例如终端设备5的硬盘或内存。所述存储器51也可以是所述终端设备5的外部存储设备,例如所述终端设备5上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,所述存储器51还可以既包括所述终端设备5的内部存储单元也包括外部存储设备。所述存储器51用于存储所述计算机程序以及所述终端设备所需的其他程序和数据。所述存储器51还可以用于暂时地存储已经输出或者将要输出的数据。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以 硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的实施例中,应该理解到,所揭露的装置/终端设备和方法,可以通过其它的方式实现。例如,以上所描述的装置/终端设备实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,也可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。需要说明的是,所述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读介质不包括电载波信号和电信信号。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含 在本申请的保护范围之内。

Claims (20)

  1. 一种虚拟物体被遮挡的区域确定方法,其特征在于,包括:
    根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
    若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
    根据所述当前帧的特征点信息构建三维场景模型;
    比较所述三维场景模型与所述虚拟物体的模型的深度值;
    根据比较结果确定所述虚拟物体在当前帧被遮挡的区域;
    若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域。
  2. 如权利要求1所述的虚拟物体被遮挡的区域确定方法,其特征在于,所述根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域,包括:
    获取区域放大值,根据所述区域放大值以及所述虚拟物体在当前帧被遮挡的区域,在所述当前帧的下一帧确定放大后的区域,所述区域放大值大于或等于1;
    根据所述放大后的区域的特征点信息构建新的三维场景模型;
    比较所述新的三维场景模型与所述虚拟物体的模型的深度值;
    根据比较结果确定所述虚拟物体在所述当前帧的下一帧的被遮挡的区域。
  3. 如权利要求2所述的虚拟物体被遮挡的区域确定方法,其特征在于,在所述获取区域放大值之前,包括:
    获取相机从当前帧到所述当前帧的下一帧的旋转角度，根据所述旋转角度确定区域放大值。
  4. 如权利要求1所述的虚拟物体被遮挡的区域确定方法,其特征在于,所述根据所述当前帧的特征点信息构建三维场景模型具体为:
    根据所述当前帧的特征点信息以及预先标定的外参矩阵确定所述当前帧的特征点的三维坐标;
    根据所述特征点的三维坐标构建三维场景模型。
  5. 如权利要求4所述的虚拟物体被遮挡的区域确定方法,其特征在于,在所述根据所述特征点的三维坐标构建三维场景模型之前,包括:
    判断所述当前帧的特征点的数量是否少于预设特征点数量阈值;
    对应地,所述根据所述特征点的三维坐标构建三维场景模型具体为:
    若所述当前帧的特征点的数量少于预设特征点数量阈值,则获取所述当前帧的深度图的深度信息;
    从所述深度信息提取深度特征点数据;
    根据所述特征点的三维坐标以及所述深度特征点数据构建三维场景模型。
  6. 如权利要求1至5任一项所述的虚拟物体被遮挡的区域确定方法,其特征在于,所述根据比较结果确定所述虚拟物体在当前帧被遮挡的区域,包括:
    若虚拟物体的模型的所有深度值都小于所述三维场景模型的深度值,则判定所述虚拟物体在当前帧没有被遮挡的区域;
    若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则将所述三维场景模型与所述虚拟物体的模型都投影到同一投影平面上,得到所述三维场景模型对应的场景二维轮廓和所述虚拟物体的模型对应的物体二维轮廓;
    若所述物体二维轮廓全部位于所述场景二维轮廓内,判定所述虚拟物体在当前帧全部被遮挡,若所述物体二维轮廓与所述场景二维轮廓部分相交,判定相交区域对应的虚拟物体区域为所述虚拟物体在当前帧被遮挡的区域。
  7. 如权利要求6所述的虚拟物体被遮挡的区域确定方法,其特征在于,在所述若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则将所述三维场景模型与所述虚拟物体的模型都投影到同一投影平面上之前,包括:
    建立与连接线垂直的投影平面,所述连接线为所述虚拟物体的模型的中心与相机视点的连线;
    对应地,所述若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则将所述三维场景模型与所述虚拟物体的模型都投影到同一投影平面上具体为:
    若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则根据相机的内参矩阵,将所述三维场景模型与所述虚拟物体的模型都投影到所述投影平面。
  8. 如权利要求3所述的虚拟物体被遮挡的区域确定方法，其特征在于，所述获取相机从当前帧到所述当前帧的下一帧的旋转角度，根据所述旋转角度确定区域放大值，包括：
    获取相机从当前帧到所述当前帧的下一帧的旋转角度,将所述虚拟物体和所述三维场景模型根据所述旋转角度投影到同一平面,得到所述三维场景模型对应的场景二维轮廓和所述虚拟物体的模型对应的物体二维轮廓,确定所述场景二维轮廓和所述物体二维轮廓的相交区域,根据所述相交区域以及所述虚拟物体在当前帧被遮挡的区域确定区域放大值。
  9. 如权利要求3所述的虚拟物体被遮挡的区域确定方法，其特征在于，所述获取相机从当前帧到所述当前帧的下一帧的旋转角度，根据所述旋转角度确定区域放大值，包括：
    获取相机从当前帧到所述当前帧的下一帧的旋转角度,若获取的旋转角度大于或等于预置角度阈值,则在默认的区域放大值增加一个数值,并将增加后得到的数值作为最终的区域放大值;反之,则在默认的区域放大值减少一个数值,并将减少后得到的数值作为最终的区域放大值。
  10. 如权利要求1所述的虚拟物体被遮挡的区域确定方法,其特征在于,在所述若检测到用户在所述场景三维地图上发出的点击操作之后,包括:检测所述点击操作对应的位置是否为一个平面,若不是一个平面,则提示用户当前位置不是平面。
  11. 一种虚拟物体被遮挡的区域确定装置,其特征在于,包括:
    场景三维地图构建单元,用于根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
    虚拟物体显示单元,用于若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
    三维场景模型构建单元,用于根据所述当前帧的特征点信息构建三维场景模型;
    深度值比较单元,用于比较所述三维场景模型与所述虚拟物体的模型的深度值;
    当前帧被遮挡的区域确定单元,用于根据比较结果确定所述虚拟物体在当前帧被遮挡的区域;
    下一帧被遮挡的区域确定单元,用于若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域。
  12. 一种终端设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现以下步骤:
    根据当前帧的特征点以及对应的深度信息构建所述当前帧的场景三维地图;
    若检测到用户在所述场景三维地图上发出的点击操作,在所述点击操作对应的位置上显示指定的虚拟物体;
    根据所述当前帧的特征点信息构建三维场景模型;
    比较所述三维场景模型与所述虚拟物体的模型的深度值;
    根据比较结果确定所述虚拟物体在当前帧被遮挡的区域;
    若所述虚拟物体在当前帧被遮挡的区域的特征点在所述当前帧的下一帧被匹配的对数大于或等于预设匹配对数阈值,根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物 体在所述当前帧的下一帧被遮挡的区域。
  13. 如权利要求12所述的终端设备,其特征在于,所述根据所述虚拟物体在当前帧被遮挡的区域确定所述虚拟物体在所述当前帧的下一帧被遮挡的区域,包括:
    获取区域放大值,根据所述区域放大值以及所述虚拟物体在当前帧被遮挡的区域,在所述当前帧的下一帧确定放大后的区域,所述区域放大值大于或等于1;
    根据所述放大后的区域的特征点信息构建新的三维场景模型;
    比较所述新的三维场景模型与所述虚拟物体的模型的深度值;
    根据比较结果确定所述虚拟物体在所述当前帧的下一帧的被遮挡的区域。
  14. 如权利要求13所述的终端设备,其特征在于,在所述获取区域放大值之前,所述处理器执行所述计算机程序时还包括执行以下步骤:
    获取相机从当前帧到所述当前帧的下一帧的旋转角度，根据所述旋转角度确定区域放大值。
  15. 如权利要求12所述的终端设备,其特征在于,所述根据所述当前帧的特征点信息构建三维场景模型具体为:
    根据所述当前帧的特征点信息以及预先标定的外参矩阵确定所述当前帧的特征点的三维坐标;
    根据所述特征点的三维坐标构建三维场景模型。
  16. 如权利要求15所述的终端设备,其特征在于,在所述根据所述特征点的三维坐标构建三维场景模型之前,所述处理器执行所述计算机程序时还包括执行以下步骤:
    判断所述当前帧的特征点的数量是否少于预设特征点数量阈值;
    对应地,所述根据所述特征点的三维坐标构建三维场景模型具体为:
    若所述当前帧的特征点的数量少于预设特征点数量阈值,则获取所述当前帧的深度图的深度信息;
    从所述深度信息提取深度特征点数据;
    根据所述特征点的三维坐标以及所述深度特征点数据构建三维场景模型。
  17. 如权利要求12至16任一项所述的终端设备,其特征在于,所述根据比较结果确定所述虚拟物体在当前帧被遮挡的区域,包括:
    若虚拟物体的模型的所有深度值都小于所述三维场景模型的深度值,则判定所述虚拟物体在当前帧没有被遮挡的区域;
    若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则将所述三维场景模型与所述虚拟物体的模型都投影到同一投影平面上,得到所述三维场景模型对应 的场景二维轮廓和所述虚拟物体的模型对应的物体二维轮廓;
    若所述物体二维轮廓全部位于所述场景二维轮廓内,判定所述虚拟物体在当前帧全部被遮挡,若所述物体二维轮廓与所述场景二维轮廓部分相交,判定相交区域对应的虚拟物体区域为所述虚拟物体在当前帧被遮挡的区域。
  18. 如权利要求17所述的终端设备,其特征在于,在所述若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则将所述三维场景模型与所述虚拟物体的模型都投影到同一投影平面上之前,所述处理器执行所述计算机程序时还包括执行以下步骤:
    建立与连接线垂直的投影平面,所述连接线为所述虚拟物体的模型的中心与相机视点的连线;
    对应地,所述若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则将所述三维场景模型与所述虚拟物体的模型都投影到同一投影平面上具体为:
    若虚拟物体的模型的所有深度值不是都小于所述三维场景模型的深度值,则根据相机的内参矩阵,将所述三维场景模型与所述虚拟物体的模型都投影到所述投影平面。
  19. 如权利要求14所述的终端设备，其特征在于，所述获取相机从当前帧到所述当前帧的下一帧的旋转角度，根据所述旋转角度确定区域放大值，包括：
    获取相机从当前帧到所述当前帧的下一帧的旋转角度,将所述虚拟物体和所述三维场景模型根据所述旋转角度投影到同一平面,得到所述三维场景模型对应的场景二维轮廓和所述虚拟物体的模型对应的物体二维轮廓,确定所述场景二维轮廓和所述物体二维轮廓的相交区域,根据所述相交区域以及所述虚拟物体在当前帧被遮挡的区域确定区域放大值。
  20. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至10任一项所述的方法。
PCT/CN2020/079282 2019-04-12 2020-03-13 虚拟物体被遮挡的区域确定方法、装置及终端设备 WO2020207191A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20787305.0A EP3951721A4 (en) 2019-04-12 2020-03-13 METHOD AND DEVICE FOR DETERMINING THE HIDDEN AREA OF A VIRTUAL OBJECT AND FINAL DEVICE
US17/499,856 US11842438B2 (en) 2019-04-12 2021-10-12 Method and terminal device for determining occluded area of virtual object

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910295945.3A CN111815755B (zh) 2019-04-12 2019-04-12 虚拟物体被遮挡的区域确定方法、装置及终端设备
CN201910295945.3 2019-04-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/499,856 Continuation US11842438B2 (en) 2019-04-12 2021-10-12 Method and terminal device for determining occluded area of virtual object

Publications (1)

Publication Number Publication Date
WO2020207191A1 true WO2020207191A1 (zh) 2020-10-15

Family

ID=72751517

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079282 WO2020207191A1 (zh) 2019-04-12 2020-03-13 虚拟物体被遮挡的区域确定方法、装置及终端设备

Country Status (4)

Country Link
US (1) US11842438B2 (zh)
EP (1) EP3951721A4 (zh)
CN (1) CN111815755B (zh)
WO (1) WO2020207191A1 (zh)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK201870350A1 (en) 2018-05-07 2019-12-05 Apple Inc. Devices and Methods for Measuring Using Augmented Reality
US11227435B2 (en) 2018-08-13 2022-01-18 Magic Leap, Inc. Cross reality system
US10785413B2 (en) 2018-09-29 2020-09-22 Apple Inc. Devices, methods, and graphical user interfaces for depth-based annotation
JP2022512600A (ja) 2018-10-05 2022-02-07 マジック リープ, インコーポレイテッド 任意の場所における場所特有の仮想コンテンツのレンダリング
CN114600064A (zh) 2019-10-15 2022-06-07 奇跃公司 具有定位服务的交叉现实系统
CN114616534A (zh) 2019-10-15 2022-06-10 奇跃公司 具有无线指纹的交叉现实系统
JP2023504775A (ja) 2019-11-12 2023-02-07 マジック リープ, インコーポレイテッド 位置特定サービスおよび共有場所ベースのコンテンツを伴うクロスリアリティシステム
EP4073763A4 (en) 2019-12-09 2023-12-27 Magic Leap, Inc. CROSS-REALLY SYSTEM WITH SIMPLIFIED PROGRAMMING OF VIRTUAL CONTENT
US11080879B1 (en) 2020-02-03 2021-08-03 Apple Inc. Systems, methods, and graphical user interfaces for annotating, measuring, and modeling environments
US20210247846A1 (en) * 2020-02-07 2021-08-12 Krikey, Inc. Gesture tracking for mobile rendered augmented reality
US11410395B2 (en) 2020-02-13 2022-08-09 Magic Leap, Inc. Cross reality system with accurate shared maps
US11562525B2 (en) 2020-02-13 2023-01-24 Magic Leap, Inc. Cross reality system with map processing using multi-resolution frame descriptors
WO2021163295A1 (en) 2020-02-13 2021-08-19 Magic Leap, Inc. Cross reality system with prioritization of geolocation information for localization
CN115461787A (zh) * 2020-02-26 2022-12-09 奇跃公司 具有快速定位的交叉现实系统
US11727650B2 (en) * 2020-03-17 2023-08-15 Apple Inc. Systems, methods, and graphical user interfaces for displaying and manipulating virtual objects in augmented reality environments
US20220212107A1 (en) * 2020-03-17 2022-07-07 Tencent Technology (Shenzhen) Company Limited Method and Apparatus for Displaying Interactive Item, Terminal, and Storage Medium
CN113077510B (zh) * 2021-04-12 2022-09-20 广州市诺以德医疗科技发展有限公司 遮蔽下的立体视功能检查系统
US11941764B2 (en) 2021-04-18 2024-03-26 Apple Inc. Systems, methods, and graphical user interfaces for adding effects in augmented reality environments
CN113240692B (zh) * 2021-06-30 2024-01-02 北京市商汤科技开发有限公司 一种图像处理方法、装置、设备以及存储介质
CN113643357A (zh) * 2021-07-12 2021-11-12 杭州易现先进科技有限公司 一种基于3d定位信息的ar人像拍照方法和系统
CN114742884B (zh) * 2022-06-09 2022-11-22 杭州迦智科技有限公司 一种基于纹理的建图、里程计算、定位方法及系统
CN115619867B (zh) * 2022-11-18 2023-04-11 腾讯科技(深圳)有限公司 数据处理方法、装置、设备、存储介质
CN116030228B (zh) * 2023-02-22 2023-06-27 杭州原数科技有限公司 一种基于web的mr虚拟画面展示方法及装置
CN116860113B (zh) * 2023-08-16 2024-03-22 深圳职业技术大学 一种xr组合场景体验生成方法、系统及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129708A (zh) * 2010-12-10 2011-07-20 北京邮电大学 增强现实环境中快速多层次虚实遮挡处理方法
CN106803286A (zh) * 2017-01-17 2017-06-06 湖南优象科技有限公司 基于多视点图像的虚实遮挡实时处理方法
CN107292965A (zh) * 2017-08-03 2017-10-24 北京航空航天大学青岛研究院 一种基于深度图像数据流的虚实遮挡处理方法
CN108898676A (zh) * 2018-06-19 2018-11-27 青岛理工大学 一种虚实物体之间碰撞及遮挡检测方法及系统
CN109471521A (zh) * 2018-09-05 2019-03-15 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Ar环境下的虚实遮挡交互方法及系统

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509343B (zh) 2011-09-30 2014-06-25 北京航空航天大学 一种基于双目图像和对象轮廓的虚实遮挡处理方法
GB2506338A (en) * 2012-07-30 2014-04-02 Sony Comp Entertainment Europe A method of localisation and mapping
US9996974B2 (en) * 2013-08-30 2018-06-12 Qualcomm Incorporated Method and apparatus for representing a physical scene
CN107025662B (zh) * 2016-01-29 2020-06-09 成都理想境界科技有限公司 一种实现增强现实的方法、服务器、终端及系统
CN105761245B (zh) * 2016-01-29 2018-03-06 速感科技(北京)有限公司 一种基于视觉特征点的自动跟踪方法及装置
CN108038902B (zh) * 2017-12-07 2021-08-27 合肥工业大学 一种面向深度相机的高精度三维重建方法和系统
US10636214B2 (en) * 2017-12-22 2020-04-28 Houzz, Inc. Vertical plane object simulation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129708A (zh) * 2010-12-10 2011-07-20 北京邮电大学 增强现实环境中快速多层次虚实遮挡处理方法
CN106803286A (zh) * 2017-01-17 2017-06-06 湖南优象科技有限公司 基于多视点图像的虚实遮挡实时处理方法
CN107292965A (zh) * 2017-08-03 2017-10-24 北京航空航天大学青岛研究院 一种基于深度图像数据流的虚实遮挡处理方法
CN108898676A (zh) * 2018-06-19 2018-11-27 青岛理工大学 一种虚实物体之间碰撞及遮挡检测方法及系统
CN109471521A (zh) * 2018-09-05 2019-03-15 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Ar环境下的虚实遮挡交互方法及系统

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
See also references of EP3951721A4 *
TIAN, YUAN ET AL.: "Handling Occlusions in Augmented Reality Based on 3D Reconstruction Method.", NEUROCOMPUTING, 31 December 2015 (2015-12-31), XP055741293, DOI: 20200513150425A *
TIAN, YUAN ET AL.: "Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach", SENSORS, 29 March 2010 (2010-03-29), XP002685272, DOI: 20200513150704A *

Also Published As

Publication number Publication date
US20220036648A1 (en) 2022-02-03
CN111815755B (zh) 2023-06-30
EP3951721A4 (en) 2022-07-20
US11842438B2 (en) 2023-12-12
CN111815755A (zh) 2020-10-23
EP3951721A1 (en) 2022-02-09

Similar Documents

Publication Publication Date Title
WO2020207191A1 (zh) 虚拟物体被遮挡的区域确定方法、装置及终端设备
CN109961406B (zh) 一种图像处理的方法、装置及终端设备
CN109064390B (zh) 一种图像处理方法、图像处理装置及移动终端
WO2018119889A1 (zh) 三维场景定位方法和装置
US20200058153A1 (en) Methods and Devices for Acquiring 3D Face, and Computer Readable Storage Media
KR20220009393A (ko) 이미지 기반 로컬화
US9311756B2 (en) Image group processing and visualization
US20210272306A1 (en) Method for training image depth estimation model and method for processing image depth information
CN110288710B (zh) 一种三维地图的处理方法、处理装置及终端设备
WO2020034785A1 (zh) 三维模型处理方法和装置
WO2022012085A1 (zh) 人脸图像处理方法、装置、存储介质及电子设备
WO2021136386A1 (zh) 数据处理方法、终端和服务器
KR20120093063A (ko) 이미지들로부터의 빠른 스테레오 재구성을 위한 기술들
CN110296686B (zh) 基于视觉的定位方法、装置及设备
US10147240B2 (en) Product image processing method, and apparatus and system thereof
WO2014117559A1 (en) 3d-rendering method and device for logical window
WO2023024441A1 (zh) 模型重建方法及相关装置、电子设备和存储介质
CN110866977A (zh) 增强现实处理方法及装置、系统、存储介质和电子设备
CN115439607A (zh) 一种三维重建方法、装置、电子设备及存储介质
CN113793387A (zh) 单目散斑结构光系统的标定方法、装置及终端
CN113870439A (zh) 用于处理图像的方法、装置、设备以及存储介质
JP7262530B2 (ja) 位置情報の生成方法、関連装置及びコンピュータプログラム製品
WO2023010565A1 (zh) 单目散斑结构光系统的标定方法、装置及终端
CN110097061B (zh) 一种图像显示方法及装置
WO2023197912A1 (zh) 图像的处理方法、装置、设备、存储介质和程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20787305

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020787305

Country of ref document: EP

Effective date: 20211103