CN110276774B - Object drawing method, device, terminal and computer-readable storage medium - Google Patents

Object drawing method, device, terminal and computer-readable storage medium

Info

Publication number
CN110276774B
Authority
CN
China
Prior art keywords
drawn
terminal
dimensional
acquiring
pose data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910565646.7A
Other languages
Chinese (zh)
Other versions
CN110276774A (en)
Inventor
邓健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910565646.7A
Publication of CN110276774A
Application granted
Publication of CN110276774B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G06T7/50 - Depth or shape recovery
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Abstract

The application belongs to the technical field of image processing, and in particular relates to an object drawing method, apparatus, terminal and computer-readable storage medium. The object drawing method comprises: acquiring a first depth image of a real environment captured by a camera assembly, and establishing a three-dimensional coordinate system based on the real environment according to the first depth image; acquiring a two-dimensional image obtained by capturing the entire surface of an object to be drawn with the camera assembly and a second depth image corresponding to the two-dimensional image, and acquiring real-time pose data of the terminal; performing edge extraction on the two-dimensional image to obtain edge feature points corresponding to the two-dimensional image; and calculating the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and generating a drawing of the object to be drawn according to the coordinates, the drawing comprising a three-dimensional perspective view of the object to be drawn and/or a plan view of the object to be drawn. The drawing efficiency of the three-dimensional perspective view and the plan view of the object is thereby improved.

Description

Object drawing method, device, terminal and computer-readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method, an apparatus, a terminal, and a computer-readable storage medium for drawing an object.
Background
A three-dimensional object can be represented by drawing a three-dimensional perspective view or a plan view of the object.
At present, a user who wants to draw a three-dimensional perspective view or a plan view of an object must first measure the object with tools such as a ruler and then draw it by hand or with computer drawing software, which results in low drawing efficiency.
Disclosure of Invention
Embodiments of the present application provide an object drawing method, apparatus, terminal and computer-readable storage medium, which can solve the technical problem of low efficiency when drawing a three-dimensional perspective view or a plan view of an object.
A first aspect of the embodiments of the present application provides an object drawing method, applied to a terminal, including:
acquiring a first depth image of a real environment captured by a camera assembly, and establishing a three-dimensional coordinate system based on the real environment according to the first depth image;
acquiring a two-dimensional image obtained by capturing the entire surface of an object to be drawn with the camera assembly and a second depth image corresponding to the two-dimensional image, and acquiring real-time pose data of the terminal;
performing edge extraction on the two-dimensional image to obtain edge feature points corresponding to the two-dimensional image;
calculating the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and generating a drawing of the object to be drawn according to the coordinates; the drawing comprises a three-dimensional perspective view of the object to be drawn and/or a plan view of the object to be drawn.
A second aspect of the embodiments of the present application provides an object drawing apparatus, configured on a terminal, including:
an establishing unit, configured to acquire a first depth image of a real environment captured by a camera assembly and establish a three-dimensional coordinate system based on the real environment according to the first depth image;
an acquiring unit, configured to acquire a two-dimensional image obtained by capturing the entire surface of an object to be drawn with the camera assembly and a second depth image corresponding to the two-dimensional image, and to acquire real-time pose data of the terminal;
an extracting unit, configured to perform edge extraction on the two-dimensional image to obtain edge feature points corresponding to the two-dimensional image;
a drawing unit, configured to calculate the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and to generate a drawing of the object to be drawn according to the coordinates; the drawing comprises a three-dimensional perspective view of the object to be drawn and/or a plan view of the object to be drawn.
A third aspect of the embodiments of the present application provides a terminal, including a camera assembly, a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
In the embodiment of the present application, after a three-dimensional coordinate system based on the real environment is established, a two-dimensional image of the entire surface of the object to be drawn captured by the camera assembly of the terminal and a second depth image corresponding to the two-dimensional image are obtained, and real-time pose data of the terminal are obtained at the same time. Edge extraction is then performed on the two-dimensional image to obtain the edge feature points corresponding to the two-dimensional image. The terminal can therefore calculate the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and generate a three-dimensional perspective view and/or a plan view of the object to be drawn according to those coordinates. In other words, to draw a three-dimensional perspective view or a plan view of an object, the user only needs to capture the entire surface of the object with the camera assembly of the terminal, and the terminal then generates the view automatically. The user no longer needs to measure the object on site with a ruler and then draw the views by hand or with computer drawing software, so the drawing efficiency of the three-dimensional perspective view and the plan view of the object is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting the scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flow chart illustrating a first implementation of a method for drawing an object according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a process for establishing a three-dimensional coordinate system according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating an implementation of step 104 of a method for drawing an object according to an embodiment of the present disclosure;
fig. 4 is a first schematic diagram of a display interface of a terminal provided in an embodiment of the present application;
FIG. 5 is a flow chart illustrating a second implementation of a method for drawing an object according to an embodiment of the present disclosure;
fig. 6 is a second schematic diagram of a display interface of a terminal provided in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an object drawing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
At present, when a three-dimensional perspective view or a plan view of an object is drawn, the object generally needs to be measured with tools such as a ruler and then drawn by hand or with computer drawing software. This approach involves a large workload, is easily affected by measurement errors, cannot make the drawn three-dimensional perspective view or plan view correspond exactly to the actual object, and therefore suffers from low drawing efficiency and poor drawing accuracy.
In the embodiment of the present application, after a three-dimensional coordinate system based on the real environment is established, a two-dimensional image of the entire surface of the object to be drawn captured by the camera assembly of the terminal and a second depth image corresponding to the two-dimensional image are obtained, and real-time pose data of the terminal are obtained at the same time. Edge extraction is then performed on the two-dimensional image to obtain the edge feature points corresponding to the two-dimensional image. The terminal can therefore calculate the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and generate a three-dimensional perspective view and/or a plan view of the object to be drawn according to those coordinates. In other words, to draw a three-dimensional perspective view or a plan view of an object, the user only needs to capture the entire surface of the object with the camera assembly of the terminal, and the terminal then generates the view automatically. The user no longer needs to measure the object on site with a ruler and then draw the views by hand or with computer drawing software, so both the drawing efficiency and the drawing accuracy of the three-dimensional perspective view and the plan view of the object are improved.
Fig. 1 is a schematic flow chart of an implementation of an object drawing method provided in an embodiment of the present application. The method is applied to a terminal, can be executed by an object drawing apparatus configured on the terminal, and is suitable for situations where the drawing efficiency and drawing accuracy of a three-dimensional perspective view and/or a plan view of an object need to be improved. The terminal may be a mobile terminal such as a smart phone, a tablet computer or a learning machine, and is provided with a camera assembly. The object drawing method may include steps 101 to 104.
Step 101, acquiring a first depth image of a real environment captured by the camera assembly, and establishing a three-dimensional coordinate system based on the real environment according to the first depth image.
In the embodiment of the present application, when an object is to be drawn, a first depth image of the real environment captured by the camera assembly is first acquired in order to establish a three-dimensional coordinate system based on the real environment and complete the initialization of measurement.
The camera assembly may include a depth camera and an RGB camera: the depth camera is used to collect depth images, and the RGB camera is used to collect two-dimensional plane images (two-dimensional images).
The gray value of each pixel in a depth image represents the distance between the corresponding point in the scene and the camera.
In some embodiments of the present application, the resolution of the depth camera may be equal to the resolution of the RGB camera, so that each pixel on the two-dimensional image may obtain accurate depth information.
In some embodiments of the present application, the depth camera may be a TOF camera.
In some embodiments of the present application, the camera assembly may also be a 3D camera that can output a depth image and a two-dimensional plane image simultaneously.
When the terminal is used to draw an object, the camera application can be launched first to start the camera assembly and acquire the first depth image of the real environment. When the first depth image returned by the depth camera is received, the point in the real environment corresponding to any valid depth value in the first depth image is taken as the coordinate origin, and a three-dimensional coordinate system based on the real environment is established as the reference for coordinate calculation when drawing the object.
For example, as shown in fig. 2, a three-dimensional coordinate system based on a real environment is established with an arbitrary point on the sofa as a coordinate origin.
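As a concrete illustration of this initialization step, the following Python sketch (using NumPy; function and variable names are illustrative assumptions, and depth values are assumed to be stored in millimetres) picks a pixel with valid depth in the first depth image and back-projects it with a pinhole model to obtain the coordinate origin:

```python
import numpy as np

def init_world_frame(first_depth, fx, fy, u0, v0):
    """Pick any pixel with valid depth in the first depth image and use the
    corresponding 3D point as the origin of the environment coordinate system.
    Depth is assumed to be in millimetres; (fx, fy, u0, v0) are the depth
    camera intrinsics. Returns the origin expressed in the camera frame."""
    valid = np.argwhere(first_depth > 0)          # pixels with valid depth
    if valid.size == 0:
        raise ValueError("no valid depth data in the first depth image")
    v, u = valid[0]                               # any valid pixel will do
    z = first_depth[v, u] / 1000.0                # mm -> m
    # Back-project the pixel to a 3D point in the camera frame (pinhole model)
    origin_cam = np.array([(u - u0) * z / fx, (v - v0) * z / fy, z])
    return origin_cam
```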
In the embodiment of the present application, once the three-dimensional coordinate system is established, the camera assembly has completed its initialization and drawing of the object can begin. Because this initialization process requires neither plane recognition nor moving the terminal to collect multiple frames of images, the drawing of the object in the present application has the characteristic of fast initialization and can achieve an effect of starting within seconds.
Step 102, acquiring a two-dimensional image obtained by capturing the entire surface of the object to be drawn with the camera assembly and a second depth image corresponding to the two-dimensional image, and acquiring real-time pose data of the terminal.
In the embodiment of the present application, while the two-dimensional image of the entire surface of the object to be drawn and the corresponding second depth image are being acquired, the terminal may be moved around the object to be drawn so as to capture its entire surface; alternatively, the object to be drawn may be moved around the terminal so that the terminal captures its entire surface. Moreover, in some embodiments of the application, the terminal and the object to be drawn may move simultaneously; as long as the terminal can capture the entire surface of the object to be drawn, the motion states of the terminal and the object are not limited.
Correspondingly, in the embodiment of the present application, when acquiring the real-time pose data of the terminal: if the position of the terminal relative to the origin of the three-dimensional coordinate system changes while the position of the object to be drawn relative to the origin does not, acquiring the real-time pose data of the terminal means acquiring the pose data of the terminal relative to the origin of the three-dimensional coordinate system; if the position of the terminal relative to the origin does not change while the position of the object to be drawn relative to the origin does, it means acquiring the pose data of the terminal relative to the object to be drawn; and if both the position of the terminal and the position of the object to be drawn change relative to the origin, it means acquiring both the pose data of the terminal relative to the origin of the three-dimensional coordinate system and the pose data of the terminal relative to the object to be drawn.
Optionally, acquiring the pose data of the terminal may include: starting when the three-dimensional coordinate system is established, acquiring the six-degree-of-freedom pose data of the terminal in real time by using an inertial measurement unit (IMU).
An object has six degrees of freedom in space: translation along the three orthogonal coordinate axes x, y and z, and rotation about those three axes. Therefore, to fully determine the position and orientation of the object, all six degrees of freedom must be known.
An inertial measurement unit (IMU) is a device that measures the three-axis angular velocity and acceleration of an object. Generally, an IMU comprises three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the acceleration of the object along three independent axes of the carrier coordinate system, and the gyroscopes detect the angular velocity of the carrier relative to the navigation coordinate system. From the angular velocity and acceleration of the object in three-dimensional space, its attitude can be calculated. Therefore, the pose data of the terminal relative to the origin of the three-dimensional coordinate system can be obtained with the inertial measurement unit IMU.
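For illustration only, the following sketch shows how six-degree-of-freedom pose data could in principle be dead-reckoned from raw IMU samples. It is a deliberately simplified, assumption-laden example (small-angle integration, no sensor fusion, no bias handling); a real terminal would fuse the IMU with visual tracking because pure integration drifts quickly, and all names here are hypothetical.

```python
import numpy as np

class SimplePoseTracker:
    """Very simplified 6-DoF dead reckoning from IMU samples: integrate the
    gyroscope for orientation (small-angle approximation) and double-integrate
    the accelerometer for position."""
    def __init__(self):
        self.R = np.eye(3)          # rotation: terminal frame -> world frame
        self.v = np.zeros(3)        # velocity in world frame (m/s)
        self.t = np.zeros(3)        # translation in world frame (m)

    def update(self, gyro, accel, dt, gravity=np.array([0.0, 0.0, -9.81])):
        # Orientation update from angular velocity (rad/s), small-angle model
        wx, wy, wz = gyro * dt
        skew = np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
        self.R = self.R @ (np.eye(3) + skew)
        # Position update from specific force (m/s^2) with gravity removed
        a_world = self.R @ accel + gravity
        self.v += a_world * dt
        self.t += self.v * dt
        return self.R, self.t       # the real-time pose data of the terminal
```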
In some embodiments of the present application, the pose data of the terminal relative to the object to be drawn may refer to the pose data of the terminal relative to a specific point on the surface of the object to be drawn.
In some embodiments of the present application, when acquiring the pose data of the terminal relative to a specific point on the surface of the object to be drawn, an inertial measurement unit (IMU) may also be arranged at that specific point to acquire the six-degree-of-freedom pose data of the terminal relative to the specific point in real time.
Step 103, performing edge extraction on the two-dimensional image to obtain edge feature points corresponding to the two-dimensional image.
When drawing the three-dimensional perspective view or the plan view of the object to be drawn, only the edges of the object to be drawn need to be considered; therefore, the edge feature points required for drawing can be obtained by performing edge extraction on the two-dimensional image.
For example, when the object to be drawn is a rectangular parallelepiped, the edge feature points are points on respective edges of the rectangular parallelepiped.
Edge extraction may be performed using, for example, a differential detection algorithm, the Roberts gradient operator, or the Sobel edge detection operator.
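As a minimal sketch of this step (assuming OpenCV and NumPy are available; the threshold value and function name are illustrative), Sobel gradients can be thresholded to obtain the pixel positions of the edge feature points:

```python
import cv2
import numpy as np

def extract_edge_points(rgb_image, threshold=100.0):
    """Sobel-based edge extraction, one of the operators named above.
    Returns (u, v) pixel coordinates of edge feature points."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    magnitude = cv2.magnitude(gx, gy)
    vs, us = np.nonzero(magnitude > threshold)        # rows (v), columns (u)
    return list(zip(us, vs))
```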
Step 104, calculating the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and generating a drawing of the object to be drawn according to the coordinates; the drawing includes a three-dimensional perspective view of the object to be drawn and/or a plan view of the object to be drawn.
The plan view may comprise six views or three views.
In the embodiment of the present application, after a three-dimensional coordinate system based on the real environment is established, a two-dimensional image of the entire surface of the object to be drawn captured by the camera assembly of the terminal and a second depth image corresponding to the two-dimensional image are obtained, and real-time pose data of the terminal are obtained at the same time. Edge extraction is then performed on the two-dimensional image to obtain the edge feature points corresponding to the two-dimensional image. The terminal can therefore calculate the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and generate a three-dimensional perspective view and/or a plan view of the object to be drawn according to those coordinates. In other words, to draw a three-dimensional perspective view or a plan view of an object, the user only needs to capture the entire surface of the object with the camera assembly of the terminal, and the terminal then generates the view automatically. The user no longer needs to measure the object on site with a ruler and then draw the views by hand or with computer drawing software, so both the drawing efficiency and the drawing accuracy of the three-dimensional perspective view and the plan view of the object are improved.
In some embodiments of the present application, as shown in fig. 3, calculating the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image in step 104 may include steps 301 to 302.
Step 301, determining pixel coordinates of the edge feature point on the two-dimensional image according to the position of the edge feature point on the two-dimensional image and the corresponding depth value.
In the embodiment of the present application, the pixel coordinate expresses the relative position of each pixel on the two-dimensional image combined with the depth value of that pixel.
For example, a pixel coordinate system may be established with the pixel at the lower left corner of the two-dimensional image as the origin of the two-dimensional coordinates; the two-dimensional coordinates of each pixel on the image are determined and then combined with the depth value of each pixel to obtain the pixel coordinates of each pixel in the two-dimensional image.
Step 302, mapping the pixel coordinates to coordinates in the three-dimensional coordinate system according to the parameter information of the camera assembly and the real-time pose data.
Mapping the pixel coordinates to coordinates in the three-dimensional coordinate system according to the parameter information of the camera assembly and the pose data includes: determining a mapping matrix between the pixel coordinates of the two-dimensional image on the display interface of the terminal and the coordinates in the three-dimensional coordinate system according to the parameter information of the camera assembly and the pose data of the terminal, and mapping the pixel coordinates to coordinates in the three-dimensional coordinate system according to the mapping matrix.
The parameter information of the camera assembly includes its intrinsic and extrinsic parameters; the intrinsic parameters include the equivalent focal lengths fx and fy in the u-axis and v-axis directions and the coordinates (u0, v0) of the center point of the image plane.
It should be noted that, when mapping the pixel coordinates to coordinates in the three-dimensional coordinate system according to the parameter information of the camera assembly and the pose data, a mapping matrix commonly used in the related art may be employed.
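The sketch below illustrates one common form of this mapping under stated assumptions: pinhole back-projection with the intrinsics fx, fy, u0, v0, followed by a rigid transform given by the terminal pose (R, t); depth values are assumed to be in millimetres and the helper name is an assumption.

```python
import numpy as np

def edge_points_to_world(edge_points, depth_image, fx, fy, u0, v0, R, t):
    """Back-project edge feature points into the environment coordinate system.
    (fx, fy, u0, v0) are the camera intrinsics; (R, t) is the real-time pose of
    the terminal (rotation and translation of the camera frame relative to the
    three-dimensional coordinate system)."""
    world_points = []
    for u, v in edge_points:
        z = depth_image[v, u] / 1000.0            # mm -> m
        if z <= 0:
            continue                              # skip pixels without depth
        # Pinhole back-projection into the camera frame
        p_cam = np.array([(u - u0) * z / fx, (v - v0) * z / fy, z])
        # Transform into the environment (world) coordinate system
        world_points.append(R @ p_cam + t)
    return np.array(world_points)
```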
In the embodiment of the present application, the coordinates of the edge feature points of the two-dimensional image in the three-dimensional coordinate system are equivalent to the coordinates of the corresponding points of the three-dimensional perspective view of the object to be drawn in the constructed three-dimensional coordinate system. In other words, once the coordinates of every edge feature point of the two-dimensional image in the three-dimensional coordinate system have been obtained, the three-dimensional perspective view of the object to be drawn can be generated from those coordinates; and once the three-dimensional perspective view is obtained, projecting it in any projection direction yields a plan view of the object to be drawn.
In order to make the generated plan view better conform to the user's habits, in some embodiments of the present application, generating a plan view of the object to be drawn according to the coordinates may include: receiving a projection direction selection instruction for the three-dimensional perspective view, and generating six views or three views of the object to be drawn according to the projection direction selection instruction.
For example, as shown in fig. 4, after the three-dimensional perspective view of the object to be drawn is obtained, the three-dimensional perspective view 42 of the object to be drawn may be displayed on the display interface 41 of the terminal; a projection direction selection instruction for the three-dimensional perspective view, triggered by the user on the display interface 41, is received, and six views or three views of the object to be drawn are generated according to the projection direction indicated by the projection direction selection instruction (e.g., the arrow direction shown in fig. 4).
For example, the projection direction selection instruction may be triggered by a touch gesture to determine the projection direction of the plan view of the object to be drawn.
The six views may include a front view, a rear view, a left view, a right view, a top view and a bottom view; the three views may include a front view, a left view and a top view.
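A hedged sketch of how such a plan view could be produced from the 3D edge points: orthographically project the points onto a plane perpendicular to the selected projection direction. The basis-construction choice and all names are assumptions for illustration, not part of the patent.

```python
import numpy as np

def plan_view(world_points, direction):
    """Orthographic projection of the 3D edge points along a chosen projection
    direction, which is one way a front/left/top view can be produced from the
    three-dimensional perspective view. `direction` is a unit vector selected
    by the user's projection direction instruction."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # Build an orthonormal basis (e1, e2) spanning the projection plane
    helper = np.array([0.0, 0.0, 1.0]) if abs(d[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    e1 = np.cross(d, helper); e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    # Drop the component along the projection direction
    return np.stack([world_points @ e1, world_points @ e2], axis=1)

# e.g. a top view is the projection along the vertical axis:
# top = plan_view(points, direction=[0, 0, 1])
```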
In each of the above embodiments, as shown in fig. 5, the object drawing method may further include steps 501 to 505.
Step 501, acquiring a first depth image of a real environment shot by a camera shooting assembly, and establishing a three-dimensional coordinate system based on the real environment according to the first depth image.
Step 502, acquiring multiple frames of sub two-dimensional images of the entire surface of the object to be drawn captured by the camera assembly and the sub depth images corresponding to the sub two-dimensional images, and acquiring real-time pose data of the terminal.
Step 503, stitching the sub two-dimensional images and the sub depth images according to the real-time pose data of the terminal to obtain the two-dimensional image and the second depth image corresponding to the two-dimensional image.
The camera assembly captures images by generating one frame each time the incoming light signal is collected; typically, the acquisition rate is 30 frames per second. Therefore, when the entire surface of the object to be drawn is photographed, the captured data generally comprise multiple frames rather than a single frame, so step 102 may include acquiring multiple frames of sub two-dimensional images of the entire surface of the object to be drawn captured by the camera assembly and the sub depth images corresponding to those sub two-dimensional images.
In addition, while the camera assembly captures the multiple frames of sub two-dimensional images of the entire surface of the object to be drawn and the corresponding sub depth images, the terminal moves relative to the object to be drawn, and its motion trajectory is not a regular circle or straight line. Therefore, the multiple frames of sub two-dimensional images are stitched according to the real-time pose data to remove the redundant parts and obtain the two-dimensional image, and the sub depth images corresponding to the sub two-dimensional images are stitched in the same way to obtain the second depth image.
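The patent stitches the sub-images themselves; the sketch below takes a simplifying shortcut with the same end result for drawing: each frame's edge points are transformed into the environment coordinate system using the per-frame pose and then de-duplicated, so overlapping (redundant) observations collapse into a single set of coordinates. It reuses the edge_points_to_world helper from the earlier sketch; the voxel size and all names are assumptions.

```python
import numpy as np

def merge_frames(frames, intrinsics):
    """Merge multi-frame observations into one set of edge coordinates.
    `frames` is a list of (edge_points, depth_image, R, t) tuples, one per
    sub two-dimensional image; `intrinsics` is (fx, fy, u0, v0)."""
    fx, fy, u0, v0 = intrinsics
    clouds = []
    for edge_points, depth_image, R, t in frames:
        # Transform this frame's edge points into the environment frame
        pts = edge_points_to_world(edge_points, depth_image, fx, fy, u0, v0, R, t)
        if len(pts):
            clouds.append(pts)
    cloud = np.vstack(clouds)
    # Remove redundant (overlapping) points by snapping to a 5 mm grid
    keys = np.round(cloud / 0.005).astype(np.int64)
    _, unique_idx = np.unique(keys, axis=0, return_index=True)
    return cloud[unique_idx]
```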
Step 504, identifying the object to be drawn in the two-dimensional image, and performing edge extraction on the object to be drawn in the two-dimensional image to obtain the edge feature points corresponding to the two-dimensional image.
For example, the object to be drawn in the two-dimensional image may be identified using a target detection algorithm. Common target detection algorithms include the Local Binary Pattern (LBP) algorithm, histogram-of-oriented-gradients features combined with a support vector machine model, convolutional neural network models, and the like.
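None of those detectors is specified in detail here, so the following sketch uses GrabCut foreground segmentation as a simple stand-in for isolating the object to be drawn before edge extraction; the rough bounding rectangle `roi` and all names are assumptions for illustration.

```python
import cv2
import numpy as np

def segment_object(rgb_image, roi):
    """Foreground segmentation as a lightweight stand-in for the target
    detection algorithms mentioned above (LBP, HOG + SVM, CNN). `roi` is a
    rough (x, y, w, h) rectangle around the object to be drawn; the returned
    mask restricts edge extraction to the object itself."""
    mask = np.zeros(rgb_image.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(rgb_image, mask, roi, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)
```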
Step 505, calculating the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and generating a drawing of the object to be drawn according to the coordinates; the drawing includes a three-dimensional perspective view of the object to be drawn and/or a plan view of the object to be drawn.
For the implementation of steps 501 and 505, reference can be made to steps 101 and 104 respectively, which are not described again here.
In practical applications, when an object is to be drawn, the terminal is generally moved around the object so as to capture the two-dimensional image of its entire surface and the second depth image corresponding to the two-dimensional image.
For example, suppose the terminal is a mobile phone and a user sees a sculpture with artistic appeal and wants to draw it. The user can move the phone around the sculpture to photograph its entire surface, obtain a two-dimensional image of the sculpture and the corresponding second depth image, and thereby obtain a three-dimensional perspective view and/or a plan view of the sculpture. During shooting, however, the user may not be able to confirm whether the entire surface of the sculpture has been captured without leaving any spot unphotographed.
In addition, when the structure of the object to be drawn is relatively symmetrical, for example an ellipsoidal object, the user may likewise be unable to confirm whether its entire surface has been photographed.
Therefore, in some embodiments of the present application, the object drawing method may further include: displaying a movement guide identifier on the shooting interface of the terminal according to the real-time pose data of the terminal, where the movement guide identifier changes as the position of the terminal changes, so that the user can judge from the guide identifier whether the entire surface of the object to be drawn has been captured.
For example, the movement guide identifier may be used to guide the terminal to shoot 360 degrees around the object to be drawn with the vertical direction as the axis, and to shoot the object to be drawn upward and downward along the vertical direction, so as to obtain the second depth image and the two-dimensional image.
Shooting 360 degrees around the object to be drawn with the vertical direction as the axis captures the front view, left view, rear view and right view of the object; shooting upward and downward along the vertical direction captures the bottom view and top view of the object.
Specifically, as shown in fig. 6, the movement guide identifier may include a vertical axis 61 with an upward arrow and a downward arrow, and a guide line 62 with an arrow. When the end of the guide line 62 without the arrow coincides with the end with the arrow, the front view, left view, rear view and right view of the object to be drawn have been captured; when the upward arrow on the vertical axis 61 disappears, the bottom view of the object to be drawn has been captured; and when the downward arrow on the vertical axis 61 disappears, the top view of the object to be drawn has been captured.
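One plausible way (not specified by the patent) to drive such a guide is to track angular coverage around the object from the real-time pose data; the sketch below bins the terminal's horizontal azimuth around the object and reports when the full circle has been covered. All names and the bin count are assumptions.

```python
import numpy as np

def update_guide(covered_bins, terminal_pos, object_pos, n_bins=36):
    """Track how much of the horizontal 360-degree circle around the object has
    already been covered, which drives the guide line on the shooting
    interface. Positions are expressed in the environment coordinate system,
    with the vertical direction taken as the z-axis."""
    offset = terminal_pos - object_pos
    azimuth = np.arctan2(offset[1], offset[0])          # angle in the x-y plane
    bin_idx = int(((azimuth + np.pi) / (2 * np.pi)) * n_bins) % n_bins
    covered_bins.add(bin_idx)
    horizontal_done = len(covered_bins) == n_bins       # full circle captured
    return covered_bins, horizontal_done
```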
It should be noted that this is merely an example; in some embodiments of the present application, the guide identifier may also take another shape, as long as it can guide the user to finish capturing the entire surface of the object to be drawn.
In addition, when the object to be drawn has a direction from which it cannot be photographed, the view in that direction can simply be abandoned, and the user triggers the end of shooting; the entire surface of the object to be drawn is then regarded as having been captured.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 7 shows a schematic structural diagram of an object drawing apparatus 700 provided in an embodiment of the present application. The object drawing apparatus is configured on a terminal and includes an establishing unit 701, an acquiring unit 702, an extracting unit 703, and a drawing unit 704.
The establishing unit 701 is configured to acquire a first depth image of a real environment captured by the camera assembly and establish a three-dimensional coordinate system based on the real environment according to the first depth image;
the acquiring unit 702 is configured to acquire a two-dimensional image obtained by capturing the entire surface of the object to be drawn with the camera assembly and a second depth image corresponding to the two-dimensional image, and to acquire real-time pose data of the terminal at the same time;
the extracting unit 703 is configured to perform edge extraction on the two-dimensional image to obtain edge feature points corresponding to the two-dimensional image;
the drawing unit 704 is configured to calculate the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and to generate a drawing of the object to be drawn according to the coordinates; the drawing comprises a three-dimensional perspective view of the object to be drawn and/or a plan view of the object to be drawn.
It should be noted that, for convenience and simplicity of description, the specific working process of the drawing apparatus 700 for an object described above may refer to the corresponding process of the method described in fig. 1 to fig. 6, and is not described herein again.
As shown in fig. 8, the present application provides a terminal for implementing the drawing method for the object, where the terminal may be a terminal such as a smart phone, a tablet computer, a Personal Computer (PC), a learning machine, and may include: a processor 81, a memory 82, one or more input devices 83 (only one shown in fig. 8), one or more output devices 84 (only one shown in fig. 8), and a camera assembly 85. The processor 81, memory 82, input device 83, output device 84, and camera assembly 85 are connected by a bus 86.
It should be understood that in the embodiments of the present application, the processor 81 may be a central processing unit, or may be another general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 83 may include a virtual keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 84 may include a display, a speaker, etc.
The memory 82 may include a read-only memory and a random access memory, and provides instructions and data to the processor 81. Some or all of memory 82 may also include non-volatile random access memory. For example, the memory 82 may also store device type information.
The memory 82 stores a computer program that can be executed by the processor 81, and the computer program is, for example, a program of a drawing method of an object. The processor 81 implements steps of the drawing method of the object, such as steps 101 to 104 shown in fig. 1, when executing the computer program. Alternatively, the processor 81 implements the functions of the modules/units in the device embodiments, such as the functions of the units 701 to 704 shown in fig. 7, when executing the computer program.
The computer program may be divided into one or more modules/units, which are stored in the memory 82 and executed by the processor 81 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program in the terminal that draws the object. For example, the computer program may be divided into an establishing unit, an acquiring unit, an extracting unit, and a drawing unit, whose specific functions are as follows: the establishing unit is configured to acquire a first depth image of a real environment captured by a camera assembly and establish a three-dimensional coordinate system based on the real environment according to the first depth image; the acquiring unit is configured to acquire a two-dimensional image obtained by capturing the entire surface of the object to be drawn with the camera assembly and a second depth image corresponding to the two-dimensional image, and to acquire real-time pose data of the terminal; the extracting unit is configured to perform edge extraction on the two-dimensional image to obtain edge feature points corresponding to the two-dimensional image; the drawing unit is configured to calculate the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image and generate a drawing of the object to be drawn according to the coordinates; the drawing includes a three-dimensional perspective view of the object to be drawn and/or a plan view of the object to be drawn.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal are merely illustrative, and for example, the division of the above-described modules or units is only a division of one logic function, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flows in the methods of the above embodiments can be implemented by means of a computer program; the computer program can be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, such as a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, or a software distribution medium. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions are intended to be included within the scope of the present application without departing from the spirit and scope of the present application.

Claims (9)

1. An object drawing method, applied to a terminal, characterized by comprising the following steps:
acquiring a first depth image of a real environment shot by a camera shooting assembly, and establishing a three-dimensional coordinate system based on the real environment according to the first depth image;
acquiring a two-dimensional image obtained by shooting the whole surface of an object to be drawn by the camera assembly and a second depth image corresponding to the two-dimensional image, and acquiring real-time pose data of the terminal; in the process of acquiring the two-dimensional image obtained by shooting the whole surface of the object to be drawn by the camera assembly and the second depth image corresponding to the two-dimensional image, the terminal moves around the object to be drawn so as to shoot the whole surface of the object to be drawn; or, the object to be drawn moves around the terminal, so that the terminal shoots the whole surface of the object to be drawn; or the terminal and the object to be drawn move simultaneously, and only the terminal is required to be capable of shooting the whole surface of the object to be drawn; in the process of acquiring the real-time pose data of the terminal, if the position of the terminal relative to the origin of the three-dimensional coordinate system changes and the position of the object to be drawn relative to the origin of the three-dimensional coordinate system does not change, the acquiring of the real-time pose data of the terminal refers to acquiring the pose data of the terminal relative to the origin of the three-dimensional coordinate system; if the position of the terminal relative to the origin of the three-dimensional coordinate system is not changed, and the position of the object to be drawn relative to the origin of the three-dimensional coordinate system is changed, the acquiring of the real-time pose data of the terminal refers to acquiring the pose data of the terminal relative to the object to be drawn; if the position of the terminal is changed relative to the origin of the three-dimensional coordinate system and the position of the object to be drawn is also changed relative to the origin of the three-dimensional coordinate system, the acquiring of the real-time pose data of the terminal refers to acquiring the pose data of the terminal relative to the origin of the three-dimensional coordinate system and acquiring the pose data of the terminal relative to the object to be drawn;
performing edge extraction on the two-dimensional image to obtain edge feature points corresponding to the two-dimensional image;
calculating the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and generating a pattern of the object to be drawn according to the coordinates; the pattern comprises a three-dimensional stereo image of the object to be drawn and/or a plan view of the object to be drawn;
the drawing method further comprises the following steps:
displaying a movement guide identifier on a shooting interface of the terminal according to the real-time pose data of the terminal, wherein the movement guide identifier changes as the position of the terminal changes; the movement guide identifier comprises a vertical axis with an upward arrow and a downward arrow, and a guide line with an arrow; when the end of the guide line without the arrow coincides with the end with the arrow, it indicates that the front view, the left view, the rear view and the right view of the object to be drawn have been captured; when the upward arrow on the vertical axis disappears, it indicates that the bottom view of the object to be drawn has been captured; and when the downward arrow on the vertical axis disappears, it indicates that the top view of the object to be drawn has been captured.
2. The drawing method according to claim 1, wherein said generating a plan view of the object to be drawn from the coordinates comprises:
receiving a projection direction selection instruction of the three-dimensional stereogram, and generating six views or three views of the object to be drawn according to the projection direction selection instruction.
3. The drawing method according to claim 1 or 2, wherein the calculating coordinates of the edge feature point in the three-dimensional coordinate system from the real-time pose data and the second depth image includes:
determining the pixel coordinates of the edge feature points on the two-dimensional image according to the positions of the edge feature points on the two-dimensional image and the corresponding depth values;
and mapping the pixel coordinate to a coordinate in the three-dimensional coordinate system according to the parameter information of the camera shooting assembly and the real-time pose data.
4. The drawing method according to claim 1, wherein the acquiring a two-dimensional image obtained by photographing the entire surface of the object to be drawn by the camera assembly and a second depth image corresponding to the two-dimensional image comprises:
acquiring a multi-frame sub two-dimensional image shot by the camera assembly on the whole surface of an object to be drawn and a sub depth image corresponding to the sub two-dimensional image;
splicing the sub two-dimensional image and the sub depth image according to the real-time pose data of the terminal to obtain the two-dimensional image and a second depth image corresponding to the two-dimensional image;
the edge extraction of the two-dimensional image comprises the following steps:
and identifying the object to be drawn in the two-dimensional image, and performing edge extraction on the object to be drawn in the two-dimensional image to obtain edge feature points corresponding to the two-dimensional image.
5. The drawing method as claimed in claim 1, wherein said acquiring real-time pose data of the terminal comprises:
and starting when the three-dimensional coordinate system is established, and acquiring the six-degree-of-freedom pose data of the terminal in real time by using an Inertial Measurement Unit (IMU).
6. An apparatus for drawing an object, which is provided in a terminal, comprising:
the system comprises an establishing unit, a processing unit and a display unit, wherein the establishing unit is used for acquiring a first depth image of a real environment shot by a camera shooting assembly and establishing a three-dimensional coordinate system based on the real environment according to the first depth image;
the acquisition unit is used for acquiring a two-dimensional image obtained by shooting the whole surface of an object to be drawn by the camera assembly and a second depth image corresponding to the two-dimensional image, and acquiring real-time pose data of the terminal; in the process of acquiring the two-dimensional image obtained by shooting the whole surface of the object to be drawn by the camera assembly and the second depth image corresponding to the two-dimensional image, the terminal moves around the object to be drawn so as to shoot the whole surface of the object to be drawn; or, the object to be drawn moves around the terminal, so that the terminal shoots the whole surface of the object to be drawn; or the terminal and the object to be drawn move simultaneously, and only the terminal is required to be capable of shooting the whole surface of the object to be drawn; in the process of acquiring the real-time pose data of the terminal, if the position of the terminal relative to the origin of the three-dimensional coordinate system changes and the position of the object to be drawn relative to the origin of the three-dimensional coordinate system does not change, the acquiring of the real-time pose data of the terminal refers to acquiring the pose data of the terminal relative to the origin of the three-dimensional coordinate system; if the position of the terminal relative to the origin of the three-dimensional coordinate system is not changed, and the position of the object to be drawn relative to the origin of the three-dimensional coordinate system is changed, the acquiring of the real-time pose data of the terminal refers to acquiring the pose data of the terminal relative to the object to be drawn; if the position of the terminal is changed relative to the origin of the three-dimensional coordinate system and the position of the object to be drawn is also changed relative to the origin of the three-dimensional coordinate system, the acquiring of the real-time pose data of the terminal refers to acquiring the pose data of the terminal relative to the origin of the three-dimensional coordinate system and acquiring the pose data of the terminal relative to the object to be drawn;
an extraction unit, configured to perform edge extraction on the two-dimensional image to obtain edge feature points corresponding to the two-dimensional image;
a drawing unit, configured to calculate the coordinates of the edge feature points in the three-dimensional coordinate system according to the real-time pose data and the second depth image, and to generate the pattern of the object to be drawn according to the coordinates, wherein the pattern comprises a three-dimensional stereogram of the object to be drawn and/or a plan view of the object to be drawn;
and the apparatus is further configured to display a movement guide mark on a shooting interface of the terminal according to the real-time pose data of the terminal, the movement guide mark changing as the position of the terminal changes; wherein the movement guide mark comprises a vertical axis with an upward arrow and a downward arrow, and a guide line with an arrow; when one end of the guide line coincides with one end of the arrow, this indicates that the front view, the left view, the rear view and the right view of the object to be drawn have been captured; when the downward arrow on the vertical axis disappears, this indicates that the bottom view of the object to be drawn has been captured; and when the upward arrow on the vertical axis disappears, this indicates that the top view of the object to be drawn has been captured.
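As an illustration of the drawing unit's core computation (calculating the coordinates of the edge feature points in the three-dimensional coordinate system from the depth image and the terminal pose), the following NumPy sketch back-projects edge pixels through an assumed pinhole camera model; the intrinsic matrix K and the camera-to-world pose (R_wc, t_wc) are hypothetical inputs derived from the real-time pose data.

    import numpy as np

    def edge_points_to_world(edge_uv, depth_map, K, R_wc, t_wc):
        """Back-project 2D edge feature points into the world (real-environment) coordinate system.

        edge_uv   : N x 2 integer array of (u, v) pixel coordinates of edge feature points
        depth_map : H x W depth image aligned with the 2D image, in metres
        K         : 3 x 3 camera intrinsic matrix
        R_wc, t_wc: camera-to-world rotation (3 x 3) and translation (3,), i.e. the terminal pose
        """
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        u = edge_uv[:, 0].astype(float)
        v = edge_uv[:, 1].astype(float)
        z = depth_map[edge_uv[:, 1], edge_uv[:, 0]]          # depth sampled at each edge pixel
        # Pinhole back-projection into the camera frame.
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points_cam = np.stack([x, y, z], axis=1)             # N x 3 points in the camera frame
        # Transform into the world frame using the terminal's real-time pose.
        return points_cam @ R_wc.T + t_wc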
7. The drawing apparatus according to claim 6,
wherein the drawing unit is further configured to receive a projection direction selection instruction for the three-dimensional stereogram, and to generate a six-view drawing or a three-view drawing of the object to be drawn according to the projection direction selection instruction.
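A rough sketch of how the six-view or three-view drawing in claim 7 could be produced from the three-dimensional point set by orthographic projection; the world-axis convention and the mirroring of the rear, right and bottom views are assumptions made for illustration.

    import numpy as np

    # Which two world axes form the 2D drawing plane for each named view.
    # Axis convention is an assumption: X to the right, Y into the scene, Z up.
    VIEW_AXES = {
        "front": (0, 2), "rear": (0, 2),     # project along Y, keep (X, Z)
        "left": (1, 2), "right": (1, 2),     # project along X, keep (Y, Z)
        "top": (0, 1), "bottom": (0, 1),     # project along Z, keep (X, Y)
    }

    def orthographic_view(points_world, view):
        """Orthographically project the object's 3D edge points onto the plane of the selected view."""
        a, b = VIEW_AXES[view]
        view_2d = points_world[:, [a, b]].copy()
        # Flip the horizontal axis so opposite views are not mirror images of each other.
        if view in ("rear", "right", "bottom"):
            view_2d[:, 0] = -view_2d[:, 0]
        return view_2d

    def three_view(points_world):
        """Return the conventional three-view drawing: front, left and top views."""
        return {v: orthographic_view(points_world, v) for v in ("front", "left", "top")}

    def six_view(points_world):
        """Return all six orthographic views."""
        return {v: orthographic_view(points_world, v) for v in VIEW_AXES}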
8. A terminal, comprising a camera assembly, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN201910565646.7A 2019-06-26 2019-06-26 Object drawing method, device, terminal and computer-readable storage medium Active CN110276774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910565646.7A CN110276774B (en) 2019-06-26 2019-06-26 Object drawing method, device, terminal and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN110276774A (en) 2019-09-24
CN110276774B (en) 2021-07-23

Family

ID=67963487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910565646.7A Active CN110276774B (en) 2019-06-26 2019-06-26 Object drawing method, device, terminal and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110276774B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838160A (en) * 2019-10-30 2020-02-25 广东优世联合控股集团股份有限公司 Building image display mode conversion method, device and storage medium
CN111063015B (en) * 2019-12-13 2023-07-21 重庆首厚智能科技研究院有限公司 Method and system for efficiently drawing point positions
CN112197708B (en) * 2020-08-31 2022-04-22 深圳市慧鲤科技有限公司 Measuring method and device, electronic device and storage medium
CN112150527A (en) * 2020-08-31 2020-12-29 深圳市慧鲤科技有限公司 Measuring method and device, electronic device and storage medium
CN112348890B (en) * 2020-10-27 2024-01-23 深圳技术大学 Space positioning method, device and computer readable storage medium
CN116704129B (en) * 2023-06-14 2024-01-30 维坤智能科技(上海)有限公司 Panoramic view-based three-dimensional image generation method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500467A (en) * 2013-10-21 2014-01-08 深圳市易尚展示股份有限公司 Constructive method of image-based three-dimensional model
US20140160123A1 (en) * 2012-12-12 2014-06-12 Microsoft Corporation Generation of a three-dimensional representation of a user
CN107562226A (en) * 2017-09-15 2018-01-09 广东虹勤通讯技术有限公司 A kind of 3D drafting systems and method
CN108304119A (en) * 2018-01-19 2018-07-20 腾讯科技(深圳)有限公司 object measuring method, intelligent terminal and computer readable storage medium
CN108564616A (en) * 2018-03-15 2018-09-21 中国科学院自动化研究所 Method for reconstructing three-dimensional scene in the rooms RGB-D of fast robust
CN109186461A (en) * 2018-07-27 2019-01-11 南京阿凡达机器人科技有限公司 A kind of measurement method and measuring device of cabinet size

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4807693B2 (en) * 2001-09-26 2011-11-02 パイオニア株式会社 Image creating apparatus and method, electronic apparatus, and computer program
CN106651755A (en) * 2016-11-17 2017-05-10 宇龙计算机通信科技(深圳)有限公司 Panoramic image processing method and device for terminal and terminal
CN108170297B (en) * 2017-09-11 2021-11-16 南京睿悦信息技术有限公司 Real-time six-degree-of-freedom VR/AR/MR device positioning method
CN108225258A (en) * 2018-01-09 2018-06-29 天津大学 Based on inertance element and laser tracker dynamic pose measuring apparatus and method
CN108629830A (en) * 2018-03-28 2018-10-09 深圳臻迪信息技术有限公司 A kind of three-dimensional environment method for information display and equipment
CN108924412B (en) * 2018-06-22 2021-01-12 维沃移动通信有限公司 Shooting method and terminal equipment
CN109102537B (en) * 2018-06-25 2020-03-20 中德人工智能研究院有限公司 Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera


Similar Documents

Publication Publication Date Title
CN110006343B (en) Method and device for measuring geometric parameters of object and terminal
CN110276774B (en) Object drawing method, device, terminal and computer-readable storage medium
CN110276317B (en) Object size detection method, object size detection device and mobile terminal
CN107223269B (en) Three-dimensional scene positioning method and device
US11842438B2 (en) Method and terminal device for determining occluded area of virtual object
US9886774B2 (en) Photogrammetric methods and devices related thereto
CN109584295B (en) Method, device and system for automatically labeling target object in image
EP3155596B1 (en) 3d scanning with depth cameras using mesh sculpting
JP6573419B1 (en) Positioning method, robot and computer storage medium
TW201709718A (en) Method and apparatus for displaying a light field based image on a user's device, and corresponding computer program product
CN113048980B (en) Pose optimization method and device, electronic equipment and storage medium
CN113592989B (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
WO2019196745A1 (en) Face modelling method and related product
CN112652016A (en) Point cloud prediction model generation method, pose estimation method and device
CN111199579A (en) Method, device, equipment and medium for building three-dimensional model of target object
US20160210761A1 (en) 3d reconstruction
CN114119864A (en) Positioning method and device based on three-dimensional reconstruction and point cloud matching
CN110866977A (en) Augmented reality processing method, device and system, storage medium and electronic equipment
CN108028904B (en) Method and system for light field augmented reality/virtual reality on mobile devices
CN112686877A (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN112729327A (en) Navigation method, navigation device, computer equipment and storage medium
CN110458954B (en) Contour line generation method, device and equipment
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN116168143A (en) Multi-view three-dimensional reconstruction method
JP5518677B2 (en) Virtual information giving apparatus and virtual information giving program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant