CN115546417A - Three-dimensional reconstruction method, system, electronic device and computer-readable storage medium


Info

Publication number
CN115546417A
Authority
CN
China
Prior art keywords
target object
dimensional
surface contour
dimensional reconstruction
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211365846.6A
Other languages
Chinese (zh)
Inventor
李佳明 (Li Jiaming)
王文 (Wang Wen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Technology Co Ltd
Original Assignee
Goertek Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN202211365846.6A
Publication of CN115546417A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014 Hand-worn input/output arrangements, e.g. data gloves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a three-dimensional reconstruction method, a three-dimensional reconstruction system, an electronic device and a computer-readable storage medium, applied to a head-mounted display device. The three-dimensional reconstruction method comprises the following steps: receiving surface contour position information uploaded by a hand-worn device, wherein the surface contour position information is obtained by the hand-worn device measuring the position information of touch points between the hand-worn device and the surface contour of a target object; determining first depth information of the target object relative to the head-mounted display device according to the surface contour position information; and performing three-dimensional reconstruction on the target object according to the first depth information to obtain a target three-dimensional model. The present application solves the technical problem of how to both improve the three-dimensional reconstruction accuracy of a head-mounted display device and reduce its power consumption during three-dimensional reconstruction.

Description

Three-dimensional reconstruction method, system, electronic device and computer-readable storage medium
Technical Field
The present application relates to the field of three-dimensional reconstruction technologies, and in particular, to a three-dimensional reconstruction method, a three-dimensional reconstruction system, an electronic device, and a computer-readable storage medium.
Background
With the rise of the metaverse concept, three-dimensional reconstruction technology is developing rapidly. Currently, when a head-mounted display device performs three-dimensional reconstruction, it usually relies on an RGB-D depth camera, whereas head-mounted display devices on the market are usually equipped with an ordinary RGB camera. For an ordinary RGB camera to acquire depth information, two RGB cameras are typically used to obtain the depth from a time difference, which increases the power consumption of the head-mounted display device; if three-dimensional reconstruction is instead performed with only a single RGB camera, the lack of depth information affects the accuracy of the three-dimensional reconstruction performed by the head-mounted display device.
Disclosure of Invention
A primary objective of the present application is to provide a three-dimensional reconstruction method, a three-dimensional reconstruction system, an electronic device, and a computer-readable storage medium, so as to solve the technical problem of improving the three-dimensional reconstruction accuracy of a head-mounted display device while reducing its power consumption during three-dimensional reconstruction.
In order to achieve the above object, the present application provides a three-dimensional reconstruction method applied to a head-mounted display device, the three-dimensional reconstruction method including:
receiving surface contour position information uploaded by a hand wearing device, wherein the surface contour position information is obtained by measuring position information of a touch point between the hand wearing device and a surface contour of a target object by the hand wearing device;
determining first depth information of the target object relative to the head-mounted display device according to the surface contour position information;
and performing three-dimensional reconstruction on the target object according to the first depth information to obtain a target three-dimensional model.
Optionally, the step of performing three-dimensional reconstruction on the target object according to the first depth information includes:
acquiring a first object image of a target object and a second object image of the target object in a current three-dimensional reconstruction scene, wherein the first object image is captured by the head-mounted display device at the time when the hand-worn device measures the position information of the touch point;
determining view angle offset information of the second object image relative to the first object image by performing feature matching on the first object image and the second object image;
adjusting the first depth information according to the visual angle offset information to obtain second depth information;
and performing three-dimensional reconstruction on the target object according to the second depth information.
Optionally, the step of performing three-dimensional reconstruction on the target object according to the first depth information includes:
acquiring a two-dimensional object image of the target object, and performing three-dimensional modeling on the target object according to the two-dimensional object image to obtain a three-dimensional object model;
and according to the first depth information, carrying out space visual angle adjustment on the three-dimensional object model to obtain a target three-dimensional model.
Optionally, the surface contour position information includes at least one surface contour point coordinate,
the step of determining first depth information of the target object relative to the head-mounted display device according to the surface contour position information comprises:
determining object center point coordinates of the target object according to the surface contour point coordinates;
and calculating the distance between the object center point coordinate and the head-mounted display equipment to obtain first depth information of the target object relative to the head-mounted display equipment.
Optionally, before the step of receiving surface contour position information uploaded by a hand-worn device, the three-dimensional reconstruction method further comprises:
acquiring a two-dimensional object image of a target object, and performing image recognition on the two-dimensional object image to obtain an image recognition result;
according to the image recognition result, identifying key points for measurement in the two-dimensional object image to obtain an identified two-dimensional object image;
displaying the identification two-dimensional object image through the head-mounted display device, wherein the identification two-dimensional object image is used for indicating a user to touch the measurement key point of the target object through the hand-mounted device.
In order to achieve the above object, the present application provides a three-dimensional reconstruction method applied to a hand-worn device, the three-dimensional reconstruction method including:
measuring position information of a touch point between the hand-wearing device and the surface profile of the target object to obtain surface profile position information corresponding to the target object;
and uploading the surface contour position information to a head-mounted display device, so that the head-mounted display device determines the depth information of the target object relative to the head-mounted display device according to the surface contour position information, and performs three-dimensional reconstruction according to the depth information.
Optionally, the surface contour position information comprises at least one surface contour point coordinate, and the hand-worn device comprises a fingertip position sensor,
the step of measuring the position information of the touch point between the hand-wearing device and the surface profile of the target object to obtain the surface profile position information corresponding to the target object comprises the following steps:
detecting an extrusion pressure value at a fingertip position of the hand-worn device;
and if the extrusion pressure value is larger than a preset pressure threshold value, measuring the position coordinates of a touch point between the hand-wearing device and the surface contour of the target object through the fingertip position sensor to obtain surface contour point coordinates.
To achieve the above object, the present application further provides a three-dimensional reconstruction system, including:
the head-mounted display equipment is used for receiving surface contour position information uploaded by the hand-mounted equipment; determining first depth information of a target object relative to the head-mounted display device according to the surface contour position information; according to the first depth information, performing three-dimensional reconstruction on the target object to obtain a target three-dimensional model;
the hand wearing equipment is used for measuring the position information of a touch point between the hand wearing equipment and the surface contour of the target object to obtain the surface contour position information corresponding to the target object; and uploading the surface contour position information to a head-mounted display device, so that the head-mounted display device determines the depth information of the target object relative to the head-mounted display device according to the surface contour position information, and performs three-dimensional reconstruction according to the depth information.
Optionally, the head mounted display device is further configured to:
acquiring a first object image of a target object and a second object image of the target object in a current three-dimensional reconstruction scene, wherein the first object image is captured by the head-mounted display device at the time when the hand-worn device measures the position information of the touch point;
determining view angle offset information of the second object image relative to the first object image by performing feature matching on the first object image and the second object image;
adjusting the first depth information according to the visual angle offset information to obtain second depth information;
and performing three-dimensional reconstruction on the target object according to the second depth information.
Optionally, the head mounted display device is further configured to:
acquiring a two-dimensional object image of the target object, and performing three-dimensional modeling on the target object according to the two-dimensional object image to obtain a three-dimensional object model;
and according to the first depth information, carrying out space visual angle adjustment on the three-dimensional object model to obtain a target three-dimensional model.
Optionally, the surface contour position information includes at least one surface contour point coordinate, and the head-mounted display device is further configured to:
determining object center point coordinates of the target object according to the surface contour point coordinates;
and calculating the distance between the object center point coordinate and the head-mounted display equipment to obtain first depth information of the target object relative to the head-mounted display equipment.
Optionally, the head mounted display device is further configured to:
acquiring a two-dimensional object image of a target object, and performing image recognition on the two-dimensional object image to obtain an image recognition result;
according to the image recognition result, identifying key points for measurement in the two-dimensional object image to obtain an identified two-dimensional object image;
displaying the identification two-dimensional object image through the head-mounted display device, wherein the identification two-dimensional object image is used for indicating a user to touch the measurement key point of the target object through the hand-mounted device.
Optionally, the surface contour position information includes at least one surface contour point coordinate, the hand-worn device includes a fingertip position sensor, and the hand-worn device is further configured to:
detecting an extrusion pressure value at a fingertip position of the hand-worn device;
and if the extrusion pressure value is larger than a preset pressure threshold value, measuring the position coordinates of a touch point between the hand-wearing device and the surface contour of the target object through the fingertip position sensor to obtain surface contour point coordinates.
The present application further provides an electronic device, which is a physical device, the electronic device including: a memory, a processor, and a program of the three-dimensional reconstruction method stored on the memory and executable on the processor, where the program, when executed by the processor, implements the steps of the three-dimensional reconstruction method as described above.
The present application also provides a computer-readable storage medium having stored thereon a program for implementing a three-dimensional reconstruction method, which when executed by a processor implements the steps of the three-dimensional reconstruction method as described above.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the three-dimensional reconstruction method as described above.
The application provides a three-dimensional reconstruction method, a three-dimensional reconstruction system, an electronic device and a computer-readable storage medium. In the embodiments of the application, a hand-worn device and a head-mounted display device are provided; when the hand-worn device touches a target object, the hand-worn device can measure the position information of the touch points between itself and the surface contour of the target object as the surface contour position information of the target object. After the head-mounted display device receives the surface contour position information sent by the hand-worn device, it can determine first depth information of the target object relative to the head-mounted display device according to the surface contour position information, and perform three-dimensional reconstruction on the target object according to the first depth information to obtain a target three-dimensional model. In this way, the head-mounted display device obtains the depth information required for three-dimensional reconstruction with the help of the hand-worn device and does not need two RGB cameras to acquire depth information. In the present application, the two-dimensional object image acquired by a single RGB camera is combined with the depth information obtained via the hand-worn device, so that an accurate three-dimensional reconstruction process can be carried out with only a single camera, improving the three-dimensional reconstruction accuracy in a single-RGB-camera scenario. Compared with a dual-RGB-camera scenario, the power consumption of the head-mounted display device is reduced while the reconstruction accuracy is ensured, thereby both improving the three-dimensional reconstruction accuracy of the head-mounted display device and reducing its power consumption during three-dimensional reconstruction.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a first embodiment of a three-dimensional reconstruction method according to the present application;
fig. 2 is a schematic flowchart of a second embodiment of the three-dimensional reconstruction method according to the present application;
fig. 3 is a schematic flowchart of a third embodiment of the three-dimensional reconstruction method according to the present application;
fig. 4 is a schematic structural diagram of a hardware operating environment related to a three-dimensional reconstruction method in an embodiment of the present application.
The implementation of the objectives, functional features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments of the present application are described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
Example one
An embodiment of the present application provides a three-dimensional reconstruction method applied to a head-mounted display device, and referring to fig. 1, in a first embodiment of the three-dimensional reconstruction method of the present application, the three-dimensional reconstruction method includes:
step S10, receiving surface contour position information uploaded by hand-wearing equipment, wherein the surface contour position information is obtained by measuring position information of a touch point between the hand-wearing equipment and a surface contour of a target object by the hand-wearing equipment;
step S20, determining first depth information of the target object relative to the head-mounted display equipment according to the surface contour position information;
and S30, performing three-dimensional reconstruction on the target object according to the first depth information to obtain a target three-dimensional model.
In this embodiment, it should be noted that the target object is a real-world object, and the hand-worn device may be communicatively connected with the head-mounted display device. The hand-worn device may be a smart glove provided with at least one position sensor, and the position sensors may be disposed at the fingertip positions of the smart glove, so that after a user operating the smart glove touches the target object, the position sensors can measure the position coordinates of at least one touch point between the fingertip positions of the smart glove and the target object, thereby obtaining the surface contour position information of the target object. The head-mounted display device may be an AR (Augmented Reality) device, a VR (Virtual Reality) device, or an MR (Mixed Reality) device.
As an example, steps S10 to S30 include: receiving surface contour position information uploaded by the hand-worn device, wherein the surface contour position information comprises the position coordinates of at least one surface contour point on the surface contour of the target object, and the position coordinates of each surface contour point are obtained by the position sensor measuring the position coordinates of a touch point between the hand-worn device and the surface contour of the target object; determining the distance from the head-mounted display device to the object center point of the target object according to the position coordinates of each surface contour point to obtain first depth information; and performing three-dimensional reconstruction on the target object according to the first depth information to obtain a target three-dimensional model. The first depth information is the separation distance between the head-mounted display device and the target object and is used, during three-dimensional reconstruction, to adjust the distance of the three-dimensional model within the field of view of the AR display picture: the larger the separation distance, the farther away the three-dimensional model appears in that field of view. In this way, the head-mounted display device in this embodiment only needs a single RGB camera to acquire the two-dimensional object image, and the hand-worn device assists the head-mounted display device in acquiring the depth information, so that combining the two-dimensional object image with the depth information enables accurate three-dimensional reconstruction. Compared with methods that obtain depth information through two RGB cameras, the camera power consumption of the head-mounted display device during three-dimensional reconstruction is lower in this embodiment; compared with methods that perform three-dimensional reconstruction directly from images captured by a single RGB camera without depth information, the accuracy of three-dimensional reconstruction in this embodiment is higher.
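To make the flow of steps S10 to S30 concrete, the following Python sketch computes the first depth information from the uploaded touch-point coordinates and places a stand-in model at that depth. It assumes the contour points and the head-mounted display position are already expressed in a common world coordinate frame; all function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def reconstruct_target(contour_points, hmd_position):
    """Illustrative head-mounted-display-side flow for steps S10-S30.

    contour_points : list of [x, y, z] touch-point coordinates uploaded by
                     the hand-worn device (step S10), assumed to be in the
                     head-mounted display's world frame, in meters.
    hmd_position   : (3,) world-frame position of the head-mounted display.
    Returns the first depth and a placeholder model positioned at that depth.
    """
    points = np.asarray(contour_points, dtype=float)

    # Step S20: first depth information = distance from the HMD to the
    # center point of the touched target object.
    center = points.mean(axis=0)
    first_depth = float(np.linalg.norm(center - hmd_position))

    # Step S30 (stub): stand in for the single-RGB-camera reconstruction with
    # a unit cube, then push it out along the viewing ray to the measured
    # depth so it appears at the correct distance in the AR field of view.
    cube = np.array([[x, y, z] for x in (-.5, .5) for y in (-.5, .5) for z in (-.5, .5)])
    direction = (center - hmd_position) / max(first_depth, 1e-9)
    model = cube + hmd_position + direction * first_depth
    return first_depth, model

# Fabricated example: three fingertip touch points on an object about 1.2 m away
depth, model = reconstruct_target(
    [[0.10, 0.00, 1.20], [0.00, 0.12, 1.25], [-0.08, 0.02, 1.30]],
    hmd_position=np.zeros(3))
print(f"first depth = {depth:.2f} m, model vertices: {len(model)}")
```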
Wherein the step of performing three-dimensional reconstruction of the target object according to the first depth information comprises:
step S31, acquiring a two-dimensional object image of the target object, and performing three-dimensional modeling on the target object according to the two-dimensional object image to obtain a three-dimensional object model;
and S32, according to the first depth information, carrying out space visual angle adjustment on the three-dimensional object model to obtain a target three-dimensional model.
As an example, steps S31 to S32 include: acquiring a two-dimensional object image of the target object captured by the single RGB camera of the head-mounted display device, and performing three-dimensional modeling on the target object according to the two-dimensional object image to obtain a three-dimensional object model, where the modeling approach includes SFM (Structure-from-Motion) methods and the like; for example, a three-dimensional point cloud model can be built through Colmap or OpenMVS. Then, according to the first depth information, the spatial viewing angle of the three-dimensional object model is adjusted so as to adjust its display effect in the AR display picture, yielding the target three-dimensional model, where the display effect mainly includes the display size, the stereoscopic impression in space, and the like. In this embodiment, the three-dimensional object model is adjusted according to the depth information, so that the adjusted three-dimensional object model displayed in the AR picture is closer to the target object in real space, which improves the accuracy of the three-dimensional reconstruction.
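Because a monocular SfM reconstruction is only determined up to scale, one plausible reading of the spatial viewing-angle adjustment in steps S31 to S32 is to rescale and reposition the reconstructed point cloud so that its center lies at the measured first depth. The sketch below assumes the point cloud (for example one exported from Colmap or OpenMVS) is already loaded as an N×3 array; it is an illustration under these assumptions, not the patent's own implementation.

```python
import numpy as np

def adjust_model_to_depth(model_points, camera_position, first_depth, target_extent=None):
    """Reposition (and optionally rescale) a point cloud so that its center
    lies at `first_depth` meters from the camera (step S32).

    model_points    : (N, 3) point cloud produced by the modeling of step S31.
    camera_position : (3,) world-frame position of the head-mounted display camera.
    first_depth     : measured distance between the HMD and the object center (m).
    target_extent   : optional known physical size used to fix the SfM scale.
    """
    pts = np.asarray(model_points, dtype=float)
    center = pts.mean(axis=0)

    if target_extent is not None:                  # resolve monocular scale ambiguity
        current_extent = np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))
        pts = (pts - center) * (target_extent / max(current_extent, 1e-9)) + center
        center = pts.mean(axis=0)

    view_dir = center - camera_position
    view_dir /= max(np.linalg.norm(view_dir), 1e-9)
    desired_center = camera_position + view_dir * first_depth
    return pts + (desired_center - center)         # translate along the viewing ray

# Usage with a stand-in cloud; a real cloud would come from the SfM step
cloud = np.random.rand(500, 3)
adjusted = adjust_model_to_depth(cloud, camera_position=np.zeros(3), first_depth=1.25)
```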
Wherein the surface contour position information comprises at least one surface contour point coordinate, and the step of determining the first depth information of the target object relative to the head-mounted display device according to the surface contour position information comprises:
step S21, determining object center point coordinates of the target object according to the surface contour point coordinates;
step S22, calculating the distance between the object center point coordinates and the head-mounted display equipment to obtain first depth information of the target object relative to the head-mounted display equipment.
As an example, steps S21 to S22 include: identifying the surface contour shape of the target object from a two-dimensional object image of the target object; calculating the object center point coordinates of the center point of the target object according to the surface contour shape and the coordinates of each surface contour point; and acquiring the spatial position coordinates of the head-mounted display device, calculating the separation distance between the center point of the target object and the head-mounted display device according to the object center point coordinates and the spatial position coordinates, and taking this separation distance as the first depth information of the target object relative to the head-mounted display device.
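Steps S21 to S22 reduce to a centroid followed by a Euclidean distance. The sketch below additionally assumes the head-mounted display exposes its pose as a 4×4 world-from-device matrix, as typical tracking stacks do, and that the contour points are given in the same world frame; the names are illustrative.

```python
import numpy as np

def first_depth_from_contour(contour_point_coords, hmd_world_from_device):
    """Step S21: object center from the surface contour points.
    Step S22: HMD-to-center distance as the first depth information.

    contour_point_coords  : (N, 3) surface contour point coordinates (world frame).
    hmd_world_from_device : (4, 4) pose matrix of the head-mounted display;
                            its translation column gives the device position.
    """
    pts = np.asarray(contour_point_coords, dtype=float)
    object_center = pts.mean(axis=0)                                   # step S21
    hmd_position = np.asarray(hmd_world_from_device, dtype=float)[:3, 3]
    first_depth = float(np.linalg.norm(object_center - hmd_position))  # step S22
    return object_center, first_depth

pose = np.eye(4)   # assumption: HMD currently at the world origin
center, depth = first_depth_from_contour([[0.1, 0.0, 1.2], [0.0, 0.1, 1.25]], pose)
```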
Wherein, before the step of receiving surface contour position information uploaded by the hand-worn device, the three-dimensional reconstruction method further comprises:
a10, acquiring a two-dimensional object image of a target object, and performing image recognition on the two-dimensional object image to obtain an image recognition result;
step A20, according to the image recognition result, identifying key points in the two-dimensional object image for measurement to obtain an identified two-dimensional object image;
step A30, displaying the marked two-dimensional object image through the head-mounted display device, wherein the marked two-dimensional object image is used for indicating that a user touches the measurement key point of the target object through the hand-mounted device.
As an example, steps A10 to A30 include: acquiring a two-dimensional object image of the target object, and performing image recognition on the two-dimensional object image to recognize the object shape and the object outline of the target object, thereby obtaining an image recognition result consisting of an object shape recognition result and an object outline recognition result; marking measurement key points in the two-dimensional object image according to the object shape recognition result and the object outline recognition result to obtain a marked two-dimensional object image, where the marking may use color marks or markers, for example marking the measurement key points with a highly visible color or an eye-catching marker; and displaying the marked two-dimensional object image in the AR display picture of the head-mounted display device, where the marked two-dimensional object image is used to instruct the user to touch the measurement key points of the target object with the hand-worn device. In this way, the user can be guided to operate the hand-worn device to accurately touch the measurement key points on the target object, so that the coordinates of the key points to be measured can be obtained accurately. This improves the accuracy with which the hand-worn device measures the key point coordinates of the target object, yields more accurate surface contour position information, and in turn improves the accuracy of the first depth information calculated from it; performing three-dimensional reconstruction based on more accurate first depth information improves the accuracy of the three-dimensional reconstruction.
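As one possible realization of steps A10 to A30 (the patent does not prescribe a specific recognition algorithm), the sketch below uses off-the-shelf OpenCV primitives to stand in for the image-recognition step: it extracts the object outline by thresholding and contour detection, approximates it with a polygon, and marks the polygon corners as the measurement key points to be displayed on the head-mounted display.

```python
import cv2
import numpy as np

def mark_measurement_keypoints(bgr_image):
    """Illustrative steps A10-A30: recognize the object outline in the
    two-dimensional object image and mark its corners as measurement key
    points. Requires OpenCV >= 4 (two return values from findContours).
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return bgr_image, []

    outline = max(contours, key=cv2.contourArea)                   # object outline (step A10)
    eps = 0.02 * cv2.arcLength(outline, True)
    corners = cv2.approxPolyDP(outline, eps, True).reshape(-1, 2)  # key points (step A20)

    marked = bgr_image.copy()
    for (x, y) in corners:                                         # eye-catching markers (step A20)
        cv2.drawMarker(marked, (int(x), int(y)), (0, 0, 255),
                       markerType=cv2.MARKER_CROSS, markerSize=20, thickness=2)
    return marked, corners.tolist()                                # displayed on the HMD (step A30)
```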
The embodiment of the application provides a three-dimensional reconstruction method. In this embodiment, a hand-worn device and a head-mounted display device are provided; when the hand-worn device touches a target object, the hand-worn device can measure the position information of the touch points between itself and the surface contour of the target object as the surface contour position information of the target object. After the head-mounted display device receives the surface contour position information sent by the hand-worn device, it can determine first depth information of the target object relative to the head-mounted display device according to the surface contour position information, and perform three-dimensional reconstruction on the target object according to the first depth information to obtain a target three-dimensional model. In this way, the head-mounted display device obtains the depth information required for three-dimensional reconstruction with the help of the hand-worn device and does not need two RGB cameras to acquire depth information. In this embodiment, the two-dimensional object image acquired by a single RGB camera is combined with the depth information obtained via the hand-worn device, so that an accurate three-dimensional reconstruction process can be carried out with only a single camera, improving the three-dimensional reconstruction accuracy in a single-RGB-camera scenario. Compared with a dual-RGB-camera scenario, the power consumption of the head-mounted display device is reduced while the reconstruction accuracy is ensured, thereby both improving the three-dimensional reconstruction accuracy of the head-mounted display device and reducing its power consumption during three-dimensional reconstruction.
Example two
Referring to fig. 2, in another embodiment of the present application, based on the first embodiment of the present application, the step of performing three-dimensional reconstruction on the target object according to the first depth information includes:
step B10, acquiring a first object image of a target object and a second object image of the target object in a current three-dimensional reconstruction scene, wherein the first object image is captured by the head-mounted display device at the time when the hand-worn device measures the position information of the touch point;
step B20, determining the view angle offset information of the second object image relative to the first object image by performing feature matching on the first object image and the second object image;
step B30, adjusting the first depth information according to the visual angle offset information to obtain second depth information;
and B40, performing three-dimensional reconstruction on the target object according to the second depth information.
In this embodiment, it should be noted that the camera for shooting the target object may be a camera mounted on the head-mounted display device.
For the first depth information, it may be stored in the head-mounted display device, for example in a linked list, and taken out when depth information is needed for three-dimensional reconstruction. However, the real-space scene at the time of three-dimensional reconstruction is often inconsistent with the real-space scene at the time the depth information was measured, so the depth of the target object relative to the head-mounted display device during reconstruction differs from the depth at the time the first depth information was measured. Therefore, performing three-dimensional reconstruction directly from the first depth information may affect the accuracy of the reconstruction.
As an example, steps B10 to B40 include: acquiring a first object image of a target object and a second object image of the target object in a current three-dimensional reconstruction scene, wherein the first object image is obtained by shooting of the head-mounted display device when the hand-mounted device measures the position information of the touch point; obtaining a feature matching result by performing feature matching on the first object image and the second object image, wherein the feature matching result can be a corresponding relation between pixel points in the first object image and pixel points in the second object image; determining view angle offset information of the second object image relative to the first object image according to the feature matching result, wherein the view angle offset information comprises a view angle rotation matrix and a view angle offset matrix; correcting the first depth information according to the visual angle offset information to obtain second depth information under the current three-dimensional reconstruction scene; according to the second object image, performing three-dimensional modeling on the target object to obtain a three-dimensional object model; and according to the second depth information, carrying out space visual angle adjustment on the three-dimensional object model to obtain a target three-dimensional model. The current three-dimensional reconstruction scene may be a real space scene when three-dimensional reconstruction is currently performed, where the real space scene relates to a relative distance, a relative orientation, and the like between the head-mounted display device and the target object.
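The sketch below illustrates one way steps B10 to B40 could be realized with standard OpenCV feature matching: ORB features are matched between the two images, the essential matrix yields the rotation and the direction of the camera translation, and the stored object center is re-expressed in the current camera frame to obtain the second depth. Because an essential matrix only determines the translation direction, the metric translation scale is assumed here to come from the headset's own tracking; the camera intrinsics `K` and all names are assumptions rather than details from the patent.

```python
import cv2
import numpy as np

def second_depth_from_view_offset(first_img, second_img, K,
                                  object_center_cam1, translation_scale_m):
    """Illustrative steps B10-B40: estimate the view-angle offset between the
    image taken at measurement time and the current image, then correct the
    stored first depth for the current reconstruction scene.

    first_img, second_img : grayscale uint8 images of the target object.
    K                     : 3x3 intrinsic matrix of the HMD's RGB camera.
    object_center_cam1    : (3,) object center in the first camera's frame,
                            e.g. first_depth times the unit viewing ray.
    translation_scale_m   : metric length of the camera translation (assumed
                            to come from the headset's own tracking).
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(first_img, None)            # step B20: feature matching
    k2, d2 = orb.detectAndCompute(second_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)  # rotation + unit translation

    # Step B30: express the stored object center in the current camera frame;
    # its new distance is the second depth information used in step B40.
    center_cam2 = R @ np.asarray(object_center_cam1, dtype=float) + translation_scale_m * t.ravel()
    return float(np.linalg.norm(center_cam2))
```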
The embodiment of the application provides a three-dimensional reconstruction method, namely: acquiring a first object image of a target object and a second object image of the target object in a current three-dimensional reconstruction scene, wherein the first object image is captured by the head-mounted display device at the time when the hand-worn device measures the position information of the touch point; determining view angle offset information of the second object image relative to the first object image by performing feature matching on the first object image and the second object image; adjusting the first depth information according to the view angle offset information to obtain second depth information; and performing three-dimensional reconstruction on the target object according to the second depth information. In this way, the measured first depth information can be stored, and when the depth information is needed for three-dimensional reconstruction, the first depth information is corrected according to the view angle offset of the current real-space scene relative to the real-space scene at the time the surface contour position information was measured, giving the second depth information for the current three-dimensional reconstruction scene. The depth information used for three-dimensional reconstruction is thus matched with the current three-dimensional reconstruction scene, and performing three-dimensional reconstruction of the target object in the current scene according to the second depth information improves the accuracy of the three-dimensional reconstruction.
EXAMPLE III
An embodiment of the present application provides a three-dimensional reconstruction method applied to a hand-worn device. Referring to fig. 3, in a third embodiment of the three-dimensional reconstruction method of the present application, the three-dimensional reconstruction method includes:
step C10, measuring position information of a touch point between the hand-wearing device and the surface contour of the target object to obtain surface contour position information corresponding to the target object;
and step C20, uploading the surface contour position information to a head-mounted display device, so that the head-mounted display device determines the depth information of the target object relative to the head-mounted display device according to the surface contour position information, and performing three-dimensional reconstruction according to the depth information.
In this embodiment, it should be noted that the hand-worn device may be a smart glove with a position sensor disposed at its fingertip position. When the fingertip position of the smart glove touches the surface contour of the target object, the position sensor takes the measured position coordinates of the touch point as the position coordinates of a surface contour point of the target object, and the position coordinates of at least one surface contour point are transmitted to the head-mounted display device as the surface contour position information, so that the head-mounted display device determines the depth information of the target object relative to the head-mounted display device according to the surface contour position information and performs three-dimensional reconstruction according to the depth information. The specific implementation of determining the depth information and performing three-dimensional reconstruction may refer to the contents of steps S10 to S30 and is not repeated here.
Wherein the surface contour position information comprises at least one surface contour point coordinate, and the hand-worn device comprises a fingertip position sensor,
the step of measuring the position information of the touch point between the hand-wearing device and the surface contour of the target object to obtain the position information of the surface contour corresponding to the target object comprises the following steps:
step C21, detecting an extrusion pressure value at a fingertip position of the hand wearing device;
and step C22, if the squeezing pressure value is larger than a preset pressure threshold value, measuring the position coordinates of a touch point between the hand wearing device and the surface contour of the target object through the fingertip position sensor to obtain surface contour point coordinates.
In this embodiment, it should be noted that the fingertip position sensor is a position sensor disposed at a fingertip position of the hand-worn device, wherein a pressure sensor is further disposed on the hand-worn device.
As an example, steps C21 to C22 include: detecting, by the pressure sensor, a squeezing pressure value at the fingertip position of the hand-worn device; and if the squeezing pressure value is greater than the preset pressure threshold, determining that the fingertip position of the hand-worn device is touching a measurement key point on the target object, measuring the position coordinates of the fingertip position through the fingertip position sensor, and taking these position coordinates as the position coordinates of the touch point between the hand-worn device and the surface contour of the target object, thereby obtaining the surface contour point coordinates.
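A minimal glove-side sketch of steps C21 to C22, followed by the upload of step C20, is given below; the pressure threshold, units, sensor data layout and upload callback are all assumptions introduced for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

PRESSURE_THRESHOLD_N = 0.8   # assumed preset pressure threshold, in newtons

@dataclass
class FingertipSample:
    pressure: float          # squeezing pressure at the fingertip (step C21)
    position: List[float]    # fingertip position sensor reading [x, y, z]

def collect_surface_contour(samples: List[FingertipSample],
                            upload: Callable[[List[List[float]]], None]):
    """Keep a fingertip reading only when the squeezing pressure exceeds the
    threshold (step C21), record it as a surface contour point (step C22),
    then upload the collected coordinates to the head-mounted display (step C20).
    """
    contour_points = []
    for s in samples:
        if s.pressure > PRESSURE_THRESHOLD_N:    # touch with the object confirmed
            contour_points.append(s.position)    # measured touch-point coordinates
    if contour_points:
        upload(contour_points)                   # surface contour position information
    return contour_points

# Fabricated sensor readings: only the pressed samples are kept and uploaded
readings = [FingertipSample(0.2, [0.00, 0.00, 1.00]),
            FingertipSample(1.1, [0.10, 0.00, 1.20]),
            FingertipSample(1.4, [0.00, 0.10, 1.25])]
collect_surface_contour(readings, upload=lambda pts: print("uploading", pts))
```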
The embodiment of the application provides a three-dimensional reconstruction method, namely: measuring the position information of touch points between the hand-worn device and the surface contour of a target object to obtain surface contour position information corresponding to the target object; and uploading the surface contour position information to a head-mounted display device, so that the head-mounted display device determines the depth information of the target object relative to the head-mounted display device according to the surface contour position information and performs three-dimensional reconstruction according to the depth information. In this way, the head-mounted display device obtains the depth information required for three-dimensional reconstruction with the help of the hand-worn device and does not need two RGB cameras to acquire depth information. In the present application, the two-dimensional object image acquired by a single RGB camera is combined with the depth information obtained via the hand-worn device, so that an accurate three-dimensional reconstruction process can be carried out with only a single camera, improving the three-dimensional reconstruction accuracy in a single-RGB-camera scenario. Compared with a dual-RGB-camera scenario, the power consumption of the head-mounted display device is reduced while the reconstruction accuracy is ensured, thereby both improving the three-dimensional reconstruction accuracy of the head-mounted display device and reducing its power consumption during three-dimensional reconstruction.
Example four
The present application further provides a three-dimensional reconstruction system, comprising:
the head-mounted display equipment is used for receiving surface contour position information uploaded by the hand-mounted equipment; determining first depth information of a target object relative to the head-mounted display device according to the surface contour position information; according to the first depth information, performing three-dimensional reconstruction on the target object to obtain a target three-dimensional model;
the hand wearing equipment is used for measuring the position information of a touch point between the hand wearing equipment and the surface contour of the target object to obtain the surface contour position information corresponding to the target object; and uploading the surface contour position information to a head-mounted display device, so that the head-mounted display device determines the depth information of the target object relative to the head-mounted display device according to the surface contour position information, and performing three-dimensional reconstruction according to the depth information.
Optionally, the head mounted display device is further configured to:
acquiring a first object image of a target object and a second object image of the target object in a current three-dimensional reconstruction scene, wherein the first object image is captured by the head-mounted display device at the time when the hand-worn device measures the position information of the touch point;
determining view angle offset information of the second object image relative to the first object image by performing feature matching on the first object image and the second object image;
adjusting the first depth information according to the visual angle offset information to obtain second depth information;
and performing three-dimensional reconstruction on the target object according to the second depth information.
Optionally, the head mounted display device is further configured to:
acquiring a two-dimensional object image of the target object, and performing three-dimensional modeling on the target object according to the two-dimensional object image to obtain a three-dimensional object model;
and according to the first depth information, carrying out space visual angle adjustment on the three-dimensional object model to obtain a target three-dimensional model.
Optionally, the surface contour position information includes at least one surface contour point coordinate, and the head-mounted display device is further configured to:
determining object center point coordinates of the target object according to the surface contour point coordinates;
and calculating the distance between the object center point coordinate and the head-mounted display equipment to obtain first depth information of the target object relative to the head-mounted display equipment.
Optionally, the head mounted display device is further configured to:
acquiring a two-dimensional object image of a target object, and performing image recognition on the two-dimensional object image to obtain an image recognition result;
according to the image recognition result, identifying key points for measurement in the two-dimensional object image to obtain an identified two-dimensional object image;
displaying the identification two-dimensional object image through the head-mounted display device, wherein the identification two-dimensional object image is used for indicating a user to touch the measurement key point of the target object through the hand-mounted device.
Optionally, the surface contour position information includes at least one surface contour point coordinate, the hand-worn device includes a fingertip position sensor, and the hand-worn device is further configured to:
detecting an extrusion pressure value at a fingertip position of the hand wearing device;
and if the extrusion pressure value is larger than a preset pressure threshold value, measuring the position coordinates of a touch point between the hand-wearing device and the surface contour of the target object through the fingertip position sensor to obtain surface contour point coordinates.
The three-dimensional reconstruction system provided by the application adopts the three-dimensional reconstruction method in the embodiment, and solves the technical problem of improving the three-dimensional reconstruction accuracy of the head-mounted display equipment and reducing the power consumption of the head-mounted display equipment during three-dimensional reconstruction. Compared with the prior art, the beneficial effects of the three-dimensional reconstruction system provided by the embodiment of the application are the same as those of the three-dimensional reconstruction method provided by the embodiment, and other technical features of the three-dimensional reconstruction system are the same as those disclosed by the embodiment method, which are not repeated herein.
EXAMPLE five
The embodiment of the application provides an electronic device, the electronic device can be an earphone or a terminal device carrying the earphone, the electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the three-dimensional reconstruction method of the first embodiment.
Referring now to FIG. 4, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device may include a processing means (e.g., a central processing unit, a graphic processor, etc.) that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) or a program loaded from a storage means into a Random Access Memory (RAM). In the RAM, various programs and data necessary for the operation of the electronic apparatus are also stored. The processing device, the ROM, and the RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
Generally, the following systems may be connected to the I/O interface: input devices including, for example, touch screens, touch pads, keyboards, mice, image sensors, microphones, accelerometers, gyroscopes, and the like; output devices including, for example, liquid Crystal Displays (LCDs), speakers, vibrators, and the like; storage devices including, for example, magnetic tape, hard disk, etc.; and a communication device. The communication means may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While the figures illustrate an electronic device with various systems, it is to be understood that not all illustrated systems are required to be implemented or provided. More or fewer systems may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means, or installed from a storage means, or installed from a ROM. The computer program, when executed by a processing device, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
The electronic device provided by the application adopts the three-dimensional reconstruction method in the embodiment, and the technical problem of how to improve the three-dimensional reconstruction accuracy of the head-mounted display device and reduce the power consumption of the head-mounted display device during three-dimensional reconstruction is solved. Compared with the prior art, the beneficial effects of the electronic device provided by the embodiment of the present application are the same as the beneficial effects of the three-dimensional reconstruction method provided by the first embodiment, and other technical features of the electronic device are the same as those disclosed in the method of the first embodiment, which are not repeated herein.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the foregoing description of embodiments, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
EXAMPLE six
The present embodiment provides a computer-readable storage medium having computer-readable program instructions stored thereon for performing the method of three-dimensional reconstruction in the first embodiment.
The computer-readable storage medium provided by the embodiments of the present application may be, for example, a USB flash drive, but is not limited thereto; it may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer-readable storage medium may be embodied in an electronic device; or may be present alone without being incorporated into the electronic device.
The computer readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving surface contour position information uploaded by hand-wearing equipment, wherein the surface contour position information is obtained by measuring position information of a touch point between the hand-wearing equipment and a surface contour of a target object by the hand-wearing equipment; determining first depth information of the target object relative to the head-mounted display device according to the surface contour position information; and performing three-dimensional reconstruction on the target object according to the first depth information to obtain a target three-dimensional model.
Alternatively, the one or more programs, when executed by the electronic device, cause the electronic device to: measure position information of a touch point between the hand-worn device and the surface contour of the target object to obtain surface contour position information corresponding to the target object; and upload the surface contour position information to a head-mounted display device, so that the head-mounted display device determines depth information of the target object relative to the head-mounted display device according to the surface contour position information and performs three-dimensional reconstruction according to the depth information.
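For illustration only, the following minimal Python sketch shows how the two sides of this flow could be prototyped. The function names, the shared coordinate frame, the sensor-reading callbacks, and the squeeze-pressure threshold are assumptions introduced here for the sketch and are not taken from the present application:

import math

# Illustrative sketch only: names, threshold and coordinate frame are assumed.
PRESSURE_THRESHOLD = 2.0  # assumed squeeze-pressure threshold (arbitrary units)


def collect_contour_points(read_pressure, read_fingertip_position, num_touches):
    """Hand-worn device side: record one fingertip coordinate each time the
    squeeze pressure at the fingertip exceeds the preset threshold."""
    points = []
    while len(points) < num_touches:
        if read_pressure() > PRESSURE_THRESHOLD:
            points.append(read_fingertip_position())  # (x, y, z) in a shared frame
    return points  # surface contour position information to upload


def first_depth_from_contour(contour_points, hmd_position):
    """Head-mounted display side: use the centroid of the touched contour
    points as the object center point, and its Euclidean distance to the
    head-mounted display as the first depth information."""
    n = len(contour_points)
    centroid = tuple(sum(p[i] for p in contour_points) / n for i in range(3))
    return math.dist(centroid, hmd_position)


# Example: three touch points on an object, head-mounted display at the origin.
touches = [(0.4, 0.1, 1.2), (0.5, 0.0, 1.3), (0.45, 0.2, 1.25)]
print(first_depth_from_contour(touches, (0.0, 0.0, 0.0)))  # approximately 1.33

In this sketch the centroid of the touched contour points serves as the object center point and its distance to the head-mounted display gives the first depth information, which mirrors the center-point and pressure-threshold steps described in claims 4 and 7 below; a real device would additionally debounce the pressure signal rather than sample in a tight loop.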
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module does not constitute a limitation of the module itself.
The computer-readable storage medium provided by the present application stores computer-readable program instructions for executing the above three-dimensional reconstruction method, and thereby solves the technical problem of improving the three-dimensional reconstruction accuracy of the head-mounted display device while reducing its power consumption during three-dimensional reconstruction. Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by this embodiment of the present application are the same as those of the three-dimensional reconstruction method provided by the above embodiment, and are not described herein again.
EXAMPLE seven
The present application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the three-dimensional reconstruction method as described above.
The computer program product solves the technical problem of improving the three-dimensional reconstruction accuracy of the head-mounted display device while reducing its power consumption during three-dimensional reconstruction. Compared with the prior art, the beneficial effects of the computer program product provided by this embodiment of the present application are the same as those of the three-dimensional reconstruction method provided by the above embodiment, and are not described herein again.
The above description covers only preferred embodiments of the present application and is not intended to limit its scope; any equivalent structure or equivalent process derived from it, whether applied directly or indirectly in other related technical fields, likewise falls within the protection scope of the present application.

Claims (10)

1. A three-dimensional reconstruction method, applied to a head-mounted display device, comprising the following steps:
receiving surface contour position information uploaded by a hand-worn device, wherein the surface contour position information is obtained by the hand-worn device measuring position information of a touch point between the hand-worn device and a surface contour of a target object;
determining first depth information of the target object relative to the head-mounted display device according to the surface contour position information;
and performing three-dimensional reconstruction on the target object according to the first depth information to obtain a target three-dimensional model.
2. The three-dimensional reconstruction method of claim 1, wherein the step of three-dimensionally reconstructing the target object based on the first depth information comprises:
acquiring a first object image of the target object and a second object image of the target object in a current three-dimensional reconstruction scene, wherein the first object image is captured by the head-mounted display device when the hand-worn device measures the position information of the touch point;
determining view angle offset information of the second object image relative to the first object image by performing feature matching on the first object image and the second object image;
adjusting the first depth information according to the view angle offset information to obtain second depth information;
and performing three-dimensional reconstruction on the target object according to the second depth information.
3. The three-dimensional reconstruction method according to claim 1, wherein the step of three-dimensionally reconstructing the target object based on the first depth information comprises:
acquiring a two-dimensional object image of the target object, and performing three-dimensional modeling on the target object according to the two-dimensional object image to obtain a three-dimensional object model;
and according to the first depth information, carrying out space visual angle adjustment on the three-dimensional object model to obtain a target three-dimensional model.
4. The three-dimensional reconstruction method of claim 1, wherein the surface contour position information comprises at least one surface contour point coordinate,
the step of determining first depth information of the target object relative to the head-mounted display device according to the surface contour position information comprises:
determining object center point coordinates of the target object according to the surface contour point coordinates;
and calculating the distance between the object center point coordinates and the head-mounted display device to obtain the first depth information of the target object relative to the head-mounted display device.
5. The three-dimensional reconstruction method of claim 1, wherein prior to the step of receiving surface contour position information uploaded by a hand-worn device, the three-dimensional reconstruction method further comprises:
acquiring a two-dimensional object image of the target object, and performing image recognition on the two-dimensional object image to obtain an image recognition result;
according to the image recognition result, identifying key points for measurement in the two-dimensional object image to obtain an identified two-dimensional object image;
displaying the identified two-dimensional object image through the head-mounted display device, wherein the identified two-dimensional object image is used for instructing a user to touch the measurement key points of the target object through the hand-worn device.
6. A three-dimensional reconstruction method, applied to a hand-worn device, comprising the following steps:
measuring position information of a touch point between the hand-worn device and the surface contour of the target object to obtain surface contour position information corresponding to the target object;
and uploading the surface contour position information to a head-mounted display device, so that the head-mounted display device determines the depth information of the target object relative to the head-mounted display device according to the surface contour position information, and performs three-dimensional reconstruction according to the depth information.
7. The three-dimensional reconstruction method of claim 6, wherein the surface contour position information comprises at least one surface contour point coordinate, and the hand-worn device comprises a fingertip position sensor,
the step of measuring the position information of the touch point between the hand-worn device and the surface contour of the target object to obtain the surface contour position information corresponding to the target object comprises the following steps:
detecting a squeeze pressure value at a fingertip position of the hand-worn device;
and if the squeeze pressure value is greater than a preset pressure threshold value, measuring, through the fingertip position sensor, the position coordinates of the touch point between the hand-worn device and the surface contour of the target object to obtain the surface contour point coordinates.
8. A three-dimensional reconstruction system, comprising:
the head-mounted display device is used for receiving surface contour position information uploaded by the hand-worn device; determining first depth information of a target object relative to the head-mounted display device according to the surface contour position information; and performing three-dimensional reconstruction on the target object according to the first depth information to obtain a target three-dimensional model;
the hand-worn device is used for measuring position information of a touch point between the hand-worn device and the surface contour of the target object to obtain surface contour position information corresponding to the target object; and uploading the surface contour position information to the head-mounted display device, so that the head-mounted display device determines depth information of the target object relative to the head-mounted display device according to the surface contour position information and performs three-dimensional reconstruction according to the depth information.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the three-dimensional reconstruction method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a program for implementing a three-dimensional reconstruction method, and the program, when executed by a processor, implements the steps of the three-dimensional reconstruction method according to any one of claims 1 to 7.
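As a purely illustrative note on the view angle offset described in claim 2 above (not part of the claims themselves), the following Python sketch estimates an offset between the first and second object images by ORB feature matching with OpenCV and then applies a hypothetical correction to the first depth information; the assumed focal length and the cosine-based correction model are assumptions of this sketch, since the present application does not prescribe a specific adjustment formula:

import cv2
import numpy as np


def estimate_view_offset(first_image, second_image):
    """Match ORB features between the object image captured when the touch
    point was measured and the current object image, and return the mean
    pixel displacement of the matched keypoints as a crude offset estimate."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(first_image, None)
    kp2, des2 = orb.detectAndCompute(second_image, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    shifts = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches]
    return np.mean(shifts, axis=0)  # (dx, dy) in pixels


def adjust_first_depth(first_depth, pixel_shift, focal_length_px=500.0):
    """Hypothetical correction: turn the pixel shift into a small view angle
    using an assumed focal length and rescale the depth accordingly."""
    angle = np.arctan2(np.linalg.norm(pixel_shift), focal_length_px)
    return first_depth / np.cos(angle)


# Usage (the images would come from the head-mounted display's camera):
# img1 = cv2.imread("first_object_image.png", cv2.IMREAD_GRAYSCALE)
# img2 = cv2.imread("second_object_image.png", cv2.IMREAD_GRAYSCALE)
# second_depth = adjust_first_depth(1.33, estimate_view_offset(img1, img2))

A practical implementation would require a calibrated camera model and a more principled geometric correction than this placeholder.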
CN202211365846.6A 2022-10-31 2022-10-31 Three-dimensional reconstruction method, system, electronic device and computer-readable storage medium Pending CN115546417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211365846.6A CN115546417A (en) 2022-10-31 2022-10-31 Three-dimensional reconstruction method, system, electronic device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211365846.6A CN115546417A (en) 2022-10-31 2022-10-31 Three-dimensional reconstruction method, system, electronic device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115546417A true CN115546417A (en) 2022-12-30

Family

ID=84721420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211365846.6A Pending CN115546417A (en) 2022-10-31 2022-10-31 Three-dimensional reconstruction method, system, electronic device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115546417A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117697749A (en) * 2023-12-25 2024-03-15 墨现科技(东莞)有限公司 Device control method, device, gripping device and storage medium

Similar Documents

Publication Publication Date Title
EP3951721A1 (en) Method and apparatus for determining occluded area of virtual object, and terminal device
WO2020207190A1 (en) Three-dimensional information determination method, three-dimensional information determination device, and terminal apparatus
CN111062981A (en) Image processing method, device and storage medium
CN112424832A (en) System and method for detecting 3D association of objects
CN110388919B (en) Three-dimensional model positioning method based on feature map and inertial measurement in augmented reality
WO2014090090A1 (en) Angle measurement method and device
US11995254B2 (en) Methods, devices, apparatuses, and storage media for mapping mouse models for computer mouses
US9269004B2 (en) Information processing terminal, information processing method, and program
CN110740315B (en) Camera correction method and device, electronic equipment and storage medium
CN112150560A (en) Method and device for determining vanishing point and computer storage medium
CN115546417A (en) Three-dimensional reconstruction method, system, electronic device and computer-readable storage medium
CN111753685B (en) Method and device for adjusting facial hairline in image and electronic equipment
CN115578432B (en) Image processing method, device, electronic equipment and storage medium
AU2020301254A1 (en) Sticker generating method and apparatus, and medium and electronic device
CN109816791B (en) Method and apparatus for generating information
US10965930B2 (en) Graphical user interface for indicating off-screen points of interest
CN114049403A (en) Multi-angle three-dimensional face reconstruction method and device and storage medium
CN110349109B (en) Fisheye distortion correction method and system and electronic equipment thereof
CN112633143A (en) Image processing system, method, head-mounted device, processing device, and storage medium
US10229522B2 (en) Fixed size scope overlay for digital images
CN113227708B (en) Method and device for determining pitch angle and terminal equipment
CN112634371B (en) Method and device for outputting information and calibrating camera
KR102534449B1 (en) Image processing method, device, electronic device and computer readable storage medium
CN111506280B (en) Graphical user interface for indicating off-screen points of interest
CN111179174B (en) Image stretching method and device based on face recognition points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination