CN108876900A - Method and system for projecting a virtual target fused with a real scene - Google Patents

Method and system for projecting a virtual target fused with a real scene

Info

Publication number
CN108876900A
CN108876900A CN201810450105.5A CN201810450105A
Authority
CN
China
Prior art keywords
plane
virtual target
reality scene
pose
relative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810450105.5A
Other languages
Chinese (zh)
Inventor
林鸿运
叶祖霈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing IQIYI Intelligent Technology Co Ltd
Original Assignee
Chongqing IQIYI Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing IQIYI Intelligent Technology Co Ltd
Priority to CN201810450105.5A
Publication of CN108876900A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the invention provide a method and system for projecting a virtual target fused with a real scene. The method and system are applied to an augmented reality system and comprise: calculating the relative pose of a camera with respect to a marker board lying in a plane; extracting the plane and the plane boundary from the real scene; and rendering the virtual target onto the image of the real scene according to the relative pose, the plane, and the plane boundary. By computing the camera pose, and further by extracting the plane, the exact position of the plane relative to the camera is determined, so that the virtual object can be placed precisely on the corresponding plane.

Description

Method and system for projecting a virtual target fused with a real scene
Technical field
The present invention relates to the field of augmented reality, and more particularly to a method and system for projecting a virtual target fused with a real scene.
Background art
Augmented reality (AR) is a technology that computes the position and orientation of the camera image in real time and superimposes corresponding images, video, or 3D models; the goal of this technology is to place the virtual world over the real world on screen and allow interaction between them. The current leaders, Apple's ARKit and Google's ARCore, already achieve fairly accurate motion tracking: they estimate the phone's relative position from its camera, so that in use a virtual object can be anchored at a position and the user can move around it.
However, neither performs well enough in environmental understanding. Taking ARCore as an example, it can only estimate a rough plane and then draw the virtual object on that plane, as shown in Fig. 1. While implementing the present technical solution, the applicant found that this is not sufficient to blend a virtual object seamlessly with the real world: the plane (such as a tabletop, floor, or ceiling) must be detected so that the system can place the virtual object precisely on the corresponding plane.
Summary of the invention
In view of this, the present invention provides a method and system for projecting a virtual target fused with a real scene, to solve the problem that augmented reality cannot place a virtual object exactly on the corresponding plane.
To solve the above problem, the invention discloses a method for projecting a virtual target fused with a real scene, applied to an augmented reality system. Optionally, the virtual target projection method comprises the steps of:
calculating the relative pose of a camera with respect to a marker board in a plane;
extracting the plane and the boundary of the plane from the real scene;
rendering a virtual target onto the image of the real scene according to the relative pose, the plane, and the plane boundary.
Optionally, calculating the relative pose of the camera with respect to a marker board in a plane comprises:
obtaining an image of the marker board captured by the camera;
detecting the coordinates of the four corner points of the marker board from the image;
calculating the relative pose from the coordinates of the four corner points.
Optionally, calculating the relative pose from the coordinates of the four corner points comprises:
solving a PnP problem from the coordinates of the four corner points to obtain the relative pose.
Optionally, extracting the plane and the plane boundary of the real scene comprises:
extracting the plane and the plane boundary of the real scene by a semantic segmentation method based on deep learning;
or, extracting the plane on which the marker board lies in the real scene and segmenting out the plane boundary.
Correspondingly, to guarantee the implementation of the above method, the present invention also provides a virtual target projection system fused with a real scene, applied to an augmented reality system. The virtual target projection system comprises:
a pose computing module for calculating the relative pose of a camera with respect to a marker board in a plane;
a plane extraction module for extracting a plane and the boundary of the plane from the real scene;
an image rendering module for rendering a virtual target onto the image of the real scene according to the relative pose, the plane, and the plane boundary.
Optionally, the pose computing module comprises:
an image acquisition unit for obtaining an image of the marker board captured by the camera;
a marker board detection unit for detecting the coordinates of the four corner points of the marker board from the image;
a pose computing unit for calculating the relative pose from the coordinates of the four corner points.
Optionally, the pose computing unit is specifically configured to solve a PnP problem from the coordinates of the four corner points to obtain the relative pose.
Optionally, the plane extraction module comprises:
a first extraction unit for extracting the plane and the plane boundary of the real scene by a semantic segmentation method based on deep learning;
a second extraction unit for extracting the plane on which the marker board lies in the real scene and segmenting out the plane boundary.
It can be seen from the above technical solution that the present invention provides a method and system for projecting a virtual target fused with a real scene. The method and system are applied to an augmented reality system and comprise: calculating the relative pose of a camera with respect to a marker board in a plane; extracting the plane and the plane boundary from the real scene; and rendering a virtual target onto the image of the real scene according to the relative pose, the plane, and the plane boundary. By computing the camera pose, and further by extracting the plane, the exact position of the plane relative to the camera is determined, so that the virtual object can be placed precisely on the corresponding plane.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the invention or in the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of drawing a virtual object on a plane according to the invention;
Fig. 2 is a flowchart of the steps of a method for projecting a virtual target fused with a real scene according to an embodiment of the invention;
Fig. 3 is a schematic diagram of calculating the relative position of a camera and a marker board according to an embodiment of the invention;
Fig. 4 is a schematic diagram of plane extraction according to an embodiment of the invention;
Fig. 5 is a schematic diagram of another plane extraction according to an embodiment of the invention;
Fig. 6 is a schematic diagram of an augmented reality game according to an embodiment of the invention;
Fig. 7 is a structural block diagram of a system for projecting a virtual target fused with a real scene according to an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the scope of protection of the invention.
Embodiment one
Fig. 2 is a flowchart of the steps of a method for projecting a virtual target fused with a real scene according to an embodiment of the invention.
Referring to Fig. 2, the virtual target projection method provided by this embodiment is applied to an augmented reality system and specifically comprises the following steps:
S101: Calculate the relative pose of the camera with respect to a marker board in a plane.
Here the camera is the photographic or video device that captures the image of the current real scene for the purpose of augmented reality. The marker board is placed on the corresponding plane, such as a tabletop or the floor, and the relative pose of the camera with respect to the marker board is calculated from the image of the marker board obtained by the camera. The detailed process is as follows:
First, obtain the image of the marker board captured by the camera; then detect the coordinates of the four corner points of the marker board; finally, calculate the relative pose of the camera with respect to the marker board from the coordinates of the four corner points. Specifically, when calculating the relative pose, a PnP problem can be solved from the coordinates of the four corner points to obtain the relative pose of the camera, i.e. the translation and rotation of the camera with respect to the marker board in the world coordinate system. Alternatively, the pose can be computed by a SLAM algorithm combined with an IMU.
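For illustration, a minimal sketch of the corner-detection step in Python with OpenCV, assuming an ArUco-style marker board and the OpenCV 4.7+ aruco API (the patent does not prescribe a marker type or library; the helper name is illustrative):

```python
import cv2

def detect_marker_corners(image_bgr):
    """Detect the four 2D corner points p1..p4 of a single marker board
    in the camera image (assumes an ArUco marker, OpenCV >= 4.7)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(aruco_dict,
                                       cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is None or len(corners) == 0:
        return None                      # no marker board in view
    return corners[0].reshape(4, 2)      # (4, 2) pixel coordinates
```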
In three-dimensional space, the transformation between the camera and the marker can be estimated from the exact positions of the marker's corner points. This operation is known as 2D-to-3D pose estimation. The estimation finds a Euclidean transformation between the object and the camera (a transformation composed only of rotation and translation). As shown in Fig. 3, C denotes the centre of the camera, points P1-P4 are 3D points in the world coordinate system, and p1-p4 are the projections of P1-P4 onto the camera's image plane, where P1 is (0.5, -0.5, 0.0), P2 is (0.5, 0.5, 0.0), P3 is (-0.5, 0.5, 0.0), and P4 is (-0.5, -0.5, 0.0).
The purpose of marker pose estimation is to find the relative transformation between the marker positions and the camera, given the marker positions (P1-P4) known in the 3D world, the camera C with its intrinsic parameter matrix, and the known image-plane projections (p1-p4); from this the pose of the camera relative to the marker can be computed. Concretely, a PnP problem, here in fact a P4P problem, is solved from the marker positions (P1-P4), the camera's intrinsic parameter matrix, and the projections (p1-p4) to obtain the transformation between the marker positions and the projected points; from that transformation the camera's translation and rotation relative to the world coordinate system are obtained, i.e. the relative pose of the camera, or rather the change in the camera's relative pose.
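A sketch of this P4P solve with OpenCV, using the corner coordinates P1-P4 given above; the intrinsic parameter matrix and distortion coefficients are assumed to come from a prior camera calibration:

```python
import cv2
import numpy as np

# 3D marker corners P1..P4 in the world frame, as given above:
# a unit square centred at the origin in the z = 0 plane.
OBJECT_POINTS = np.array([[ 0.5, -0.5, 0.0],   # P1
                          [ 0.5,  0.5, 0.0],   # P2
                          [-0.5,  0.5, 0.0],   # P3
                          [-0.5, -0.5, 0.0]],  # P4
                         dtype=np.float64)

def estimate_relative_pose(image_points, camera_matrix, dist_coeffs):
    """Solve the P4P problem: image_points is the (4, 2) array of the
    detected projections p1..p4; returns the rotation matrix R and
    translation vector t of the marker frame in the camera frame."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                  np.asarray(image_points, np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("P4P solve failed")
    R, _ = cv2.Rodrigues(rvec)  # Rodrigues vector -> 3x3 rotation matrix
    return R, tvec
```

Inverting (R, t) gives the pose of the camera in the marker (world) frame, i.e. the translation and rotation referred to above.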
S102: Extract the plane and the plane boundary of the real scene.
Likewise, the marker board can be placed on the corresponding plane; the plane on which the marker board lies is extracted and its boundary is then segmented out, as shown in Fig. 4. Alternatively, planes such as tabletops, walls, and ceilings in the scene can be extracted, and their boundaries segmented out, by deep-learning-based semantic segmentation, as shown in Fig. 5.
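For the marker-based route, the supporting plane follows directly from the pose computed in S101; the sketch below assumes the (R, t) returned by the P4P solve above (the semantic segmentation route would instead take the plane mask and boundary from a trained network):

```python
import numpy as np

def marker_plane_in_camera_frame(R, tvec):
    """Express the plane on which the marker board lies in camera
    coordinates as n.x + d = 0. The board lies in its own z = 0 plane,
    so the normal is the marker frame's z-axis (third column of R) and
    the marker origin t is a point on the plane."""
    n = R[:, 2]
    d = -float(n @ tvec.reshape(3))
    return n, d
```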
S103: Perform image rendering according to the relative pose, the plane, and the plane boundary.
Image rendering is performed according to the camera's relative pose with respect to the corresponding plane, the plane itself, and its boundary: the virtual target, such as a virtual object or virtual character, is rendered onto the image of the real scene, i.e. onto the display interface showing that image, and the user controls the virtual target's actions through touch-screen or controller input, producing an augmented reality game, as shown in Fig. 6.
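As an illustrative sketch of this step (a full system would hand the pose and plane to a 3D rendering engine; here the vertices of a virtual target defined in the world frame are simply projected into the real-scene image and clipped to the extracted plane boundary):

```python
import cv2
import numpy as np

def render_virtual_points(image, vertices_world, rvec, tvec,
                          camera_matrix, dist_coeffs, boundary_contour):
    """Project 3D vertices of a virtual target into the real-scene image
    and draw only those that land inside the extracted plane boundary
    (boundary_contour: an (N, 1, 2) int32 polygon from step S102)."""
    pts, _ = cv2.projectPoints(np.asarray(vertices_world, np.float64),
                               rvec, tvec, camera_matrix, dist_coeffs)
    for u, v in pts.reshape(-1, 2):
        if cv2.pointPolygonTest(boundary_contour,
                                (float(u), float(v)), False) >= 0:
            cv2.circle(image, (int(round(u)), int(round(v))),
                       3, (0, 255, 0), -1)  # one green dot per vertex
    return image
```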
It can be seen from the above technical solution that this embodiment provides a method for projecting a virtual target fused with a real scene. The method is applied to an augmented reality system and comprises: calculating the relative pose of the camera with respect to a marker board in a plane; extracting the plane and the plane boundary from the real scene; and rendering the virtual target onto the image of the real scene according to the relative pose, the plane, and the plane boundary. By computing the camera pose, and further by extracting the plane, the exact position of the plane relative to the camera is determined, so that the virtual object can be placed precisely on the corresponding plane.
It should be noted that the method embodiment is described as a series of combined actions for simplicity of description, but those skilled in the art should understand that embodiments of the present invention are not limited by the described order of actions, since according to embodiments of the invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by embodiments of the invention.
Embodiment two
Fig. 7 is a structural block diagram of a system for projecting a virtual target fused with a real scene according to an embodiment of the invention.
Referring to Fig. 7, the virtual target projection system provided by this embodiment is applied to an augmented reality system and specifically comprises a pose computing module 10, a plane extraction module 20, and an image rendering module 30.
The pose computing module calculates the relative pose of the camera with respect to a marker board in a plane.
Here the camera is the photographic or video device that captures the image of the current real scene for the purpose of augmented reality. The marker board is placed on the corresponding plane, such as a tabletop or the floor, and the relative pose of the camera with respect to the marker board is calculated from the image of the marker board obtained by the camera. The module comprises an image acquisition unit, a marker board detection unit, and a pose computing unit.
The image acquisition unit obtains the image of the marker board captured by the camera; the marker board detection unit detects the coordinates of the four corner points of the marker board; the pose computing unit then calculates the camera's relative pose with respect to the marker board from the coordinates of the four corner points. Specifically, when calculating the relative pose, the pose computing unit can solve a PnP problem from the coordinates of the four corner points to obtain the relative pose of the camera. Alternatively, the pose can be computed by a SLAM algorithm combined with an IMU.
In three-dimensional space, the transformation between the camera and the marker can be estimated from the exact positions of the marker's corner points. This operation is known as 2D-to-3D pose estimation. The estimation finds a Euclidean transformation between the object and the camera (a transformation composed only of rotation and translation). As shown in Fig. 3, C denotes the centre of the camera, points P1-P4 are 3D points in the world coordinate system, and p1-p4 are the projections of P1-P4 onto the camera's image plane, where P1 is (0.5, -0.5, 0.0), P2 is (0.5, 0.5, 0.0), P3 is (-0.5, 0.5, 0.0), and P4 is (-0.5, -0.5, 0.0).
The purpose of marker pose estimation is to find the relative transformation between the marker positions and the camera, given the marker positions (P1-P4) known in the 3D world, the camera C with its intrinsic parameter matrix, and the known image-plane projections (p1-p4); from this the pose of the camera relative to the marker can be computed. Concretely, a PnP problem, here in fact a P4P problem, is solved from the marker positions (P1-P4), the camera's intrinsic parameter matrix, and the projections (p1-p4) to obtain the transformation between the marker positions and the projected points; from that transformation the camera's translation and rotation relative to the world coordinate system are obtained, i.e. the relative pose of the camera, or rather the change in the camera's relative pose.
The plane extraction module extracts the plane and the plane boundary of the real scene.
The module comprises one or both of a first extraction unit and a second extraction unit. The first extraction unit extracts planes such as tabletops, walls, and ceilings in the scene, and segments out their boundaries, by deep-learning-based semantic segmentation, as shown in Fig. 5. A marker board can also be placed on the corresponding plane; the second extraction unit extracts the plane on which the marker board lies and then segments out the plane boundary, as shown in Fig. 4.
The image rendering module performs image rendering according to the relative pose, the plane, and the plane boundary.
Image rendering is performed according to the camera's relative pose with respect to the corresponding plane, the plane itself, and its boundary: the virtual target, such as a virtual object or virtual character, is rendered onto the image of the real scene, i.e. onto the display interface showing that image, and the user controls the virtual target's actions through touch-screen or controller input, producing an augmented reality game, as shown in Fig. 6.
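Putting the three modules together, a hypothetical per-frame driver reusing the sketches from Embodiment one (camera_matrix, dist_coeffs, the virtual vertices, and the boundary contour are assumed given):

```python
import cv2

def process_frame(frame, camera_matrix, dist_coeffs,
                  virtual_vertices, boundary_contour):
    """One frame through the pipeline: pose computing module (10),
    plane extraction module (20), image rendering module (30)."""
    corners = detect_marker_corners(frame)        # module 10: detection
    if corners is None:
        return frame                              # no marker: pass through
    R, tvec = estimate_relative_pose(corners, camera_matrix, dist_coeffs)
    n, d = marker_plane_in_camera_frame(R, tvec)  # module 20: plane
    # (n, d) locate the supporting plane; boundary_contour bounds it in
    # the image. Module 30: draw the virtual target on the scene image.
    rvec, _ = cv2.Rodrigues(R)
    return render_virtual_points(frame, virtual_vertices, rvec, tvec,
                                 camera_matrix, dist_coeffs,
                                 boundary_contour)
```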
It can be seen from the above technical solution that this embodiment provides a system for projecting a virtual target fused with a real scene. The system is applied to an augmented reality system and: calculates the relative pose of the camera with respect to a marker board in a plane; extracts the plane and the plane boundary from the real scene; and renders the virtual target onto the image of the real scene according to the relative pose, the plane, and the plane boundary. By computing the camera pose, and further by extracting the plane, the exact position of the plane relative to the camera is determined, so that the virtual object can be placed precisely on the corresponding plane.
As the device embodiment is substantially similar to the method embodiment, its description is relatively brief; for the relevant parts, refer to the description of the method embodiment.
The embodiments in this specification are described in a progressive manner. Each embodiment highlights its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to one another.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a device, or a computer program product. Therefore, embodiments of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, embodiments of the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing; the instructions executed on the computer or other programmable terminal device thus provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises that element.
The technical solution provided by the present invention has been described in detail above. Specific examples are used herein to explain the principles and implementation of the invention; the description of the above embodiments is intended only to help understand the method of the invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the invention. In summary, the contents of this specification shall not be construed as limiting the invention.

Claims (8)

1. A method for projecting a virtual target fused with a real scene, applied to an augmented reality system, characterized in that the virtual target projection method comprises the steps of:
calculating the relative pose of a camera with respect to a marker board in a plane;
extracting the plane and the boundary of the plane from the real scene;
rendering a virtual target onto the image of the real scene according to the relative pose, the plane, and the plane boundary.
2. The virtual target projection method according to claim 1, characterized in that calculating the relative pose of the camera with respect to a marker board in a plane comprises:
obtaining an image of the marker board captured by the camera;
detecting the coordinates of the four corner points of the marker board from the image;
calculating the relative pose from the coordinates of the four corner points.
3. The virtual target projection method according to claim 2, characterized in that calculating the relative pose from the coordinates of the four corner points comprises:
solving a PnP problem from the coordinates of the four corner points to obtain the relative pose.
4. The virtual target projection method according to claim 1, characterized in that extracting the plane and the plane boundary of the real scene comprises:
extracting the plane and the plane boundary of the real scene by a semantic segmentation method based on deep learning;
or, extracting the plane on which the marker board lies in the real scene and segmenting out the plane boundary.
5. A system for projecting a virtual target fused with a real scene, applied to an augmented reality system, characterized in that the virtual target projection system comprises:
a pose computing module for calculating the relative pose of a camera with respect to a marker board in a plane;
a plane extraction module for extracting a plane and the boundary of the plane from the real scene;
an image rendering module for rendering a virtual target onto the image of the real scene according to the relative pose, the plane, and the plane boundary.
6. The virtual target projection system according to claim 5, characterized in that the pose computing module comprises:
an image acquisition unit for obtaining an image of the marker board captured by the camera;
a marker board detection unit for detecting the coordinates of the four corner points of the marker board from the image;
a pose computing unit for calculating the relative pose from the coordinates of the four corner points.
7. The virtual target projection system according to claim 6, characterized in that the pose computing unit is specifically configured to solve a PnP problem from the coordinates of the four corner points to obtain the relative pose.
8. The virtual target projection system according to claim 5, characterized in that the plane extraction module comprises:
a first extraction unit for extracting the plane and the plane boundary of the real scene by a semantic segmentation method based on deep learning;
a second extraction unit for extracting the plane on which the marker board lies in the real scene and segmenting out the plane boundary.
CN201810450105.5A 2018-05-11 2018-05-11 Method and system for projecting a virtual target fused with a real scene Pending CN108876900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810450105.5A CN108876900A (en) 2018-05-11 2018-05-11 Method and system for projecting a virtual target fused with a real scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810450105.5A CN108876900A (en) 2018-05-11 2018-05-11 Method and system for projecting a virtual target fused with a real scene

Publications (1)

Publication Number Publication Date
CN108876900A 2018-11-23

Family

ID=64333801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810450105.5A Pending CN108876900A (en) Method and system for projecting a virtual target fused with a real scene

Country Status (1)

Country Link
CN (1) CN108876900A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140118339A1 * 2012-10-31 2014-05-01 The Boeing Company Automated frame of reference calibration for augmented reality
US20160239952A1 * 2013-09-30 2016-08-18 National Institute Of Advanced Industrial Science And Technology Marker image processing system
US20170352192A1 * 2014-11-16 2017-12-07 Eonite Perception Inc. Systems and methods for augmented reality preparation, processing, and application
CN107665507A * 2016-07-29 2018-02-06 成都理想境界科技有限公司 Method and device for realizing augmented reality based on plane detection
CN106408515A * 2016-08-31 2017-02-15 郑州捷安高科股份有限公司 Augmented-reality-based vision synthesis system
CN106548519A * 2016-11-04 2017-03-29 上海玄彩美科网络科技有限公司 Realistic augmented reality method based on ORB-SLAM and a depth camera
CN107016704A * 2017-03-09 2017-08-04 杭州电子科技大学 Virtual reality implementation method based on augmented reality
CN106952312A * 2017-03-10 2017-07-14 广东顺德中山大学卡内基梅隆大学国际联合研究院 Marker-less augmented reality registration method based on line-feature description

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
吴丽华: "Research on the application of networked digital media technology in a biodiversity digital museum", 国防工业出版社 (National Defense Industry Press), pp. 113-114 *
李超: "ARCore development principles explained", 声网 (Agora), 24 January 2018, pp. 1-14 *
王慧星: "Research and application of augmented reality technology based on iOS", 中国优秀硕士学位论文全文数据库 (China Master's Theses Full-text Database), No. 11, 15 November 2014, pp. 7-36 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109801341A (en) * 2019-01-30 2019-05-24 北京经纬恒润科技有限公司 A kind of position method of calibration and device for demarcating target
CN110135323A (en) * 2019-05-09 2019-08-16 北京四维图新科技股份有限公司 Image labeling method, device, system and storage medium
CN110111428B (en) * 2019-05-28 2023-06-20 艾瑞迈迪科技石家庄有限公司 Virtual target calibration method and device applied to augmented reality
CN110111428A (en) * 2019-05-28 2019-08-09 艾瑞迈迪科技石家庄有限公司 A kind of virtual target scaling method and device applied to augmented reality
CN111127661A (en) * 2019-12-17 2020-05-08 北京超图软件股份有限公司 Data processing method and device and electronic equipment
CN111127661B (en) * 2019-12-17 2023-08-29 北京超图软件股份有限公司 Data processing method and device and electronic equipment
CN112147786A (en) * 2020-10-28 2020-12-29 南京爱奇艺智能科技有限公司 Augmented reality display system
CN112147786B (en) * 2020-10-28 2024-04-12 南京爱奇艺智能科技有限公司 Augmented reality display system
CN112991857A (en) * 2021-03-04 2021-06-18 华北电力大学 Electric power emergency rescue training system and method
CN113744335A (en) * 2021-08-24 2021-12-03 北京体育大学 Sports guiding method, system and storage medium based on field mark
CN113744335B (en) * 2021-08-24 2024-01-16 北京体育大学 Motion guiding method, system and storage medium based on field mark
CN115147520A (en) * 2022-06-07 2022-10-04 聚好看科技股份有限公司 Method and equipment for driving virtual character based on visual semantics
CN115810100A (en) * 2023-02-06 2023-03-17 阿里巴巴(中国)有限公司 Method, apparatus, storage medium and program product for determining object placement plane

Similar Documents

Publication Publication Date Title
CN108876900A (en) A kind of virtual target projective techniques merged with reality scene and system
CN107180406B (en) Image processing method and equipment
TWI509221B Processor-implemented method, computer-readable non-transitory storage medium, data processing device and apparatus for visual simultaneous localization and mapping
Gordon et al. What and where: 3D object recognition with accurate pose
TWI587205B Method and system of three-dimensional interaction based on identification code
JP6456347B2 (en) INSITU generation of plane-specific feature targets
CN109584295A (en) The method, apparatus and system of automatic marking are carried out to target object in image
JP6609640B2 (en) Managing feature data for environment mapping on electronic devices
JP2018537758A (en) Expansion of multi-viewpoint image data including composite objects using IMU and image data
CN108028871A (en) The more object augmented realities of unmarked multi-user in mobile equipment
CN112348968B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN110473293A (en) Virtual objects processing method and processing device, storage medium and electronic equipment
Kurz et al. Handheld augmented reality involving gravity measurements
Viyanon et al. AR furniture: Integrating augmented reality technology to enhance interior design using marker and markerless tracking
KR101851303B1 (en) Apparatus and method for reconstructing 3d space
CN112529097B (en) Sample image generation method and device and electronic equipment
CN112882576A (en) AR interaction method and device, electronic equipment and storage medium
US11758100B2 (en) Portable projection mapping device and projection mapping system
CN110310325A Virtual measurement method, electronic device and computer-readable storage medium
Schall et al. 3D tracking in unknown environments using on-line keypoint learning for mobile augmented reality
Oh et al. Mobile augmented reality system for Design Drawing visualization
CN110880187A (en) Camera position information determining method and device, electronic equipment and storage medium
US10466818B2 (en) Pointing action
CN109167992A (en) Image processing method and device
Makita et al. Photo-shoot localization of a mobile camera based on registered frame data of virtualized reality models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181123