CN114742977A - Video perspective method based on AR technology - Google Patents


Info

Publication number
CN114742977A
Authority
CN
China
Prior art keywords
image, real, processor, coordinate system, display screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210322777.4A
Other languages
Chinese (zh)
Inventor
于洋
严小天
刘鲁峰
刘琳
刘文彪
刘训福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Virtual Reality Research Institute Co ltd
Original Assignee
Qingdao Virtual Reality Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Virtual Reality Research Institute Co ltd filed Critical Qingdao Virtual Reality Research Institute Co ltd
Priority to CN202210322777.4A
Publication of CN114742977A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A video perspective method based on AR technology comprises the following steps: S1, acquiring a real image of the current real scene with a binocular camera; S2, receiving the real image in a processor, determining a key frame image, and determining current first pose data based on the key frame image; S3, acquiring IMU measurement data with an IMU sensor, and integrating the acceleration and angular velocity values to obtain second pose data; S4, performing Kalman fusion of the first pose data and the second pose data, and generating a first coordinate system from the fused pose information; S5, the processor retrieves the virtual image from memory, determines a second coordinate system displayed on the display screen, and establishes a position transformation relation with the first coordinate system; S6, the processor superimposes the virtual image on the real image and displays the superimposed image on the display screen according to the position transformation relation. The invention improves the positioning accuracy of the AR device in the real scene, enhances the display effect of the virtual image within the real image, and improves the user experience.

Description

Video perspective method based on AR technology
Technical Field
The invention relates to the technical field of augmented reality, in particular to a video perspective method based on an AR technology.
Background
Currently, there are two main display modes for head-mounted AR devices: video perspective (Video See-Through) and optical perspective (Optical See-Through). In the video perspective mode, images captured by a camera are superimposed with the augmentation information and displayed on the screen as a video or image stream; the displayed picture does not coincide with the real scene the eyes would see directly, i.e., the virtual and real views can be misaligned. In the optical perspective mode, the device uses a semitransparent lens through which the human eye sees the external real scene directly, and the augmentation information alone is optically superimposed at the corresponding position in that real view, so only the augmentation information appears on the screen. The latter avoids the virtual-real misalignment problem but places very high demands on hardware and software. For video perspective AR devices, the ghosting caused by misalignment of the real and virtual images visually interferes with the user.
In addition, in the conventional video perspective display mode, the picture of the real scene changes as the AR device moves while the virtual image does not. After the device moves, the virtual image therefore easily ends up away from its original position relative to the scene, which degrades the user experience.
Disclosure of Invention
In view of the above, the technical problem to be solved by the present invention is to provide a video perspective method based on AR technology that improves the positioning accuracy of the AR device in the real scene, enhances the display effect of the virtual image within the real image, and improves the user experience.
To solve this technical problem, the technical solution of the invention is as follows:
a method of video perspective based on AR technology, the method comprising the steps of:
s1, acquiring a real image in the current real state by using a binocular camera;
s2, receiving the reality image through a processor, determining a key frame image, and determining current first pose data based on the key frame image;
s3, acquiring IMU measurement data by adopting an IMU sensor, and performing integral operation on the acceleration value and the angular velocity value to acquire second attitude data;
s4, performing Kalman fusion on the first position and orientation data and the second position and orientation data, and generating a first coordinate system according to the position and orientation information obtained after fusion;
s5, the processor calls the virtual image in the memory, determines a second coordinate system displayed in the display screen, and establishes a position transformation relation with the first coordinate system;
and S6, the processor superimposes the virtual image and the real image, and displays the superimposed image on a display screen according to the position transformation relation.
Preferably, step S1 further includes:
S11, acquiring a left-eye image and a right-eye image with the left-eye camera and the right-eye camera of the binocular camera, respectively;
S12, time-synchronizing the left-eye image and the right-eye image.
Preferably, step S2 further includes:
S21, the processor combines the left-eye image and the right-eye image bearing the same timestamp to form the complete key frame image.
Preferably, step S6 further includes:
S61, determining the relative distance between the binocular camera and an object in the current real scene with a distance sensor;
S62, establishing a size transformation relation between the change of the relative distance and the size change of the real image;
S63, based on the size transformation relation, the processor changes the size of the virtual image on the display screen to match the size change of the real image.
With the above technical solution, the invention has the following beneficial effects:
the invention discloses a video perspective method based on AR technology, which comprises the following steps: s1, acquiring a real image in the current real state by using a binocular camera; s2, receiving the reality image through the processor, determining a key frame image, and determining the current first pose data based on the key frame image; s3, acquiring IMU measurement data by adopting an IMU sensor, and performing integral operation on the acceleration value and the angular velocity value to acquire second attitude data; s4, performing Kalman fusion on the first position and orientation data and the second position and orientation data, and generating a first coordinate system according to the position and orientation information obtained after fusion; s5, the processor calls the virtual image in the memory, determines a second coordinate system displayed in the display screen, and establishes a position transformation relation with the first coordinate system; and S6, the processor superimposes the virtual image and the real image, and displays the superimposed image on the display screen according to the position conversion relation. According to the method and the device, the first position and posture data acquired through the binocular camera and the second position and posture data acquired through the IMU sensor are fused to generate the first coordinate system, the positioning accuracy of the AR equipment in a real state is enhanced, the first coordinate system and the second coordinate system of the virtual image establish a position transformation relation, the virtual image and the real image are overlapped and displayed based on the accuracy of the first coordinate system, the display effect of the virtual image in the real image is enhanced, and the user experience is improved.
In the present invention, step S6 further includes: S61, determining the relative distance between the binocular camera and an object in the current real scene with a distance sensor; S62, establishing a size transformation relation between the change of the relative distance and the size change of the real image; S63, based on the size transformation relation, the processor changes the size of the virtual image on the display screen to match the size change of the real image. By establishing the size transformation relation between relative distance and size and changing the size of the virtual image on the display screen accordingly, the virtual image attached to an object adapts as the user approaches that object in the real image, which enhances the user experience.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
As shown in FIG. 1, the present invention comprises the following steps:
s1, acquiring a real image in the current real state by using a binocular camera;
in the step S1, the method further includes:
s11, respectively acquiring a left eye image and a right eye image by a left eye camera and a right eye camera in the binocular cameras;
and S12, carrying out time synchronization on the left eye image and the right eye image.
S2, receiving the real image in the processor, determining a key frame image, and determining the current first pose data based on the key frame image;
In step S2, the method further includes:
S21, the processor combines the left-eye image and the right-eye image bearing the same timestamp to form the complete key frame image.
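The patent does not prescribe an implementation for this synchronization and pairing. As a minimal illustration, the following Python sketch matches left- and right-eye frames by nearest timestamp; the Frame type, the 5 ms tolerance, and the pairing policy are assumptions of the example, not details from the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    timestamp: float  # capture time in seconds (hypothetical hardware clock)
    pixels: bytes     # raw image payload (placeholder)

def pair_stereo_frames(left: List[Frame], right: List[Frame],
                       tolerance: float = 0.005) -> List[Tuple[Frame, Frame]]:
    """Pair left/right frames whose timestamps agree within `tolerance`.

    Both lists are assumed sorted by timestamp; each matched pair is a
    candidate key frame in the sense of step S21.
    """
    pairs = []
    j = 0
    for lf in left:
        # advance the right-eye cursor while the next frame is closer in time
        while (j + 1 < len(right) and
               abs(right[j + 1].timestamp - lf.timestamp)
               <= abs(right[j].timestamp - lf.timestamp)):
            j += 1
        if right and abs(right[j].timestamp - lf.timestamp) <= tolerance:
            pairs.append((lf, right[j]))
    return pairs
```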
S3, acquiring IMU measurement data with an IMU sensor, and integrating the acceleration and angular velocity values to obtain second pose data;
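The patent gives no formulas for this integral operation. As a rough sketch of one dead-reckoning step under standard assumptions (body-frame measurements, a quaternion orientation state, a fixed gravity constant, small-angle update), it might look as follows; the state layout and constants are illustrative only:

```python
import numpy as np

def integrate_imu(pos, vel, quat, accel, gyro, dt,
                  gravity=np.array([0.0, 0.0, -9.81])):
    """One dead-reckoning step: integrate body-frame accel/gyro into pose.

    quat is a unit quaternion (w, x, y, z) rotating body to world frame;
    accel and gyro are 3-vectors from the IMU over the interval dt.
    """
    w, x, y, z = quat
    # rotation matrix from the quaternion (body -> world)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    acc_world = R @ accel + gravity          # restore gravity in world frame
    pos = pos + vel * dt + 0.5 * acc_world * dt * dt
    vel = vel + acc_world * dt
    # small-angle quaternion update from the angular velocity
    w2, x2, y2, z2 = np.concatenate(([1.0], 0.5 * gyro * dt))
    quat = np.array([
        w*w2 - x*x2 - y*y2 - z*z2,
        w*x2 + x*w2 + y*z2 - z*y2,
        w*y2 - x*z2 + y*w2 + z*x2,
        w*z2 + x*y2 - y*x2 + z*w2,
    ])
    return pos, vel, quat / np.linalg.norm(quat)
```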
s4, performing Kalman fusion on the first position and attitude data, and generating a first coordinate system according to the position and attitude information obtained after fusion;
s5, the processor calls the virtual image in the memory, determines a second coordinate system displayed in the display screen, and establishes a position transformation relation with the first coordinate system;
S6, the processor superimposes the virtual image on the real image and displays the superimposed image on the display screen according to the position transformation relation.
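To make the position transformation relation of S5 and the superposition of S6 concrete, here is a hedged Python sketch: a point expressed in the first (world) coordinate system is mapped into the second (screen) coordinate system by a rigid transform built from the fused pose followed by a pinhole projection, and the virtual image is then alpha-blended over the real image. The matrix name T_screen_world, the intrinsics fx, fy, cx, cy, and the alpha-mask convention are assumptions of the example:

```python
import numpy as np

def world_to_screen(p_world, T_screen_world, fx, fy, cx, cy):
    """Map a point from the first (world) coordinate system into the
    second (screen) coordinate system: 4x4 rigid transform built from
    the fused pose, then pinhole projection. Assumes the transformed
    point lies in front of the display (positive depth)."""
    p = T_screen_world @ np.append(p_world, 1.0)
    u = fx * p[0] / p[2] + cx
    v = fy * p[1] / p[2] + cy
    return u, v

def overlay(real_img, virtual_img, alpha_mask):
    """Blend the virtual image over the real image (step S6).

    alpha_mask has shape (H, W): 1 where the virtual content is opaque,
    0 where it is transparent; images have shape (H, W, 3)."""
    a = alpha_mask[..., None]
    return (a * virtual_img + (1.0 - a) * real_img).astype(real_img.dtype)
```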
By fusing the first pose data acquired through the binocular camera with the second pose data acquired through the IMU sensor to generate the first coordinate system, the method improves the positioning accuracy of the AR device in the real scene; by establishing the position transformation relation between the first coordinate system and the second coordinate system of the virtual image and superimposing the virtual image on the real image based on the accuracy of the first coordinate system, it enhances the display effect of the virtual image within the real image and improves the user experience.
In step S6, the method further includes:
S61, determining the relative distance between the binocular camera and an object in the current real scene with a distance sensor;
S62, establishing a size transformation relation between the change of the relative distance and the size change of the real image;
S63, based on the size transformation relation, the processor changes the size of the virtual image on the display screen to match the size change of the real image.
By establishing the size transformation relation between the relative distance and the displayed size, the size of the virtual image on the display screen changes so that, when the user approaches an object in the real image, the virtual image attached to that object adapts accordingly, which enhances the user experience.
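A minimal sketch of this distance-based scaling, assuming a simple pinhole model in which on-screen size varies inversely with the distance reported by the distance sensor (the function name and reference values are illustrative only):

```python
def virtual_image_scale(reference_distance: float,
                        current_distance: float,
                        reference_scale: float = 1.0) -> float:
    """Size transformation relation of steps S62/S63 under a pinhole
    assumption: on-screen size is inversely proportional to distance."""
    return reference_scale * reference_distance / max(current_distance, 1e-6)

# hypothetical usage: object first seen at 2.0 m, user now at 1.0 m,
# so the virtual image is drawn at twice its original on-screen size
scale = virtual_image_scale(reference_distance=2.0, current_distance=1.0)
```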
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (4)

1. A video perspective method based on AR technology, characterized by comprising the following steps:
S1, acquiring a real image of the current real scene with a binocular camera;
S2, receiving the real image in a processor, determining a key frame image, and determining current first pose data based on the key frame image;
S3, acquiring IMU measurement data with an IMU sensor, and integrating the acceleration and angular velocity values to obtain second pose data;
S4, performing Kalman fusion of the first pose data and the second pose data, and generating a first coordinate system from the fused pose information;
S5, the processor retrieves the virtual image from memory, determines a second coordinate system displayed on the display screen, and establishes a position transformation relation with the first coordinate system;
S6, the processor superimposes the virtual image on the real image and displays the superimposed image on the display screen according to the position transformation relation.
2. The AR technology-based video perspective method according to claim 1, wherein step S1 further comprises:
S11, acquiring a left-eye image and a right-eye image with the left-eye camera and the right-eye camera of the binocular camera, respectively;
S12, time-synchronizing the left-eye image and the right-eye image.
3. The AR technology-based video perspective method according to claim 2, wherein step S2 further comprises:
S21, the processor combines the left-eye image and the right-eye image bearing the same timestamp to form the complete key frame image.
4. The AR technology-based video perspective method according to claim 1, wherein step S6 further comprises:
S61, determining the relative distance between the binocular camera and an object in the current real scene with a distance sensor;
S62, establishing a size transformation relation between the change of the relative distance and the size change of the real image;
S63, based on the size transformation relation, the processor changes the size of the virtual image on the display screen to match the size change of the real image.
CN202210322777.4A 2022-03-30 2022-03-30 Video perspective method based on AR technology Pending CN114742977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210322777.4A CN114742977A (en) 2022-03-30 2022-03-30 Video perspective method based on AR technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210322777.4A CN114742977A (en) 2022-03-30 2022-03-30 Video perspective method based on AR technology

Publications (1)

Publication Number Publication Date
CN114742977A 2022-07-12

Family

ID=82276608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210322777.4A Pending CN114742977A (en) 2022-03-30 2022-03-30 Video perspective method based on AR technology

Country Status (1)

Country Link
CN (1) CN114742977A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931275A (en) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Monocular and IMU fused stable motion tracking method and device based on mobile terminal
CN108021241A (en) * 2017-12-01 2018-05-11 西安枭龙科技有限公司 A kind of method for realizing AR glasses virtual reality fusions
WO2022021980A1 (en) * 2020-07-30 2022-02-03 北京市商汤科技开发有限公司 Virtual object control method and apparatus, and electronic device and storage medium
CN113129451A (en) * 2021-03-15 2021-07-16 北京航空航天大学 Holographic three-dimensional image space quantitative projection method based on binocular vision positioning

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117221511A (en) * 2023-11-07 2023-12-12 深圳市麦谷科技有限公司 Video processing method and device, storage medium and electronic equipment
CN117221511B (en) * 2023-11-07 2024-03-12 深圳市麦谷科技有限公司 Video processing method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
US11914147B2 (en) Image generation apparatus and image generation method using frequency lower than display frame rate
CN109863538B (en) Continuous time warping and binocular time warping systems and methods for virtual and augmented reality displays
US9904056B2 (en) Display
KR102358932B1 (en) Stabilization plane determination based on gaze location
EP3379525A1 (en) Image processing device and image generation method
Sauer et al. Augmented workspace: Designing an AR testbed
KR20160033763A (en) Late stage reprojection
KR20160026898A (en) Reprojection oled display for augmented reality experiences
CN108093244B (en) Remote follow-up stereoscopic vision system
CN104536579A (en) Interactive three-dimensional scenery and digital image high-speed fusing processing system and method
JP2012128779A (en) Virtual object display device
US11003408B2 (en) Image generating apparatus and image generating method
JP2011165068A (en) Image generation device, image display system, image generation method, and program
CN112655202B (en) Reduced bandwidth stereoscopic distortion correction for fisheye lenses of head-mounted displays
US10719995B2 (en) Distorted view augmented reality
EP4300943A1 (en) Subtitle rendering method and apparatus for virtual reality space, device, and medium
WO2019098198A1 (en) Image generation device, head-mounted display, image generation system, image generation method, and program
CN111488056A (en) Manipulating virtual objects using tracked physical objects
CN108153417B (en) Picture compensation method and head-mounted display device adopting same
CN114742977A (en) Video perspective method based on AR technology
CN117542253A (en) Pilot cockpit training system
WO2019045174A1 (en) Method for providing location corrected image to hmd, method for displaying location corrected image on hmd, and hmd for displaying location corrected image using same
US11521297B2 (en) Method and device for presenting AR information based on video communication technology
WO2019073925A1 (en) Image generation device and image generation method
CN114742872A (en) Video perspective system based on AR technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination