CN109978945B - Augmented reality information processing method and device - Google Patents


Info

Publication number
CN109978945B
Authority
CN
China
Prior art keywords
scene
target
image
target scene
media information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910142496.9A
Other languages
Chinese (zh)
Other versions
CN109978945A (en)
Inventor
杨萌
李建军
戴付建
赵烈烽
Current Assignee
Zhejiang Sunny Optics Co Ltd
Original Assignee
Zhejiang Sunny Optics Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Sunny Optics Co Ltd filed Critical Zhejiang Sunny Optics Co Ltd
Priority to CN201910142496.9A
Publication of CN109978945A
Application granted
Publication of CN109978945B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/50 — Depth or shape recovery
    • G06T 7/70 — Determining position or orientation of objects or cameras

Abstract

The application discloses an augmented reality information processing method and device. The method comprises the following steps: acquiring virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; acquiring a subsequent image of the target scene; determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene; when the viewing-angle change is within a preset range, correcting the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display. The method and device solve the technical problem, in the related art of AR data processing, of how to efficiently complete the data processing of an augmented reality scene without adding extra equipment.

Description

Augmented reality information processing method and device
Technical Field
The present application relates to the field of AR (augmented reality), and in particular, to an augmented reality information processing method and apparatus.
Background
Augmented Reality (AR) is a technology that overlays computer-generated virtual media information, including video, images, text, and sound, onto real-world visual information.
An important application area of the technology is helping users experience things that are out of reach of the current scene in physical distance or time, and augmenting or improving users' perception of information in real-world scenes. AR technology may require specialized systems or hardware devices such as head-mounted displays, smart glasses, or computers with discrete graphics cards, all of which impose cost or usage-environment requirements that effectively limit AR usage scenarios.
In the related art of AR data processing, how to efficiently complete data processing of an augmented reality scene without adding additional devices is still an important problem to be solved at present.
Disclosure of Invention
The application provides an augmented reality information processing method and device, which aim to solve the technical problem, in the related art of AR data processing, of how to efficiently complete the data processing of an augmented reality scene without adding extra equipment.
According to one aspect of the present application, an augmented reality information processing method is provided. The method comprises the following steps: acquiring virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; acquiring a subsequent image of the target scene; determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene; when the viewing-angle change is within a preset range, correcting the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
Further, the method further comprises: when the viewing-angle change exceeds the preset range, adjusting the shooting angle of a remote camera device according to the viewing-angle change, wherein the remote camera device is used for acquiring image information of the remote scene; acquiring a corrected image of the remote scene through the remote camera device at the adjusted shooting angle; obtaining second corrected virtual media information from the subsequent image and the corrected image of the remote scene; and combining the second corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
Further, adjusting the shooting angle of the remote camera device according to the viewing-angle change comprises: determining a plurality of preset shooting angles corresponding to the remote camera device; determining a target shooting angle from the plurality of preset shooting angles according to the viewing-angle change; and adjusting the shooting angle of the remote camera device to the target shooting angle.
Further, the acquiring of the virtual media information includes: acquiring initial images of a plurality of target scenes and an initial image of at least one remote scene, wherein the initial images of the plurality of target scenes are generated by shooting the target scenes through a plurality of groups of lenses; determining depth information of the target scene according to differences among the matched features in the initial images of the plurality of target scenes; and fitting the depth information of the target scene with the initial image of at least one remote scene to obtain virtual media information.
Further, determining depth information for the target scene based on differences between matching features in the initial images of the plurality of target scenes comprises: identifying matching features in the initial images of the plurality of target scenes; determining coordinates of the matching features in the initial image of each target scene; determining depth information of the target scene based on a difference between coordinates of the matching features in the initial images of the at least two target scenes.
Further, determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene comprises: determining a viewing-angle translation change parameter between the viewing angle corresponding to the initial image and the viewing angle corresponding to the subsequent image of the target scene; and determining a viewing-angle rotation change parameter between the viewing angle corresponding to the initial image and the viewing angle corresponding to the subsequent image of the target scene.
Further, when the remote camera device obtains the corrected image of the remote scene, the target shooting angle adopted by the remote camera device at least includes a standard shooting angle, wherein the real world coordinate corresponding to the standard shooting angle is aligned with the virtual world coordinate corresponding to the virtual media information.
According to another aspect of the present application, there is provided an augmented reality information processing apparatus. The device includes: a first obtaining unit, configured to obtain virtual media information, where the virtual media information is determined by an initial image of a target scene and an initial image of a remote scene; a second acquisition unit for acquiring a subsequent image of the target scene; a determining unit configured to determine a change in perspective between a perspective corresponding to a subsequent image of the target scene and a perspective corresponding to an initial image of the target scene; the third acquisition unit is used for correcting the virtual media information according to the change of the visual angle under the condition that the change of the visual angle is within a preset range to obtain first corrected virtual media information; and the first generating unit is used for combining the first corrected virtual media information with the subsequent image of the target scene to generate the augmented reality scene for display.
According to another aspect of the present application, there is provided a storage medium including a stored program, wherein the program, when executed, performs the augmented reality information processing method of any one of the above.
According to another aspect of the present application, there is provided a processor for running a program, wherein the program, when run, performs the augmented reality information processing method of any one of the above.
Through the application, the following steps are adopted: acquiring virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; acquiring a subsequent image of the target scene; determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene; when the viewing-angle change is within a preset range, correcting the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display. This solves the technical problem, in the related art of AR data processing, of how to efficiently complete the data processing of an augmented reality scene without adding extra equipment.
That is, when the user terminal changes its viewing angle, the depth map is not recomputed. Instead, it is first judged whether the viewing-angle change of the user terminal is within a preset range. If it is, the previously determined virtual media information is directly corrected (translated and/or rotated) to obtain first corrected virtual media information, which is then combined with a subsequent image acquired by the user terminal in real time to obtain an augmented reality scene for display.
In other words, the data processing of the augmented reality scene can be completed efficiently without additional equipment; recomputing depth information for the subsequent image after every change of the user terminal's viewing angle is avoided, saving a large amount of depth-information processing time.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 is a flowchart of an augmented reality information processing method according to a first embodiment of the present application;
fig. 2 is a flowchart of an augmented reality information processing method according to a second embodiment of the present application; and
fig. 3 is a schematic diagram of an augmented reality information processing apparatus provided according to an embodiment of the present application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to a first embodiment of the present application, there is provided an augmented reality information processing method.
Fig. 1 is a flowchart of an augmented reality information processing method according to a first embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S102, acquiring virtual media information, wherein the virtual media information is determined by an initial image of a target scene and an initial image of a remote scene.
Step S104, acquiring subsequent images of the target scene.
Step S106, determining the view angle change between the view angle corresponding to the subsequent image of the target scene and the view angle corresponding to the initial image of the target scene.
Step S108, under the condition that the change of the visual angle is within the preset range, the virtual media information is corrected according to the change of the visual angle, and first corrected virtual media information is obtained.
Step S110, combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
In the augmented reality information processing method provided by the first embodiment of the present application, virtual media information is obtained, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; a subsequent image of the target scene is acquired; the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene is determined; when the viewing-angle change is within a preset range, the virtual media information is corrected according to the viewing-angle change to obtain first corrected virtual media information; and the first corrected virtual media information is combined with the subsequent image of the target scene to generate an augmented reality scene for display. This solves the technical problem, in the related art of AR data processing, of how to efficiently complete the data processing of an augmented reality scene without adding extra equipment.
That is, when the user terminal changes its viewing angle, the depth map is not recomputed. Instead, it is first judged whether the viewing-angle change of the user terminal is within a preset range. If it is, the previously determined virtual media information is directly corrected (translated and/or rotated) to obtain first corrected virtual media information, which is then combined with a subsequent image acquired by the user terminal in real time to obtain an augmented reality scene for display.
In other words, the data processing of the augmented reality scene can be completed efficiently without additional equipment; recomputing depth information for the subsequent image after every change of the user terminal's viewing angle is avoided, saving a large amount of depth-information processing time.
In addition, a second embodiment of the present application also provides an augmented reality information processing method.
Fig. 2 is a flowchart of an augmented reality information processing method according to a second embodiment of the present application. As shown in fig. 2, the method further comprises the steps of:
step S202, under the condition that the change of the visual angle exceeds the preset range, adjusting the shooting angle of a remote camera device according to the change of the visual angle, wherein the remote camera device is used for acquiring the image information of a remote scene.
And step S204, acquiring a corrected image of the remote scene through the remote camera device with the adjusted shooting angle.
Step S206, obtaining second corrected virtual media information according to the subsequent image and the corrected image of the remote scene.
And step S208, combining the second corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
That is, when the user terminal changes its viewing angle, the depth map is not simply recomputed. Instead, it is first judged whether the viewing-angle change of the user terminal is within the preset range. If the change exceeds the preset range, the shooting angle of the remote camera device is adjusted remotely according to the viewing-angle change of the user terminal, a corrected image of the remote scene is acquired again by the remote camera device at the adjusted shooting angle, and second corrected virtual media information is obtained from it; the second corrected virtual media information is then combined with the subsequent image of the target scene to obtain the augmented reality scene for display. This avoids the unsatisfactory correction that would result from correcting the virtual media information solely according to the viewing-angle change when the viewing-angle change of the user terminal is too large.
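A minimal sketch of this two-branch dispatch is given below. All function names and the threshold value are illustrative assumptions; the patent does not specify a concrete value for the preset range.

```python
# Illustrative sketch (not from the patent) of the branch described above.
ANGLE_THRESHOLD_DEG = 15.0  # assumed value for the "preset range"

def process_frame(view_change_deg, virtual_media, subsequent_image,
                  correct_locally, refetch_remote, compose):
    """Choose the cheap local correction or the expensive remote re-shoot."""
    if abs(view_change_deg) <= ANGLE_THRESHOLD_DEG:
        # Within the preset range: translate/rotate the cached virtual media.
        corrected = correct_locally(virtual_media, view_change_deg)
    else:
        # Beyond the preset range: re-aim the remote camera device and rebuild.
        corrected = refetch_remote(subsequent_image, view_change_deg)
    return compose(corrected, subsequent_image)
```

The key design point is that the expensive path (remote camera adjustment plus depth reconstruction) is only taken when local correction would be too inaccurate.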
In the third embodiment of the present application, the obtaining of the virtual media information (step S102) can be implemented by the following steps: step A1, acquiring initial images of a plurality of target scenes and an initial image of at least one remote scene, wherein the initial images of the plurality of target scenes are generated by shooting the target scenes through a plurality of groups of lenses; step A2, determining depth information of a target scene according to differences between matched features in initial images of a plurality of target scenes; step A3, fitting the depth information of the target scene with the initial image of at least one remote scene to obtain virtual media information.
In a fourth embodiment of the present application, determining depth information of a target scene from differences between matching features in initial images of a plurality of target scenes (step a2) comprises: identifying matching features in the initial images of the plurality of target scenes; determining coordinates of the matching features in the initial image of each target scene; determining depth information of the target scene based on a difference between coordinates of the matching features in the initial images of the at least two target scenes.
Further, the initial images of the plurality of target scenes are obtained by shooting the target scene through two groups of lenses of the user terminal, wherein the fields of view of the two groups of lenses are the same (for example, both 60°, both 80°, or both 100°). That is, the two groups of lenses of the user terminal shoot the target scene to obtain two groups of initial images of the target scene (the initial images of the plurality of target scenes), and the two groups of initial images are then processed according to a binocular ranging method to obtain a depth map (depth information) of the target scene.
It should be noted that when the user terminal acquires the initial image of the target scene, its translation speed and/or rotational acceleration is less than when it acquires the subsequent image of the target scene. For example, the user terminal is stationary when acquiring the initial image of the target scene and moving when acquiring the subsequent image of the target scene.
By way of example, the principle of determining the depth map of the target scene according to the binocular ranging method is as follows: ideally, in the pictures taken by the two groups of lenses, the same scene point lies on the same horizontal scan line; in the non-ideal case, the same scene point lies on different horizontal scan lines in the two pictures, and the images are then rectified using prestored data to convert them to the ideal case, equivalent to the two groups of lenses being coaxial and coplanar. After that, a matching feature is searched for along the horizontal scan line, the coordinate difference L1 − L2 of the matching feature between the images captured by the two groups of lenses is calculated, and the coordinate difference is further processed to obtain the depth information of the corresponding image of the target scene (i.e., the depth map of the target scene, which is used for inserting the 3D model corresponding to the target object in the remote scene).
The depth is computed from the coordinate difference as Z = EFL · [1 + B1/(L1 − L2)], where EFL is the effective focal length and B1 is a proportionality constant.
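Under one plausible reading of the formula, Z = EFL · [1 + B1/(L1 − L2)], the computation can be sketched as follows (function and parameter names are illustrative, not from the patent):

```python
def depth_from_disparity(l1, l2, efl, b1):
    """Depth Z = EFL * [1 + B1 / (L1 - L2)] per the document's formula.

    l1, l2: horizontal coordinates of the matching feature in the two images;
    efl: effective focal length; b1: proportionality constant (baseline-related).
    """
    disparity = l1 - l2
    if disparity == 0:
        # Zero disparity would place the point at infinity (or be a mismatch).
        raise ValueError("zero disparity between matched features")
    return efl * (1.0 + b1 / disparity)
```

Consistent with binocular ranging in general, a larger disparity (nearer feature) yields a smaller depth value under this formula.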
In the fifth embodiment of the present application, determining the viewing angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene (step S106) may be implemented by: and step B, under the condition that the user terminal moves and/or rotates, determining the change of the visual angle between the visual angle corresponding to the subsequent image of the target scene and the visual angle corresponding to the initial image of the target scene according to the translation and/or rotation condition of the user terminal, wherein the two groups of lenses corresponding to the user terminal are used for shooting the target scene so as to obtain the initial image of the target scene and the subsequent image of the target scene.
Similarly, in the sixth embodiment of the present application, determining the view angle change between the view angle corresponding to the subsequent image of the target scene and the view angle corresponding to the initial image of the target scene (step S106) may also be implemented by: step C1, determining a view angle corresponding to the initial image of the target scene and a view angle translation change parameter corresponding to the subsequent image; step C2 determines the view angle rotation variation parameter corresponding to the initial image and the subsequent image of the target scene.
That is, the view angle change between the view angle corresponding to the subsequent image of the target scene and the view angle corresponding to the initial image of the target scene mainly involves two change parameters, namely, a view angle translation change parameter and a view angle rotation change parameter.
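The two change parameters can be computed from terminal poses. The sketch below assumes a simple (x, y, z, yaw) pose representation read from the terminal's motion sensors; the document does not fix any particular representation.

```python
def view_change_params(initial_pose, subsequent_pose):
    """Split the viewing-angle change into translation and rotation parameters.

    Poses are illustrative (x, y, z, yaw_deg) tuples, e.g. derived from
    the user terminal's motion sensors.
    """
    dx = subsequent_pose[0] - initial_pose[0]
    dy = subsequent_pose[1] - initial_pose[1]
    dz = subsequent_pose[2] - initial_pose[2]
    translation = (dx * dx + dy * dy + dz * dz) ** 0.5  # magnitude of shift
    rotation = subsequent_pose[3] - initial_pose[3]     # yaw delta, degrees
    return translation, rotation
```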
In the seventh embodiment of the present application, modifying the virtual media information according to the change of the viewing angle to obtain the first modified virtual media information (step S108) can be implemented by: and carrying out pose transformation processing on the virtual media information according to the change of the visual angle between the visual angle corresponding to the subsequent image of the target scene and the visual angle corresponding to the initial image of the target scene. For example, if the viewing angle corresponding to the subsequent image of the target scene translates 10cm in the Y-axis direction of the preset coordinate system relative to the viewing angle corresponding to the initial image of the target scene, the virtual media information also translates 10cm in the Y-axis direction of the preset coordinate system; if the viewing angle corresponding to the subsequent image of the target scene is rotated 60 degrees counterclockwise with respect to the viewing angle corresponding to the initial image of the target scene with the Y axis of the preset coordinate system as a standard, the virtual media information is also rotated 60 degrees counterclockwise with the Y axis of the preset coordinate system as a standard.
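The pose transformation in the example above (a Y-axis translation plus a counter-clockwise rotation about the Y axis) can be sketched as follows; the point representation and rotation convention are assumptions, since the patent does not specify them.

```python
import math

def correct_virtual_media(points, dy_cm=0.0, yaw_deg=0.0):
    """Apply the viewing-angle change to virtual media points (illustrative).

    points: (x, y, z) tuples; dy_cm: translation along the Y axis;
    yaw_deg: counter-clockwise rotation about the Y axis (one convention).
    """
    rad = math.radians(yaw_deg)
    cos_a, sin_a = math.cos(rad), math.sin(rad)
    corrected = []
    for x, y, z in points:
        # Rotate about the Y axis, then translate along it.
        xr = x * cos_a + z * sin_a
        zr = -x * sin_a + z * cos_a
        corrected.append((xr, y + dy_cm, zr))
    return corrected
```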
In an eighth embodiment of the present application, adjusting a shooting angle of the remote image capturing apparatus according to a change of a viewing angle includes: determining a plurality of preset shooting angles corresponding to the remote camera device; determining a target shooting angle from a plurality of preset shooting angles according to the change of the visual angle; and adjusting the shooting angle of the remote camera to be the target shooting angle.
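Selecting the target shooting angle from the presets can be as simple as a nearest-angle lookup; the sketch below assumes angles in degrees and is purely illustrative.

```python
def choose_target_angle(required_angle_deg, preset_angles_deg):
    """Pick the preset shooting angle closest to the required angle."""
    return min(preset_angles_deg, key=lambda a: abs(a - required_angle_deg))
```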
In an alternative example, the remote camera device may be at least one movable camera or a plurality of fixed cameras. Where the remote camera device comprises a plurality of fixed cameras, adjusting the shooting angle of the remote camera device to the target shooting angle may be done as follows: the fixed camera at the current shooting angle is turned off and the fixed camera(s) at the target shooting angle are turned on, where there may be more than one fixed camera at the target shooting angle.
It should be noted that, where the remote camera device comprises a plurality of fixed cameras, the fixed cameras are discretely distributed in the remote scene and their shooting angles are mutually perpendicular.
In a ninth embodiment of the present application, when a remote camera device obtains a corrected image of the remote scene, a target shooting angle adopted by the remote camera device at least includes a standard shooting angle, where a real world coordinate corresponding to the standard shooting angle is aligned with a virtual world coordinate of the virtual media information.
That is, in addition to acquiring the corrected image of the remote scene, the remote camera device is reset back to its initial position; for example, all camera shooting directions are moved to a horizontal position with respect to the direction of gravity, for alignment with the virtual space produced from the depth information.
Further illustration is now made with respect to the above embodiments:
in an alternative example, a user in a target scene A makes a video call, through a user terminal, with a target object (a person or an article) in a remote scene B, and a plurality of remote camera devices are arranged around the target object in the remote scene B, wherein all of the remote camera devices are in communication connection with the user terminal so as to transmit the image information they capture to the user terminal.
At this time, the user terminal/main server constructs and processes the image information collected from the remote scene to obtain the 3D model corresponding to the target object in the remote scene B. The user terminal establishes a depth map of a target scene A through the two corresponding groups of lenses, and further establishes a 3D map corresponding to the target scene A based on the depth map of the target scene A. Further, the 3D model corresponding to the target object in the remote scene B is displayed in the 3D map corresponding to the target scene a.
Further, under the condition that the user terminal only slightly moves or rotates, the user terminal does not need to acquire new image information from the remote camera device in the remote scene B, and only needs to correspondingly move or rotate the 3D model corresponding to the target object in the remote scene B according to the movement or rotation sensed by the sensor of the user terminal.
Further, when the lens angle of the user terminal changes greatly, the user terminal acquires new image information from the remote camera devices in the remote scene B, where the new image information is captured by a remote camera device after its shooting direction/angle has been changed. Thus, however the user terminal's shooting range or angle changes through its various motions, the 3D model corresponding to the target object in the remote scene B and the 3D map (augmented reality scene) corresponding to the target scene A still remain matched.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that presented herein.
The tenth embodiment of the present application further provides an augmented reality information processing apparatus. It should be noted that the augmented reality information processing apparatus according to the tenth embodiment may be configured to execute the augmented reality information processing method provided in the embodiments of the present application. The apparatus according to the tenth embodiment is described below.
Fig. 3 is a schematic diagram of an augmented reality information processing apparatus according to a tenth embodiment of the present application. As shown in fig. 3, the apparatus includes: the device comprises a first acquisition unit, a second acquisition unit, a determination unit, a third acquisition unit and a first generation unit.
a first acquisition unit, configured to acquire virtual media information, where the virtual media information is determined by an initial image of a target scene and an initial image of a remote scene;
a second acquisition unit, configured to acquire a subsequent image of the target scene;
a determining unit, configured to determine a change in perspective between the perspective corresponding to the subsequent image of the target scene and the perspective corresponding to the initial image of the target scene;
a third acquisition unit, configured to correct the virtual media information according to the change in perspective, in the case that the change in perspective is within a preset range, to obtain first corrected virtual media information;
and a first generating unit, configured to combine the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
In the augmented reality information processing apparatus according to the tenth embodiment of the present application, the first acquisition unit acquires virtual media information, where the virtual media information is determined by an initial image of a target scene and an initial image of a remote scene; the second acquisition unit acquires a subsequent image of the target scene; the determining unit determines a change in perspective between the perspective corresponding to the subsequent image and the perspective corresponding to the initial image of the target scene; in the case that the change in perspective is within a preset range, the third acquisition unit corrects the virtual media information according to the change in perspective to obtain first corrected virtual media information; and the first generating unit combines the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display. This solves the technical problem, in the related art of AR data processing, of how to complete the data processing of an augmented reality scene without adding additional equipment.
That is, when the user terminal changes its viewing angle, the depth map is not recomputed. Instead, it is first judged whether the viewing angle change of the user terminal is within a preset range; if so, the previously determined virtual media information is directly corrected (translated and/or rotated) to obtain first corrected virtual media information, and the first corrected virtual media information is then combined with a subsequent image acquired by the user terminal in real time to obtain an augmented reality scene for display.
That is, the data processing of the augmented reality scene can be efficiently completed without adding additional equipment; the situation in which depth information calculation must be performed again on the subsequent image each time the viewing angle of the user terminal changes is avoided, saving a large amount of depth information processing time.
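The correction (translation and/or rotation) of the previously determined virtual media information amounts to applying a rigid transform to the cached model geometry instead of recomputing depth. The sketch below restricts the rotation to yaw about the vertical axis for brevity; a full implementation would apply the complete 6-DoF pose delta, and all names here are illustrative assumptions:

```python
import numpy as np

def correct_virtual_media(points, yaw_rad, translation):
    """Apply a sensed view change to cached virtual media vertices.

    `points` is an (N, 3) array of model vertices in the target-scene map.
    The view change is modeled as a yaw rotation about the vertical (y)
    axis followed by a translation.
    """
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return points @ rot.T + np.asarray(translation, dtype=float)

# Rotating the point (1, 0, 0) by 90 degrees about y gives (0, 0, -1);
# translating by (0, 0, 1) then returns it to the origin.
pts = np.array([[1.0, 0.0, 0.0]])
moved = correct_virtual_media(pts, yaw_rad=np.pi / 2, translation=[0.0, 0.0, 1.0])
```

Because this touches only the cached vertices, it is far cheaper than re-running stereo matching and depth estimation on a new image pair.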
In an eleventh embodiment of the present application, the apparatus further comprises: an adjusting unit, configured to adjust the shooting angle of a remote camera device according to the change in perspective in the case that the change in perspective exceeds the preset range, where the remote camera device is used to acquire image information of the remote scene; a fourth acquisition unit, configured to acquire a corrected image of the remote scene through the remote camera device whose shooting angle has been adjusted; a fifth acquisition unit, configured to obtain second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; and a second generating unit, configured to combine the second corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
In a twelfth embodiment of the present application, the adjusting unit includes: a first determining module, configured to determine a plurality of preset shooting angles corresponding to the remote camera device; a second determining module, configured to determine a target shooting angle from the plurality of preset shooting angles according to the change in perspective; and an adjusting module, configured to adjust the shooting angle of the remote camera device to the target shooting angle.
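Selecting a target shooting angle from a set of discrete presets, as the second determining module does, can be sketched as a nearest-neighbor choice over the preset list. The function name and the particular angle values are illustrative assumptions, not from the patent:

```python
def choose_target_angle(required_angle_deg, preset_angles_deg):
    """Pick the preset shooting angle closest to the required new direction.

    Maps the view change of the user terminal onto the remote camera's
    discrete preset angles by minimizing the absolute angular difference.
    """
    return min(preset_angles_deg, key=lambda a: abs(a - required_angle_deg))

# A required direction of 37 degrees snaps to the 30-degree preset.
angle = choose_target_angle(37.0, [0, 30, 60, 90])
```

A production version would also handle angular wrap-around (e.g. 350° vs 10°), which is omitted here for clarity.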
In a thirteenth embodiment of the present application, the first acquisition unit includes: a first acquisition module, configured to acquire a plurality of initial images of the target scene and at least one initial image of the remote scene, where the plurality of initial images of the target scene are generated by shooting the target scene through a plurality of groups of lenses; a third determining module, configured to determine depth information of the target scene according to differences between matching features in the plurality of initial images of the target scene; and a second acquisition module, configured to fit the depth information of the target scene with the at least one initial image of the remote scene to obtain the virtual media information.
In a fourteenth embodiment of the present application, the third determining module includes: an identification sub-module, configured to identify matching features in the plurality of initial images of the target scene; a first determining sub-module, configured to determine coordinates of the matching features in each initial image of the target scene; and a second determining sub-module, configured to determine the depth information of the target scene according to the difference between the coordinates of the matching features in at least two of the initial images of the target scene.
In a fifteenth embodiment of the present application, the determination unit includes: a fourth determining module, configured to determine a view angle translation change parameter between the view angle corresponding to the initial image and the view angle corresponding to the subsequent image of the target scene; and a fifth determining module, configured to determine a view angle rotation change parameter between the view angle corresponding to the initial image and the view angle corresponding to the subsequent image of the target scene.
In a sixteenth embodiment of the present application, when the remote camera device obtains the corrected image of the remote scene, the target shooting angle adopted by the remote camera device at least includes a standard shooting angle, and the real world coordinate corresponding to the standard shooting angle is aligned with the virtual world coordinate corresponding to the virtual media information.
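The alignment between the real-world coordinates at the standard shooting angle and the virtual-world coordinates of the virtual media information can be modeled as applying a known rigid transform obtained from calibration. The sketch below assumes such a calibration exists; the rotation/translation values and names are illustrative, not part of the disclosed embodiments:

```python
import numpy as np

def align_to_virtual(points_real, r_virtual_from_real, t_virtual_from_real):
    """Map real-world points into virtual-world (media) coordinates.

    At the standard shooting angle the two frames are assumed to be related
    by a known rigid transform (R, t); applying it lets the corrected remote
    image be fused directly with the virtual media information.
    """
    r = np.asarray(r_virtual_from_real, dtype=float)
    t = np.asarray(t_virtual_from_real, dtype=float)
    return np.asarray(points_real, dtype=float) @ r.T + t

# Identity rotation with a 2 m offset along z as an illustrative calibration:
# the real-world origin maps to (0, 0, 2) in virtual-world coordinates.
p = align_to_virtual([[0.0, 0.0, 0.0]], np.eye(3), [0.0, 0.0, 2.0])
```

When the remote camera is at the standard shooting angle, this transform is fixed, so no per-frame re-registration between the two coordinate systems is needed.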
In a seventeenth embodiment of the present application, an augmented reality information processing apparatus includes a processor and a memory, where the first acquiring unit, the second acquiring unit, the determining unit, the third acquiring unit, the first generating unit, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided. By adjusting the kernel parameters, the data processing of the augmented reality scene can be efficiently completed without adding additional equipment, avoiding the situation in which depth information calculation must be performed again on the subsequent image each time the viewing angle of the user terminal changes, and thereby saving a large amount of depth information processing time.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory includes at least one memory chip.
In an eighteenth embodiment of the present application, an embodiment of the present invention provides a storage medium having a program stored thereon, the program implementing an augmented reality information processing method when executed by a processor.
In a nineteenth embodiment of the present application, an embodiment of the present invention provides a processor, where the processor is configured to execute a program, and the program executes an augmented reality information processing method when running.
In a twentieth embodiment of the present application, an embodiment of the present invention provides an apparatus, where the apparatus includes a processor, a memory, and a program stored in the memory and executable on the processor, and the processor implements the following steps when executing the program: acquiring virtual media information, wherein the virtual media information is determined by an initial image of a target scene and an initial image of a remote scene; acquiring a subsequent image of the target scene; determining a change in perspective between a perspective corresponding to a subsequent image of the target scene and a perspective corresponding to an initial image of the target scene; under the condition that the visual angle change is within a preset range, correcting the virtual media information according to the visual angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with subsequent images of the target scene to generate an augmented reality scene for display.
Further, the method further comprises: under the condition that the change of the visual angle exceeds a preset range, adjusting the shooting angle of a remote camera device according to the change of the visual angle, wherein the remote camera device is used for acquiring the image information of a remote scene; acquiring a corrected image of a remote scene through the remote camera device with the adjusted shooting angle; obtaining second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; and combining the second corrected virtual media information with subsequent images of the target scene to generate an augmented reality scene for display.
Further, adjusting the shooting angle of the remote camera according to the change of the angle of view includes: determining a plurality of preset shooting angles corresponding to the remote camera device; determining a target shooting angle from a plurality of preset shooting angles according to the change of the visual angle; and adjusting the shooting angle of the remote camera to be the target shooting angle.
Further, the acquiring of the virtual media information includes: acquiring a plurality of initial images of the target scene and at least one initial image of the remote scene, wherein the plurality of initial images of the target scene are generated by shooting the target scene through a plurality of groups of lenses; determining depth information of the target scene according to differences between matching features in the plurality of initial images of the target scene; and fitting the depth information of the target scene with the at least one initial image of the remote scene to obtain the virtual media information.
Further, determining the depth information of the target scene based on the differences between the matching features in the plurality of initial images of the target scene comprises: identifying the matching features in the plurality of initial images of the target scene; determining the coordinates of the matching features in each initial image of the target scene; and determining the depth information of the target scene based on the difference between the coordinates of the matching features in at least two of the initial images of the target scene.
Further, determining the change in perspective between the perspective corresponding to the subsequent image of the target scene and the perspective corresponding to the initial image of the target scene comprises: determining a view angle translation change parameter between the view angle corresponding to the initial image of the target scene and the view angle corresponding to the subsequent image; and determining a view angle rotation change parameter between the view angle corresponding to the initial image of the target scene and the view angle corresponding to the subsequent image.
Further, when the remote camera device obtains the corrected image of the remote scene, the target shooting angle adopted by the remote camera device at least includes a standard shooting angle, wherein the real world coordinate corresponding to the standard shooting angle is aligned with the virtual world coordinate corresponding to the virtual media information. The device herein may be a server, a PC, a PAD, a mobile phone, etc.
In a twenty-first embodiment of the present application, the present application further provides a computer program product adapted to perform a program for initializing the following method steps when executed on a data processing device: acquiring virtual media information, wherein the virtual media information is determined by an initial image of a target scene and an initial image of a remote scene; acquiring a subsequent image of the target scene; determining a change in perspective between a perspective corresponding to a subsequent image of the target scene and a perspective corresponding to an initial image of the target scene; under the condition that the visual angle change is within a preset range, correcting the virtual media information according to the visual angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with subsequent images of the target scene to generate an augmented reality scene for display.
Further, the method further comprises: under the condition that the change of the visual angle exceeds a preset range, adjusting the shooting angle of a remote camera device according to the change of the visual angle, wherein the remote camera device is used for acquiring the image information of a remote scene; acquiring a corrected image of a remote scene through the remote camera device with the adjusted shooting angle; obtaining second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; and combining the second corrected virtual media information with subsequent images of the target scene to generate an augmented reality scene for display.
Further, adjusting the shooting angle of the remote camera according to the change of the angle of view includes: determining a plurality of preset shooting angles corresponding to the remote camera device; determining a target shooting angle from a plurality of preset shooting angles according to the change of the visual angle; and adjusting the shooting angle of the remote camera to be the target shooting angle.
Further, the acquiring of the virtual media information includes: acquiring a plurality of initial images of the target scene and at least one initial image of the remote scene, wherein the plurality of initial images of the target scene are generated by shooting the target scene through a plurality of groups of lenses; determining depth information of the target scene according to differences between matching features in the plurality of initial images of the target scene; and fitting the depth information of the target scene with the at least one initial image of the remote scene to obtain the virtual media information.
Further, determining the depth information of the target scene based on the differences between the matching features in the plurality of initial images of the target scene comprises: identifying the matching features in the plurality of initial images of the target scene; determining the coordinates of the matching features in each initial image of the target scene; and determining the depth information of the target scene based on the difference between the coordinates of the matching features in at least two of the initial images of the target scene.
Further, determining the change in perspective between the perspective corresponding to the subsequent image of the target scene and the perspective corresponding to the initial image of the target scene comprises: determining a view angle translation change parameter between the view angle corresponding to the initial image of the target scene and the view angle corresponding to the subsequent image; and determining a view angle rotation change parameter between the view angle corresponding to the initial image of the target scene and the view angle corresponding to the subsequent image.
Further, when the remote camera device obtains the corrected image of the remote scene, the target shooting angle adopted by the remote camera device at least includes a standard shooting angle, wherein the real world coordinate corresponding to the standard shooting angle is aligned with the virtual world coordinate corresponding to the virtual media information.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (7)

1. An augmented reality information processing method, comprising:
acquiring virtual media information, wherein the virtual media information is determined by an initial image of a target scene and an initial image of a remote scene;
acquiring a subsequent image of the target scene;
determining a change in perspective between a perspective corresponding to a subsequent image of the target scene and a perspective corresponding to an initial image of the target scene;
under the condition that the visual angle change is within a preset range, correcting the virtual media information according to the visual angle change to obtain first corrected virtual media information; combining the first corrected virtual media information with subsequent images of the target scene to generate an augmented reality scene for display;
under the condition that the visual angle change exceeds a preset range, adjusting the shooting angle of a remote camera device according to the visual angle change, wherein the remote camera device is used for acquiring the image information of the remote scene; acquiring a corrected image of the remote scene through the remote camera device with the adjusted shooting angle; obtaining second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; combining the second corrected virtual media information with subsequent images of the target scene to generate an augmented reality scene for display;
wherein, adjusting the shooting angle of the remote camera device according to the change of the angle of view comprises: determining a plurality of preset shooting angles corresponding to the remote camera device; determining a target shooting angle from the plurality of preset shooting angles according to the change of the visual angle; adjusting the shooting angle of the remote camera to be a target shooting angle;
and under the condition that the remote camera device acquires the corrected image of the remote scene, at least one standard shooting angle is included in target shooting angles adopted by the remote camera device, wherein real world coordinates corresponding to the standard shooting angle are aligned with virtual world coordinates corresponding to the virtual media information.
2. The method of claim 1, wherein the obtaining virtual media information comprises:
acquiring a plurality of initial images of the target scene and at least one initial image of the remote scene, wherein the plurality of initial images of the target scene are generated by shooting the target scene through a plurality of groups of lenses;
determining depth information of the target scene according to differences between matching features in the plurality of initial images of the target scene;
and fitting the depth information of the target scene with the at least one initial image of the remote scene to obtain the virtual media information.
3. The method of claim 2, wherein determining depth information for the target scene based on differences between matching features in a plurality of initial images of the target scene comprises:
identifying matching features in a plurality of initial images of the target scene;
determining coordinates of the matching features in each initial image of the target scene;
and determining the depth information of the target scene according to the difference between the coordinates of the matching features in at least two of the initial images of the target scene.
4. The method of claim 1, wherein determining a change in perspective between a perspective corresponding to a subsequent image of the target scene and a perspective corresponding to an initial image of the target scene comprises:
determining a view angle translation change parameter between the view angle corresponding to the initial image of the target scene and the view angle corresponding to the subsequent image;
and determining a view angle rotation change parameter between the view angle corresponding to the initial image of the target scene and the view angle corresponding to the subsequent image.
5. An augmented reality information processing apparatus, comprising:
a first acquisition unit configured to acquire virtual media information, wherein the virtual media information is determined by an initial image of a target scene and an initial image of a remote scene;
a second acquisition unit for acquiring a subsequent image of the target scene;
a determining unit configured to determine a change in perspective between a perspective corresponding to a subsequent image of the target scene and a perspective corresponding to an initial image of the target scene;
the third acquisition unit is used for correcting the virtual media information according to the change of the visual angle under the condition that the change of the visual angle is within a preset range to obtain first corrected virtual media information; the first generating unit is used for combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display;
the adjusting unit is used for adjusting the shooting angle of the remote camera device according to the change of the visual angle under the condition that the change of the visual angle exceeds a preset range, wherein the remote camera device is used for acquiring the image information of a remote scene; a fourth obtaining unit, configured to obtain a corrected image of the remote scene by using the remote camera whose shooting angle is adjusted; the fifth obtaining unit is used for obtaining second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; the second generating unit is used for combining the second corrected virtual media information with a subsequent image of the target scene to generate an augmented reality scene for display;
the adjusting unit includes: the first determining module is used for determining a plurality of preset shooting angles corresponding to the remote camera device; the second determining module is used for determining a target shooting angle from a plurality of preset shooting angles according to the change of the visual angle; the adjusting module is used for adjusting the shooting angle of the remote camera device to a target shooting angle;
and under the condition that the remote camera device acquires the corrected image of the remote scene, at least one standard shooting angle is included in the target shooting angles adopted by the remote camera device, wherein the real world coordinates corresponding to the standard shooting angles are aligned with the virtual world coordinates corresponding to the virtual media information.
6. A storage medium characterized by comprising a stored program, wherein the program executes the augmented reality information processing method according to any one of claims 1 to 4.
7. A processor, configured to execute a program, wherein the program executes the augmented reality information processing method according to any one of claims 1 to 4.
CN201910142496.9A 2019-02-26 2019-02-26 Augmented reality information processing method and device Active CN109978945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910142496.9A CN109978945B (en) 2019-02-26 2019-02-26 Augmented reality information processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910142496.9A CN109978945B (en) 2019-02-26 2019-02-26 Augmented reality information processing method and device

Publications (2)

Publication Number Publication Date
CN109978945A CN109978945A (en) 2019-07-05
CN109978945B true CN109978945B (en) 2021-08-31

Family

ID=67077448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910142496.9A Active CN109978945B (en) 2019-02-26 2019-02-26 Augmented reality information processing method and device

Country Status (1)

Country Link
CN (1) CN109978945B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110999377A (en) * 2019-11-22 2020-04-10 北京小米移动软件有限公司 Resource switching method, device and storage medium
CN111881861B (en) * 2020-07-31 2023-07-21 北京市商汤科技开发有限公司 Display method, device, equipment and storage medium
CN113220251B (en) * 2021-05-18 2024-04-09 北京达佳互联信息技术有限公司 Object display method, device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016071244A2 (en) * 2014-11-06 2016-05-12 Koninklijke Philips N.V. Method and system of communication for use in hospitals
CN105955456A (en) * 2016-04-15 2016-09-21 深圳超多维光电子有限公司 Virtual reality and augmented reality fusion method, device and intelligent wearable equipment
CN106162204A (en) * 2016-07-06 2016-11-23 传线网络科技(上海)有限公司 Panoramic video generation, player method, Apparatus and system
CN106302132A (en) * 2016-09-14 2017-01-04 华南理工大学 A kind of 3D instant communicating system based on augmented reality and method
CN106383587A (en) * 2016-10-26 2017-02-08 腾讯科技(深圳)有限公司 Augmented reality scene generation method, device and equipment
CN106710002A (en) * 2016-12-29 2017-05-24 深圳迪乐普数码科技有限公司 AR implementation method and system based on positioning of visual angle of observer
CN106973283A (en) * 2017-03-30 2017-07-21 北京炫房科技有限公司 A kind of method for displaying image and device
CN107678538A (en) * 2017-09-05 2018-02-09 北京原力创新科技有限公司 Augmented reality system and information processing method therein, storage medium, processor
CN108230428A (en) * 2017-12-29 2018-06-29 掌阅科技股份有限公司 E-book rendering method, electronic equipment and storage medium based on augmented reality

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105867617B (en) * 2016-03-25 2018-12-25 京东方科技集团股份有限公司 Augmented reality equipment, system, image processing method and device
US20170295229A1 (en) * 2016-04-08 2017-10-12 Osterhout Group, Inc. Synchronizing head-worn computers

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016071244A2 (en) * 2014-11-06 2016-05-12 Koninklijke Philips N.V. Method and system of communication for use in hospitals
CN105955456A (en) * 2016-04-15 2016-09-21 深圳超多维光电子有限公司 Virtual reality and augmented reality fusion method, device and intelligent wearable equipment
CN106162204A (en) * 2016-07-06 2016-11-23 传线网络科技(上海)有限公司 Panoramic video generation, player method, Apparatus and system
CN106302132A (en) * 2016-09-14 2017-01-04 华南理工大学 A kind of 3D instant communicating system based on augmented reality and method
CN106383587A (en) * 2016-10-26 2017-02-08 腾讯科技(深圳)有限公司 Augmented reality scene generation method, device and equipment
CN106710002A (en) * 2016-12-29 2017-05-24 深圳迪乐普数码科技有限公司 AR implementation method and system based on positioning of visual angle of observer
CN106973283A (en) * 2017-03-30 2017-07-21 北京炫房科技有限公司 A kind of method for displaying image and device
CN107678538A (en) * 2017-09-05 2018-02-09 北京原力创新科技有限公司 Augmented reality system and information processing method therein, storage medium, processor
CN108230428A (en) * 2017-12-29 2018-06-29 掌阅科技股份有限公司 E-book rendering method, electronic equipment and storage medium based on augmented reality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Holographic Video Conferencing; Song Kefan; Science &amp; Technology Communication (《科技传播》); 2015-12-31 (No. 13); pp. 117-118 *

Also Published As

Publication number Publication date
CN109978945A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
CN108830894B (en) Remote guidance method, device, terminal and storage medium based on augmented reality
CN109615703B (en) Augmented reality image display method, device and equipment
US11282264B2 (en) Virtual reality content display method and apparatus
EP3337158A1 (en) Method and device for determining points of interest in an immersive content
CN109978945B (en) Augmented reality information processing method and device
US9286718B2 (en) Method using 3D geometry data for virtual reality image presentation and control in 3D space
US10999412B2 (en) Sharing mediated reality content
JP7271099B2 (en) File generator and file-based video generator
CN111161398B (en) Image generation method, device, equipment and storage medium
US20190266802A1 (en) Display of Visual Data with a Virtual Reality Headset
CN114531553B (en) Method, device, electronic equipment and storage medium for generating special effect video
US20190295324A1 (en) Optimized content sharing interaction using a mixed reality environment
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
CN107659772B (en) 3D image generation method and device and electronic equipment
RU2020126876A (en) Device and method for forming images of the view
KR102176805B1 (en) System and method for providing virtual reality contents indicated view direction
US20230106679A1 (en) Image Processing Systems and Methods
CN112312041B (en) Shooting-based image correction method and device, electronic equipment and storage medium
WO2021149509A1 (en) Imaging device, imaging method, and program
CN109272453B (en) Modeling device and positioning method based on 3D camera
CN113485547A (en) Interaction method and device applied to holographic sand table
KR101741149B1 (en) Method and device for controlling a virtual camera's orientation
US20120162199A1 (en) Apparatus and method for displaying three-dimensional augmented reality
CN109348132B (en) Panoramic shooting method and device
KR102151250B1 (en) Device and method for deriving object coordinate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant