CN113641325B - Image acquisition method and system for AR display - Google Patents


Info

Publication number
CN113641325B
CN113641325B (application CN202111212452.2A)
Authority
CN
China
Prior art keywords
image data
virtual model
background image
frame
available
Prior art date
Legal status
Active
Application number
CN202111212452.2A
Other languages
Chinese (zh)
Other versions
CN113641325A (en)
Inventor
陈建华 (Chen Jianhua)
刘绪龙 (Liu Xulong)
Current Assignee
Shenzhen Lianzhi Photoelectric Science & Technology Co., Ltd.
Original Assignee
Shenzhen Lianzhi Photoelectric Science & Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Lianzhi Photoelectric Science & Technology Co ltd filed Critical Shenzhen Lianzhi Photoelectric Science & Technology Co ltd
Priority claimed from application CN202111212452.2A
Publication of CN113641325A
Application granted
Publication of CN113641325B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 — Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1407 — General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros

Abstract

The invention belongs to the technical field of image acquisition and processing, and provides an image acquisition method and system for AR display. The method comprises the following steps: acquiring original background image data; processing the original background image data according to the virtual model data to obtain available background image data; importing the first frame of virtual model data into the first frame of available background image data to obtain a first frame of AR image; acquiring the motion trajectory of the virtual model, and importing each frame of virtual model data into the available background image data of the corresponding frame to obtain preliminary AR image data; and processing the preliminary AR image data according to the light information in the available background image data to obtain available AR image data. Taking the first frame of AR image as a reference, each frame of virtual model data is imported into the available background image data of the corresponding frame according to the motion trajectory of the virtual model, so that the dynamic virtual model combines well with the background image.

Description

Image acquisition method and system for AR display
Technical Field
The invention relates to the technical field of image acquisition processing, in particular to an image acquisition method and system for AR display.
Background
With the continuous development of AR technology, virtual information and real information can be displayed in the same picture. The virtual information is generally a pre-established virtual model, such as an animated character, which is displayed within a real background image to achieve the effect of combining the virtual and the real.
The existing virtual model is generally a dynamic character; if it is directly superimposed on a real background, it appears abrupt and stiff, so the presented AR image is not realistic enough and the viewing experience is poor.
Disclosure of Invention
In view of the shortcomings of the prior art, an object of the present invention is to provide an image acquisition method and system for AR display to solve the problems identified in the background section.
The invention is realized in such a way that an image acquisition method for AR display comprises the following steps:
acquiring original background image data;
processing the original background image data according to the virtual model data to obtain available background image data, wherein the available background image data is matched with the virtual model data;
importing the first frame of virtual model data into the first frame of available background image data to obtain a first frame of AR image;
acquiring a motion track of the virtual model, and importing each frame of virtual model data into available background image data of a corresponding frame according to the motion track to obtain preliminary AR image data;
and processing the preliminary AR image data according to the light information in the available background image data to obtain available AR image data.
As a further scheme of the invention: the step of processing the original background image data according to the virtual model data to obtain the available background image data specifically includes:
acquiring the frame rate and the frame number of virtual model data;
adjusting the frame rate of the original background image data to enable the frame rate of the original background image data to be equal to that of the virtual model data;
and modifying the frame number of the original background image data to ensure that the frame number of the original background image data is equal to the frame number of the virtual model data, wherein the modified original background image data is the available background image data.
As a further scheme of the invention: the step of obtaining the motion trail of the virtual model and importing each frame of virtual model data into the available background image data of the corresponding frame according to the motion trail to obtain the preliminary AR image data specifically comprises the following steps:
obtaining a motion track of the virtual model;
obtaining the relative position of each frame of virtual model according to the motion track, wherein the reference point of the relative position is the characteristic position of the last frame of virtual model;
determining the characteristic position of the virtual model in the available background image data of each frame according to the characteristic position of the virtual model in the first frame of AR image;
and importing the virtual model according to the characteristic position of the virtual model in the available background image data to obtain preliminary AR image data.
As a further scheme of the invention: the step of importing the virtual model according to the feature position of the virtual model in the available background image data to obtain preliminary AR image data specifically includes:
importing the virtual model into the available background image data according to the feature positions; and
scaling the virtual model in the available background image data so that the sensory size of the virtual model matches its motion trajectory.
As a further scheme of the invention: the step of scaling the virtual model in the available background image data specifically includes:
obtaining, according to the motion trajectory of the virtual model, a relative position value of the feature position of each frame relative to the feature position of the previous frame, the relative position value being expressed as (a, b, c), where a is the left-right component, b is the front-back component, and c is the up-down component;
and extracting the value b from the relative position value, and scaling the virtual model in the available background image data according to b.
As a further scheme of the invention: the step of processing the preliminary AR image data according to the light information in the available background image data to obtain available AR image data specifically includes:
acquiring, through a camera, light information of the environment in which the available background image data is captured, wherein the light information comprises: light intensity, light irradiation angle, and light color;
adjusting the light source parameters of the virtual model in the preliminary AR image data according to the light information to obtain available AR image data, wherein the light source parameters comprise: radiant intensity of the light source, position of the light source, and color of the light source.
It is another object of the present invention to provide an image acquisition system for AR display, the system comprising:
the original image acquisition module is used for acquiring original background image data;
the parameter adaptation module is used for processing the original background image data according to the virtual model data to obtain available background image data, and the available background image data is adapted to the virtual model data;
the first frame AR image determining module is used for importing the first frame virtual model data into the first frame available background image data to obtain a first frame AR image;
the preliminary AR image data determining module is used for acquiring the motion trail of the virtual model and importing each frame of virtual model data into the available background image data of the corresponding frame according to the motion trail to obtain preliminary AR image data; and
and the light adjusting module is used for processing the preliminary AR image data according to the light information in the available background image data to obtain available AR image data.
As a further scheme of the invention: the parameter adaptation module comprises:
a frame rate and frame number obtaining unit, configured to obtain a frame rate and a frame number of the virtual model data;
the frame rate determining unit is used for adjusting the frame rate of the original background image data so as to enable the frame rate of the original background image data to be equal to the frame rate of the virtual model data; and
and the frame number determining unit is used for modifying the frame number of the original background image data so that the frame number of the original background image data is equal to the frame number of the virtual model data, and the modified original background image data is the available background image data.
As a further scheme of the invention: the preliminary AR image data determination module includes:
the motion track acquisition unit is used for acquiring the motion track of the virtual model;
a relative position obtaining unit, configured to obtain a relative position of each frame of virtual model according to the motion trajectory, where a reference point of the relative position is a feature position of a previous frame of virtual model;
the characteristic position determining unit is used for determining the characteristic position of the virtual model in the available background image data of each frame according to the characteristic position of the virtual model in the first frame of AR image; and
and the virtual model importing unit is used for importing the virtual model according to the characteristic position of the virtual model in the available background image data to obtain preliminary AR image data.
As a further scheme of the invention: the virtual model importing unit includes:
an importing subunit, configured to import the virtual model into the available background image data according to the feature positions; and
And the scaling subunit is used for scaling the virtual model in the available background image data so as to enable the sensory size of the virtual model to be matched with the motion trail of the virtual model.
Compared with the prior art, the invention has the following beneficial effects. The first frame of virtual model data is first imported into the first frame of available background image data to obtain a first frame of AR image. Taking this first frame as a reference, each subsequent frame of virtual model data is imported into the available background image data of the corresponding frame according to the motion trajectory of the virtual model to obtain preliminary AR image data; importing the virtual model frame by frame allows the dynamic virtual model to combine well with the background image. The preliminary AR image data is then processed using the light information in the available background image data to obtain the available AR image data, so that the light and shadow of the virtual model are consistent with those of the background image and the whole AR image looks more harmonious.
Drawings
Fig. 1 is a flowchart of an image acquisition method for AR display.
Fig. 2 is a flowchart of obtaining available background image data in an image acquisition method for AR display.
Fig. 3 is a flowchart of obtaining preliminary AR image data in an image acquisition method for AR display.
Fig. 4 is a flowchart of importing a virtual model according to a feature position of the virtual model in available background image data in an image acquisition method for AR display.
Fig. 5 is a flowchart of scaling a virtual model in available background image data in an image acquisition method for AR display.
Fig. 6 is a flowchart of obtaining available AR image data in an image acquisition method for AR display.
Fig. 7 is a schematic structural diagram of an image acquisition system for AR display.
Fig. 8 is a schematic structural diagram of a parameter adaptation module in an image acquisition system for AR display.
Fig. 9 is a schematic structural diagram of a preliminary AR image data determination module in an image acquisition system for AR display.
Fig. 10 is a schematic structural diagram of a virtual model importing unit in an image capturing system for AR display.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative and are not intended to limit the invention.
Specific implementations of the present invention are described in detail below with reference to specific embodiments.
As shown in fig. 1, an embodiment of the present invention provides an image acquisition method for AR display, including the following steps:
s100, acquiring original background image data;
s200, processing the original background image data according to the virtual model data to obtain available background image data, wherein the available background image data is matched with the virtual model data;
s300, importing the first frame of virtual model data into the first frame of available background image data to obtain a first frame of AR image;
s400, acquiring a motion track of the virtual model, and importing each frame of virtual model data into available background image data of a corresponding frame according to the motion track to obtain preliminary AR image data;
and S500, processing the preliminary AR image data according to the light information in the available background image data to obtain available AR image data.
It should be noted that Augmented Reality (AR) technology fuses virtual information with the real world: virtual information is simulated and then superimposed onto the real world, where it is perceived by the human senses, achieving a sensory experience beyond reality. AR applications often need to display virtual and real information in the same picture; the virtual information is generally a pre-established virtual model, displayed within a real image to combine reality and virtuality. However, the virtual model is generally a dynamic character, and if it is directly superimposed on a real background it often appears very obtrusive, so the presented AR image is not realistic enough and the viewing experience is poor.
In the embodiment of the invention, the original background image data of the real environment can be collected by a monocular or multi-view camera; when a multi-view camera is used, a three-dimensional live-action image can be obtained. The original background image data is then processed according to the virtual model data to obtain available background image data adapted to the virtual model data. Since each frame of virtual model data must be imported into the background image data, the frame rate and frame count of the available background image data must equal those of the virtual model data. The first frame of virtual model data is then imported into the first frame of available background image data to obtain a first frame of AR image; the specific position at which the first frame of the virtual model is placed in the first frame of the available background image is chosen subjectively by the operator performing the import. Next, the motion trajectory of the virtual model is acquired, and each frame of virtual model data is imported into the available background image data of the corresponding frame according to this trajectory to obtain preliminary AR image data, so that the dynamic virtual model combines well with the background image. Finally, the preliminary AR image data is processed according to the light information in the available background image data to obtain the available AR image data, so that the light and shadow of the virtual model are consistent with those of the background image and the result looks more harmonious. Note that the virtual model is generally a dynamic three-dimensional model, while the background image is generally a relatively static landscape or scene image.
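As a sketch of how the five steps fit together, the pipeline can be written as a chain of pluggable stages. Python is used here for illustration only, and every function name is a hypothetical stand-in for the operations described above, not an API named by the patent.

```python
def acquire_ar_sequence(raw_bg, model_frames, adapt, place_first, track, relight):
    """S100-S500 as a pipeline of pluggable stages (all names hypothetical).

    raw_bg:       original background frames (S100)
    model_frames: per-frame virtual model data
    adapt:        S200 - match frame rate and frame count to the model
    place_first:  S300 - operator places the model in the first frame
    track:        S400 - follow the motion trajectory frame by frame
    relight:      S500 - match the model's lighting to the scene
    """
    usable_bg = adapt(raw_bg, model_frames)                 # available background data
    first_ar = place_first(model_frames[0], usable_bg[0])   # first-frame AR image
    prelim = track(model_frames, usable_bg, first_ar)       # preliminary AR image data
    return relight(prelim, usable_bg)                       # available AR image data
```

Each stage is elaborated in the sections that follow; the point of the sketch is only the data flow from original background data to available AR image data.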
As shown in fig. 2, as a preferred embodiment of the present invention, the step of processing the original background image data according to the virtual model data to obtain the available background image data specifically includes:
s201, acquiring a frame rate and a frame number of virtual model data;
s202, adjusting the frame rate of the original background image data to enable the frame rate of the original background image data to be equal to that of the virtual model data;
s203, modifying the frame number of the original background image data to enable the frame number of the original background image data to be equal to the frame number of the virtual model data, wherein the modified original background image data is the available background image data.
In the embodiment of the present invention, in order that each frame of virtual model data can be imported into the background image data and every frame of the resulting background image data contains the virtual model, the frame rate and frame count of the virtual model data must equal the frame rate and frame count of the available background image data.
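A minimal Python sketch of this adaptation step. The nearest-neighbor resampling and the trim/loop policy are illustrative assumptions; the text only requires that the frame rate and frame count of the background end up equal to those of the model.

```python
def adapt_background(bg_frames, bg_fps, model_fps, model_frame_count):
    """Make the background usable: equal frame rate, then equal frame count."""
    # S202: frame-rate conversion by index scaling (nearest-neighbor pick)
    ratio = model_fps / bg_fps
    n = int(len(bg_frames) * ratio)
    resampled = [bg_frames[min(int(i / ratio), len(bg_frames) - 1)]
                 for i in range(n)]
    # S203: trim when too long, loop when too short
    if len(resampled) >= model_frame_count:
        return resampled[:model_frame_count]
    return [resampled[i % len(resampled)] for i in range(model_frame_count)]
```

A real system would more likely interpolate between frames when raising the frame rate, but the equal-rate, equal-count invariant is the same.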
As shown in fig. 3, as a preferred embodiment of the present invention, the step of obtaining a motion trajectory of a virtual model, and importing each frame of virtual model data into available background image data of a corresponding frame according to the motion trajectory to obtain preliminary AR image data specifically includes:
s401, obtaining a motion track of a virtual model;
s402, obtaining the relative position of each frame of virtual model according to the motion track, wherein the reference point of the relative position is the characteristic position of the previous frame of virtual model;
s403, determining the characteristic position of the virtual model in the available background image data of each frame according to the characteristic position of the virtual model in the first frame of AR image;
s404, importing the virtual model according to the characteristic position of the virtual model in the available background image data to obtain preliminary AR image data.
In the embodiment of the present invention, the motion trajectory of the virtual model is obtained first; if necessary, it is produced by scaling the original motion trajectory of the virtual model. When the operator imports the first frame of the virtual model into the first frame of the available background image, the operator determines both the specific position and the specific size of the first frame of the virtual model; if the whole virtual model is scaled at this point, the original motion trajectory is scaled by the same factor to obtain the working motion trajectory. The relative position of the virtual model in each frame is then obtained from the trajectory, the reference point of each relative position being the feature position of the virtual model in the previous frame, where the feature position is the specific position of the virtual model in the available background image data. From the feature position of the virtual model in the first frame of AR image, the feature position of the virtual model in each frame of available background image data is determined. Finally, the virtual model is imported according to its feature position in the available background image data to obtain the preliminary AR image data.
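The accumulation of per-frame relative positions onto the operator-chosen first-frame position can be sketched as follows (Python; the function name is hypothetical):

```python
def feature_positions(first_position, offsets):
    """Turn per-frame relative moves into absolute feature positions.

    first_position: operator-chosen feature position in the first AR frame.
    offsets[i]:     (a, b, c) move of frame i+1 relative to the feature
                    position of the previous frame.
    """
    positions = [first_position]
    x, y, z = first_position
    for a, b, c in offsets:
        # each offset is referenced to the previous frame's feature position
        x, y, z = x + a, y + b, z + c
        positions.append((x, y, z))
    return positions
```

Because every offset is relative to the previous frame, the whole per-frame placement is anchored solely by the first-frame AR image, which is why the patent treats that frame as the reference.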
As shown in fig. 4, as a preferred embodiment of the present invention, the step of importing the virtual model according to the feature position of the virtual model in the available background image data to obtain the preliminary AR image data specifically includes:
s4041, importing the virtual model into the available background image data according to the characteristic position,
s4042, scaling the virtual model in the available background image data to match a sensory size of the virtual model with a motion trajectory of the virtual model.
In the embodiment of the present invention, the virtual model is first imported according to its specific position in the available background image data, and the virtual model in the available background image data is then scaled so that its sensory size matches its motion trajectory.
As shown in fig. 5, as a preferred embodiment of the present invention, the step of scaling the virtual model in the available background image data specifically includes:
s40421, obtaining a relative position value of the feature position of each frame relative to the feature position of the previous frame according to the motion trajectory of the virtual model, where a represents a left-right relative value, b represents a front-back relative value, and c represents a top-bottom relative value;
s40422, extracting the value b in the relative position value, and scaling the virtual model in the available background image data according to the value b.
In the embodiment of the invention, the relative position values of the feature positions are obtained first; a relative position value is expressed as (a, b, c), where a is the left-right component, b is the front-back component, and c is the up-down component. It can be understood that when the virtual model moves only up-down or left-right, its distance to the viewer is essentially unchanged and no scaling is needed. When the virtual model moves front-back, it must be scaled, with scaling factor = 1 + K·b, where K is a positive scaling constant. When the model moves backwards, b < 0, the scaling factor is less than 1, and the model is shrunk; when it moves forwards, b > 0, the scaling factor is greater than 1, and the model is enlarged.
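The scaling rule can be sketched directly from the formula (Python; the value of K is an assumed placeholder, in practice tuned per scene):

```python
K = 0.05  # assumed scaling constant

def scale_factor(b, k=K):
    """Per-frame scale from the front-back component b of (a, b, c):
    b > 0 (toward the viewer) enlarges the model, b < 0 shrinks it, and
    purely lateral/vertical moves (b == 0) leave it unchanged."""
    return 1.0 + k * b

def sensory_sizes(first_size, offsets, k=K):
    """Apply the per-frame factors cumulatively, starting from the
    operator-chosen first-frame size; offsets are (a, b, c) moves."""
    sizes = [first_size]
    for _a, b, _c in offsets:
        sizes.append(sizes[-1] * scale_factor(b, k))
    return sizes
```

Applying the factor cumulatively matches the fact that each (a, b, c) is referenced to the previous frame rather than to the first frame.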
As shown in fig. 6, as a preferred embodiment of the present invention, the step of processing the preliminary AR image data according to the light information in the available background image data to obtain available AR image data specifically includes:
s501, acquiring light information of the environment where the available background image data is located through a camera, wherein the light information comprises: light intensity, light irradiation angle, light color;
s502, according to the light information, adjusting light source parameters of a virtual model in the preliminary AR image data to obtain available AR image data, wherein the light source parameters comprise: the intensity of the radiation of the light source, the position of the light source, the color of the light source.
In the embodiment of the invention, the radiant intensity of the light source in the virtual model can be adjusted according to the light intensity of the environment in which the background image is captured; the position of the light source in the virtual model is adjusted according to the light irradiation angle; and the color of the light source in the virtual model is adjusted according to the light color of the environment. It should be noted that extracting and analyzing light information is conventional technology, for example by analyzing the brightness values of the image's pixels or the reflected intensity of a test light.
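A hedged sketch of the light-matching step (Python; the one-to-one parameter mapping and the 2-D light placement are illustrative assumptions — the text only states which measured quantity drives which light-source parameter):

```python
import math

def direction_from_angle(angle_deg, distance=10.0):
    """Place the virtual light along the measured illumination angle
    (a 2-D simplification for illustration)."""
    rad = math.radians(angle_deg)
    return (distance * math.cos(rad), distance * math.sin(rad))

def adjust_light_source(light_info):
    """Map measured scene light to the virtual model's light-source
    parameters, following the correspondence described above."""
    return {
        "radiant_intensity": light_info["intensity"],           # intensity -> radiant intensity
        "position": direction_from_angle(light_info["angle"]),  # angle -> light position
        "color": light_info["color"],                           # color -> light color
    }
```

A renderer would consume the returned dictionary when relighting the model in each preliminary AR frame.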
As shown in fig. 7, an embodiment of the present invention further provides an image capturing system for AR display, where the system includes:
an original image obtaining module 100, configured to obtain original background image data;
the parameter adaptation module 200 is configured to process the original background image data according to the virtual model data to obtain available background image data, where the available background image data is adapted to the virtual model data;
a first frame AR image determining module 300, configured to import the first frame virtual model data into the first frame available background image data to obtain a first frame AR image;
a preliminary AR image data determining module 400, configured to obtain a motion trajectory of the virtual model, and import each frame of virtual model data into available background image data of a corresponding frame according to the motion trajectory to obtain preliminary AR image data; and
and the light adjusting module 500 is configured to process the preliminary AR image data according to the light information in the available background image data to obtain available AR image data.
In the embodiment of the invention, the original background image data of the real environment can be collected by a monocular or multi-view camera; when a multi-view camera is used, a three-dimensional live-action image can be obtained. The original background image data is then processed according to the virtual model data to obtain available background image data adapted to the virtual model data. Since each frame of virtual model data must be imported into the background image data, the frame rate and frame count of the available background image data must equal those of the virtual model data. The first frame of virtual model data is then imported into the first frame of available background image data to obtain a first frame of AR image; the specific position at which the first frame of the virtual model is placed in the first frame of the available background image is chosen subjectively by the operator performing the import. Next, the motion trajectory of the virtual model is acquired, and each frame of virtual model data is imported into the available background image data of the corresponding frame according to this trajectory to obtain preliminary AR image data, so that the dynamic virtual model combines well with the background image. Finally, the preliminary AR image data is processed according to the light information in the available background image data to obtain the available AR image data, so that the light and shadow of the virtual model are consistent with those of the background image and the result looks more harmonious. Note that the virtual model is generally a dynamic three-dimensional model, while the background image is generally a relatively static landscape or scene image.
As shown in fig. 8, as a preferred embodiment of the present invention, the parameter adaptation module 200 includes:
a frame rate and frame number obtaining unit 201, configured to obtain a frame rate and a frame number of the virtual model data;
a frame rate determining unit 202, configured to adjust a frame rate of the original background image data, so that the frame rate of the original background image data is equal to the frame rate of the virtual model data; and
a frame number determining unit 203, configured to modify the frame number of the original background image data so that the frame number of the original background image data is equal to the frame number of the virtual model data, where the modified original background image data is the available background image data.
In the embodiment of the present invention, in order that each frame of virtual model data can be imported into the background image data and every frame of the resulting background image data contains the virtual model, the frame rate and frame count of the virtual model data must equal the frame rate and frame count of the available background image data.
As shown in fig. 9, as a preferred embodiment of the present invention, the preliminary AR image data determining module 400 includes:
a motion trajectory acquisition unit 401 configured to acquire a motion trajectory of the virtual model;
a relative position obtaining unit 402, configured to obtain a relative position of each frame of virtual model according to the motion trajectory, where a reference point of the relative position is a feature position of a previous frame of virtual model;
a feature position determining unit 403, configured to determine, according to a feature position of the virtual model in the first frame of AR image, a feature position of the virtual model in each frame of available background image data; and
and a virtual model importing unit 404, configured to import the virtual model according to the feature position of the virtual model in the available background image data to obtain the preliminary AR image data.
In the embodiment of the present invention, the motion trajectory of the virtual model is first obtained, where the motion trajectory is obtained by scaling the original motion trajectory of the virtual model if necessary. When a worker imports the first frame of the virtual model into the first frame of the available background image, the worker needs to determine the specific position and the specific size of the first-frame virtual model; at this time the whole virtual model is scaled, and the original motion trajectory is scaled accordingly to obtain the motion trajectory of the virtual model. Then, the relative position of each frame of the virtual model is obtained according to the motion trajectory, where the reference point of the relative position is the feature position of the previous-frame virtual model, and the feature position is the specific position of the virtual model in the available background image data. Next, the feature position of the virtual model in each frame of available background image data is determined according to the feature position of the virtual model in the first frame of AR image. Finally, the virtual model is imported according to its feature position in the available background image data to obtain the preliminary AR image data.
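The chaining of relative positions described above, where each frame's position is referenced to the previous frame's feature position, amounts to a cumulative sum starting from the first-frame AR image's feature position. The sketch below assumes positions are (x, y, z) triples and that relative positions use the (a, b, c) convention from the claims; the function name is hypothetical.

```python
def feature_positions(first_pos, relative_positions):
    """Accumulate each frame's relative position (a, b, c) onto the previous
    frame's feature position, starting from the feature position of the
    virtual model in the first-frame AR image."""
    positions = [tuple(first_pos)]
    for a, b, c in relative_positions:
        x, y, z = positions[-1]
        positions.append((x + a, y + b, z + c))
    return positions
```

Each entry of the returned list is then the feature position at which the virtual model is imported into the corresponding frame of available background image data.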
As shown in fig. 10, as a preferred embodiment of the present invention, the virtual model importing unit 404 includes:
an importing subunit 4041, configured to import the virtual model into the available background image data according to the feature position; and
a scaling subunit 4042, configured to scale the virtual model in the available background image data so that the sensory size of the virtual model matches the motion trajectory of the virtual model.
In the embodiment of the present invention, first, the virtual model needs to be imported according to the specific position of the virtual model in the available background image data, and then the virtual model in the available background image data is scaled so as to match the sensory size of the virtual model with the motion trajectory of the virtual model.
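Claim 1 specifies that the scaling is driven by the front-back component b of each relative position value (a, b, c). One way to realize a matching "sensory size" is a perspective-style rule: accumulate b into a depth and shrink the model in inverse proportion to that depth. This is a sketch under stated assumptions (the inverse-depth law, the base depth, and the function name are not from the patent).

```python
def depth_scales(base_scale, base_depth, b_values):
    """Derive a per-frame scale for the virtual model from the front-back
    component b of each relative position value (a, b, c).  The depth is
    accumulated frame by frame, and the apparent (sensory) size shrinks in
    inverse proportion to depth, so a model twice as far away appears half
    as large."""
    depth = base_depth
    scales = [base_scale]
    for b in b_values:
        depth += b
        scales.append(base_scale * base_depth / depth)
    return scales
```

Under this rule, a model that recedes by b = 10 from a base depth of 10 is drawn at half its first-frame size, which keeps its on-screen size consistent with its motion trajectory.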
The present invention has been described in detail with reference to the preferred embodiments thereof, and it should be understood that the invention is not limited thereto, but is intended to cover modifications, equivalents, and improvements within the spirit and scope of the present invention.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, the steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in various embodiments may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (3)

1. An image acquisition method for AR display, characterized in that it comprises the following steps:
acquiring original background image data;
processing the original background image data according to the virtual model data to obtain available background image data, wherein the available background image data is matched with the virtual model data;
importing the first frame of virtual model data into the first frame of available background image data to obtain a first frame of AR image;
acquiring a motion track of a virtual model, and importing each frame of virtual model data into available background image data of a corresponding frame according to the motion track to obtain preliminary AR image data, which specifically comprises the following steps: obtaining a motion track of the virtual model; obtaining the relative position of each frame of virtual model according to the motion track, wherein the reference point of the relative position is the feature position of the previous frame of virtual model; determining the feature position of the virtual model in each frame of available background image data according to the feature position of the virtual model in the first frame of AR image; and importing the virtual model according to the feature position of the virtual model in the available background image data to obtain preliminary AR image data; the step of importing the virtual model according to the feature position of the virtual model in the available background image data to obtain preliminary AR image data specifically comprises: importing the virtual model into the available background image data according to the feature position, and scaling the virtual model in the available background image data so that the sensory size of the virtual model matches the motion track of the virtual model, which specifically comprises the following steps: obtaining a relative position value of the feature position of each frame relative to the feature position of the previous frame according to the motion track of the virtual model, wherein the relative position value is represented by (a, b, c), a representing a left-right relative value, b representing a front-back relative value, and c representing an up-down relative value; and extracting the value b from the relative position value, and scaling the virtual model in the available background image data according to the value b;
and processing the preliminary AR image data according to the light information in the available background image data to obtain available AR image data.
2. The image acquisition method for AR display according to claim 1, wherein the step of processing the original background image data according to the virtual model data to obtain the available background image data specifically includes:
acquiring the frame rate and the frame number of virtual model data;
adjusting the frame rate of the original background image data to enable the frame rate of the original background image data to be equal to that of the virtual model data;
and modifying the frame number of the original background image data to ensure that the frame number of the original background image data is equal to the frame number of the virtual model data, wherein the modified original background image data is the available background image data.
3. The image acquisition method for AR display according to claim 1, wherein the step of processing the preliminary AR image data according to the light information in the available background image data to obtain available AR image data specifically comprises:
collecting, through a camera, light information of the environment in which the available background image data is located, wherein the light information comprises: light intensity, light irradiation angle, and light color; and
adjusting light source parameters of the virtual model in the preliminary AR image data according to the light information to obtain available AR image data, wherein the light source parameters comprise: radiation intensity of the light source, position of the light source, and color of the light source.
CN202111212452.2A 2021-10-19 2021-10-19 Image acquisition method and system for AR display Active CN113641325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111212452.2A CN113641325B (en) 2021-10-19 2021-10-19 Image acquisition method and system for AR display


Publications (2)

Publication Number Publication Date
CN113641325A CN113641325A (en) 2021-11-12
CN113641325B true CN113641325B (en) 2022-02-08

Family

ID=78427276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111212452.2A Active CN113641325B (en) 2021-10-19 2021-10-19 Image acquisition method and system for AR display

Country Status (1)

Country Link
CN (1) CN113641325B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105261041A (en) * 2015-10-19 2016-01-20 联想(北京)有限公司 Information processing method and electronic device
EP3082017A1 (en) * 2015-04-15 2016-10-19 Thomson Licensing Method and system for displaying additional information associated with a content via an optical head mounted display device
CN107680164A (en) * 2016-08-01 2018-02-09 中兴通讯股份有限公司 A kind of virtual objects scale adjusting method and device
CN108629847A (en) * 2018-05-07 2018-10-09 网易(杭州)网络有限公司 Virtual objects mobile route generation method, device, storage medium and electronic equipment
CN109255841A (en) * 2018-08-28 2019-01-22 百度在线网络技术(北京)有限公司 AR image presentation method, device, terminal and storage medium
CN112181340A (en) * 2020-09-29 2021-01-05 联想(北京)有限公司 AR image sharing method and electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6867753B2 (en) * 2002-10-28 2005-03-15 University Of Washington Virtual image registration in augmented display field
CN107025662B (en) * 2016-01-29 2020-06-09 成都理想境界科技有限公司 Method, server, terminal and system for realizing augmented reality




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant