CN115766981A - Image display method and device based on augmented reality - Google Patents

Publication number
CN115766981A
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CN202211385265.9A
Other languages
Chinese (zh)
Inventor
胡耀
刘杨
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202211385265.9A
Publication of CN115766981A

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image display method and device based on augmented reality, belonging to the technical field of augmented reality. The image display method is applied to a first device in communication connection with a second device, and comprises the following steps: acquiring real content captured by the first device itself, and receiving virtual content from the second device; and displaying a first target image including the virtual content and the real content.

Description

Image display method and device based on augmented reality
Technical Field
The application belongs to the technical field of augmented reality, and particularly relates to an image display method and device based on augmented reality.
Background
Augmented Reality (AR) technology applies virtual information to the real world, superimposing virtual objects, scenes, and the like on the real scene to enhance reality. Currently, a user can see an image in which virtual content and real content are superimposed by wearing an augmented reality device such as AR glasses.
However, in a shooting scene, when a user shoots with a terminal having a shooting function, only the real content can be captured; the virtual content cannot be captured, so it is difficult to obtain shot content that includes the virtual content.
Disclosure of Invention
The embodiments of the application provide an image display method and device based on augmented reality, which can solve the problem in the related art that shot content does not include virtual content and that shot content including virtual content is therefore difficult to obtain.
In a first aspect, an embodiment of the present application provides an image display method based on augmented reality, which is applied to a first device, where the first device is in communication connection with a second device, and the image display method includes:
acquiring real content captured by the first device itself, and receiving virtual content from the second device;
displaying a first target image including the virtual content and the real content.
In a second aspect, an embodiment of the present application provides an augmented reality-based image display apparatus, including:
an acquisition module, configured to acquire real content captured by the apparatus itself under the condition that the apparatus has established a communication connection with a second device, and to receive virtual content from the second device;
a display module for displaying a first target image containing the virtual content and the real content.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, under the condition that the first device establishes communication connection with the second device, the first device acquires real content shot by the first device, receives virtual content from the second device, and displays a first target image containing the virtual content and the real content. In this way, by acquiring the virtual content from another device through the established communication connection and displaying the virtual content while displaying the real content, it can be ensured that the real content and the virtual content are simultaneously displayed in the shot content, thereby solving the problem that the shot content including the virtual content is difficult to obtain because the virtual content is not included in the shot content in the related art.
Drawings
Fig. 1 is a schematic diagram of a related art in which virtual content is not included in captured content;
fig. 2 is a schematic flowchart of an augmented reality-based image display method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another augmented reality-based image display method provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of another augmented reality-based image display method provided in an embodiment of the present application;
FIGS. 5-1 and 5-2 are schematic diagrams of hidden displays provided by embodiments of the present application;
fig. 6 is a schematic flowchart of another augmented reality-based image display method provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of a reproduction of a first target image provided by an embodiment of the present application;
fig. 8 is a schematic flowchart of another augmented reality-based image display method provided in an embodiment of the present application;
fig. 9 is a schematic diagram illustrating that virtual content is included in the shot content provided in the embodiment of the present application;
fig. 10 is a schematic structural diagram of an augmented reality-based image display device according to an embodiment of the present application;
fig. 11 is a hardware structure diagram of an electronic device implementing various embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
As described in the background of the present application, as shown in fig. 1, a user can see an image D in which virtual content B and real content C are superimposed by wearing an augmented reality device a. However, when a user performs shooting using the terminal E having a shooting function, only the real content C is often shot, and the virtual content B cannot be shot, so that the virtual content B is not included in the shot content F, and it is difficult to obtain the shot content including the virtual content B.
In view of the above, embodiments of the present application provide an image display method based on augmented reality to solve the above problems. The image display method may be applied to a first device which is communicatively connected with a second device, as shown in fig. 2, and may include the steps of:
step 101, acquiring the real content shot by the user, and receiving the virtual content from the second device.
The first device may be a device configured with a camera and having a shooting function, including but not limited to a mobile phone, a tablet computer, a digital camera, and the like. The second device may be an augmented reality device capable of presenting an overlay image of the virtual content and the real content within its own perspective; the second device may include, but is not limited to, AR glasses or the like.
The real content is content that actually exists in the real world, and may be, for example, a real scene, a real person, a real object, and the like. Specifically, when the first device starts a shooting function, the real content may be captured by a camera of the first device, and an image may be formed in the first device based on an optical imaging principle.
The virtual content may be a virtual object that does not exist in the real world. Generally, when the first device starts a shooting function, the virtual content cannot be captured by a camera of the first device, and an image cannot be formed in the first device based on an optical imaging principle. The virtual content may be, for example, cartoon images, physical images, text images, special effects (halo effect, smoke effect, etc.), and the like.
The first device and the second device may establish a communication connection based on a data exchange technology such as Bluetooth. Taking Bluetooth as an example, the first device may serve as the master device and the second device as the slave device; the master searches for the slave and initiates pairing to establish the connection, after which the two devices can send and receive data. Of course, the second device may instead serve as the master device and the first device as the slave device.
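The master/slave pairing flow described above can be sketched with ordinary TCP sockets standing in for the Bluetooth link. This is a hypothetical stand-in for illustration only; a real implementation would use an RFCOMM or BLE stack rather than TCP, and the byte string here merely represents the virtual-content payload.

```python
import socket
import threading

ready = threading.Event()
chosen = {}

def run_slave(payload):
    """Slave role (e.g. the AR glasses): listen, accept the pairing, send data."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))            # OS picks a free port
    chosen["port"] = srv.getsockname()[1]
    srv.listen(1)
    ready.set()                           # signal: the master may now connect
    conn, _ = srv.accept()
    conn.sendall(payload)                 # transmit the "virtual content"
    conn.close()
    srv.close()

def master_receive(port):
    """Master role (e.g. the phone): initiate the connection, receive the data."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        return cli.recv(1024)

t = threading.Thread(target=run_slave, args=(b"virtual-content",))
t.start()
ready.wait()
received = master_receive(chosen["port"])
t.join()
print(received)  # b'virtual-content'
```

Once the channel is up, either side can push data, which matches the patent's note that the roles of master and slave are interchangeable.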
In the embodiment of the application, the real content shot by the first device may be the real content displayed in a shooting preview interface after the first device starts a shooting function; or, the real content may be recorded and displayed on the first device in real time after the video recording function of the first device is started.
Specifically, the real content displayed in the photo preview interface may be the real content displayed in the view finder of the first device screen before the user clicks the photo button. The real content recorded and displayed in real time on the first device may be the real content in the recording process displayed in the view-finding frame of the screen of the first device after the user clicks the start video recording button and before the user clicks the end video recording button.
The first device receives the virtual content from the second device; that is, with the communication connection established between the first device and the second device, the second device sends the virtual content in the superimposed image it presents to the first device.
Step 103, displaying a first target image containing the virtual content and the real content.
The first target image includes the real content and the virtual content, and may be understood as an overlay image of the real content and the virtual content.
In the embodiment of the present application, the display position of the first target image may match the shooting state of the first device. For example, when the first device displays a photo preview interface, the first target image may be displayed in that interface; the corresponding effect is that once the user starts the photographing function of the first device, both the real content and the virtual content are displayed in the photographing preview interface. When the first device displays a real-time recording interface during video recording, the first target image may be displayed in that interface; the corresponding effect is that after the user taps the start-recording button, the real-time recording interface of the first device displays not only the real content but also the virtual content.
Further, when the first device displays the photo preview interface and the first target image is displayed in it, the first device may output a photo corresponding to the first target image in response to receiving a photographing instruction; that is, by tapping the shutter button, the user obtains a photo in which the virtual content and the real content of the preview interface are superimposed. When the first device displays the real-time recording interface during video recording and the first target image is displayed in it, the first device may output the corresponding video in response to receiving an end-recording instruction; that is, by tapping the end-recording button, the user obtains a video in which the virtual content is superimposed on the real content of the recording interface.
In this embodiment of the present application, the virtual content of the second device may change dynamically, in which case the first device may acquire the virtual content of the second device in real time and update its display accordingly. Thus, the virtual content displayed on the first device may change as the virtual content of the second device changes. For a scene in which the first device displays the photo preview interface with the first target image in it, the virtual content in the preview interface changes along with the dynamic change of the second device's virtual content; during this change, when the user sees a target virtual content of interest, the user can tap the shutter button to obtain a photo containing the real content and that target virtual content. Similarly, for a scene in which the first device displays the real-time recording interface with the first target image in it, the virtual content in the recording interface changes along with the dynamic change of the second device's virtual content; when the user taps the end-recording button, a video containing the real content and the dynamically changing virtual content is obtained.
It can be understood that, with the image display method based on augmented reality provided in the embodiment of the present application, in a case where the first device establishes a communication connection with the second device, the first device acquires real content captured by itself, receives virtual content from the second device, and displays a first target image including the virtual content and the real content. In this way, by acquiring the virtual content from another device through the established communication connection and displaying the virtual content while displaying the real content, it can be ensured that the real content and the virtual content are simultaneously displayed in the shot content, thereby solving the problem that the shot content including the virtual content is difficult to obtain because the virtual content is not included in the shot content in the related art.
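As a rough illustration of steps 101 and 103 (a sketch, not the patent's actual implementation), the first target image can be modeled as an alpha blend of the first device's camera frame with the virtual layer received from the second device; all array shapes and values below are invented for the demo.

```python
import numpy as np

def compose_first_target_image(real_frame, virtual_layer, alpha_mask):
    """Overlay the virtual layer onto the camera frame (step 103).

    real_frame, virtual_layer: HxWx3 uint8 images.
    alpha_mask: HxW floats in [0, 1]; 1 = virtual content fully opaque.
    """
    a = alpha_mask[..., None]  # broadcast the mask over the color channels
    out = real_frame * (1.0 - a) + virtual_layer * a
    return out.astype(np.uint8)

# Tiny demo: a 2x2 "camera frame"; the virtual layer is opaque only in
# the top-left pixel, transparent elsewhere.
real = np.full((2, 2, 3), 100, dtype=np.uint8)
virt = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
img = compose_first_target_image(real, virt, mask)
print(img[0, 0].tolist(), img[1, 1].tolist())  # [200, 200, 200] [100, 100, 100]
```

Because the virtual layer arrives over the communication connection rather than through the camera, re-running this blend on every preview or recording frame yields the updating behavior described above.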
In a specific implementation, before displaying the first target image including the virtual content and the real content in step 103, as shown in fig. 3, the image displaying method may further include: 102, acquiring a first position relation between the real content and the virtual content; step 103 displays a first target image including the virtual content and the real content, which may specifically include step 1031, displaying the first target image including the virtual content and the real content based on the first positional relationship.
Wherein step 102 may be performed after step 101. Displaying a first target image including the virtual content and the real content based on a first positional relationship, which may be understood as the virtual content and the real content satisfying the first positional relationship in the first target image.
In this embodiment, the first positional relationship may include a target display position of the virtual content in real content captured by the first device itself.
For example, when the real content is the real content displayed in the photographing preview interface after the photographing function of the first device is started, the first positional relationship may be a target display position of the virtual content in the photographing preview interface of the first device. When the real content is real content recorded and displayed on the first device in real time after the video recording function of the first device is started, the first position relationship may be a target display position of the virtual content in a recording interface of the first device.
More specifically, the first positional relationship may also correspond to a positional relationship of virtual content and real content in an overlay image presented by the second device itself. For example, the real content shot by the first device itself includes a target object, and the real content in the superimposed image presented by the second device itself also includes the target object; the position relationship between the virtual content in the superimposed image presented by the second device and the real content is that the virtual content is displayed below the target object, and the first position relationship may be that the virtual content is displayed below the target object in the real content shot by the first device. In this way, the first target image displayed by the first device and the superimposed image presented by the second device itself can be matched.
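A minimal sketch of applying such a first positional relationship, under the hypothetical rule that the relationship reduces to a target anchor pixel in the first device's frame (e.g. "below the target object"); the sprite and coordinates are invented for illustration.

```python
import numpy as np

def place_virtual_content(frame, sprite, anchor_xy):
    """Paste the virtual content so the first positional relationship holds.

    anchor_xy: (x, y) top-left pixel where the sprite should land in the
    real content captured by the first device itself.
    """
    out = frame.copy()
    x, y = anchor_xy
    h, w = sprite.shape[:2]
    out[y:y + h, x:x + w] = sprite  # naive opaque paste for the sketch
    return out

frame = np.zeros((4, 4, 3), dtype=np.uint8)           # stand-in real content
sprite = np.full((2, 2, 3), 255, dtype=np.uint8)      # stand-in virtual content
# Hypothetical relationship: sprite anchored one pixel right and down of origin.
result = place_virtual_content(frame, sprite, (1, 1))
print(result[1, 1].tolist(), result[0, 0].tolist())   # [255, 255, 255] [0, 0, 0]
```

A production system would blend rather than paste and would derive the anchor from the second device's superimposed image, but the positional contract is the same.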
In practical applications, the obtaining, by the first device, the first positional relationship between the real content and the virtual content may include at least two ways:
the method I comprises the following steps: the first device sends the real content to the second device; and receiving, from a second device, a first positional relationship between the real content and the virtual content.
In this first manner, the first location relationship may be determined by the second device and sent to the first device. After the first device sends the real content captured by itself to the second device, the second device may determine the first positional relationship based on the real content and a positional relationship between the target content in the image obtained by the second device and the virtual content (the positional relationship may be referred to as a second positional relationship). The image obtained by the second device may specifically be a superimposed image of the presented virtual content and the real content obtained by the second device, and the target content is the real content obtained by the second device.
Specifically, the second device may determine the second positional relationship using a SLAM (Simultaneous Localization and Mapping) algorithm, and then further determine the first positional relationship. Since obtaining the position relationship between the virtual content and the real content in the superimposed image presented by the second device according to the SLAM algorithm is a mature technology in the related art, it is not described herein again.
After the second positional relationship is determined, determining the first positional relationship based on the real content and the second positional relationship may specifically proceed as follows: the second positional relationship may include a target image element located around the virtual content in the real content obtained by the second device itself; the position of an image element matching that target image element is located in the real content sent by the first device; and the first positional relationship is then determined based on that position. It should be understood that the above schemes for determining the second positional relationship and the first positional relationship are merely examples and do not limit the scheme of the present application.
Mode two: the first device receives a second positional relationship from the second device, where the second positional relationship is the positional relationship between target content and the virtual content in the image obtained by the second device, and the target content is the real content obtained by the second device; the first device then determines the first positional relationship between the real content and the virtual content based on the second positional relationship.
In this second mode, the first positional relationship may be determined by the first device. After determining the second positional relationship between its real content and the virtual content, the second device sends the second positional relationship to the first device, and the first device may determine the first positional relationship based on the real content it captured and the second positional relationship. Specifically, the second positional relationship may include target image elements around the virtual content in the real content of the second device; the first device may determine the position of an image element matching the target image element in the real content it captured, and then determine the first positional relationship based on that position.
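The matching step that both modes rely on — locating, in one device's real content, the image element that matches a target image element from the other — might be sketched as an exhaustive sum-of-absolute-differences search. This is a toy stand-in for the feature matching a real system would use; the frames, the template, and the "display below the element" rule are all invented for the demo.

```python
import numpy as np

def locate_template(frame, template):
    """Find where the target image element appears in a grayscale frame.

    Exhaustive sum-of-absolute-differences search; returns the (row, col)
    of the best-matching top-left corner.
    """
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            score = np.abs(patch.astype(int) - template.astype(int)).sum()
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# The target element sits at row 2, col 3 of this synthetic frame.
frame = np.zeros((6, 6), dtype=np.uint8)
frame[2:4, 3:5] = 200
template = np.full((2, 2), 200, dtype=np.uint8)
row, col = locate_template(frame, template)
print((row, col))  # (2, 3)

# Hypothetical first positional relationship derived from the match:
# display the virtual content just below the matched element.
virtual_anchor = (row + template.shape[0], col)
print(virtual_anchor)  # (4, 3)
```

In mode one this search would run on the second device against the real content sent by the first device; in mode two it runs on the first device against its own captured content.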
It can be understood that, by the scheme of the first mode or the second mode, the first positional relationship between the virtual content and the real content captured by the first device itself can be accurately determined, and then the virtual content from the second device can be accurately displayed in the real content captured by the first device itself, so that the difference between the captured content of the first device and the image seen by the user through the second device can be further reduced.
Considering that, after obtaining the first target image including real content and virtual content, the user may not want others to see the virtual content in it, in an implementation manner, after the first target image is displayed in step 103, as shown in fig. 4, the augmented reality-based image display method provided in the embodiment of the present application further includes: step 104, in the case that the first device is in a privacy mode, hiding the virtual content in the first target image (as shown in fig. 5-1, the first device displays only the real content C), or hiding both the virtual content and the real content in the first target image (as shown in fig. 5-2, the first device displays a blank).
The hiding of the virtual content in the first target image may specifically include: when the first device outputs the photo corresponding to the first target image, the virtual content in the photo is hidden. When the first device outputs a video corresponding to the first target image, the virtual content in the video is hidden. In a specific implementation, the virtual content in the photo and the video may be hidden when the user calls the photo and the video for browsing.
The hiding of the virtual content and the real content in the first target image may specifically include: when the photo corresponding to the first target image is output by the first device, the virtual content and the real content in the photo are hidden, namely no content is displayed, and a blank effect is presented. When the first device outputs a video corresponding to the first target image, the virtual content and the real content in the video are hidden, that is, no content is displayed, and a blank effect is presented. In a specific implementation, when the user calls the photo and the video to browse, the virtual content and the real content in the photo and the video can be hidden.
In an embodiment of the application, the privacy mode of the first device may be turned on in response to receiving a privacy-mode turn-on instruction; for example, before calling up the photo or video for browsing, the user performs a corresponding operation to turn on the privacy mode of the first device. Alternatively, the privacy mode of the first device may be turned on automatically in response to recognizing that virtual content is present in the photo or video: when the user calls up the photo or video for browsing, the first device identifies whether it contains virtual content and, if so, automatically enters the privacy mode.
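One hypothetical way to make the hiding of step 104 possible is to keep the real and virtual layers separate in the stored capture and decide at render time what to show; a minimal sketch, with the class and its layer strings invented for illustration:

```python
class CapturedImage:
    """Store the real and virtual layers of a first target image separately,
    so the virtual content can still be hidden after capture (step 104)."""

    def __init__(self, real_layer, virtual_layer):
        self.real_layer = real_layer
        self.virtual_layer = virtual_layer

    def render(self, privacy_mode=False, hide_all=False):
        if privacy_mode and hide_all:
            return None                         # blank display, as in Fig. 5-2
        if privacy_mode:
            return [self.real_layer]            # real content only, Fig. 5-1
        return [self.real_layer, self.virtual_layer]

img = CapturedImage("real C", "virtual B")
print(img.render())                                  # ['real C', 'virtual B']
print(img.render(privacy_mode=True))                 # ['real C']
print(img.render(privacy_mode=True, hide_all=True))  # None
```

Flattening the layers into a single bitmap at capture time would make this per-layer hiding impossible, which is the design point the layered storage addresses.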
It is understood that by hiding the virtual content in the first target image, or hiding the virtual content and the real content in the first target image, others can be prevented from seeing the virtual content in the first target image, so that the privacy of the user can be protected.
When the virtual content in the first target image (or both the virtual content and the real content) is hidden to prevent others from seeing it, the user may still need to view the first target image. Thus, in an implementation manner, after the first target image is displayed in step 103, as shown in fig. 6, the augmented reality-based image display method provided in the embodiment of the present application further includes: step 105, sending the first target image to the second device, so that the first target image is included in the image presented by the second device.
The first target image sent by the first device to the second device may specifically be a photo or a video corresponding to the first target image. In a specific implementation, the first device may send the photo or the video corresponding to the first target image to the second device after outputting the photo or the video corresponding to the first target image. The first device may send the photo or the video corresponding to the first target image to the second device after performing the hiding operation.
In the embodiment of the present application, the first target image being included in the image presented by the second device can be understood as the first target image being reproduced on the second device. For example, a user wearing an augmented reality device can view the first target image through that device: after putting it on, the user can see the first target image regardless of whether the user's view is directed at the first device's screen or elsewhere. As shown in fig. 7, the user wearing the augmented reality device can see a first target image including real content C and virtual content B.
It can be understood that, with the above-mentioned scheme, the first target image is sent to the second device by the first device, so that even after the virtual content in the first target image is hidden by the first device or the virtual content and the real content in the first target image are hidden, the user can still see the first target image through the second device, for example, the user can see the first target image by wearing the augmented reality device. Thereby, it is made possible to satisfy both the privacy protection demand of the user and the demand for viewing the first target image.
In practical applications, in order to make the shot content clear, a user often performs zoom processing on the shooting interface of the first device, and this zoom processing zooms the real content captured by the first device, so that the real content differs between the superimposed images of the first device and the second device. The zoom information of the virtual content in the second device's superimposed image is usually adapted to the real content in that image; if that virtual content were directly superimposed on the real content captured by the first device, the combination could look inconsistent and unnatural. Therefore, so that the virtual content still fuses well with the real content after the real content captured by the first device is zoomed, in an embodiment the augmented reality-based image display method further includes: the first device receives a zoom operation performed by the user on the real content; in response to the zoom operation, it acquires the zoom-processed real content and the zoom-processed virtual content; and it displays a second target image based on the zoom-processed real content and the zoom-processed virtual content, where the second target image includes both.
The real content may be zoomed before the first device displays the first target image or after the first device displays the first target image.
Specifically, when the real content is zoomed before the first device displays the first target image, the first device displays the second target image, which is the first target image, based on the zoomed real content and the zoomed virtual content. The corresponding effects may be: before a user zooms real content displayed on first equipment, a shooting interface of the first equipment does not contain the virtual content; and after zooming the real content displayed on the first device, directly displaying the virtual content subjected to zooming and the real content subjected to zooming in the shooting interface of the first device by the user.
When the real content is zoomed after the first device displays the first target image, the first device may display a second target image that may correspond to an image further adjusted based on the first target image based on the zoomed real content and the zoomed virtual content. The corresponding effect may be: after a first target image is displayed in a shooting interface of first equipment, responding to a user to carry out zooming processing on real content in the first target image, zooming the real content in the first target image, and correspondingly zooming virtual content in the first target image, so as to obtain a second target image containing the zoomed real content and the zoomed virtual content.
The positions of the zoom-processed real content and the zoom-processed virtual content may satisfy the first positional relationship.
By adopting the above scheme, the real content after zoom processing is acquired, the virtual content after zoom processing is acquired, and a second target image including both is displayed, so that the virtual content and the real content in the shooting interface of the first device combine more harmoniously and the superimposed image appears more natural.
In practical applications, the acquiring, by the first device, the virtual content after the zoom processing may include at least two ways:
the first method is as follows: the first equipment acquires zooming information of the real content; and the first equipment performs zooming processing on the virtual content from the second equipment according to the zooming information to obtain the virtual content after zooming processing.
In the first mode, the first device performs the zoom processing on the virtual content from the second device. The virtual content from the second device, that is, the object of the zoom processing performed by the first device, may be the virtual content not yet displayed in the shooting interface of the first device before the first target image is displayed, or may be the virtual content in the first target image after the first target image is displayed.
In a specific implementation, the first device performing zoom processing on the virtual content from the second device according to the zoom information may be synchronous zooming of that virtual content: the categories of zoom parameters applied to the virtual content from the second device, and the degree of change of each such parameter, are the same as those of the real content in the shooting interface of the first device.
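As a minimal sketch of the first mode, the first device can apply the zoom factor measured on the real content to the virtual content it received, scaling both size and position about the zoom center so that the first positional relationship is preserved. The `VirtualContent` representation and function below are hypothetical illustrations, not part of the patent:

```python
from dataclasses import dataclass, replace

@dataclass
class VirtualContent:
    # Position and size of the virtual overlay in the shooting
    # interface, in pixels (hypothetical representation).
    x: float
    y: float
    width: float
    height: float

def zoom_virtual_content(content: VirtualContent, factor: float,
                         cx: float, cy: float) -> VirtualContent:
    """Synchronously zoom the virtual content by the same factor the
    user applied to the real content, scaling about the zoom center
    (cx, cy) so the positional relationship is preserved."""
    return replace(
        content,
        x=cx + (content.x - cx) * factor,
        y=cy + (content.y - cy) * factor,
        width=content.width * factor,
        height=content.height * factor,
    )
```

Because `replace` returns a new object, the unzoomed virtual content received from the second device remains available, which matters if the user later zooms back out.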
Alternatively, the second mode: the first equipment acquires zooming information of the real content; the first device sends the zoom information to the second device; the first device receives zoom processed virtual content from the second device.
In the second mode, the second device may perform zoom processing on the virtual content. Specifically, after receiving the zoom information sent by the first device, the second device performs zoom processing on the virtual content displayed in the second device based on the zoom information to obtain the virtual content after zoom processing, and sends the virtual content after zoom processing to the first device.
The second device performing zoom processing on the virtual content based on the zoom information may be synchronous zooming of the virtual content: the categories of zoom parameters applied to the virtual content, and the degree of change of each such parameter, are the same as those of the real content in the shooting interface of the first device.
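A minimal sketch of the second-mode exchange, assuming a simple JSON message format (the patent does not specify any wire format): the first device packs its zoom information, and the second device applies it to its own virtual content and returns the zoomed result.

```python
import json

def build_zoom_request(zoom_factor: float) -> bytes:
    # First device: package the zoom information for sending.
    return json.dumps({"type": "zoom_info",
                       "factor": zoom_factor}).encode("utf-8")

def handle_zoom_request(message: bytes, virtual_size: tuple) -> dict:
    # Second device: zoom its own displayed virtual content by the
    # received factor and build the reply for the first device.
    info = json.loads(message.decode("utf-8"))
    if info["type"] != "zoom_info":
        raise ValueError("unexpected message type")
    w, h = virtual_size
    return {"type": "zoomed_virtual_content",
            "width": w * info["factor"],
            "height": h * info["factor"]}
```

In this mode the heavier work stays on the second device, which already holds the virtual content; the first device only forwards zoom information and receives the processed result.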
It can be understood that, through either the first mode or the second mode, the first device can accurately obtain the virtual content after zoom processing, so that the virtual content and the real content in the shooting interface of the first device combine more harmoniously and the superimposed image appears more natural.
In practical applications, in order to make the photographed content clear, a user may turn on a fill light or a flash to adjust the shooting environment of the first device. Such adjustment may change the brightness and/or color temperature of the real content photographed by the first device, so that it differs from the real content in the superimposed image of the second device. Since the brightness and color temperature of the virtual content in the second device's superimposed image are usually adapted to the real content in that image, directly superimposing the virtual content on the real content photographed by the first device may produce an incongruous, unnatural combination. Therefore, in order to enable the virtual content to still fuse well with the real content captured by the first device in such circumstances, in an embodiment, the augmented-reality-based image display method of the embodiment of the present application further includes: the first device receives an adjustment operation performed by a user on a target parameter of the real environment corresponding to the real content; in response to the adjustment operation, the first device acquires the real content after the target parameter is changed and acquires the virtual content after the target parameter is changed; the first device displays a third target image based on the real content after the target parameter is changed and the virtual content after the target parameter is changed, where the third target image includes the real content after the target parameter is changed and the virtual content after the target parameter is changed.
Wherein the target parameter may comprise a brightness and/or a color temperature. The target parameter of the real environment corresponding to the real content may be changed before the first target image is displayed by the first device or after the first target image is displayed by the first device.
Specifically, when the target parameter of the real environment corresponding to the real content changes before the first device displays the first target image, the first device displays, based on the real content after the target parameter changes and the virtual content after the target parameter changes, a third target image that is the first target image. The corresponding effect may be: before the shooting environment of the first device is adjusted, the shooting interface of the first device does not contain the virtual content; after the shooting environment of the first device is adjusted, the virtual content and the real content with the changed target parameter are displayed directly in the shooting interface of the first device.
When the target parameter of the real environment corresponding to the real content changes after the first device displays the first target image, the first device may display, based on the real content whose target parameter has changed and the virtual content whose target parameter has changed, a third target image that corresponds to an image further adjusted on the basis of the first target image. The corresponding effect may be: after the first target image is displayed in the shooting interface of the first device, in response to an adjustment of the shooting environment of the first device, the target parameter of the real content in the first target image changes and the target parameter of the virtual content in the first target image changes correspondingly, thereby obtaining a third target image containing the real content after the target parameter changes and the virtual content after the target parameter changes.
The positions of the real content after the target parameter is changed and the virtual content after the target parameter is changed can satisfy the first position relationship.
By adopting the above scheme, the real content after the target parameter changes is acquired, the virtual content after the target parameter changes is acquired, and a third target image containing both is displayed, so that the virtual content and the real content in the shooting interface of the first device combine more harmoniously and the superimposed image appears more natural.
In practical application, the obtaining, by the first device, the virtual content with the changed target parameter may include at least the following two ways:
the first method is as follows: the first equipment acquires the target parameters; and the first equipment processes the virtual content from the second equipment according to the target parameter to obtain the virtual content with the changed target parameter.
In the first mode, the first device processes the virtual content from the second device. The virtual content from the second device, that is, the object processed by the first device, may be the virtual content not yet displayed in the shooting interface of the first device before the first target image is displayed, or may be the virtual content in the first target image after the first target image is displayed.
In a specific implementation, the first device processing the virtual content from the second device according to the target parameter may be synchronous processing of that virtual content: the degree of change of the target parameter of the virtual content from the second device is the same as the degree of change of the target parameter of the real content in the shooting interface of the first device.
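One hypothetical way the first device could carry out the first mode for target parameters: apply the brightness gain and a simplified color-temperature shift (warmer raises red and lowers blue) to each RGB pixel of the virtual content. The pixel model and parameter names below are illustrative assumptions, not the patent's prescribed processing:

```python
def adjust_virtual_content(pixels, brightness_gain=1.0, warm_shift=0):
    """Apply the brightness/color-temperature change measured on the
    real content to the virtual content's RGB pixels (simplified
    model: a positive warm_shift raises R and lowers B equally)."""
    out = []
    for r, g, b in pixels:
        r2 = round(r * brightness_gain) + warm_shift
        g2 = round(g * brightness_gain)
        b2 = round(b * brightness_gain) - warm_shift
        # Clamp each channel to the valid 0-255 range.
        out.append(tuple(max(0, min(255, c)) for c in (r2, g2, b2)))
    return out
```

Applying the same gain and shift to both layers keeps the virtual content from looking darker or cooler than the fill-lit real content it is composited over.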
Alternatively, the second mode: the first equipment acquires the target parameters; the first device sends the target parameters to the second device; and the first equipment receives the virtual content with the changed target parameters from the second equipment.
In the second mode, the second device may process the virtual content. Specifically, after receiving the target parameter sent by the first device, the second device processes the virtual content displayed in the second device based on the target parameter to obtain the virtual content with the changed target parameter, and sends the virtual content with the changed target parameter to the first device.
The second device processes the virtual content based on the target parameter, which may be synchronous processing of the virtual content. The change degree of the target parameter of the virtual content is the same as that of the target parameter of the real content in the shooting interface of the first device.
It can be understood that, through either the first mode or the second mode, the first device can accurately obtain the virtual content after the target parameter is changed, so that the virtual content and the real content in the shooting interface of the first device combine more harmoniously and the superimposed image appears more natural.
Based on the augmented reality-based image display method provided by the above embodiment of the present application, the following further explains the scheme provided by the embodiment of the present application in combination with the interaction between the first device and the second device. As shown in fig. 8, the image display method based on augmented reality provided in the embodiment of the present application may specifically include:
step 201, under the condition that the first device establishes communication connection with the second device, the first device obtains the real content shot by itself and sends the real content to the second device.
As shown in fig. 9, the first device and the second device may establish a Bluetooth-based communication connection H and complete data transmission over it.
Step 202, the second device determines a first position relationship between the virtual content and the real content shot by the first device based on the real content and a second position relationship between the target content in the image obtained by the second device and the virtual content, and sends the virtual content and the first position relationship to the first device.
Wherein the target content is real content obtained by the second device itself.
Step 203, the first device displays a first target image based on the real content, the virtual content and the first position relation shot by the first device, and sends the first target image to the second device; wherein the real content and the virtual content are included in the first target image, and the positions of the real content and the virtual content satisfy the first position relationship.
As shown in fig. 9, a superimposed image F containing virtual content and real content photographed by itself is displayed on the first device.
Step 204, in the case that the first device is in the privacy mode, the first device hides the virtual content in the first target image, or the first device hides the virtual content and the real content in the first target image.
Step 205, the second device presents the first target image.
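The interaction of steps 201 to 205 can be sketched end to end. The two classes below are hypothetical stand-ins for the first and second devices; the patent defines the steps, not this API:

```python
class FirstDevice:
    def __init__(self, privacy_mode=False):
        self.privacy_mode = privacy_mode
        self.shown = None  # what this device's shooting interface shows

    def capture(self):
        return "real_frame"  # step 201: real content shot by itself

    def display(self, real, virtual, relation):
        # Step 203: compose the first target image and show it.
        target = {"real": real, "virtual": virtual, "relation": relation}
        # Step 204: in privacy mode only the local display hides the
        # virtual content; the composed image itself is unchanged.
        self.shown = ({**target, "virtual": None}
                      if self.privacy_mode else target)
        return target

class SecondDevice:
    def __init__(self):
        self.presented = None

    def place(self, real):
        # Step 202: derive the first positional relationship from the
        # second positional relationship (details omitted here).
        return "virtual_overlay", {"anchor": (0.5, 0.5)}

    def present(self, target):
        self.presented = target  # step 205

def ar_display_session(first, second):
    real = first.capture()                           # step 201
    virtual, relation = second.place(real)           # step 202
    target = first.display(real, virtual, relation)  # steps 203-204
    second.present(target)                           # step 205
    return target
```

Reading step 204 as affecting only the first device's local display is one interpretation; an implementation could equally hide the virtual content in the image sent back to the second device.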
By adopting the image display method based on augmented reality provided by the embodiment of the application, under the condition that the first device and the second device are in communication connection, the first device acquires real content shot by the first device, receives virtual content from the second device, and displays a first target image containing the virtual content and the real content. In this way, by acquiring the virtual content from another device through the established communication connection and displaying the virtual content while displaying the real content, it can be ensured that the real content and the virtual content are simultaneously displayed in the shot content, thereby solving the problem that the shot content including the virtual content is difficult to obtain because the virtual content is not included in the shot content in the related art.
In the augmented-reality-based image display method provided by the embodiment of the present application, the execution subject may be an augmented-reality-based image display device, or a control module in such a device for executing the method. In the embodiments of the present application, an augmented-reality-based image display device executing the method is taken as the example in describing the augmented-reality-based image display device provided by the embodiments of the present application.
An embodiment of the present application further provides an augmented reality-based image display apparatus 300, as shown in fig. 10, the augmented reality-based image display apparatus 300 includes:
an obtaining module 310, configured to obtain the real content photographed by the apparatus itself and receive the virtual content from the second device when the apparatus establishes a communication connection with the second device.
The augmented reality-based image display device may be a camera-equipped device with a shooting function, including but not limited to a mobile phone, a tablet computer, a digital camera, and the like. The second device may be an augmented reality device capable of presenting an overlay image of the virtual content and the real content within its own perspective; the second device may include, but is not limited to, AR glasses or the like.
The acquisition module acquires real content shot by the device, wherein the real content can be displayed in a shooting preview interface after the device starts a shooting function; or, the real content may be recorded and displayed in real time on the device after the video recording function is started.
The obtaining module receives the virtual content from the second device, which may be based on the apparatus establishing a communication connection with the second device, and the second device sends the virtual content in the overlay image presented by itself to the apparatus.
A display module 320 for displaying a first target image including the virtual content and the real content.
The first target image includes the real content and the virtual content, and may be understood as an overlay image of the real content and the virtual content.
The display position of the first target image may be matched with a photographing state of the apparatus. For example, when the apparatus displays a photo preview interface, the first target image may be displayed in the photo preview interface. The corresponding effects that can be achieved are: and starting the photographing function of the device by the user, wherein the real content and the virtual content are displayed in a photographing preview interface of the device. When the apparatus displays a real-time recording interface during video recording, the first target image may be displayed in the real-time recording interface. The corresponding effects that can be achieved are: and after the user clicks a video recording starting button, not only real content but also the virtual content is displayed in a real-time recording interface of the device.
Further, when the device displays a photographing preview interface and a first target image is displayed in the photographing preview interface, the device may output a photo corresponding to the first target image based on the received photographing instruction. Namely, the user can obtain a photo containing the virtual content and the real content superposed in the photo preview interface by clicking the photo button. When the device displays a real-time recording interface in the video recording process and the first target image is displayed in the real-time recording interface, the device can output a corresponding video based on the received video recording ending instruction. Namely, the user can obtain a video containing virtual content and superimposed real content in the real-time recording interface by clicking the video recording ending button.
In this embodiment of the application, the virtual content of the second device is dynamically changeable, so the apparatus may acquire the virtual content of the second device in real time and update the display accordingly. Further, the virtual content displayed on the apparatus may change as the virtual content of the second device dynamically changes. For the scene in which the first target image is displayed in a photo preview interface of the apparatus, the virtual content may change in the photo preview interface along with the dynamic change of the virtual content of the second device; during this change, when the user sees a target virtual content of interest, the user can tap the photographing key to obtain a photo containing the real content and that target virtual content. For the scene in which the apparatus displays a real-time recording interface during video recording and the first target image is displayed in that interface, the virtual content may change in the real-time recording interface along with the dynamic change of the virtual content of the second device; when the user taps the end-recording button, a video containing the real content and the dynamically changing virtual content can be obtained.
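The display behavior described above amounts to compositing the current virtual content onto each real frame as it arrives. A toy per-pixel blend, assuming list-of-RGB-tuple frames with `None` marking transparent virtual pixels (an illustrative format only):

```python
def composite_frame(real_pixels, virtual_pixels, alpha=1.0):
    """Blend the virtual content over the real frame; where the
    virtual layer is None, the real pixel shows through unchanged."""
    out = []
    for real_px, virt_px in zip(real_pixels, virtual_pixels):
        if virt_px is None:
            out.append(real_px)  # transparent: keep the real content
        else:
            out.append(tuple(round(alpha * v + (1 - alpha) * r)
                             for r, v in zip(real_px, virt_px)))
    return out
```

Running this per preview frame with the latest virtual content received from the second device yields the dynamically updating overlay the user photographs or records.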
By adopting the image display device based on augmented reality provided by the embodiment of the application, under the condition that the device establishes communication connection with the second equipment, the real content shot by the device is acquired, the virtual content from the second equipment is received, and the first target image containing the virtual content and the real content is displayed. In this way, by acquiring the virtual content from another device through the established communication connection and displaying the virtual content while displaying the real content, the real content and the virtual content can be ensured to be simultaneously displayed in the shot content, thereby solving the problem that the shot content including the virtual content is difficult to obtain because the virtual content is not included in the shot content in the related art.
In one embodiment, the obtaining module is further configured to obtain the first positional relationship between the real content and the virtual content before the display module displays the first target image including the virtual content and the real content. The display module is specifically configured to display a first target image that includes the virtual content and the real content based on the first positional relationship.
In one embodiment, the apparatus further comprises a sending module, configured to send the real content to the second device; the obtaining module is configured to receive, from the second device, a first positional relationship between the real content and the virtual content;
or, the obtaining module is configured to receive a second location relationship from the second device, and determine, based on the second location relationship, a first location relationship between the real content and the virtual content; the second positional relationship is a positional relationship between target content in the image obtained by the second device and the virtual content, wherein the target content is real content obtained by the second device itself.
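One hedged reading of how the first positional relationship could be derived from the second: treat the second positional relationship as an offset of the virtual content from the target content in the second device's image, locate the same target content in the first device's real content, and rescale the offset by the target's apparent size. This similarity-transform assumption is an illustration, not the patent's prescribed computation:

```python
def derive_first_relation(second_relation, target_in_second,
                          target_in_first):
    """second_relation: (dx, dy) offset of the virtual content from
    the target content in the second device's image.
    target_in_second / target_in_first: (x, y, width) of the target
    content as it appears in each device's image."""
    _, _, sw = target_in_second
    fx, fy, fw = target_in_first
    scale = fw / sw  # apparent-size ratio between the two views
    dx, dy = second_relation
    # First positional relationship: where the virtual content should
    # sit in the first device's frame, anchored to the target content.
    return (fx + dx * scale, fy + dy * scale)
```

Either device could run this mapping, which is why the embodiments allow the first positional relationship to be computed on the second device and sent over, or computed locally from the received second positional relationship.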
In one embodiment, after the displaying the first target image, the displaying module is further configured to hide the virtual content in the first target image or hide the virtual content and the real content in the first target image if the apparatus is in a privacy mode.
In one embodiment, the apparatus further comprises a sending module, after the displaying the first target image, for sending the first target image to the second device such that the first target image is included in an image presented by the second device.
In one embodiment, the obtaining module is configured to receive a zoom operation performed by a user on real content; responding to the zooming operation, acquiring real content after zooming processing, and acquiring virtual content after zooming processing;
the display module is used for displaying a second target image based on the real content after the zooming processing and the virtual content after the zooming processing; wherein the second target image includes the real content after the zoom processing and the virtual content after the zoom processing.
In an embodiment, the obtaining module is configured to obtain zoom information of the real content, and perform zoom processing on the virtual content from the second device according to the zoom information to obtain a virtual content after zoom processing;
or, the device also comprises a sending module,
the acquisition module is used for acquiring zoom information of the real content;
the sending module is configured to send the zoom information to the second device;
the acquisition module is used for receiving the virtual content after zooming processing from the second equipment.
In one embodiment, the obtaining module is configured to receive an adjustment operation of a user on a target parameter of a real environment corresponding to real content; responding to the adjustment operation, acquiring real content after the target parameter is changed, and acquiring virtual content after the target parameter is changed;
the display module is used for displaying a third target image based on the real content of the changed target parameters and the virtual content of the changed target parameters; the third target image includes real content of the changed target parameter and virtual content of the changed target parameter.
In an embodiment, the obtaining module is configured to obtain the target parameter, and process the virtual content from the second device according to the target parameter to obtain a virtual content with a changed target parameter;
or, the device also comprises a sending module,
the acquisition module is used for acquiring the target parameters;
the sending module is used for sending the target parameters to the second equipment;
the acquisition module is used for receiving the virtual content with the changed target parameters from the second equipment; wherein the target parameter comprises a brightness and/or a color temperature.
The augmented reality-based image display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The augmented reality-based image display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image display device based on augmented reality provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 9, and is not described herein again to avoid repetition.
According to the image display device based on augmented reality provided by the embodiment of the application, under the condition that the device is in communication connection with a second device, real content shot by the device is obtained, virtual content from the second device is received, and a first target image containing the virtual content and the real content is displayed. In this way, by acquiring the virtual content from another device through the established communication connection and displaying the virtual content while displaying the real content, it can be ensured that the real content and the virtual content are simultaneously displayed in the shot content, thereby solving the problem that the shot content including the virtual content is difficult to obtain because the virtual content is not included in the shot content in the related art.
Optionally, as shown in fig. 11, an electronic device 400 is further provided in this embodiment of the present application, and includes a processor 410, a memory 409, and a program or an instruction stored in the memory 409 and executable on the processor 410, where the program or the instruction is executed by the processor 410 to implement each process of the embodiment of the image display method based on augmented reality, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing the embodiment of the present application.
The electronic device 400 includes, but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, and a processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 410 through a power management system, so that charging, discharging, and power-consumption management functions are implemented through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described in detail here. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 410 is configured to obtain real content photographed by itself, and receive virtual content from the second device; displaying a first target image including the virtual content and the real content.
According to the electronic device provided by the embodiment of the application, under the condition that the communication connection is established with the second device, the real content shot by the first device is obtained, the virtual content from the second device is received, and the first target image containing the virtual content and the real content is displayed. In this way, by acquiring the virtual content from another device through the established communication connection and displaying the virtual content while displaying the real content, it can be ensured that the real content and the virtual content are simultaneously displayed in the shot content, thereby solving the problem that the shot content including the virtual content is difficult to obtain because the virtual content is not included in the shot content in the related art.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during information transmission and reception or during a call. Specifically, after receiving downlink data from a base station, the radio frequency unit 401 passes it to the processor 410 for processing; it also transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 402, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the electronic apparatus 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a Graphics Processing Unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of a still picture or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 401.
The electronic device 400 also includes at least one sensor 405, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the electronic apparatus 400 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 4071 using a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 410, and receives and executes commands sent by the processor 410. In addition, the touch panel 4071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 4071, the user input unit 407 may include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again. Further, the touch panel 4071 can be overlaid on the display panel 4061; when the touch panel 4071 detects a touch operation on or near it, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and the processor 410 then provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in FIG. 11 the touch panel 4071 and the display panel 4061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
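The touch-panel pipeline described above (touch detection device, then touch controller, then processor 410) can be sketched as follows. The class names, the normalized raw-coordinate convention, and the returned event type are illustrative assumptions, not the patented design:

```python
from dataclasses import dataclass


@dataclass
class TouchEvent:
    x: float  # touch point coordinate in pixels
    y: float


class TouchController:
    """Converts raw detection signals into touch-point coordinates,
    mirroring the touch-controller stage of the pipeline."""

    def __init__(self, width_px, height_px):
        self.width_px = width_px
        self.height_px = height_px

    def to_coordinates(self, raw_u, raw_v):
        # raw_u / raw_v are normalized readings in [0, 1] reported
        # by the touch detection device.
        return TouchEvent(raw_u * self.width_px, raw_v * self.height_px)


class Processor:
    """Stands in for processor 410: classifies the event and decides
    the visual output to draw on the display panel."""

    def handle(self, event: TouchEvent):
        return ("tap", event.x, event.y)
```

For example, on a 1080 x 2400 panel, a raw reading of (0.5, 0.25) is converted to the pixel coordinate (540, 600) before being dispatched to the processor.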
The interface unit 408 is an interface for connecting an external device to the electronic apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phonebook), and the like. Further, the memory 409 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 410 is the control center of the electronic device; it connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby monitoring the electronic device as a whole. The processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may alternatively not be integrated into the processor 410.
The electronic device 400 may further comprise a power supply 411 (e.g. a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 400 includes some functional modules that are not shown, and are not described in detail herein.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium. When the program or the instruction is executed by a processor, the processes of the above embodiments of the augmented-reality-based image display method are implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement the processes of the above embodiments of the augmented-reality-based image display method, and the same technical effect can be achieved.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image display method based on augmented reality, applied to a first device, characterized in that the first device is in communication connection with a second device, and the image display method comprises the following steps:
acquiring real content captured by the first device itself, and receiving virtual content from the second device;
displaying a first target image including the virtual content and the real content.
2. The image display method according to claim 1, wherein before the displaying of the first target image containing the virtual content and the real content, the method further comprises:
acquiring a first position relation between the real content and the virtual content;
the displaying a first target image containing the virtual content and the real content includes:
displaying a first target image including the virtual content and the real content based on the first positional relationship.
3. The image display method according to claim 2, wherein the acquiring the first positional relationship between the real content and the virtual content includes:
transmitting the real content to the second device;
receiving, from the second device, a first positional relationship between the real content and the virtual content;
or,
receiving a second positional relationship from the second device, where the second positional relationship is a positional relationship between target content in an image obtained by the second device and the virtual content, and the target content is real content obtained by the second device itself;
determining a first positional relationship between the real content and the virtual content based on the second positional relationship.
4. The image display method according to claim 1, wherein after said displaying the first target image, the image display method further comprises:
the first device hides the virtual content in the first target image or the virtual content and the real content in the first target image if the first device is in a privacy mode.
5. The image display method according to claim 4, wherein after said displaying the first target image, the image display method further comprises:
sending the first target image to the second device such that the first target image is included in an image presented by the second device.
6. The image display method according to claim 1, characterized in that the image display method further comprises:
receiving a zoom operation of a user on the real content;
in response to the zoom operation, acquiring the real content after zoom processing, and acquiring the virtual content after zoom processing;
displaying a second target image based on the real content after zoom processing and the virtual content after zoom processing;
wherein the second target image includes the real content after zoom processing and the virtual content after zoom processing.
7. The image display method according to claim 6, wherein the acquiring the virtual content after zoom processing includes:
acquiring zoom information of the real content;
performing zoom processing on the virtual content from the second device according to the zoom information, to obtain the virtual content after zoom processing;
or,
acquiring zoom information of the real content;
sending the zoom information to the second device;
receiving, from the second device, the virtual content after zoom processing.
8. The image display method according to claim 1, characterized in that the image display method further comprises:
receiving an adjustment operation of a user on a target parameter of the real environment corresponding to the real content;
in response to the adjustment operation, acquiring the real content after the target parameter is changed, and acquiring the virtual content after the target parameter is changed;
displaying, by the first device, a third target image based on the real content after the target parameter is changed and the virtual content after the target parameter is changed;
wherein the third target image includes the real content after the target parameter is changed and the virtual content after the target parameter is changed.
9. The image display method according to claim 8, wherein the acquiring of the virtual content after the target parameter is changed comprises:
acquiring the target parameter;
processing the virtual content from the second device according to the target parameter, to obtain the virtual content after the target parameter is changed;
or,
acquiring the target parameter;
sending the target parameters to the second device;
receiving, from the second device, the virtual content after the target parameter is changed;
wherein the target parameter comprises a brightness and/or a color temperature.
10. An augmented reality-based image display device, characterized by comprising:
an acquisition module, configured to acquire real content captured by the device itself in a case where the device has established a communication connection with a second device, and to receive virtual content from the second device;
a display module, configured to display a first target image containing the virtual content and the real content.
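The core flow of claims 1 and 2 (the first device captures real content, receives virtual content from the second device, and displays a composite first target image positioned by the first positional relationship) can be sketched as follows. The function name, the pixel-grid representation, and the reduction of the positional relationship to a simple pixel offset are illustrative assumptions, not the patented implementation:

```python
def compose_target_image(real_content, virtual_content, offset):
    """Overlay virtual content onto real content at the position given by
    a first positional relationship, here simplified to a (dy, dx) pixel
    offset.

    real_content and virtual_content are 2-D lists of pixel values; a
    value of None in the virtual layer is treated as transparent, so the
    real pixel shows through.  A minimal sketch of claims 1-2.
    """
    dy, dx = offset
    out = [row[:] for row in real_content]  # copy the captured real frame
    for y, vrow in enumerate(virtual_content):
        for x, v in enumerate(vrow):
            ty, tx = y + dy, x + dx
            if v is not None and 0 <= ty < len(out) and 0 <= tx < len(out[0]):
                out[ty][tx] = v  # opaque virtual pixel occludes the real one
    return out
```

In a real system the positional relationship would be a pose or transform rather than a pixel offset, and the overlay would be done by the GPU with alpha blending, but the claim structure (capture, receive, position, display) is the same.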
CN202211385265.9A 2022-11-07 2022-11-07 Image display method and device based on augmented reality Pending CN115766981A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211385265.9A CN115766981A (en) 2022-11-07 2022-11-07 Image display method and device based on augmented reality


Publications (1)

Publication Number Publication Date
CN115766981A true CN115766981A (en) 2023-03-07

Family

ID=85357120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211385265.9A Pending CN115766981A (en) 2022-11-07 2022-11-07 Image display method and device based on augmented reality

Country Status (1)

Country Link
CN (1) CN115766981A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination