CN112037314A - Image display method, image display device, display equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112037314A
Authority
CN
China
Prior art keywords
display
image
real scene
information
display object
Prior art date
Legal status
Pending
Application number
CN202010898669.2A
Other languages
Chinese (zh)
Inventor
侯欣如
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010898669.2A priority Critical patent/CN112037314A/en
Publication of CN112037314A publication Critical patent/CN112037314A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Abstract

The embodiments of the disclosure provide an image display method, which includes: acquiring at least one frame of a real scene image through at least one image acquisition device; determining attribute information of a display object in the at least one frame of the real scene image, and acquiring virtual effect data corresponding to the display object; determining, based on the attribute information, a first coordinate conversion relation between a first coordinate system in which the real scene image is located and a second coordinate system in which the image displayed on the display screen is located; determining display information of the virtual effect data based on the first coordinate conversion relation; and rendering a virtual effect image based on the virtual effect data and the display information, and displaying, on a display device, an augmented reality effect in which the virtual effect image is superimposed on the display object. The embodiments of the disclosure also provide an image display apparatus, a display device, and a computer-readable storage medium.

Description

Image display method, image display device, display equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image display method and apparatus, a display device, and a computer-readable storage medium.
Background
At present, at large-scale exhibitions, such as historical relic exhibitions, automobile shows, building exhibitions at construction sites, and building-planning sand table exhibitions, visitors can only see the exhibits themselves at their current degree of completion. Presenting further detailed information about an exhibit, or its expected finished appearance, mostly depends on an interpreter's explanation or a single promotional video, so the exhibition effect is neither flexible nor rich.
Disclosure of Invention
The embodiment of the disclosure provides an image display method and device, a display device and a computer readable storage medium.
The technical scheme of the embodiment of the disclosure is realized as follows:
the embodiment of the present disclosure provides an image display method, including:
acquiring at least one frame of real scene image through at least one image acquisition device;
determining attribute information of a display object in the at least one frame of real scene image, and acquiring virtual effect data corresponding to the display object;
determining, based on the attribute information, a first coordinate conversion relation between a first coordinate system in which the real scene image is located and a second coordinate system in which the image displayed on the display screen is located;
determining display information of the virtual effect data based on the first coordinate conversion relation;
and rendering a virtual effect image based on the virtual effect data and the display information, and displaying, on a display device, an augmented reality effect in which the virtual effect image is superimposed on the display object.
An embodiment of the present disclosure provides an image display device, the device including:
the acquisition unit is used for acquiring at least one frame of real scene image through at least one image acquisition device;
the determining unit is used for determining attribute information of a display object in the at least one frame of real scene image;
the acquisition unit is used for acquiring virtual effect data corresponding to the display object;
the first processing unit is used for determining a first coordinate conversion relation between a first coordinate system where the real scene image is located and a second coordinate system where the image displayed on the display screen is located based on the attribute information;
the second processing unit is used for determining the display information of the virtual effect data based on the first coordinate conversion relation and performing rendering processing to obtain a virtual effect image based on the virtual effect data and the display information;
and the display unit is used for displaying the augmented reality effect in which the display object and the virtual effect image are superimposed.
The disclosed embodiments provide a display device comprising a camera, a display, a processor and a memory for storing a computer program capable of running on the processor;
the camera, the display, the processor and the memory are connected through a communication bus;
the processor, in combination with the camera and the display, implements the method provided by the embodiments of the present disclosure when running the computer program stored in the memory.
The disclosed embodiments also provide a computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the methods provided by the disclosed embodiments.
The embodiment of the disclosure has the following beneficial effects:
the image display method provided by the embodiment of the disclosure acquires at least one frame of real scene image through at least one image acquisition device; determining attribute information of a display object in at least one frame of real scene image, and acquiring virtual effect data corresponding to the display object; determining a first coordinate conversion relation between a first coordinate system where the real scene image is located and a second coordinate system where the image displayed on the display screen is located based on the attribute information; determining display information of the virtual effect data based on the first coordinate conversion relation; and rendering to obtain a virtual effect image based on the virtual effect data and the display information, and displaying the augmented reality effect of the display object superposed with the virtual effect image on display equipment. Therefore, the display information of the virtual effect data is determined through the first coordinate conversion relation, and the virtual effect is superposed around the display object based on the display information, so that the position of the virtual effect can be matched with the position of the display object in the real scene, the display effect of the image is enhanced, and the flexibility of image display is improved.
Drawings
FIG. 1-1 is a schematic diagram of an alternative configuration of an image display system provided by an embodiment of the present disclosure;
fig. 1-2 are schematic diagrams of an application scenario provided by an embodiment of the present disclosure;
fig. 2 is a first schematic flow chart of an image display method according to an embodiment of the present disclosure;
FIG. 3-1 is a first schematic diagram of a display device provided in an embodiment of the present disclosure;
fig. 3-2 is a schematic diagram ii of a display device provided in an embodiment of the present disclosure;
FIG. 4-1 is a first schematic diagram of a display effect provided by an embodiment of the disclosure;
fig. 4-2 is a schematic diagram of a display effect provided by the embodiment of the disclosure;
fig. 5 is a schematic flowchart illustrating an image display method according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of an image display device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a display device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clearly understood, the present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not intended to limit the disclosure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Augmented Reality (AR) technology skillfully fuses virtual information with the real world: through an AR device, a user can view virtual effects superimposed on a real scene, for example a virtual tree superimposed on a real campus playground, or virtual birds superimposed in the sky. How to better fuse such virtual effects (the virtual tree and the virtual birds) with the real scene, so as to realize the presentation of the virtual effect in the real scene, is explained below with reference to the following specific embodiments.
The embodiments of the present disclosure provide an image display method and apparatus, a display device, and a computer-readable storage medium, which can improve the flexibility and richness of image display. The image display method provided by the embodiments of the present disclosure is applied to a display device; an exemplary application of the display device provided by the embodiments of the present disclosure is described below. In the disclosed embodiments, the display device comprises a display screen implemented as a movable display screen: for example, the display screen can move on a preset sliding track, move on a movable sliding support, or be moved by a user holding the display device.
Next, an exemplary application in which the display device is implemented as a terminal will be explained. When the display device is implemented as a terminal, virtual effect data can be acquired, based on a display object in the real scene image (such as a historical relic or a sand table building), from a preset three-dimensional virtual scene in the terminal's internal storage space, and an AR image effect combining virtual and real content, superimposed on the display object in the real scene, can be presented according to the virtual effect data. The terminal can also interact with a cloud server and obtain the virtual effect data from a preset three-dimensional virtual scene prestored on the cloud server. In the following, the image display system is described by taking as an example a scenario in which a display object is presented as an AR image effect, with the terminal acquiring the virtual effect data interactively from a server.
Referring to fig. 1-1, fig. 1-1 is an alternative architecture diagram of an image display system 100 provided by an embodiment of the present disclosure, in order to support a presentation application, a terminal 400 (which exemplarily shows a terminal 400-1 and a terminal 400-2) is connected to a server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two. In a real display scene, such as historical relic display, sand table display, building display at a construction site, etc., the terminal 400 may be a display device arranged on a preset slide rail, or a mobile phone with a camera, wherein the mobile phone can be moved by being held by hand.
The terminal 400 is configured to acquire a real scene image at a current moving position through an image acquisition unit; determining virtual effect data matched with a display object based on the display object included in the real scene image; rendering a virtual effect corresponding to the virtual effect data at a display position associated with the display object in the real scene image by using the virtual effect data; the augmented reality AR effect is shown in the graphical interface 410 with the real scene image superimposed with the virtual effect.
For example, when the terminal 400 is implemented as a mobile phone, a preset display application on the phone may be started; the application calls the camera to collect a real scene image and initiates a data request to the server 200 based on the display object included in that image. After receiving the data request, the server 200 determines virtual effect data matched with the display object from a preset virtual three-dimensional scene model prestored in the database 500, and transmits the virtual effect data back to the terminal 400. After the terminal 400 obtains the virtual effect data fed back by the server, it renders the virtual effect from the virtual effect data through a rendering tool and superimposes the virtual effect on the target area of the display object in the real scene image, obtaining an AR effect image that combines the virtual and the real, which is finally displayed on the graphical interface of the terminal 400.
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present disclosure is not limited thereto.
By way of example, the following illustrates an application scenario to which the embodiments of the present disclosure are applicable.
Fig. 1-2 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure. As shown in fig. 1-2, the display device may include a movable display screen 101, which may be disposed around a plurality of display objects in an exhibition. An image capturing device is disposed on the movable display screen 101 and may be configured to capture images of the display objects; the display objects and/or virtual effects related to the display objects may be displayed on the movable display screen 101. The virtual effect of a display object may be at least one of: introduction information of the display object, internal detail display information of the display object, a contour line of the display object, and a virtual interpreter for the display object.
In the embodiment of the present disclosure, the display screen of the display device may be a transparent display screen or a non-transparent display screen.
In some embodiments of the present disclosure, when the display screen of the display device is a non-transparent display screen, an image capturing device may be disposed on the back surface of the non-transparent display screen (i.e., the surface on which the screen is not disposed) to capture a display object placed behind the non-transparent display screen, and an augmented reality (AR) effect, in which the virtual effect is superimposed on the real scene image corresponding to the display object, is displayed on the front of the non-transparent display screen.
In addition, in other embodiments of the present disclosure, when the display screen of the display device is a transparent display screen, an image capturing device may be disposed on one side of the transparent display screen to capture a display object located on that side. By identifying the captured display object, the display device displays the virtual effect corresponding to the display object on the transparent display screen. In this way, the user can view the display object behind the transparent display screen through the screen while viewing the virtual effect superimposed on the display object from the screen, thereby obtaining an augmented reality (AR) effect in which the real scene and the virtual effect are superimposed.
In practice, the image capturing device typically captures images from non-vertical viewing angles (e.g., a top view or side view). However, for a non-transparent display screen, to improve the visual experience of a user in the scene, the display device needs to display the real scene from a vertical viewing angle (i.e., as a front view); in the case of a transparent display screen, the user directly views the front of the display object through the screen. When a virtual effect is superimposed, the display position of the current display object must be determined from the captured real scene image, and the virtual effect corresponding to the virtual effect data is then rendered at a display position associated with the display object. Determining the display position of a virtual effect at a vertical viewing angle from a real scene image captured at a non-vertical viewing angle can therefore cause the display position of the virtual effect data to be mismatched with the position of the display object in the real scene.
In order to solve the above problem, an image display method provided by the embodiments of the present disclosure is described below with reference to the image display system shown in fig. 1-1, the application scenario shown in fig. 1-2, and the first flowchart of the image display method shown in fig. 2. The image display method provided by the embodiments of the present disclosure includes steps S210 to S250.
s210, collecting at least one frame of real scene image through at least one image collecting device.
The image display method provided in the embodiments of the present disclosure may be applied to a display device, and the display device may include a movable display screen. The display screen of the display device may move on a preset sliding track as shown in fig. 3-1, or may slide by being fixed on a movable sliding support as shown in fig. 3-2.
In the embodiment of the present disclosure, the display screen of the display device may be a transparent display screen or a non-transparent display screen; the embodiments of the present disclosure are not limited in this respect.
In the embodiment of the present disclosure, the display screen may be provided with at least one image capturing device, and the display device may control the at least one image capturing device to capture an image of a real scene where the display screen is located, so as to obtain at least one frame of real scene image.
The real scene may be a building indoor scene, a street scene, a specific object, and the like, which can be superimposed with a virtual effect, and the augmented reality effect is presented by superimposing a virtual object in the real scene, which is not limited in the embodiment of the present disclosure.
In some embodiments of the present disclosure, when the display screen of the display device is a non-transparent display screen, an image capturing device may be disposed on the back surface of the non-transparent display screen (i.e., the surface on which the screen is not disposed) to capture a display object placed behind the non-transparent display screen, and an augmented reality (AR) effect, in which the virtual effect is superimposed on the real scene image corresponding to the display object, is displayed on the front of the non-transparent display screen.
In other embodiments of the present disclosure, when the display screen of the display device is a transparent display screen, an image capturing device may be disposed on one side of the transparent display screen to capture a display object located on that side. By identifying the captured display object, the display device displays the virtual effect corresponding to the display object on the transparent display screen. In this way, the user can view the display object behind the transparent display screen through the screen while viewing the virtual effect superimposed on the display object from the screen, thereby obtaining an augmented reality (AR) effect in which the real scene and the virtual effect are superimposed.
It should be noted that the image capturing device in the embodiment of the present disclosure may be implemented as a fixed camera, and may also be implemented as a movable camera. The embodiment of the present disclosure does not limit the type of the image capturing apparatus.
S220, determining attribute information of the display object in at least one frame of real scene image, and acquiring virtual effect data corresponding to the display object.
In the embodiment of the disclosure, the display device may perform image processing, analysis, and understanding on the at least one acquired frame of the real scene image to identify the display object in it. Furthermore, in the process of identifying the display object in the real scene image, the display device can also determine the attribute information of the display object; here, the attribute information can represent characteristics unique to the display object.
In the embodiment of the present disclosure, the attribute information of the display object may include at least one of the following information:
the display object comprises position information of the display object, outline information of the display object, material information of the display object and texture information of the display object.
The position information of the display object refers to the position of the display object in the real scene image. The contour information refers to the edge contour line of the display object in the real scene image. The material information refers to the material of the display object, such as plastic or metal. The texture information refers to the texture features of the display object's surface.
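As an illustrative sketch, the attribute information described above can be modeled as a simple record. The field names and example values here are hypothetical (the patent does not prescribe a data layout):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DisplayObjectAttributes:
    """Attribute information of a display object in a real scene image (illustrative)."""
    position: Tuple[int, int]        # pixel position of the object in the image
    contour: List[Tuple[int, int]]   # edge contour line as pixel coordinates
    material: str                    # e.g. "plastic" or "metal"
    texture: str                     # identifier for the surface texture features

# Example instance for a recognized exhibit (values are made up):
attrs = DisplayObjectAttributes(
    position=(320, 240),
    contour=[(300, 220), (340, 220), (340, 260), (300, 260)],
    material="metal",
    texture="bronze-relief",
)
```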
In addition, after the display device identifies the display object in the real scene, the display device can also acquire virtual effect data corresponding to the display object according to the identified display object.
In the embodiment of the present disclosure, the virtual effect data is a set of virtual image data, which may be rendering parameters for rendering a virtual effect by a rendering tool. A virtual effect may be understood as a virtual object that is represented in an image of a real scene.
In an embodiment of the present disclosure, the virtual effect data may include at least one of:
virtual interactive objects, virtual object outline models, virtual object detail models, virtual tags.
The virtual interactive object refers to a virtual object that can interact with a real user located in front of the display device, such as a virtual interpreter, a virtual robot, and the like. For example, referring to the display interface schematic of an exemplary display device shown in fig. 4-1, the virtual interactive object may be a virtual interpreter 402 that interprets a presentation object 401 in an image of a real scene.
The virtual object outline model is a virtual image that highlights the outline of a display object in the real scene image. For example, referring to the display interface diagram of an exemplary display device shown in fig. 4-1, the virtual object outline model may be a virtual contour line 403 outlining the display object 401 in the real scene image 400.
The virtual object detail model refers to the virtual detail display of the display object in the real scene image; for example, referring to the display interface schematic of an exemplary display device shown in fig. 4-2, the virtual object detail model may be a virtual detail presentation 405 inside a cultural relic 404 presented in the real scene image 400.
The virtual tag is used for displaying additional information of a display object in a real scene image; for example, referring to the display interface schematic of an exemplary display device shown in fig. 4-2, the virtual tag may be detailed introduction information 406 corresponding to a cultural relic 404 displayed in the real scene image, wherein the detailed introduction information may be "caliber 75.6 cm".
In the embodiment of the present disclosure, the virtual effect data corresponding to the display object may be preset, or may be generated in real time according to the real scene image.
In some embodiments of the present disclosure, a virtual effect database, that is, a preset virtual effect database, may be pre-constructed, and the preset virtual effect database may pre-store mapping relationships between a plurality of display objects in a real scene and virtual effect data. In this way, after the display device identifies the display object in the real scene image, the display device may search and acquire the virtual effect data corresponding to the display object from the preset virtual effect database, and acquire the virtual effect data.
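A minimal sketch of such a preset virtual effect database, assuming a simple mapping from recognized object identifiers to effect data (all identifiers, keys, and file names below are hypothetical, not from the patent):

```python
# Hypothetical preset virtual effect database: display object id -> virtual effect data.
PRESET_VIRTUAL_EFFECTS = {
    "bronze_tripod": {
        "virtual_tag": "caliber 75.6 cm",          # additional information, cf. fig. 4-2
        "detail_model": "tripod_interior.glb",     # internal detail display
    },
    "construction_site": {
        "outline_model": "planned_building_outline.obj",
    },
}

def get_virtual_effect_data(object_id):
    """Look up virtual effect data for a recognized display object.

    Returns None when the object has no preset effect, so a caller could
    fall back to e.g. generating effect data in real time."""
    return PRESET_VIRTUAL_EFFECTS.get(object_id)
```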
In some embodiments of the present disclosure, the virtual effect data model may be trained in advance by way of machine learning. Here, the neural network model may be trained according to a large amount of sample data of the display object and the virtual effect data corresponding to the sample data of the display object, so as to obtain a trained virtual effect data model. In this way, after the display device identifies the display object in the real scene image, the display object is input into the virtual effect data model, and the virtual effect data is generated in real time through the virtual effect data model.
In this embodiment of the present disclosure, the display device may obtain the virtual effect data corresponding to the display object from the local storage space, or may request the virtual effect data corresponding to the display object from the third-party device. The embodiment of the present disclosure does not limit the manner of acquiring the virtual effect data.
And S230, determining a first coordinate conversion relation between a first coordinate system where the real scene image is located and a second coordinate system where the image displayed on the display screen is located based on the attribute information.
In the embodiment of the present disclosure, the first coordinate system refers to a planar coordinate system where each pixel point in the real scene image is located, and the second coordinate system refers to a planar coordinate system where each pixel point in the displayed image is located when the image is displayed on the display screen.
It should be noted that, when the display device acquires a frame of real scene image, the first coordinate system is the coordinate system in which the real scene image is located. Under the condition that multiple frames of real scene images are collected, the first coordinate system is the coordinate system where any one frame of real scene image in the multiple frames of real scene images is located.
In the embodiment of the present disclosure, the first coordinate conversion relation refers to the transformation between the coordinates of the same pixel point in the first coordinate system and its coordinates in the second coordinate system when the real scene image is mapped into the display space of the display screen.
Here, the attribute information can represent unique features of the display object in the real scene image, and therefore, in the embodiment of the present disclosure, the first coordinate conversion relationship between the first coordinate system and the second coordinate system may be determined according to the attribute information of the display object acquired in the real scene image.
In some embodiments of the present disclosure, the display device may pre-construct point cloud information for a plurality of display objects in the current real scene, and obtain the first coordinate conversion relation by performing feature point matching between the attribute information and the preset point cloud information.
In other embodiments of the present disclosure, the display device may also pre-train a processing model by way of machine learning. Here, a neural network model may be trained on a large amount of sample attribute information together with the first coordinate conversion relation corresponding to each sample, so as to obtain a trained preset processing model. In this way, after the display device obtains the attribute information of the display object, it inputs the attribute information into the preset processing model and obtains the first coordinate conversion relation from the model.
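The patent does not specify the mathematical form of the first coordinate conversion relation. A common choice for relating two views of a planar region is a 3x3 homography estimated from matched feature points; in practice one would typically use a robust estimator such as OpenCV's `cv2.findHomography` with RANSAC. The NumPy sketch below shows the basic direct linear transform (DLT) estimation from point correspondences, as one illustrative assumption rather than the patent's method:

```python
import numpy as np

def estimate_first_coordinate_conversion(src_pts, dst_pts):
    """Estimate a 3x3 homography H mapping points in the real scene image
    (first coordinate system) to points in the displayed image (second
    coordinate system), via the direct linear transform (DLT).

    src_pts, dst_pts: (N, 2) arrays of N >= 4 matched feature points."""
    rows = []
    for (x, y), (u, v) in zip(np.asarray(src_pts, float), np.asarray(dst_pts, float)):
        # Each correspondence contributes two linear constraints on the 9 entries of H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null vector of the stacked constraint matrix: take the right
    # singular vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]  # normalize so that H[2, 2] == 1
```

With exactly four non-degenerate correspondences the constraint matrix has a one-dimensional null space, so the homography is recovered exactly; with more (noisy) matches the SVD gives a least-squares estimate.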
And S240, determining the display information of the virtual effect data based on the first coordinate conversion relation.
In practical applications, the image of the real scene acquired by the image acquisition device may be an image with a non-vertical viewing angle, and the image displayed on the display screen is an image with a vertical viewing angle. According to the embodiment of the disclosure, the real scene image and the image displayed by the display screen can be unified into the same coordinate system through the first coordinate conversion relation; namely, the coordinate alignment between the image of the non-vertical viewing angle and the image of the vertical viewing angle is realized, so that the real scene image of the non-vertical viewing angle can be accurately mapped into the display screen of the vertical viewing angle.
In the embodiment of the present disclosure, the virtual effect corresponding to the display object needs to be displayed on the display screen at a vertical viewing angle. Therefore, the display device may determine the display information of the virtual effect data corresponding to the display object in the second coordinate system through the first coordinate conversion relationship determined above. Here, the display information may include at least one of: a display position, a display area, and a display angle.
That is to say, the display device may determine, according to the real scene image acquired by the image acquisition device, the position information and the edge information of the display object in the first coordinate system where the real scene image is located, and further, the display device determines, through the first coordinate transformation relationship, the position information and the edge information in the second coordinate system when the display object is mapped to the second coordinate system. Finally, the display device may determine a display position, a display area, and a display angle of the virtual effect data corresponding to the display object based on the position information and the edge information of the display object in the second coordinate system, so that the virtual effect data is displayed at a position associated with the display object.
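As an illustrative sketch of the mapping just described, assume the first coordinate conversion relationship takes the form of a 3x3 planar homography `H` (the disclosure does not fix a particular parameterization; the function and matrix below are hypothetical). The position information of the display object can then be mapped from the first coordinate system into the second coordinate system as follows:

```python
import numpy as np

def map_to_screen(points_px: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map 2-D points from the first coordinate system (camera image) into
    the second coordinate system (display screen) using a 3x3 homography H,
    one plausible form of the first coordinate conversion relationship."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # de-homogenize

# Illustrative homography: scale by 2 and shift by (10, 20)
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [100.0, 50.0]])  # object bounding-box corners
print(map_to_screen(corners, H))  # [[ 10.  20.] [210. 120.]]
```

The edge information of the display object can be mapped the same way, point by point.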
S250: based on the virtual effect data and the display information, rendering to obtain a virtual effect image, and displaying, on the display device, the augmented reality effect in which the virtual effect image is superimposed on the display object.
In some embodiments of the present disclosure, the display device may perform rendering processing on the virtual effect data based on the display information of the virtual effect data, so that when the virtual effect after the rendering processing is superimposed on the real scene image, the display position of the virtual effect matches the display position of the display object in the real scene.
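One common way to realize the superimposition described above is alpha compositing of the rendered virtual effect image onto the real scene image at the display position taken from the display information. The function below is a minimal sketch under that assumption, not the disclosure's actual rendering pipeline:

```python
import numpy as np

def overlay_effect(scene: np.ndarray, effect_rgba: np.ndarray, x: int, y: int) -> np.ndarray:
    """Alpha-blend a rendered RGBA virtual-effect image onto the real-scene
    image at display position (x, y); (x, y) comes from the display information."""
    out = scene.astype(np.float32).copy()
    h, w = effect_rgba.shape[:2]
    alpha = effect_rgba[:, :, 3:4] / 255.0          # per-pixel opacity in [0, 1]
    region = out[y:y + h, x:x + w, :3]
    out[y:y + h, x:x + w, :3] = alpha * effect_rgba[:, :, :3] + (1 - alpha) * region
    return out.astype(np.uint8)
```

With a fully opaque effect the scene pixels under it are replaced; with partial alpha the effect blends with the real scene behind it.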
Therefore, the image display method provided by the embodiment of the disclosure acquires at least one frame of real scene image through at least one image acquisition device; determining attribute information of a display object in at least one frame of real scene image, and acquiring virtual effect data corresponding to the display object; determining a first coordinate conversion relation between a first coordinate system where the real scene image is located and a second coordinate system where the image displayed on the display screen is located based on the attribute information; determining display information of the virtual effect data based on the first coordinate conversion relation; and rendering to obtain a virtual effect image based on the virtual effect data and the display information, and displaying the augmented reality effect of the display object superposed with the virtual effect image on display equipment. Therefore, the display information of the virtual effect data is determined through the first coordinate conversion relation, and the virtual effect is superposed around the display object based on the display information, so that the position of the virtual effect can be matched with the position of the display object in the real scene, the display effect of the image is enhanced, and the flexibility of image display is improved.
Based on the foregoing embodiments, in some embodiments of the present disclosure, referring to the second flowchart of the image display method shown in fig. 5, step S240 of determining the display information of the virtual effect data based on the first coordinate conversion relationship may also be implemented through the following steps:
S2401: acquiring a second coordinate conversion relationship between a third coordinate system in which the virtual effect data is located in the virtual scene and the second coordinate system;
S2402: determining display information of the virtual effect data based on the first coordinate conversion relationship and the second coordinate conversion relationship.
In the embodiment of the present disclosure, the third coordinate system refers to a coordinate system of the virtual effect data in the virtual scene. The virtual scene here may be a virtual coordinate system in which a rendering camera of the three-dimensional engine is located.
In practical application, when rendering is performed in the rendering camera, the virtual effect data is rendered according to a coordinate system in the current virtual scene. For example, the size of the virtual effect data in the coordinate system in the virtual scene is 1m, and the size is actually 1cm in the second coordinate system in which the image displayed on the display screen is located. Therefore, when the virtual effect data is to be superimposed on the display object of the real scene and displayed on the display screen, the coordinate alignment between the third coordinate system where the virtual effect data is located and the second coordinate system where the display screen is located is required.
In the embodiment of the present disclosure, the second coordinate transformation relationship refers to coordinate transformation of the same virtual effect data point in the second coordinate system and the third coordinate system when the virtual effect data is mapped to the display space of the display screen. Here, the second coordinate conversion relationship between the third coordinate system and the second coordinate system may be set in advance.
Further, after the first coordinate conversion relationship and the second coordinate conversion relationship are determined, the display device may unify the real scene image, the image displayed on the display screen, and the virtual effect data into the same coordinate system, thereby implementing accurate display of the virtual effect data corresponding to the display object.
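As a minimal illustration of the second coordinate conversion relationship, assume the virtual scene works in metres while the screen coordinate system works in pixels, echoing the 1 m vs. 1 cm example above. The scale factor and origin offset below are hypothetical values, not values given by the disclosure:

```python
import numpy as np

# Hypothetical preset second coordinate conversion relationship: a uniform
# scale from virtual-scene metres to screen pixels plus an origin offset.
PIXELS_PER_METRE = 100.0
SCREEN_ORIGIN = np.array([960.0, 540.0])  # assumed screen-centre anchor (1080p)

def virtual_to_screen(p_virtual: np.ndarray) -> np.ndarray:
    """Map a point of the virtual effect data (third coordinate system)
    into the screen coordinate system (second coordinate system)."""
    return p_virtual * PIXELS_PER_METRE + SCREEN_ORIGIN

print(virtual_to_screen(np.array([1.0, -0.5])))  # [1060.  490.]
```

Composing this with the first coordinate conversion relationship places the real scene image, the screen image, and the virtual effect data in the same coordinate system, as the paragraph above describes.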
In the embodiment of the present disclosure, the display information may include a display size of the virtual effect and/or a motion trajectory of the virtual effect data, in addition to at least one of a display position, a display area, and a display angle.
Specifically, the display device may determine, according to the real scene image acquired by the image acquisition device, the position information and the edge information of the display object in the first coordinate system in which the real scene image is located. Then, the display device determines, through the first coordinate conversion relationship, the position information and the edge information of the display object in the second coordinate system when the display object is mapped to the second coordinate system. At this time, the display device may determine the display position, the display area, and the display angle of the virtual effect data corresponding to the display object based on the position information and the edge information of the display object in the second coordinate system.
Meanwhile, the display device may map the virtual effect data corresponding to the display object from the third coordinate system in which the virtual scene is located to the second coordinate system, thereby determining the display size of the virtual effect and its motion trajectory.
Therefore, the display information of the virtual effect data is determined through the first coordinate conversion relation and the second coordinate conversion relation, the virtual effect is superposed around the display object based on the display information, the position of the virtual effect can be matched with the position of the display object in the real scene, the accuracy of virtual effect display is improved, the display effect of the image is enhanced, and the flexibility of image display is improved.
Based on the foregoing embodiments, in some embodiments of the present disclosure, there are various ways to determine the first coordinate transformation relationship between the first coordinate system where the real scene image is located and the second coordinate system where the image displayed on the display screen is located in step S230 based on the attribute information, and two ways are exemplarily described below: mode one and mode two.
Mode one:
In some embodiments of the present disclosure, the step S230 determines, based on the attribute information, a first coordinate transformation relationship between a first coordinate system in which the real scene image is located and a second coordinate system in which the image displayed on the display screen is located, including:
S2301: acquiring preset point cloud information matched with the attribute information;
S2302: performing feature matching on the attribute information and the preset point cloud information to obtain matched feature points;
S2303: determining the first coordinate conversion relationship between the first coordinate system and the second coordinate system based on the matched feature points.
In the embodiment of the disclosure, the display device may pre-construct the point cloud information of the plurality of display objects in the current real scene to obtain the preset point cloud information corresponding to the plurality of display objects, and then perform feature point matching between the attribute information and the preset point cloud information to obtain the first coordinate conversion relationship.
It should be noted that the display device may construct point cloud information of a plurality of display objects in advance based on the second coordinate system. That is, the preset point cloud information is point cloud data matched with the second coordinate system.
Specifically, after obtaining the attribute information of the display object in the real scene image, the display device searches the preset point cloud information matched with the current attribute information from the plurality of preset point cloud information. Further, feature matching is carried out on the feature points in the attribute information and preset point cloud information to obtain a plurality of matched feature points.
It can be understood that, if the feature point in the preset point cloud information matches with the feature point in the attribute information, it indicates that the point belongs to the same point. In this way, the display apparatus can determine the first coordinate conversion relationship between the first coordinate system and the second coordinate system based on the coordinates of the plurality of matched feature points in the first coordinate system and the coordinates in the second coordinate system, respectively.
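Given the matched feature point pairs just described, the first coordinate conversion relationship can be estimated by least squares. The sketch below fits a 2-D affine transform as one plausible parameterization (the disclosure does not specify one; a full homography or 3-D pose could be fitted analogously, e.g. with OpenCV's estimation routines):

```python
import numpy as np

def estimate_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine transform A mapping matched feature points
    from the first coordinate system (src) to the second (dst):
    dst ~= [x, y, 1] @ A.T."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                    # (N, 3) design matrix
    W, *_ = np.linalg.lstsq(X, dst, rcond=None)   # (3, 2) solution
    return W.T                                    # (2, 3) affine matrix

# Matched pairs generated from a known scale-2, shift-(5, -3) mapping
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src * 2.0 + np.array([5.0, -3.0])
A = estimate_affine(src, dst)
# Recovers A ~= [[2, 0, 5], [0, 2, -3]]
```

With noisy real matches, a robust estimator (e.g. RANSAC over such fits) would typically replace the plain least squares shown here.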
In the embodiment of the present disclosure, by pre-constructing the preset point cloud information of the plurality of display objects and matching the attribute information against the preset point cloud information, the first coordinate conversion relationship can be determined accurately.
Mode two:
In other embodiments of the present disclosure, the step S230 determines, based on the attribute information, a first coordinate transformation relationship between a first coordinate system in which the real scene image is located and a second coordinate system in which the image displayed on the display screen is located, including:
step S2301', identification processing is carried out on the attribute information through a preset processing model, and a first coordinate conversion relation between a first coordinate system and a second coordinate system is obtained;
the preset processing model is used for determining a first coordinate conversion relation between a first coordinate system and a second coordinate system corresponding to the attribute information.
That is, the display device determines the first coordinate conversion relationship by way of machine learning. Specifically, the display device may pre-train the processing model, and here, the neural network model may be trained according to a large amount of sample data of the attribute information and the first coordinate transformation relationship corresponding to the sample data of the attribute information, so as to obtain a trained preset processing model. In this way, after the display device obtains the attribute information of the display object, the attribute information is input into the preset processing model, and the first coordinate conversion relation is obtained through the preset processing model.
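The disclosure's preset processing model is a trained neural network; as a deliberately simplified stand-in that shows the same train-then-infer flow, the sketch below fits a linear map from attribute-feature vectors to a transform parameter with ordinary least squares. All data and names here are illustrative, not the patent's model:

```python
import numpy as np

# Illustrative stand-in for the "preset processing model": learn a mapping
# from attribute features to a coordinate-conversion parameter from samples.
rng = np.random.default_rng(0)
true_W = np.array([[0.5], [2.0]])        # hidden feature -> parameter map
features = rng.normal(size=(200, 2))     # sample attribute-information features
params = features @ true_W               # corresponding transform parameters

# "Training": ordinary least squares in place of neural-network training
W, *_ = np.linalg.lstsq(features, params, rcond=None)

# "Inference": feed new attribute features through the fitted model
pred = np.array([[1.0, 1.0]]) @ W        # ~= 0.5 + 2.0 = 2.5
```

A real deployment would regress all parameters of the first coordinate conversion relationship with a nonlinear model, but the input/output contract is the same as above.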
In the embodiment of the present disclosure, the preset processing model is established through machine learning, so that inputting the attribute information into the preset processing model directly yields the first coordinate conversion relationship; therefore, a point cloud model does not need to be constructed for each display object in the real scene, which reduces processing complexity, and the preset processing model can be applied to different scenes, thereby improving the flexibility of determining the first coordinate conversion relationship.
Based on the foregoing embodiments, in some embodiments of the present disclosure, the image acquisition device is a movable image acquisition device; that is, the capturing position and/or the shooting angle of the image capturing apparatus can be adjusted as required.
Correspondingly, step S210 acquires at least one frame of real scene image through at least one image acquisition device, including:
controlling at least one image acquisition device to acquire at least one frame of real scene image under the condition that the change of the shooting position and/or shooting angle of the first image acquisition device is detected;
wherein the first image acquisition device is any one of the at least one image acquisition devices.
That is, when the display device detects that the shooting position and/or shooting angle of any one image capturing device changes, at least one image capturing device needs to be controlled to capture the real scene image again. And then, according to the newly acquired real scene image, adjusting the first coordinate transformation relation, and re-determining the display information of the virtual effect data to be superimposed, so that the display device can adjust the currently superimposed virtual effect according to the newly determined display information.
In the embodiment of the disclosure, the position and/or the angle of the image acquisition device relative to the display screen is adjustable, and the corresponding first coordinate conversion relationship can be adaptively changed based on the adjusted position and/or angle of the image acquisition device, so that the display information of the virtual effect data can be adjusted along with the change of the position and/or the angle of the image acquisition device, thereby improving the accuracy and flexibility of the virtual effect display.
Based on the foregoing embodiments, in some embodiments of the present disclosure, a display apparatus may include a plurality of image capturing devices.
Correspondingly, step S210 acquires at least one frame of real scene image through at least one image acquisition device, including:
and acquiring multiple frames of real scene images through a plurality of image acquisition devices.
In some embodiments of the present disclosure, a plurality of image capturing devices may correspond to a plurality of frames of real scene images one to one, that is, one image capturing device may capture one frame of real scene image. In addition, in the embodiment of the present disclosure, each image capturing device in the plurality of image capturing devices may also correspond to a plurality of frames of real scene images, that is, one image capturing device may capture a plurality of frames of real scene images.
Based on this, step S220 determines attribute information of the display object in at least one frame of real scene image, including:
identifying the display object in the multiple frames of real scene images;
determining depth information of the display object by using the display object in the multiple frames of real scene images;
and determining the attribute information of the display object based on the depth information and the display object in the at least one frame of real scene image.
In the embodiment of the disclosure, the display device can control the plurality of image acquisition devices to acquire real scene images simultaneously and identify display objects in each frame of real scene image; furthermore, the display device may determine, according to a binocular vision principle or a multi-ocular vision principle, a distance between the display object and a plane where the display screen is located through the display object in the multiple frames of real scene images, so as to obtain depth information of the display object. Furthermore, the display device can accurately determine the attribute information of the display object by combining the depth information of the display object and the display object in the real scene image.
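The binocular vision principle mentioned above relates depth to disparity through the standard stereo formula Z = f·B/d. The helper below is a minimal sketch with illustrative camera numbers:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Binocular depth: Z = f * B / d, where f is the focal length in pixels,
    B the baseline between the two cameras, and d the disparity (horizontal
    pixel offset of the same matched point between the two images)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# f = 800 px, baseline 10 cm, disparity 16 px -> the point is 5 m away
print(depth_from_disparity(800.0, 0.10, 16.0))  # 5.0
```

Running this over the matched points of the display object yields the depth information combined with the image content in the step above.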
In some embodiments of the present disclosure, determining attribute information of a display object based on depth information and the display object of at least one frame of real scene image may be implemented by:
selecting a target real scene image from a plurality of frames of real scene images based on a specific selection condition;
and determining attribute information of the display object based on the depth information and the target real scene image.
In the embodiment of the present disclosure, the specific selection condition may include that the quality of the real scene image is greater than a first threshold, and/or that the definition of the display object in the real scene image is greater than a second threshold.
Therefore, the display device can select the target real scene image with the best quality and/or the clearest display object from the multiple frames of real scene images, and determine the attribute information of the display object according to the selected target real scene image and the depth information of the display object, so that the obtained attribute information of the display object has higher accuracy.
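The definition (sharpness) criterion described above is commonly approximated by the variance of a Laplacian filter response; the sketch below uses that proxy to pick the sharpest frame, a hypothetical realization of the "specific selection condition":

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian response; higher means sharper."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def pick_target_frame(frames: list) -> np.ndarray:
    """Select, from multiple real scene frames, the one whose content
    (e.g. the display-object region) is sharpest."""
    return max(frames, key=sharpness)
```

A quality threshold (the "first threshold" / "second threshold" above) could be applied to the same scores before selection.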
Based on the foregoing embodiments, in some embodiments of the present disclosure, the augmented reality effect includes:
and displaying the virtual effect object corresponding to the virtual effect data in the preset range of the display object, wherein the display position of the virtual effect is matched with the display position of the display object in the real scene.
That is to say, the embodiment of the present disclosure can increase a virtual effect for the display object in the display screen, thereby enhancing the display effect of the display object. Meanwhile, the display position of the virtual effect data can be matched with the display position of the display object under the vertical viewing angle, so that the user can watch the best display effect, and the viewing experience of the user is improved.
Based on the foregoing embodiments, an embodiment of the present disclosure provides an image display apparatus, which may be applied to the display device described above, and fig. 6 is a schematic diagram of a composition structure of the image display apparatus provided in the embodiment of the present disclosure, as shown in fig. 6, where the apparatus 600 includes:
an acquisition unit 601, configured to acquire at least one frame of real scene image through at least one image acquisition device;
a determining unit 602, configured to determine attribute information of a display object in the at least one frame of real scene image;
an obtaining unit 603, configured to obtain virtual effect data corresponding to the display object;
a first processing unit 604, configured to determine, based on the attribute information, a first coordinate transformation relationship between a first coordinate system in which the real scene image is located and a second coordinate system in which an image displayed on a display screen is located;
a second processing unit 605, configured to determine display information of the virtual effect data based on the first coordinate transformation relationship, and perform rendering processing based on the virtual effect data and the display information to obtain a virtual effect image;
a display unit 606, configured to display an augmented reality effect in which the display object and the virtual effect image are superimposed.
In some embodiments of the present disclosure, the second processing unit 605 is specifically configured to obtain a second coordinate transformation relationship between a third coordinate system in which the virtual effect data is located in the virtual scene and the second coordinate system; and determining display information of the virtual effect data based on the first coordinate conversion relation and the second coordinate conversion relation.
In some embodiments of the present disclosure, the first processing unit 604 is specifically configured to obtain preset point cloud information matched with the attribute information; performing feature matching on the attribute information and the preset point cloud information to obtain matched feature points; and determining the first coordinate conversion relation between the first coordinate system and the second coordinate system based on the matched feature points.
In some embodiments of the present disclosure, the first processing unit 604 is configured to perform identification processing on the attribute information through a preset processing model, so as to obtain the first coordinate transformation relationship between the first coordinate system and the second coordinate system; the preset processing model is used for determining a first coordinate conversion relation between a first coordinate system and a second coordinate system corresponding to the attribute information.
In some embodiments of the present disclosure, the image acquisition device is a mobile image acquisition device;
the acquisition unit 601 is configured to control the at least one image acquisition device to acquire at least one frame of real scene image when detecting that a shooting position and/or a shooting angle of the first image acquisition device changes; the first image acquisition device is any one of the at least one image acquisition device.
In some embodiments of the present disclosure, the display apparatus comprises a plurality of image capture devices;
the acquisition unit 601 is configured to acquire multiple frames of real scene images through the multiple image acquisition devices;
the determining unit 602 is configured to identify a display object in the plurality of frames of real scene images; determining depth information of the display object by adopting the display object in the multiple frames of real scene images; determining attribute information of the display object based on the depth information and the display object of the at least one frame of real scene image.
In some embodiments of the present disclosure, the display screen is a transparent display screen or a non-transparent display screen.
In some embodiments of the present disclosure, the display screen moves on a preset slide rail.
In some embodiments of the present disclosure, the attribute information includes at least one of:
the display object comprises position information of the display object, outline information of the display object, material information of the display object and texture information of the display object.
In some embodiments of the present disclosure, the augmented reality effect comprises:
and displaying a virtual effect object corresponding to the virtual effect data in the preset range of the display object, wherein the display position of the virtual effect is matched with the display position of the display object in the real scene.
It should be noted that the above description of the embodiment of the apparatus, similar to the above description of the embodiment of the method, has similar beneficial effects as the embodiment of the method. For technical details not disclosed in the embodiments of the apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
It should be noted that, in the embodiment of the present disclosure, if the image display method is implemented in the form of a software functional module and sold or used as a standalone product, the image display method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware and software.
Accordingly, the embodiment of the present disclosure further provides a computer storage medium, where computer-executable instructions are stored on the computer storage medium, and the computer-executable instructions are used to implement the steps of the image display method provided by the foregoing embodiments.
Accordingly, an embodiment of the present disclosure provides a display device, fig. 7 is a schematic structural diagram of the display device in the embodiment of the present disclosure, and as shown in fig. 7, the display device 700 includes: a camera 701 and a display 702;
a memory 703 for storing a computer program;
the processor 704 is configured to implement the steps of the image display method provided in the foregoing embodiment in combination with the camera 701 and the display 702 when executing the computer program stored in the memory 703.
The display device 700 further includes: a communication bus 705. The communication bus 705 is configured to enable connective communication between these components.
In the embodiment of the present disclosure, the display 702 includes, but is not limited to, a liquid crystal display, an organic light emitting diode display, a touch display, and the like, and the disclosure is not limited herein.
The above description of the computer device and storage medium embodiments is similar to the description of the method embodiments above, with similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the computer apparatus and storage medium of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure. The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present disclosure.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Alternatively, the integrated unit of the present disclosure may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. An image display method, characterized in that the method comprises:
acquiring at least one frame of real scene image through at least one image acquisition device;
determining attribute information of a display object in the at least one frame of real scene image, and acquiring virtual effect data corresponding to the display object;
determining a first coordinate conversion relation between a first coordinate system where the real scene image is located and a second coordinate system where the image displayed on the display screen is located based on the attribute information;
determining display information of the virtual effect data based on the first coordinate conversion relation;
and rendering a virtual effect image based on the virtual effect data and the display information, and displaying, on a display device, an augmented reality effect in which the display object is superimposed with the virtual effect image.
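Purely as a non-limiting illustration of the flow in claim 1, the "first coordinate conversion relation" can be modeled as a 4×4 homogeneous transform mapping points from the real-scene coordinate system into screen coordinates. All function names and the choice of a matrix representation are assumptions of this sketch, not part of the claim:

```python
import numpy as np

def to_screen(points_scene: np.ndarray, T_scene_to_screen: np.ndarray) -> np.ndarray:
    """Map 3-D points from the first coordinate system (real scene image) into
    the second coordinate system (display screen) via a 4x4 homogeneous
    transform -- one possible realization of the first coordinate conversion
    relation from which display information can be derived."""
    # Append a homogeneous coordinate of 1 to every point.
    homo = np.hstack([points_scene, np.ones((points_scene.shape[0], 1))])
    mapped = homo @ T_scene_to_screen.T
    # Divide out the homogeneous coordinate.
    return mapped[:, :3] / mapped[:, 3:4]

# With the identity transform, display positions coincide with scene positions.
display_info = to_screen(np.array([[1.0, 2.0, 3.0]]), np.eye(4))
```

The virtual effect image would then be rendered at `display_info` and composited over the display object on the display device.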
2. The method according to claim 1, wherein the determining display information of the virtual effect data based on the first coordinate conversion relationship comprises:
acquiring a second coordinate conversion relation between a third coordinate system of the virtual effect data in the virtual scene and the second coordinate system;
and determining display information of the virtual effect data based on the first coordinate conversion relation and the second coordinate conversion relation.
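One way to read claim 2: if both conversion relations are rigid transforms, display information for the virtual effect data follows by matrix composition. A hedged sketch (the matrix values are invented for illustration; the claim does not prescribe a matrix representation):

```python
import numpy as np

# Hypothetical homogeneous transforms (illustrative values only):
T_scene_to_screen = np.eye(4)                  # first coordinate conversion relation
T_virtual_to_screen = np.eye(4)                # second coordinate conversion relation
T_virtual_to_screen[:3, 3] = [0.1, 0.0, 0.0]   # virtual content shifted 0.1 along x

# Display position, in screen coordinates, of the virtual-scene origin:
p_virtual = np.array([0.0, 0.0, 0.0, 1.0])
p_screen = T_virtual_to_screen @ p_virtual     # carries the 0.1 x-offset into screen space

# Composing both relations expresses virtual content relative to the real scene,
# which keeps the virtual effect anchored to the display object:
T_virtual_to_scene = np.linalg.inv(T_scene_to_screen) @ T_virtual_to_screen
```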
3. The method according to claim 1 or 2, wherein the determining, based on the attribute information, a first coordinate conversion relation between a first coordinate system in which the real scene image is located and a second coordinate system in which the image displayed on the display screen is located comprises:
acquiring preset point cloud information matched with the attribute information;
performing feature matching on the attribute information and the preset point cloud information to obtain matched feature points;
determining the first coordinate conversion relation between the first coordinate system and the second coordinate system based on the matched feature points.
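Claim 3 leaves the solver open. One standard choice for recovering a rigid coordinate conversion from matched 3-D feature points is the Kabsch (orthogonal Procrustes) algorithm; the sketch below uses it purely as an assumed stand-in, not as the applicant's actual implementation:

```python
import numpy as np

def estimate_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Estimate rotation R and translation t such that dst ~= src @ R.T + t,
    from matched 3-D feature points (Kabsch algorithm)."""
    # Center both point sets on their centroids.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance and its SVD.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Sign correction guards against a reflection solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t
```

Given matched feature points between the real scene image and the preset point cloud, `R` and `t` together play the role of the first coordinate conversion relation.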
4. The method according to claim 1 or 2, wherein the determining, based on the attribute information, a first coordinate conversion relation between a first coordinate system in which the real scene image is located and a second coordinate system in which the image displayed on the display screen is located comprises:
identifying the attribute information through a preset processing model to obtain the first coordinate conversion relation between the first coordinate system and the second coordinate system;
the preset processing model is used for determining a first coordinate conversion relation between a first coordinate system and a second coordinate system corresponding to the attribute information.
5. The method according to any one of claims 1-4, wherein the image acquisition device is a mobile image acquisition device;
the acquiring of at least one frame of real scene image by at least one image acquisition device comprises:
controlling the at least one image acquisition device to acquire at least one frame of real scene image in a case where a change in a shooting position and/or a shooting angle of a first image acquisition device is detected, wherein the first image acquisition device is any one of the at least one image acquisition device.
6. The method according to any one of claims 1-5, wherein the display device comprises a plurality of image acquisition apparatuses;
the acquiring of at least one frame of real scene image by at least one image acquisition device comprises:
acquiring multiple frames of real scene images through the multiple image acquisition devices;
the determining attribute information of the display object in the at least one frame of real scene image includes:
identifying a display object in the plurality of frames of real scene images;
determining depth information of the display object based on the display object in the multiple frames of real scene images;
determining attribute information of the display object based on the depth information and the display object of the at least one frame of real scene image.
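Claim 6 only requires that depth be derived from the display object as observed by multiple image acquisition devices; two-view stereo disparity is one textbook instance (Z = f·B/d, with focal length f in pixels, baseline B between the two devices, and disparity d in pixels). An illustrative sketch, not the claimed implementation:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic two-view depth from disparity: Z = f * B / d.
    Assumes rectified views from two image acquisition devices."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, baseline 0.1 m, disparity 40 px -> the display object lies 2.0 m away
depth_m = depth_from_disparity(800.0, 0.1, 40.0)
```

The resulting depth, together with the display object detected in the frames, yields the attribute information used in the later claims.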
7. The method of any one of claims 1-6, wherein the display screen is a transparent display screen or a non-transparent display screen.
8. The method according to any one of claims 1-7, wherein the display screen moves on a preset sliding track.
9. The method according to any of claims 1-8, wherein the attribute information comprises at least one of:
the display object comprises position information of the display object, outline information of the display object, material information of the display object and texture information of the display object.
10. The method of any one of claims 1-9, wherein the augmented reality effect comprises:
and displaying a virtual effect object corresponding to the virtual effect data within a preset range of the display object, wherein a display position of the virtual effect object matches a display position of the display object in the real scene.
11. An image display device characterized by comprising:
the acquisition unit is configured to acquire at least one frame of real scene image through at least one image acquisition device;
the determining unit is configured to determine attribute information of a display object in the at least one frame of real scene image;
the acquisition unit is further configured to acquire virtual effect data corresponding to the display object;
the first processing unit is configured to determine, based on the attribute information, a first coordinate conversion relation between a first coordinate system in which the real scene image is located and a second coordinate system in which the image displayed on the display screen is located;
the second processing unit is configured to determine display information of the virtual effect data based on the first coordinate conversion relation, and to render a virtual effect image based on the virtual effect data and the display information;
and the display unit is configured to display an augmented reality effect in which the display object is superimposed with the virtual effect image.
12. A display device comprising a camera, a processor and a memory for storing a computer program operable on the processor;
wherein the processor is adapted to perform the steps of the method of any one of claims 1 to 10 when running the computer program.
13. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
CN202010898669.2A 2020-08-31 2020-08-31 Image display method, image display device, display equipment and computer readable storage medium Pending CN112037314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010898669.2A CN112037314A (en) 2020-08-31 2020-08-31 Image display method, image display device, display equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN112037314A true CN112037314A (en) 2020-12-04

Family

ID=73585913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010898669.2A Pending CN112037314A (en) 2020-08-31 2020-08-31 Image display method, image display device, display equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112037314A (en)


Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789514A (en) * 2012-04-20 2012-11-21 青岛理工大学 Induction method of three-dimensional (3D) online induction system for mechanical equipment dismounting
CN106127552A (en) * 2016-06-23 2016-11-16 北京理工大学 A kind of virtual scene display method, Apparatus and system
CN107204031A (en) * 2017-04-27 2017-09-26 腾讯科技(深圳)有限公司 Information displaying method and device
CN109700550A (en) * 2019-01-22 2019-05-03 雅客智慧(北京)科技有限公司 A kind of augmented reality method and device for dental operation
CN109893247A (en) * 2019-04-11 2019-06-18 艾瑞迈迪科技石家庄有限公司 The navigation in surgical instrument operation path and display methods, device
CN109993823A (en) * 2019-04-11 2019-07-09 腾讯科技(深圳)有限公司 Shading Rendering method, apparatus, terminal and storage medium
CN110415358A (en) * 2019-07-03 2019-11-05 武汉子序科技股份有限公司 A kind of real-time three-dimensional tracking
CN110443898A (en) * 2019-08-12 2019-11-12 北京枭龙科技有限公司 A kind of AR intelligent terminal target identification system and method based on deep learning
CN110568923A (en) * 2019-07-09 2019-12-13 深圳市瑞立视多媒体科技有限公司 unity 3D-based virtual reality interaction method, device, equipment and storage medium
WO2020010977A1 (en) * 2018-07-13 2020-01-16 腾讯科技(深圳)有限公司 Method and apparatus for rendering virtual channel in multi-world virtual scenario
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN110874868A (en) * 2018-09-03 2020-03-10 广东虚拟现实科技有限公司 Data processing method and device, terminal equipment and storage medium
WO2020072985A1 (en) * 2018-10-05 2020-04-09 Magic Leap, Inc. Rendering location specific virtual content in any location
CN111243103A (en) * 2020-01-07 2020-06-05 青岛小鸟看看科技有限公司 Method and device for setting safety area, VR equipment and storage medium
CN111311756A (en) * 2020-02-11 2020-06-19 Oppo广东移动通信有限公司 Augmented reality AR display method and related device
CN111510701A (en) * 2020-04-22 2020-08-07 Oppo广东移动通信有限公司 Virtual content display method and device, electronic equipment and computer readable medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SAN GÜNES et al.: "Augmented Reality Tool for Markerless Virtual Try-on around Human Arm", 2015 IEEE International Symposium on Mixed and Augmented Reality - Media, Art, Social Science, Humanities and Design *
武雪玲; 任福; 杜清运: "Virtual-real registration of spatial information based on hybrid hardware tracking and positioning", Geography and Geo-Information Science, no. 03 *
邢书宝: "Research on Key Technologies of Face Recognition", Xi'an Jiaotong University Press, pages: 5 *
黄业桃; 刘越; 翁冬冬; 王涌天: "Registration method for the digital Yuanmingyuan augmented reality system based on follow-up control", Journal of Computer Research and Development, no. 06 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114661386A (en) * 2020-12-22 2022-06-24 腾讯科技(深圳)有限公司 Point cloud window presenting method and device, computer readable medium and electronic equipment
WO2022134962A1 (en) * 2020-12-22 2022-06-30 腾讯科技(深圳)有限公司 Method and apparatus for presenting point cloud window, computer-readable medium, and electronic device
CN112634773A (en) * 2020-12-25 2021-04-09 北京市商汤科技开发有限公司 Augmented reality presentation method and device, display equipment and storage medium
CN112634773B (en) * 2020-12-25 2022-11-22 北京市商汤科技开发有限公司 Augmented reality presentation method and device, display equipment and storage medium
WO2022160406A1 (en) * 2021-01-29 2022-08-04 深圳技术大学 Implementation method and system for internet of things practical training system based on augmented reality technology
WO2022188305A1 (en) * 2021-03-11 2022-09-15 深圳市慧鲤科技有限公司 Information presentation method and apparatus, and electronic device, storage medium and computer program
WO2022222689A1 (en) * 2021-04-21 2022-10-27 青岛小鸟看看科技有限公司 Data generation method and apparatus, and electronic device
CN113625872A (en) * 2021-07-30 2021-11-09 深圳盈天下视觉科技有限公司 Display method, system, terminal and storage medium
CN113885703A (en) * 2021-09-30 2022-01-04 联想(北京)有限公司 Information processing method and device and electronic equipment
CN114398132A (en) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 Scene data display method and device, computer equipment and storage medium
CN114625468A (en) * 2022-03-21 2022-06-14 北京字跳网络技术有限公司 Augmented reality picture display method and device, computer equipment and storage medium
CN114625468B (en) * 2022-03-21 2023-09-22 北京字跳网络技术有限公司 Display method and device of augmented reality picture, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112037314A (en) Image display method, image display device, display equipment and computer readable storage medium
CN107045844B (en) A kind of landscape guide method based on augmented reality
EP3798801A1 (en) Image processing method and apparatus, storage medium, and computer device
CN105981076B (en) Synthesize the construction of augmented reality environment
WO2022022036A1 (en) Display method, apparatus and device, storage medium, and computer program
CN111833458B (en) Image display method and device, equipment and computer readable storage medium
CN106170978B (en) Depth map generation device, method and non-transitory computer-readable medium
US10147399B1 (en) Adaptive fiducials for image match recognition and tracking
CN109887003A (en) A kind of method and apparatus initialized for carrying out three-dimensional tracking
CN108594999B (en) Control method and device for panoramic image display system
CN106650723A (en) Method for determining the pose of a camera and for recognizing an object of a real environment
CN105023266A (en) Method and device for implementing augmented reality (AR) and terminal device
US20150169987A1 (en) Method and apparatus for semantic association of images with augmentation data
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN112684894A (en) Interaction method and device for augmented reality scene, electronic equipment and storage medium
CN104781849A (en) Fast initialization for monocular visual simultaneous localization and mapping (SLAM)
US20200257121A1 (en) Information processing method, information processing terminal, and computer-readable non-transitory storage medium storing program
KR20150075532A (en) Apparatus and Method of Providing AR
CN108829250A (en) A kind of object interaction display method based on augmented reality AR
CN111815780A (en) Display method, display device, equipment and computer readable storage medium
CN110473293A (en) Virtual objects processing method and processing device, storage medium and electronic equipment
US20180239514A1 (en) Interactive 3d map with vibrant street view
CN112308977B (en) Video processing method, video processing device, and storage medium
Schütt et al. Semantic interaction in augmented reality environments for microsoft hololens
JP2021136017A (en) Augmented reality system using visual object recognition and stored geometry to create and render virtual objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination